Motion Analysis System for Instruction of Nihon Buyo using Motion Capture
NASA Astrophysics Data System (ADS)
Shinoda, Yukitaka; Murakami, Shingo; Watanabe, Yuta; Mito, Yuki; Watanuma, Reishi; Marumo, Mieko
The passing on and preservation of advanced technical skills has become an important issue in a variety of fields, and motion analysis using motion capture has recently become popular in research on advanced physical skills. This research aims to construct a system with a high on-site instructional effect for dancers learning Nihon Buyo, a traditional Japanese dance, and to classify Nihon Buyo dancing according to style, school, and the dancer's proficiency by motion analysis. The study of motion analysis systems for teaching Nihon Buyo has become possible now that body-motion data can be digitized and stored by motion capture systems using high-performance computers. Thus, with the aim of developing a user-friendly instruction-support system, we have constructed a motion analysis system that displays a dancer's time series of body motions and center of gravity for instructional purposes. In this paper, we outline this instructional motion analysis system, which is based on three-dimensional position data obtained by motion capture. We also describe motion analysis performed on center-of-gravity data obtained by this system, and motion analysis focusing on school and age group using this system.
Samba: A Real-Time Motion Capture System Using Wireless Camera Sensor Networks
Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai
2014-03-20
There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments. PMID:24658618
Orthogonal-blendshape-based editing system for facial motion capture data.
Li, Qing; Deng, Zhigang
2008-01-01
The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and the blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed into a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible control.
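As a hedged sketch of this idea (not the authors' implementation), the snippet below builds a truncated PCA basis from stacked facial marker frames and regenerates an edited frame after modifying one retained PCA coefficient, i.e., one blendshape weight. All array shapes, component counts and data are illustrative placeholders.

```python
# Illustrative sketch: truncated-PCA "blendshape" basis built from facial motion
# capture frames; editing a retained PCA coefficient regenerates an edited frame.
import numpy as np

def build_blendshape_basis(frames, n_components=5):
    """frames: (n_frames, n_markers*3) stacked facial marker coordinates."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    # Principal directions via SVD; keep only the largest components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                # (n_components, n_markers*3)
    weights = centered @ basis.T             # per-frame PCA coefficients
    return mean, basis, weights

def edit_frame(mean, basis, weights, frame_idx, component, delta):
    """Editing one blendshape weight is equivalent to editing the captured frame."""
    w = weights[frame_idx].copy()
    w[component] += delta
    return mean + w @ basis                  # reconstructed, edited marker frame

# Usage with synthetic data standing in for a captured sequence:
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 30))          # 200 frames, 10 markers x 3 coords
mean, basis, weights = build_blendshape_basis(frames)
edited = edit_frame(mean, basis, weights, frame_idx=10, component=0, delta=0.5)
```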
Validation of enhanced kinect sensor based motion capturing for gait assessment
Müller, Björn; Ilg, Winfried; Giese, Martin A.
2017-01-01
Optical motion capturing systems are expensive and require substantial dedicated space to be set up. On the other hand, they provide unsurpassed accuracy and reliability. In many situations however flexibility is required and the motion capturing system can only temporarily be placed. The Microsoft Kinect v2 sensor is comparatively cheap and with respect to gait analysis promising results have been published. We here present a motion capturing system that is easy to set up, flexible with respect to the sensor locations and delivers high accuracy in gait parameters comparable to a gold standard motion capturing system (VICON). Further, we demonstrate that sensor setups which track the person only from one-side are less accurate and should be replaced by two-sided setups. With respect to commonly analyzed gait parameters, especially step width, our system shows higher agreement with the VICON system than previous reports. PMID:28410413
Motion Pattern Encapsulation for Data-Driven Constraint-Based Motion Editing
NASA Astrophysics Data System (ADS)
Carvalho, Schubert R.; Boulic, Ronan; Thalmann, Daniel
The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games and films to sports and medicine. However, captured motions normally address specific needs. As an effort to adapt and reuse captured human motions in new tasks and environments and to improve the animator's workflow, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage that it provides faster deformations and more natural-looking motion results compared to goal-directed constraint-based methods found in the literature.
Robust object tracking techniques for vision-based 3D motion analysis applications
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.
2016-04-01
Automated and accurate capture of an object's spatial motion is necessary for a wide variety of applications, including industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition, which are based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed and the potential for high accuracy and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from two to four machine vision cameras for capturing video sequences of object motion. The original camera calibration and external orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms for detecting, identifying and tracking similar targets, as well as for marker-less object motion capture, has been developed and tested. The results of the algorithms' evaluation show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.
Real-time marker-free motion capture system using blob feature analysis
NASA Astrophysics Data System (ADS)
Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho
2005-02-01
This paper presents a real-time marker-free motion capture system that can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The proposed system captures human motions using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. The 3-dimensional positions of the end-effectors are then restored and tracked using a Kalman filter. Finally, the positions of the intermediate joints are reconstructed using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct the motions of many people wearing various clothes in real time.
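The tracking step named above can be illustrated with a minimal constant-velocity Kalman filter over the reconstructed 3D position of a single end-effector. This is a generic sketch, not the paper's implementation; the frame rate and noise levels are placeholder assumptions.

```python
# Constant-velocity Kalman filter tracking one end-effector's 3D position.
import numpy as np

def make_cv_kalman(dt=1/30, q=1e-3, r=1e-2):
    F = np.eye(6); F[:3, 3:] = dt * np.eye(3)     # state: [x y z vx vy vz]
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is measured
    Q = q * np.eye(6)                             # assumed process noise
    R = r * np.eye(3)                             # assumed measurement noise
    return F, H, Q, R

def kalman_track(measurements, dt=1/30):
    F, H, Q, R = make_cv_kalman(dt)
    x = np.zeros(6); x[:3] = measurements[0]
    P = np.eye(6)
    track = []
    for z in measurements:
        x = F @ x; P = F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (z - H @ x)                        # update with blob position
        P = (np.eye(6) - K @ H) @ P
        track.append(x[:3].copy())
    return np.array(track)

positions = np.cumsum(np.random.randn(100, 3) * 0.01, axis=0)  # synthetic 3D path
smoothed = kalman_track(positions)
```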
Accuracy of human motion capture systems for sport applications; state-of-the-art review.
van der Kruk, Eline; Reijne, Marco M
2018-05-09
Sport research often requires human motion capture of an athlete. It can, however, be labour-intensive and difficult to select the right system, while manufacturers report specifications determined in set-ups that differ largely from sport research in terms of volume, environment and motion. The aim of this review is to assist researchers in the selection of a suitable motion capture system for their experimental set-up for sport applications. An open online platform is initiated to support (sport) researchers in the selection of a system and to enable them to contribute to and update the overview. Design: systematic review. Method: Electronic searches in Scopus, Web of Science and Google Scholar were performed, and the reference lists of the screened articles were scrutinised to determine the human motion capture systems used in academically published studies on sport analysis. An overview of 17 human motion capture systems is provided, reporting the general specifications given by the manufacturer (weight and size of the sensors, maximum capture volume, environmental feasibilities) and calibration specifications as determined in peer-reviewed studies. The accuracy of each system is plotted against the measurement range. The overview and chart can assist researchers in the selection of a suitable measurement system. To increase the robustness of the database and to keep up with technological developments, we encourage researchers to perform an accuracy test prior to their experiment and to add to the chart and the system overview (online, open access).
Scalable Photogrammetric Motion Capture System "mosca": Development and Application
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2015-05-01
A wide variety of applications (from industrial to entertainment) needs reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is rather fast, as in vehicle movement, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can easily be modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from two to four machine vision cameras to acquire video sequences of object motion. All cameras work in synchronized mode at a frame rate of up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.
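The photogrammetric core of such a multi-camera system can be illustrated by linear (DLT-style) triangulation of a point's 3D coordinates from two or more calibrated, synchronized cameras. The sketch below is a generic example rather than the described system's code; the projection matrices and observed point are invented for illustration.

```python
# Linear triangulation of one marker from multiple calibrated camera views.
import numpy as np

def triangulate(proj_mats, image_pts):
    """proj_mats: list of 3x4 camera projection matrices.
    image_pts: list of (u, v) observations of the same marker, one per camera."""
    rows = []
    for P, (u, v) in zip(proj_mats, image_pts):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                       # homogeneous -> Euclidean

# Toy example: two cameras looking at a point at (0.1, 0.2, 2.0)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])
X_true = np.array([0.1, 0.2, 2.0, 1.0])
pts = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], pts))             # ~ [0.1, 0.2, 2.0]
```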
Low-cost human motion capture system for postural analysis onboard ships
NASA Astrophysics Data System (ADS)
Nocerino, Erica; Ackermann, Sebastiano; Del Pizzo, Silvio; Menna, Fabio; Troisi, Salvatore
2011-07-01
The study of human equilibrium, also known as postural stability, concerns different research sectors (medicine, kinesiology, biomechanics, robotics, sport) and is usually performed employing motion analysis techniques for recording human movements and posture. A wide range of techniques and methodologies has been developed, but the choice of instrumentation and sensors depends on the requirements of the specific application. Postural stability is a topic of great interest for the maritime community, since ship motions can make maintaining an upright stance demanding and difficult, with hazardous consequences for the safety of people onboard. The need to capture the motion of an individual standing on a ship during its daily service rules out the optical systems commonly used for human motion analysis: these sensors are not designed to operate in adverse environmental conditions (water, wetness, saltiness) or under suboptimal lighting. The solution proposed in this study consists of a motion acquisition system that can be easily used onboard ships. It makes use of two different methodologies: (I) motion capture with videogrammetry and (II) motion measurement with an Inertial Measurement Unit (IMU). The developed image-based motion capture system, made up of three low-cost, light and compact video cameras, was validated against a commercial optical system and then used for testing the reliability of the inertial sensors. In this paper, the whole process of planning, designing, calibrating, and assessing the accuracy of the motion capture system is reported and discussed. Results from the laboratory tests and preliminary campaigns in the field are presented.
Harbert, Simeon D; Jaiswal, Tushar; Harley, Linda R; Vaughn, Tyler W; Baranak, Andrew S
2013-01-01
The low cost, simple, robust, mobile, and easy to use Mobile Motion Capture (MiMiC) system is presented and the constraints which guided the design of MiMiC are discussed. The MiMiC Android application allows motion data to be captured from kinematic modules such as Shimmer 2r sensors over Bluetooth. MiMiC is cost effective and can be used for an entire day in a person's daily routine without being intrusive. MiMiC is a flexible motion capture system which can be used for many applications including fall detection, detection of fatigue in industry workers, and analysis of individuals' work patterns in various environments.
Design of a haptic device with grasp and push-pull force feedback for a master-slave surgical robot.
Hu, Zhenkai; Yoon, Chae-Hyun; Park, Samuel Byeongjun; Jo, Yung-Ho
2016-07-01
We propose a portable haptic device providing grasp (kinesthetic) and push-pull (cutaneous) sensations for optical-motion-capture master interfaces. Although optical-motion-capture master interfaces for surgical robot systems can overcome the stiffness, friction, and coupling problems of mechanical master interfaces, it is difficult to add haptic feedback to an optical-motion-capture master interface without constraining the free motion of the operator's hands. Therefore, we utilized a Bowden cable-driven mechanism to provide the grasp and push-pull sensation while retaining the free hand motion of the optical-motion-capture master interface. To evaluate the haptic device, we constructed a 2-DOF force sensing/force feedback system and compared the sensed force and the reproduced force of the haptic device. Finally, a needle insertion test was performed to evaluate the performance of the haptic interface in the master-slave system. The results demonstrate that both the grasp force feedback and the push-pull force feedback provided by the haptic interface closely matched the sensed forces of the slave robot. We successfully applied our haptic interface in the optical-motion-capture master-slave system. The results of the needle insertion test showed that our haptic feedback can provide more safety than merely visual observation. We developed a suitable haptic device to produce both kinesthetic grasp force feedback and cutaneous push-pull force feedback. Our future research will include further objective performance evaluations of the optical-motion-capture master-slave robot system with our haptic interface in surgical scenarios.
An error-based micro-sensor capture system for real-time motion estimation
NASA Astrophysics Data System (ADS)
Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li
2017-10-01
A wearable micro-sensor motion capture system with 16 IMUs and an error-compensatory complementary filter algorithm for real-time motion estimation has been developed to acquire accurate 3D orientation and displacement in real-life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been the challenge in micro-sensor motion capture. An adaptive gait phase detection algorithm has been developed to accommodate accurate displacement estimation in different types of activities. The performance of this system is benchmarked against the results of a VICON optical capture system. The experimental results demonstrate the effectiveness of the system in tracking daily activities, with an estimation error of 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).
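A greatly simplified, single-axis sketch of the complementary-filter idea is shown below: gyroscope integration corrected toward an accelerometer tilt estimate. The paper's error-compensatory filter additionally estimates gyroscope bias, orientation error and magnetic disturbance; none of that is reproduced here, and the gain and sampling rate are assumptions.

```python
# One-axis complementary filter: high-pass gyro integration + low-pass gravity tilt.
import numpy as np

def complementary_filter(gyro_rate, accel, dt=0.01, alpha=0.98):
    """gyro_rate: angular rate about one axis (rad/s), per sample.
    accel: (ax, az) pairs used to compute a tilt angle from gravity."""
    angle = 0.0
    estimates = []
    for w, (ax, az) in zip(gyro_rate, accel):
        angle_gyro = angle + w * dt                 # integrate gyroscope
        angle_acc = np.arctan2(ax, az)              # tilt from gravity direction
        angle = alpha * angle_gyro + (1 - alpha) * angle_acc
        estimates.append(angle)
    return np.array(estimates)

# Synthetic 5 s of data: constant 0.1 rad/s rotation with noisy sensors
t = np.arange(0, 5, 0.01)
true_angle = 0.1 * t
gyro = 0.1 + 0.01 * np.random.randn(t.size)
acc = np.column_stack([np.sin(true_angle), np.cos(true_angle)])
print(complementary_filter(gyro, acc)[-1])          # ~ 0.5 rad
```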
AMUC: Associated Motion capture User Categories.
Norman, Sally Jane; Lawson, Sian E M; Olivier, Patrick; Watson, Paul; Chan, Anita M-A; Dade-Robertson, Martyn; Dunphy, Paul; Green, Dave; Hiden, Hugo; Hook, Jonathan; Jackson, Daniel G
2009-07-13
The AMUC (Associated Motion capture User Categories) project consisted of building a prototype sketch retrieval client for exploring motion capture archives. High-dimensional datasets reflect the dynamic process of motion capture and comprise high-rate sampled data of a performer's joint angles; in response to multiple query criteria, these data can potentially yield different kinds of information. The AMUC prototype harnesses graphic input via an electronic tablet as a query mechanism, time and position signals obtained from the sketch being mapped to the properties of data streams stored in the motion capture repository. As well as proposing a pragmatic solution for exploring motion capture datasets, the project demonstrates the conceptual value of iterative prototyping in innovative interdisciplinary design. The AMUC team was composed of live performance practitioners and theorists conversant with a variety of movement techniques, bioengineers who recorded and processed motion data for integration into the retrieval tool, and computer scientists who designed and implemented the retrieval system and server architecture, scoped for Grid-based applications. Creative input on information system design and navigation, and digital image processing, underpinned implementation of the prototype, which has undergone preliminary trials with diverse users, allowing identification of rich potential development areas.
NASA Technical Reports Server (NTRS)
Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob
2001-01-01
To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics with motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one in y and z). Richards compared the characteristics of different commercially available systems set-up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasistatic methodology.
A video-based system for hand-driven stop-motion animation.
Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue
2013-01-01
Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.
Example-based human motion denoising.
Lou, Hui; Chai, Jinxiang
2010-01-01
With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them, along with robust statistics techniques, to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with state-of-the-art motion capture data processing software such as Vicon Blade.
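The optimization structure described above can be sketched with a toy example that keeps the data term but substitutes a generic temporal-smoothness prior for the learned filter bases and robust statistics; it is illustrative only, not the authors' formulation.

```python
# Toy denoising of one joint-angle channel: min ||x - y||^2 + lam * ||D2 x||^2
import numpy as np

def denoise_channel(noisy, lam=50.0):
    """noisy: 1D joint-angle signal; lam weights the second-difference smoothness prior."""
    n = noisy.size
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):                      # second-difference operator
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + lam * D2.T @ D2             # normal equations of the objective
    return np.linalg.solve(A, noisy)

t = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(t)
noisy = clean + 0.2 * np.random.randn(t.size)
noisy[50] += 2.0                                # an outlier spike
filtered = denoise_channel(noisy)
print(np.abs(filtered - clean).mean())          # lower than the noisy error
```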
Mauntel, Timothy C; Padua, Darin A; Stanley, Laura E; Frank, Barnett S; DiStefano, Lindsay J; Peck, Karen Y; Cameron, Kenneth L; Marshall, Stephen W
2017-11-01
Context: The Landing Error Scoring System (LESS) can be used to identify individuals with an elevated risk of lower extremity injury. The limitation of the LESS is that raters identify movement errors from video replay, which is time-consuming and, therefore, may limit its use by clinicians. A markerless motion-capture system may be capable of automating LESS scoring, thereby removing this obstacle. Objective: To determine the reliability of an automated markerless motion-capture system for scoring the LESS. Design: Cross-sectional study. Setting: United States Military Academy. Participants: A total of 57 healthy, physically active individuals (47 men, 10 women; age = 18.6 ± 0.6 years, height = 174.5 ± 6.7 cm, mass = 75.9 ± 9.2 kg). Methods: Participants completed 3 jump-landing trials that were recorded by standard video cameras and a depth camera. Their movement quality was evaluated by expert LESS raters (standard video recording) using the LESS rubric and by software that automates LESS scoring (depth-camera data). We recorded an error for a LESS item if it was present on at least 2 of 3 jump-landing trials. We calculated κ statistics, prevalence- and bias-adjusted κ (PABAK) statistics, and percentage agreement for each LESS item. Interrater reliability was evaluated between the 2 expert rater scores and between a consensus expert score and the markerless motion-capture system score. Results: We observed reliability between the 2 expert LESS raters (average κ = 0.45 ± 0.35, average PABAK = 0.67 ± 0.34; percentage agreement = 0.83 ± 0.17). The markerless motion-capture system had similar reliability with consensus expert scores (average κ = 0.48 ± 0.40, average PABAK = 0.71 ± 0.27; percentage agreement = 0.85 ± 0.14). However, reliability was poor for 5 LESS items in both LESS score comparisons. Conclusions: A markerless motion-capture system had the same level of reliability as expert LESS raters, suggesting that an automated system can accurately assess movement. Therefore, clinicians can use the markerless motion-capture system to reliably score the LESS without being limited by the time requirements of manual LESS scoring.
Gritsenko, Valeriya; Dailey, Eric; Kyle, Nicholas; Taylor, Matt; Whittacre, Sean; Swisher, Anne K
2015-01-01
Objective: To determine if a low-cost, automated motion analysis system using Microsoft Kinect could accurately measure shoulder motion and detect motion impairments in women following breast cancer surgery. Design: Descriptive study of motion measured via 2 methods. Setting: Academic cancer center oncology clinic. Participants: 20 women (mean age = 60 yrs) were assessed for active and passive shoulder motions during a routine post-operative clinic visit (mean = 18 days after surgery) following mastectomy (n = 4) or lumpectomy (n = 16) for breast cancer. Methods: Participants performed 3 repetitions of active and passive shoulder motions on the side of the breast surgery. Arm motion was recorded using motion capture by the Kinect for Windows sensor and on video. Goniometric values were determined from video recordings, while motion capture data were transformed to joint angles using 2 methods (body angle and projection angle). Main Outcome Measures: Correlation of motion capture with goniometry and detection of motion limitation. Results: Active shoulder motion measured with low-cost motion capture agreed well with goniometry (r = 0.70-0.80), while passive shoulder motion measurements did not correlate well. Using motion capture, it was possible to reliably identify participants whose range of shoulder motion was reduced by 40% or more. Conclusions: Low-cost, automated motion analysis may be acceptable to screen for moderate to severe motion impairments in active shoulder motion. Automatic detection of motion limitation may allow quick screening to be performed in an oncologist's office and trigger timely referrals for rehabilitation.
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures the surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
Wearable Stretch Sensors for Motion Measurement of the Wrist Joint Based on Dielectric Elastomers.
Huang, Bo; Li, Mingyu; Mei, Tao; McCoul, David; Qin, Shihao; Zhao, Zhanfeng; Zhao, Jianwen
2017-11-23
Motion capture of the human body potentially holds great significance for exoskeleton robots, human-computer interaction, sports analysis, rehabilitation research, and many other areas. Dielectric elastomer sensors (DESs) are excellent candidates for wearable human motion capture systems because of their intrinsic softness, light weight, and compliance. In this paper, DESs were applied to measure all component motions of the wrist joint. Five sensors were mounted at different positions on the wrist, one for each component motion. To find the best positions to mount the sensors, the distribution of the muscles was analyzed. Even so, the component motions and the deformations of the sensors are coupled; therefore, a decoupling method was developed. With the decoupling algorithm, all component motions can be measured with a precision of 5°, which meets the requirements of general motion capture systems.
Massaroni, Carlo; Cassetta, Eugenio; Silvestri, Sergio
2017-10-01
Respiratory assessment can be carried out using motion capture systems. A geometrical model is mandatory in order to compute the breathing volume as a function of time from the markers' trajectories. This study describes a novel model to compute volume changes and calculate respiratory parameters using a motion capture system. The novel method, i.e., the prism-based method, computes the volume enclosed within the chest by defining 82 prisms from the 89 markers attached to the subject's chest. Volumes computed with this method are compared to spirometry volumes and to volumes computed by a conventional method based on tetrahedral decomposition of the chest wall, as integrated in a commercial motion capture system. Eight healthy volunteers were enrolled and 30 seconds of quiet breathing data were collected from each of them. Results show a better agreement between volumes computed by the prism-based method and spirometry (discrepancy of 2.23%, R² = 0.94) compared to the agreement between volumes computed by the conventional method and spirometry (discrepancy of 3.56%, R² = 0.92). The proposed method also showed better performance in the calculation of respiratory parameters. Our findings open up prospects for the further use of the new method in breathing assessment via motion capture systems.
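A hedged sketch of the geometric idea follows: each chest-wall triangle, together with its vertical projection onto a back reference plane, bounds a prism whose volume equals the projected triangle's area times the mean marker height, and summing prisms over the mesh approximates the enclosed volume. The actual 89-marker layout and 82-prism decomposition are not reproduced; the mesh below is a toy example.

```python
# Chest volume approximated as a sum of prisms between surface triangles and a plane.
import numpy as np

def prism_volume(p1, p2, p3):
    """Markers p* = (x, y, z); heights z are measured from the back plane z = 0."""
    a, b, c = np.asarray(p1), np.asarray(p2), np.asarray(p3)
    # Area of the triangle projected onto the z = 0 plane
    proj_area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))
    return proj_area * (a[2] + b[2] + c[2]) / 3.0

def chest_volume(markers, triangles):
    """markers: (n, 3) array; triangles: list of index triples forming the mesh."""
    return sum(prism_volume(markers[i], markers[j], markers[k]) for i, j, k in triangles)

# Toy example: two triangles covering a 0.2 m x 0.3 m patch at constant 0.1 m depth
markers = np.array([[0, 0, .1], [.2, 0, .1], [.2, .3, .1], [0, .3, .1]])
print(chest_volume(markers, [(0, 1, 2), (0, 2, 3)]))   # 0.006 m^3
```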
NASA Astrophysics Data System (ADS)
Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.
2012-02-01
Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
Miniature low-power inertial sensors: promising technology for implantable motion capture systems.
Lambrecht, Joris M; Kirsch, Robert F
2014-11-01
Inertial and magnetic sensors are valuable for untethered, self-contained human movement analysis. Very recently, complete integration of inertial sensors, magnetic sensors, and processing into single packages has resulted in miniature, low-power devices that could feasibly be employed in an implantable motion capture system. We developed a wearable sensor system based on a commercially available system-in-package inertial and magnetic sensor. We characterized the accuracy of the system in measuring 3-D orientation (with and without magnetometer-based heading compensation) relative to a research-grade optical motion capture system. The root mean square error was less than 4° in dynamic and static conditions about all axes. Using four sensors, recording from seven degrees of freedom of the upper limb (shoulder, elbow, wrist) was demonstrated in one subject during reaching motions. Very high correlation and low error were found across all joints relative to the optical motion capture system. Findings were similar to previous publications using inertial sensors, but at a fraction of the power consumption and size of the sensors. Such ultra-small, low-power sensors provide exciting new avenues for movement monitoring for various movement disorders, movement-based command interfaces for assistive devices, and implementation of kinematic feedback systems for assistive interventions like functional electrical stimulation.
A Virtual Reality Dance Training System Using Motion Capture Technology
ERIC Educational Resources Information Center
Chan, J. C. P.; Leung, H.; Tang, J. K. T.; Komura, T.
2011-01-01
In this paper, a new dance training system based on the motion capture and virtual reality (VR) technologies is proposed. Our system is inspired by the traditional way to learn new movements-imitating the teacher's movements and listening to the teacher's feedback. A prototype of our proposed system is implemented, in which a student can imitate…
Development of real-time motion capture system for 3D on-line games linked with virtual character
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck
2004-10-01
Motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interface devices such as mice, joysticks, and MIDI sliders. These devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.
Registration of Large Motion Blurred Images
2016-05-09
…in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS)…
Biomechanical analysis using Kinovea for sports application
NASA Astrophysics Data System (ADS)
Muaza Nor Adnan, Nor; Patar, Mohd Nor Azmi Ab; Lee, Hokyoo; Yamamoto, Shin-Ichiroh; Jong-Young, Lee; Mahmud, Jamaluddin
2018-04-01
This paper assesses the reliability of HD VideoCam–Kinovea as an alternative tool for conducting motion analysis and measuring the knee relative angle during a drop jump movement. The motion capture and analysis procedure was conducted in the Biomechanics Lab, Shibaura Institute of Technology, Omiya Campus, Japan. A healthy subject without any gait disorder (BMI of 28.60 ± 1.40) was recruited. The volunteer subject was asked to perform the drop jump movement on a preset platform, and the motion was simultaneously recorded using an established infrared motion capture system (Hawk–Cortex) and an HD VideoCam in the sagittal plane only. The capture was repeated 5 times. The outputs (video recordings) from the HD VideoCam were input into Kinovea (open-source software) and the drop jump pattern was tracked and analysed. These data were compared with the drop jump pattern tracked and analysed earlier using the Hawk–Cortex system. In general, the results obtained (drop jump pattern) using HD VideoCam–Kinovea are close to those obtained using the established motion capture system. Basic statistical analyses show that most average variances are less than 10%, supporting the repeatability of the protocol and the reliability of the results. It can be concluded that the integration of HD VideoCam–Kinovea has the potential to become a reliable motion capture and analysis system. Moreover, it is low cost, portable and easy to use. In conclusion, the current study and its findings contribute useful knowledge pertaining to motion capture and analysis, the drop jump movement and HD VideoCam–Kinovea integration.
Zhang, Ao; Yan, Xing-Ke; Liu, An-Guo
2016-12-25
In the present paper, the authors introduce a newly developed "Acupuncture Needle Manipulation Training-evaluation System" based on optical motion capture techniques. It is composed of two parts, sensor and software, and overcomes some shortcomings of mechanical motion capture techniques. This device is able to analyze data on the operations of the pressing hand and the needle-insertion hand during acupuncture performance, and its software is available in personal computer (PC), Android, and Apple iOS versions. It is capable of recording and analyzing information on any operator's needling manipulations, and is quite helpful for teachers in teaching, training and examining students in clinical practice.
A Study of Vicon System Positioning Performance.
Merriaux, Pierre; Dupuis, Yohan; Boutteau, Rémi; Vasseur, Pascal; Savatier, Xavier
2017-07-07
Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanics, sport and animal science. Clinical science studies include gait analysis as well as balance, posture and motor control. Robotic applications encompass object tracking. Everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one player in marker-based optoelectronic motion capture: the Vicon system. Our protocol includes evaluations of static and dynamic performance. Mean error as well as positioning variability are studied with calibrated ground-truth setups that are not based on other motion capture modalities. We introduce a new setup that enables directly estimating the absolute positioning accuracy for dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well in static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that marker size and the Vicon sampling rate must be carefully chosen with respect to the speeds encountered in the application in order to reach optimal positioning performance, which can reach 0.3 mm in our dynamic study.
Full-motion video analysis for improved gender classification
NASA Astrophysics Data System (ADS)
Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.
2014-06-01
The ability of computer systems to perform gender classification using the dynamic motion of a human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video, motion capture, and range data provide datasets with higher temporal and spatial resolution for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets collected in a controlled environment. In this paper, we apply machine learning techniques to a new dataset that has a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on the larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation are improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
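The classification setup described above can be sketched with scikit-learn: an RBF-kernel (nonlinear) support vector machine evaluated by leave-one-out cross-validation. The features and labels below are random placeholders standing in for the motion descriptors, so this only illustrates the pipeline, not the reported accuracy.

```python
# Nonlinear SVM with leave-one-out cross-validation on placeholder motion features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(98, 20))                 # 98 trials x 20 motion features (placeholder)
y = rng.integers(0, 2, size=98)               # gender labels (placeholder)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("LOOCV accuracy:", scores.mean())
```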
MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.
Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn
2013-12-01
We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users who are researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.
Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge
2014-12-01
Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of the new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimates of the intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed on a wand. The second error is the error in position and orientation of the retroreflective markers of a static calibration object. The real coordinates of the two objects are calibrated on a coordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. Results are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
The adaptation of GDL motion recognition system to sport and rehabilitation techniques analysis.
Hachaj, Tomasz; Ogiela, Marek R
2016-06-01
The main novelty of this paper is presenting the adaptation of the Gesture Description Language (GDL) methodology to sport and rehabilitation data analysis and classification. In this paper we show that the Lua language can be successfully used to adapt the GDL classifier to those tasks. The newly applied scripting language allows easy extension and integration of the classifier with other software technologies and applications. The obtained execution speed allows using the methodology in real-time motion capture data processing, where the capture frequency ranges from 100 Hz to even 500 Hz depending on the number of features or classes to be calculated and recognized. Due to this fact, the proposed methodology can be used with high-end motion capture systems. We anticipate that this novel, efficient and effective method will greatly help both sport trainers and physiotherapists in their practice. The proposed approach can be directly applied to motion capture data kinematics analysis (evaluation of motion without regard to the forces that cause that motion). The ability to apply pattern recognition methods to GDL descriptions can be utilized in virtual reality environments and used for sport training or rehabilitation treatment.
Concurrent validation of Xsens MVN measurement of lower limb joint angular kinematics.
Zhang, Jun-Tian; Novak, Alison C; Brouwer, Brenda; Li, Qingguo
2013-08-01
This study aims to validate a commercially available inertial-sensor-based motion capture system, Xsens MVN BIOMECH, using its native protocols, against a camera-based motion capture system for the measurement of joint angular kinematics. Performance was evaluated by comparing waveform similarity using range of motion, mean error and a new formulation of the coefficient of multiple correlation (CMC). Three-dimensional joint angles of the lower limbs were determined for ten healthy subjects while they performed three daily activities: level walking, stair ascent, and stair descent. Under all three walking conditions, the Xsens system most accurately determined the flexion/extension joint angle (CMC > 0.96) for all joints. The joint angle measurements associated with the other two joint axes had lower correlation, including complex CMC values. The poor correlation in the other two joint axes is most likely due to differences in the anatomical frame definitions of the limb segments used by the Xsens and Optotrak systems. Implementation of a protocol to align these two systems is necessary when comparing joint angle waveforms measured by the Xsens and other motion capture systems.
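For reference, a minimal version of the coefficient of multiple correlation (CMC) for comparing one gait-cycle waveform measured by two systems is sketched below; this is the classical formulation rather than the specific new formulation cited by the authors. Note how the radicand can become negative when the between-system differences exceed the waveform's own variability, which is the source of the complex CMC values mentioned above.

```python
# Classic CMC for waveform similarity between M systems over one F-frame gait cycle.
import numpy as np

def cmc(waveforms):
    """waveforms: (M, F) array, M systems measuring the same F-frame cycle."""
    Y = np.asarray(waveforms, dtype=float)
    M, F = Y.shape
    frame_mean = Y.mean(axis=0)                          # mean across systems, per frame
    grand_mean = Y.mean()
    num = ((Y - frame_mean) ** 2).sum() / (F * (M - 1))  # between-system dissimilarity
    den = ((Y - grand_mean) ** 2).sum() / (M * F - 1)    # overall waveform variability
    return np.sqrt(complex(1.0 - num / den))             # complex when num > den

xsens = np.sin(np.linspace(0, np.pi, 101)) * 60          # flexion/extension-like curve (deg)
optical = xsens + np.random.randn(101) * 1.0
print(cmc(np.vstack([optical, xsens])))                  # close to 1 for similar curves
```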
Local Dynamic Stability Assessment of Motion Impaired Elderly Using Electronic Textile Pants.
Liu, Jian; Lockhart, Thurmon E; Jones, Mark; Martin, Tom
2008-10-01
A clear association has been demonstrated between gait stability and falls in the elderly. Integration of wearable computing and human dynamic stability measures into home automation systems may help differentiate fall-prone individuals in a residential environment. The objective of the current study was to evaluate the capability of a pair of electronic textile (e-textile) pants system to assess local dynamic stability and to differentiate motion-impaired elderly from their healthy counterparts. A pair of e-textile pants comprised of numerous e-TAGs at locations corresponding to lower extremity joints was developed to collect acceleration, angular velocity and piezoelectric data. Four motion-impaired elderly together with nine healthy individuals (both young and old) participated in treadmill walking with a motion capture system simultaneously collecting kinematic data. Local dynamic stability, characterized by maximum Lyapunov exponent, was computed based on vertical acceleration and angular velocity at lower extremity joints for the measurements from both e-textile and motion capture systems. Results indicated that the motion-impaired elderly had significantly higher maximum Lyapunov exponents (computed from vertical acceleration data) than healthy individuals at the right ankle and hip joints. In addition, maximum Lyapunov exponents assessed by the motion capture system were found to be significantly higher than those assessed by the e-textile system. Despite the difference between these measurement techniques, attaching accelerometers at the ankle and hip joints was shown to be an effective sensor configuration. It was concluded that the e-textile pants system, via dynamic stability assessment, has the potential to identify motion-impaired elderly.
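A rough sketch of estimating the maximum Lyapunov exponent from a single kinematic signal, in the spirit of Rosenstein's method commonly used for local dynamic stability, is given below. The embedding dimension, delay, sampling rate and fit range are placeholder choices and the input is synthetic; the study's exact processing is not reproduced.

```python
# Rosenstein-style maximum Lyapunov exponent estimate from one kinematic signal.
import numpy as np

def max_lyapunov(signal, dim=5, tau=10, fit_len=50, fs=100.0):
    n = len(signal) - (dim - 1) * tau
    emb = np.column_stack([signal[i * tau:i * tau + n] for i in range(dim)])
    lyap_curves = []
    for i in range(n - fit_len):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[max(0, i - tau * dim):i + tau * dim] = np.inf    # exclude temporally close points
        d[n - fit_len:] = np.inf                           # need room to track divergence
        j = np.argmin(d)                                   # nearest neighbour
        div = np.linalg.norm(emb[i:i + fit_len] - emb[j:j + fit_len], axis=1)
        lyap_curves.append(np.log(div + 1e-12))
    mean_log_div = np.mean(lyap_curves, axis=0)
    t = np.arange(fit_len) / fs
    slope, _ = np.polyfit(t, mean_log_div, 1)              # exponent = initial slope
    return slope

x = np.sin(np.linspace(0, 60, 3000)) + 0.05 * np.random.randn(3000)  # synthetic signal
print(max_lyapunov(x))
```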
Inertial motion capture system for biomechanical analysis in pressure suits
NASA Astrophysics Data System (ADS)
Di Capua, Massimiliano
A non-invasive system has been developed at the University of Maryland Space Systems Laboratory with the goal of providing a new capability for quantifying the motion of the human inside a space suit. Based on an array of six microprocessors and eighteen microelectromechanical (MEMS) inertial measurement units (IMUs), the Body Pose Measurement System (BPMS) allows the monitoring of the kinematics of the suit occupant in an unobtrusive, self-contained, lightweight and compact fashion, without requiring any external equipment such as that necessary with modern optical motion capture systems. BPMS measures and stores the accelerations, angular rates and magnetic fields acting upon each IMU, which are mounted on the head, torso, and each segment of each limb. In order to convert the raw data into a more useful form, such as a set of body segment angles quantifying pose and motion, a series of geometrical models and a non-linear complementary filter were implemented. The first portion of this work focuses on assessing system performance, which was measured by comparing the BPMS filtered data against rigid-body angles measured through an external VICON optical motion capture system. This type of system is the industry standard and is used here for independent measurement of body pose angles. By comparing the two sets of data, performance metrics such as BPMS system operational conditions, accuracy, and drift were evaluated and correlated against the VICON data. After the system and models were verified and their capabilities and limitations assessed, a series of pressure suit evaluations was conducted. Three different pressure suits were used to identify the relationship between usable range of motion and internal suit pressure. In addition to addressing range of motion, a series of exploration tasks was also performed, recorded, and analysed in order to identify different motion patterns and trajectories as suit pressure is increased and overall suit mobility is reduced. The focus of these evaluations was to quantify the reduction in mobility when operating in any of the evaluated pressure suits. These data should be of value in defining new low-cost alternatives for pressure suit performance verification and evaluation. This work demonstrates that the BPMS technology is a viable alternative or companion to optical motion capture; while BPMS is the first motion capture system that has been designed specifically to measure the kinematics of a human in a pressure suit, its capabilities are not constrained to being just a measurement tool. The last section of the manuscript is devoted to possible future uses for the system, with a specific focus on pressure suit applications such as the use of BPMS as a master control interface for robot teleoperation, as well as an input interface for future robotically augmented pressure suits.
NASA Astrophysics Data System (ADS)
Tinoco, Hector A.; Ovalle, Alex M.; Vargas, Carlos A.; Cardona, María J.
2015-09-01
In the context of industrial engineering, predetermined time systems (PTS) play an important role in workplaces because inefficiencies are found in assembly processes that require manual manipulation. In this study, an approach is proposed with the aim of analyzing time and motions in a manual process using a motion capture system embedded in a virtual environment. The motion capture system tracks passive IR markers located on the hands to record the position of each one. For our purpose, a real workplace is virtually represented by domains to create a virtual workplace based on basic geometries. Motion capture data are combined with the virtual workplace to simulate the operations carried out in it, and a time and motion analysis is completed by means of an algorithm. To test the analysis methodology, a case study was intentionally designed using and violating the principles of motion economy. In the results, it was possible to observe where the hands never crossed as well as where the hands passed through the same place. In addition, the activities done in each zone were observed and some known deficiencies in the layout of the workplace were identified by computational analysis. Using a frequency analysis of hand velocities, errors in the chosen assembly method were revealed, showing differences in the hand velocities. An opportunity is seen to classify some quantifiable aspects that are not easily identified in a traditional time and motion analysis. The automated analysis is considered the main contribution of this study. In the industrial context, a great application is perceived in terms of monitoring the workplace to analyze repeatability, PTS, and workplace and labor activity redistribution using the proposed methodology.
A novel validation and calibration method for motion capture systems based on micro-triangulation.
Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M
2018-06-06
Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of an engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, caused by a scaling error, was reduced to 0.77 mm, while the correlation of errors with their distance from the origin decreased from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in a scaling compensation similar to the surveying method or to direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects compared to previous techniques, but they address an error type which has not been and cannot be studied with the previous validation methods.
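Two quantities central to this validation, the RMSE between camera-measured and surveyed marker coordinates and a fitted global scale factor used to compensate a scaling error, can be sketched as follows; the data and the 0.1% scale error are invented for illustration.

```python
# RMSE between coordinate sets and a least-squares global scale compensation.
import numpy as np

def rmse(a, b):
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def fit_scale(measured, reference):
    """Least-squares scale s minimizing ||s*measured - reference|| about the origin."""
    return np.sum(measured * reference) / np.sum(measured * measured)

reference = np.random.uniform(-2, 2, size=(20, 3))             # surveyed coordinates (m)
measured = 1.001 * reference + np.random.randn(20, 3) * 4e-4   # 0.1% scale error + noise
print("raw RMSE (mm):      ", 1000 * rmse(measured, reference))
s = fit_scale(measured, reference)
print("rescaled RMSE (mm): ", 1000 * rmse(s * measured, reference))
```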
Quantitative analysis of arm movement smoothness
NASA Astrophysics Data System (ADS)
Szczesna, Agnieszka; Błaszczyszyn, Monika
2017-07-01
The paper deals with the problem of quantitative smoothness analysis of motion data. We investigated values of movement units, fluidity and jerk for the healthy and paralyzed arms of patients with hemiparesis after stroke. Patients performed a drinking task. To validate the approach, the movements of 24 patients were captured using an optical motion capture system.
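One common smoothness measure of the kind mentioned above, a dimensionless (normalized) jerk cost computed from a captured hand trajectory, is sketched below; the exact metrics and normalizations used in the paper (movement units, fluidity, jerk) are not reproduced.

```python
# Dimensionless jerk cost from a hand path: higher values mean less smooth motion.
import numpy as np

def normalized_jerk(position, fs=100.0):
    """position: (n, 3) hand path in metres; returns a dimensionless jerk cost."""
    dt = 1.0 / fs
    vel = np.gradient(position, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    duration = (len(position) - 1) * dt
    path_length = np.sum(np.linalg.norm(np.diff(position, axis=0), axis=1))
    jerk_sq = np.sum(np.sum(jerk ** 2, axis=1)) * dt
    return np.sqrt(0.5 * jerk_sq * duration ** 5 / path_length ** 2)

t = np.linspace(0, 1, 101)
smooth_reach = np.column_stack([t ** 3 * (10 - 15 * t + 6 * t ** 2), 0 * t, 0 * t])  # min-jerk
jerky_reach = smooth_reach + 0.002 * np.column_stack([np.sin(40 * t), 0 * t, 0 * t])
print(normalized_jerk(smooth_reach), normalized_jerk(jerky_reach))  # second is larger
```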
ERIC Educational Resources Information Center
Brunkan, Melissa C.
2016-01-01
The purpose of this study was to validate previous research that suggests using movement in conjunction with singing tasks can affect intonation and perception of the task. Singers (N = 49) were video and audio recorded, using a motion capture system, while singing a phrase from a familiar song, first with no motion, and then while doing a low,…
Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio
2014-01-09
Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size, weight and the wireless connectivity meet the requirement of minimal obtrusiveness and allow scientists to analyze children's motion in daily life contexts. Typical use of magneto and inertial measurement unit (M-IMU) motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg-Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.
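The sensor-to-functional-frame alignment step can be sketched generically. The example below is not the paper's novel cost function; it only illustrates retrieving a rotation with a Levenberg-Marquardt least-squares fit over paired direction measurements, using SciPy, on synthetic data.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical calibration data: directions measured in the sensor frame (SF)
# and the same physical directions expressed in the functional frame (FF).
rng = np.random.default_rng(1)
true_R = Rotation.from_euler("xyz", [20, -35, 10], degrees=True)
ff_dirs = rng.normal(size=(30, 3))
ff_dirs /= np.linalg.norm(ff_dirs, axis=1, keepdims=True)
sf_dirs = true_R.inv().apply(ff_dirs) + rng.normal(0, 0.01, size=ff_dirs.shape)

def residuals(rotvec):
    """Misalignment between FF directions and rotated SF directions."""
    R = Rotation.from_rotvec(rotvec)
    return (ff_dirs - R.apply(sf_dirs)).ravel()

# Levenberg-Marquardt fit of the SF-to-FF rotation (3-parameter rotation vector).
sol = least_squares(residuals, x0=np.zeros(3), method="lm")
est_R = Rotation.from_rotvec(sol.x)
print("angle error (deg):", np.degrees((est_R.inv() * true_R).magnitude()))
```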
Reference equations of motion for automatic rendezvous and capture
NASA Technical Reports Server (NTRS)
Henderson, David M.
1992-01-01
The analysis presented in this paper defines the reference coordinate frames, equations of motion, and control parameters necessary to model the relative motion and attitude of spacecraft in close proximity with another space system during the Automatic Rendezvous and Capture phase of an on-orbit operation. The relative docking port target position vector and the attitude control matrix are defined based upon an arbitrary spacecraft design. These translation and rotation control parameters could be used to drive the error signal input to the vehicle flight control system. Measurements for these control parameters would become the bases for an autopilot or feedback control system (FCS) design for a specific spacecraft.
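Relative motion in close proximity is commonly linearized with the Clohessy-Wiltshire (Hill) equations. The paper defines its own reference frames and equations of motion, so the form below is shown only as the standard starting point for such analyses, with x radial, y along-track, z cross-track, n the target orbit's mean motion, and a_x, a_y, a_z the control accelerations.

```latex
\begin{aligned}
\ddot{x} - 2n\dot{y} - 3n^{2}x &= a_{x},\\
\ddot{y} + 2n\dot{x} &= a_{y},\\
\ddot{z} + n^{2}z &= a_{z}.
\end{aligned}
```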
Method for measuring tri-axial lumbar motion angles using wearable sheet stretch sensors
Nakamoto, Hiroyuki; Yamaji, Tokiya; Ootaka, Hideo; Bessho, Yusuke; Nakamura, Ryo; Ono, Rei
2017-01-01
Background Body movements, such as trunk flexion and rotation, are risk factors for low back pain in occupational settings, especially in healthcare workers. Wearable motion capture systems are potentially useful to monitor lower back movement in healthcare workers to help avoid the risk factors. In this study, we propose a novel system using sheet stretch sensors and investigate the system's validity for estimating lower back movement. Methods Six volunteers (female:male = 1:1, mean age: 24.8 ± 4.0 years, height 166.7 ± 5.6 cm, weight 56.3 ± 7.6 kg) participated in test protocols that involved executing seven types of movements. The movements were three uniaxial trunk movements (i.e., trunk flexion-extension, trunk side-bending, and trunk rotation) and four multiaxial trunk movements (i.e., flexion + rotation, flexion + side-bending, side-bending + rotation, and moving around the cranial–caudal axis). Each trial lasted for approximately 30 s. Four stretch sensors were attached to each participant's lower back. The lumbar motion angles were estimated using simple linear regression analysis based on the stretch sensor outputs and compared with those obtained by an optical motion capture system. Results The estimated lumbar motion angles showed a good correlation with the actual angles, with correlation values of r = 0.68 (SD = 0.35), r = 0.60 (SD = 0.19), and r = 0.72 (SD = 0.18) for the flexion-extension, side bending, and rotation movements, respectively (all P < 0.05). The estimation errors in all three directions were less than 3°. Conclusion The stretch sensors mounted on the back provided reasonable estimates of the lumbar motion angles. The novel motion capture system provided three directional angles without capture space limits. The wearable system has great potential to monitor lower back movement in healthcare workers and to help prevent low back pain. PMID:29020053
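The regression step can be illustrated with a small sketch on synthetic data (not the study's recordings): the outputs of four stretch sensors are mapped to a mocap-measured angle with ordinary least squares, one regression per anatomical direction.

```python
import numpy as np

# Hypothetical calibration: four stretch-sensor outputs (columns) recorded while
# an optical system measured the lumbar flexion-extension angle (degrees).
rng = np.random.default_rng(2)
n_frames = 600
stretch = rng.uniform(0, 1, size=(n_frames, 4))
true_weights = np.array([35.0, -12.0, 8.0, 20.0])
angle_mocap = stretch @ true_weights + rng.normal(0, 1.5, size=n_frames)

# Simple linear regression (least squares with an intercept term).
X = np.column_stack([stretch, np.ones(n_frames)])
coef, *_ = np.linalg.lstsq(X, angle_mocap, rcond=None)
angle_est = X @ coef

error = angle_est - angle_mocap
print("r =", np.corrcoef(angle_est, angle_mocap)[0, 1])
print("mean abs error (deg):", np.abs(error).mean())
```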
Applied research of embedded WiFi technology in the motion capture system
NASA Astrophysics Data System (ADS)
Gui, Haixia
2012-04-01
Embedded wireless WiFi technology is one of the current hot topics in wireless network applications. This paper first introduces the definition and characteristics of WiFi. Building on the advantages of WiFi, such as cable-free operation, simple setup and stable transmission, the paper then presents a system design for applying embedded wireless WiFi technology to a motion capture system. It also verifies the effectiveness of the design through the WiFi-based wireless sensor hardware and software.
Motion visualization and estimation for flapping wing systems
NASA Astrophysics Data System (ADS)
Hsu, Tzu-Sheng Shane; Fitzgerald, Timothy; Nguyen, Vincent Phuc; Patel, Trisha; Balachandran, Balakumar
2017-04-01
Studies of fluid-structure interactions associated with flexible structures such as flapping wings require the capture and quantification of large motions of bodies that may be opaque. As a case study, motion capture of a free flying Manduca sexta, also known as hawkmoth, is considered by using three synchronized high-speed cameras. A solid finite element (FE) representation is used as a reference body and successive snapshots in time of the displacement fields are reconstructed via an optimization procedure. One of the original aspects of this work is the formulation of an objective function and the use of shadow matching and strain-energy regularization. With this objective function, the authors penalize the projection differences between silhouettes of the captured images and the FE representation of the deformed body. The process and procedures undertaken to go from high-speed videography to motion estimation are discussed, and snapshots of representative results are presented. Finally, the captured free-flight motion is also characterized and quantified.
NASA Technical Reports Server (NTRS)
Lee, Mun Wai
2015-01-01
Crew exercise is important during long-duration space flight not only for maintaining health and fitness but also for preventing adverse health problems, such as losses in muscle strength and bone density. Monitoring crew exercise via motion capture and kinematic analysis aids understanding of the effects of microgravity on exercise and helps ensure that exercise prescriptions are effective. Intelligent Automation, Inc., has developed ESPRIT to monitor exercise activities, detect body markers, extract image features, and recover three-dimensional (3D) kinematic body poses. The system relies on prior knowledge and modeling of the human body and on advanced statistical inference techniques to achieve robust and accurate motion capture. In Phase I, the company demonstrated motion capture of several exercises, including walking, curling, and dead lifting. Phase II efforts focused on enhancing algorithms and delivering an ESPRIT prototype for testing and demonstration.
2012-03-19
Peter Ma, EV74, wears a suit covered with spherical reflectors that enable his motions to be tracked by the motion capture system. The human model in red on the screen in the background represents the system-generated image of Peter's position.
Song, Young Seop; Yang, Kyung Yong; Youn, Kibum; Yoon, Chiyul; Yeom, Jiwoon; Hwang, Hyeoncheol; Lee, Jehee; Kim, Keewon
2016-08-01
The aim of this study was to compare an optical motion capture system (MoCap), an attitude and heading reference system (AHRS) sensor, and the Microsoft Kinect for the continuous measurement of cervical range of motion (ROM). Fifteen healthy adult subjects were asked to sit in front of the Kinect camera with optical markers and AHRS sensors attached to the body in a room equipped with optical motion capture cameras. Subjects were instructed to independently perform axial rotation followed by flexion/extension and lateral bending. Each movement was repeated 5 times while being measured simultaneously with the 3 devices. Using the MoCap system as the gold standard, the validity of AHRS and Kinect for measurement of cervical ROM was assessed by calculating the correlation coefficient and Bland-Altman plot with 95% limits of agreement (LoA). MoCap and AHRS showed fair agreement (95% LoA<10°), while MoCap and Kinect showed less favorable agreement (95% LoA>10°) for measuring ROM in all directions. Intraclass correlation coefficient (ICC) values between MoCap and AHRS in the -40° to 40° range were excellent for flexion/extension and lateral bending (ICC>0.9). ICC values were also fair for axial rotation (ICC>0.8). ICC values between MoCap and the Kinect system in the -40° to 40° range were fair for all motions. Our study showed the feasibility of using AHRS to measure cervical ROM during continuous motion with an acceptable range of error. AHRS and the Kinect system can also be used for continuous monitoring of flexion/extension and lateral bending in the ordinary range.
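The Bland-Altman agreement metric used above is straightforward to compute. The sketch below is illustrative only, with synthetic angle data standing in for the MoCap, AHRS and Kinect recordings.

```python
import numpy as np

def bland_altman_loa(reference, device):
    """Bias and 95% limits of agreement between two continuous measurements."""
    diff = device - reference
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical cervical flexion/extension angles (deg) sampled during motion.
rng = np.random.default_rng(3)
mocap = rng.uniform(-40, 40, size=500)
ahrs = mocap + rng.normal(0.5, 2.0, size=500)    # small bias, small spread
kinect = mocap + rng.normal(1.0, 7.0, size=500)  # larger spread

for name, device in [("AHRS", ahrs), ("Kinect", kinect)]:
    bias, lo, hi = bland_altman_loa(mocap, device)
    print(f"{name}: bias {bias:+.1f} deg, 95% LoA [{lo:.1f}, {hi:.1f}] deg")
```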
Inertial Motion Capture Costume Design Study
Szczęsna, Agnieszka; Skurowski, Przemysław; Lach, Ewa; Pruszowski, Przemysław; Pęszor, Damian; Paszkuta, Marcin; Słupik, Janusz; Lebek, Kamil; Janiak, Mateusz; Polański, Andrzej; Wojciechowski, Konrad
2017-01-01
The paper describes a scalable, wearable multi-sensor system for motion capture based on inertial measurement units (IMUs). Such a unit is composed of an accelerometer, a gyroscope and a magnetometer. The final quality of an obtained motion arises from all the individual parts of the described system. The proposed system is a sequence of the following stages: sensor data acquisition, sensor orientation estimation, system calibration, pose estimation and data visualisation. The construction of the system's architecture with the dataflow programming paradigm makes it easy to add, remove and replace the data processing steps. The modular architecture of the system allows the effortless introduction of new sensor orientation estimation algorithms. The original contribution of the paper is the design study of the individual components used in the motion capture system. The two key steps of the system design are explored in this paper: the evaluation of sensors and of algorithms for orientation estimation. The three chosen algorithms have been implemented and investigated as part of the experiment. Because the selection of the sensor has a significant impact on the final result, the sensor evaluation process is also explained and tested. The experimental results confirmed that the choice of sensor and orientation estimation algorithm affects the quality of the final results. PMID:28304337
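The paper evaluates several sensor orientation estimation algorithms; as one simple representative of that class (not necessarily among those tested), the sketch below fuses gyroscope and accelerometer data with a single-axis complementary filter on synthetic IMU signals.

```python
import numpy as np

def complementary_tilt(gyro_y, acc_x, acc_z, fs, alpha=0.98):
    """Pitch estimate (rad) fusing a gyroscope rate with an accelerometer tilt.

    The gyroscope is integrated for short-term accuracy and slowly corrected
    toward the accelerometer-derived angle to suppress drift."""
    dt = 1.0 / fs
    pitch = np.zeros(len(gyro_y))
    for k in range(1, len(gyro_y)):
        acc_pitch = np.arctan2(-acc_x[k], acc_z[k])
        pitch[k] = alpha * (pitch[k - 1] + gyro_y[k] * dt) + (1 - alpha) * acc_pitch
    return pitch

# Hypothetical IMU stream: slow oscillation about the pitch axis at 100 Hz.
fs = 100
t = np.arange(0, 10, 1 / fs)
true_pitch = 0.3 * np.sin(2 * np.pi * 0.2 * t)
gyro_y = np.gradient(true_pitch, 1 / fs) + np.random.default_rng(4).normal(0, 0.01, t.size)
acc_x, acc_z = -np.sin(true_pitch), np.cos(true_pitch)   # gravity components
est = complementary_tilt(gyro_y, acc_x, acc_z, fs)
print("RMS error (deg):", np.degrees(np.sqrt(np.mean((est - true_pitch) ** 2))))
```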
Tung, James Y; Lulic, Tea; Gonzalez, Dave A; Tran, Johnathan; Dickerson, Clark R; Roy, Eric A
2015-05-01
Although motion analysis is frequently employed in upper limb motor assessment (e.g. visually-guided reaching), such systems are resource-intensive and limited to laboratory settings. This study evaluated the reliability and accuracy of a new markerless motion capture device, the Leap Motion controller, for measuring finger position. Testing conditions that influence reliability and agreement between the Leap and a research-grade motion capture system were examined. Nine healthy young adults pointed to 15 targets on a computer screen under two conditions: (1) touching the target (touch) and (2) 4 cm away from the target (no-touch). Leap data were compared to an Optotrak marker attached to the index finger. Across all trials, the root mean square (RMS) error of the Leap system was 17.30 ± 9.56 mm (mean ± SD), sampled at 65.47 ± 21.53 Hz. The percentage of viable trials and the mean sampling rate were significantly lower in the touch condition (44% versus 64%, p < 0.001; 52.02 ± 2.93 versus 73.98 ± 4.48 Hz, p = 0.003). While linear correlations were high (horizontal: r(2) = 0.995, vertical r(2) = 0.945), the limits of agreement were large (horizontal: -22.02 to +26.80 mm, vertical: -29.41 to +30.14 mm). While not as precise as more sophisticated optical motion capture systems, the Leap Motion controller is sufficiently reliable for measuring motor performance in pointing tasks that do not require high positional accuracy (e.g. reaction time, Fitts', trails, bimanual coordination).
Biomechanical Evaluation of an Electric Power-Assisted Bicycle by a Musculoskeletal Model
NASA Astrophysics Data System (ADS)
Takehara, Shoichiro; Murakami, Musashi; Hase, Kazunori
In this study, we construct an evaluation system for the muscular activity of the lower limbs when a human pedals an electric power-assisted bicycle. The evaluation system is composed of an electric power-assisted bicycle, a numerical simulator and a motion capture system. The electric power-assisted bicycle in this study has a pedal with an attached force sensor. The numerical simulator for pedaling motion is a musculoskeletal model of a human. The motion capture system measures the joint angles of the lower limb. We examine the influence of the electric power-assist force on each muscle of the human trunk and legs. First, a pedaling motion experiment is performed. Then, the musculoskeletal model is evaluated using the experimental data. We discuss the influence of the electric power assist on each muscle. It is found that muscular activity is decreased by the electric power-assisted bicycle, and the reduction in the muscular force required for the pedaling motion is shown quantitatively for every muscle.
Restoration of motion blurred images
NASA Astrophysics Data System (ADS)
Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.
2017-08-01
Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, and relative motion between the camera and objects, among others. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of a captured image in order to first estimate the degradation parameters, and then to restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The obtained results are characterized in terms of image restoration accuracy, as measured by an objective criterion.
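The restoration step can be illustrated with a minimal sketch. This is not the authors' spectrum-based parameter estimation: it assumes the blur length and direction are already known and simply applies a frequency-domain Wiener filter to a synthetically blurred image.

```python
import numpy as np

def motion_psf(shape, length):
    """Horizontal linear motion blur kernel embedded in a full-size array."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def wiener_restore(blurred, psf, k=0.01):
    """Frequency-domain Wiener filter with a constant noise-to-signal ratio k."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))

# Hypothetical scene: a synthetic image blurred by 15 pixels of horizontal motion.
rng = np.random.default_rng(5)
image = np.zeros((128, 128))
image[40:90, 40:90] = 1.0
psf = motion_psf(image.shape, 15)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))
blurred += rng.normal(0, 0.01, image.shape)

restored = wiener_restore(blurred, psf, k=0.01)
print("blurred error: ", np.abs(blurred - image).mean())
print("restored error:", np.abs(restored - image).mean())
```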
Integration of time as a factor in ergonomic simulation.
Walther, Mario; Muñoz, Begoña Toledo
2012-01-01
The paper describes the application of a simulation-based ergonomic evaluation. Within a pilot project, the algorithms of the European Assembly Worksheet screening method were transferred into an existing digital human model. Movement data were recorded with a specially developed hybrid motion capture system. A prototype of the system was built and is currently being tested at the Volkswagen Group. First results showed the feasibility of simulation-based ergonomic evaluation with motion capture.
Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications
Calderita, Luis Vicente; Bandera, Juan Pedro; Bustos, Pablo; Skiadopoulos, Andreas
2013-01-01
Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematic constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost. PMID:23845933
Video repairing under variable illumination using cyclic motions.
Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung
2006-05-01
This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We tested our system on several difficult examples with variable illumination, in which the capturing camera was either stationary or in motion.
Health Problems Discovery from Motion-Capture Data of Elderly
NASA Astrophysics Data System (ADS)
Pogorelc, B.; Gams, M.
The rapid aging of the populations of developed countries could exceed society's capacity to care for the elderly. To help address this problem, we propose a system for the automatic discovery of health problems from motion-capture data of elderly gait. The gait of the user is captured with a motion capture system consisting of tags attached to the body and sensors situated in the apartment. The positions of the tags are acquired by the sensors, and the resulting time series of position coordinates are analyzed with machine learning algorithms to identify the specific health problem. We propose novel features for training a machine learning classifier that classifies the user's gait as: i) normal, ii) with hemiplegia, iii) with Parkinson's disease, iv) with pain in the back, or v) with pain in the leg. Results show that naive Bayes needs more tags and less noise to reach a classification accuracy of 98% than support vector machines need to reach 99%.
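A bare-bones version of the classification comparison can be sketched as follows. The gait features and class structure here are synthetic placeholders, not the paper's features derived from tag positions; the example only shows how naive Bayes and SVM accuracies would be compared by cross-validation.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical gait features (e.g., step length, asymmetry, trunk sway) for the
# five classes discussed above; real features would come from the tag positions.
rng = np.random.default_rng(6)
classes = 5
X = np.vstack([rng.normal(c, 1.0, size=(100, 6)) for c in range(classes)])
y = np.repeat(np.arange(classes), 100)

for name, clf in [("naive Bayes", GaussianNB()), ("SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2%} cross-validated accuracy")
```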
System and Method for Measuring Skin Movement and Strain and Related Techniques
NASA Technical Reports Server (NTRS)
Newman, Dava J. (Inventor); Wessendorf, Ashley M. (Inventor)
2015-01-01
Described herein are systems and techniques for a motion capture system and a three-dimensional (3D) tracking system used to record body position and/or movements/motions and using the data to measure skin strain (a strain field) all along the body while a joint is in motion (dynamic) as well as in a fixed position (static). The data and technique can be used to quantify strains, calculate 3D contours, and derive patterns believed to reveal skin's properties during natural motions.
LTBP Program's Literature Review on Weigh-in-Motion System
DOT National Transportation Integrated Search
2016-06-01
Truck size and weight are regulated using Federal and State legislation and policies to ensure safety and preserve bridge and highway infrastructure. Weigh-in-motion (WIM) systems can capture the weight and other defining characteristics of the vehicles...
Szczęsna, Agnieszka; Pruszowski, Przemysław
2016-01-01
Inertial orientation tracking is still an area of active research, especially in the context of outdoor, real-time, human motion capture. Existing systems either propose loosely coupled tracking approaches, where each segment is considered independently and the resulting drawbacks are accepted, or tightly coupled solutions that are limited to a fixed chain with few segments. Such solutions have no flexibility to change the skeleton structure, are dedicated to a specific set of joints, and have high computational complexity. This paper describes the proposal of a new model-based extended quaternion Kalman filter that allows for the estimation of orientation based on outputs from the inertial measurement unit sensors. The filter considers interdependencies resulting from the construction of the kinematic chain so that the orientation estimation is more accurate. The proposed solution is a universal filter that does not predetermine the degrees of freedom at the connections between segments of the model. For validation, the motion of a three-segment pendulum captured by an optical motion capture system is used. The next step in the research will be to use this method for inertial motion capture with a human skeleton model.
Evaluation of a Gait Assessment Module Using 3D Motion Capture Technology
Baskwill, Amanda J.; Belli, Patricia; Kelleher, Leila
2017-01-01
Background Gait analysis is the study of human locomotion. In massage therapy, this observation is part of an assessment process that informs treatment planning. Massage therapy students must apply the theory of gait assessment to simulated patients. At Humber College, the gait assessment module traditionally consists of a textbook reading and a three-hour, in-class session in which students perform gait assessment on each other. In 2015, Humber College acquired a three-dimensional motion capture system. Purpose The purpose was to evaluate the use of 3D motion capture in a gait assessment module compared to the traditional gait assessment module. Participants Semester 2 massage therapy students who were enrolled in Massage Theory 2 (n = 38). Research Design Quasi-experimental, wait-list comparison study. Intervention The intervention group participated in an in-class session with a Qualisys motion capture system. Main Outcome Measure(s) The outcomes included knowledge and application of gait assessment theory as measured by quizzes, and students’ satisfaction as measured through a questionnaire. Results There were no statistically significant differences in baseline and post-module knowledge between both groups (pre-module: p = .46; post-module: p = .63). There was also no difference between groups on the final application question (p = .13). The intervention group enjoyed the in-class session because they could visualize the content, whereas the comparison group enjoyed the interactivity of the session. The intervention group recommended adding the assessment of gait on their classmates to their experience. Both groups noted more time was needed for the gait assessment module. Conclusions Based on the results of this study, it is recommended that the gait assessment module combine both the traditional in-class session and the 3D motion capture system. PMID:28293329
Computational cameras for moving iris recognition
NASA Astrophysics Data System (ADS)
McCloskey, Scott; Venkatesha, Sharath
2015-05-01
Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.
Vakanski, A; Ferguson, JM; Lee, S
2016-01-01
Objective The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. Methods The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons. PMID:28111643
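The paper's model is a recurrent autoencoder with a mixture density subnet; the much simpler sketch below captures only the final evaluation idea, scoring a subject's motion frames against a Gaussian mixture fitted to reference data and using the mean log-likelihood as a consistency metric. All data here are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical reference exercise: joint-angle frames recorded from correct
# repetitions demonstrated by a physiotherapist (rows = frames, cols = joints).
rng = np.random.default_rng(7)
reference = rng.normal(0, 1, size=(2000, 10))
patient_good = rng.normal(0, 1.05, size=(300, 10))    # close to the reference
patient_poor = rng.normal(0.8, 1.6, size=(300, 10))   # deviating performance

gmm = GaussianMixture(n_components=5, random_state=0).fit(reference)
# score() returns the mean log-likelihood per frame, the metric described above.
print("good performance:", gmm.score(patient_good))
print("poor performance:", gmm.score(patient_poor))
```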
Vakanski, A; Ferguson, J M; Lee, S
2016-12-01
The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons.
MPCV Exercise Operational Volume Analysis
NASA Technical Reports Server (NTRS)
Godfrey, A.; Humphreys, B.; Funk, J.; Perusek, G.; Lewandowski, B. E.
2017-01-01
In order to minimize the loss of bone and muscle mass during spaceflight, the Multi-purpose Crew Vehicle (MPCV) will include an exercise device and enough free space within the cabin for astronauts to use the device effectively. The NASA Digital Astronaut Project (DAP) has been tasked with using computational modeling to aid in determining whether or not the available operational volume is sufficient for in-flight exercise. Motion capture data was acquired using a 12-camera Smart DX system (BTS Bioengineering, Brooklyn, NY), while exercisers performed 9 resistive exercises without volume restrictions in a 1g environment. Data were collected from two male subjects, one being in the 99th percentile of height and the other in the 50th percentile of height, using between 25 and 60 motion capture markers. Motion capture data was also recorded as a third subject, also near the 50th percentile in height, performed aerobic rowing during a parabolic flight. A motion capture system and algorithms developed previously and presented at last year's HRP-IWS were utilized to collect and process the data from the parabolic flight [1]. These motions were applied to a scaled version of a biomechanical model within the biomechanical modeling software OpenSim [2], and the volume sweeps of the motions were visually assessed against an imported CAD model of the operational volume. Further numerical analysis was performed using Matlab (Mathworks, Natick, MA) and the OpenSim API. This analysis determined the location of every marker in space over the duration of the exercise motion, and the distance of each marker to the nearest surface of the volume. Containment of the exercise motions within the operational volume was determined on a per-exercise and per-subject basis. The orientation of the exerciser and the angle of the footplate were two important factors upon which containment was dependent. Regions where the exercise motion exceeds the bounds of the operational volume have been identified by determining which markers from the motion capture exceed the operational volume and by how much. A credibility assessment of this analysis was performed in accordance with NASA-STD-7009 prior to delivery to the MPCV program.
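The containment check can be illustrated with a simplified proxy. The actual analysis used OpenSim and a CAD model of the operational volume; the sketch below instead treats the volume as an axis-aligned box and reports, per marker, the worst excursion outside it, on hypothetical trajectories.

```python
import numpy as np

def containment_report(markers, box_min, box_max):
    """Per-marker worst-case excursion outside an axis-aligned box (m).

    markers: array of shape (frames, n_markers, 3). Positive values mean the
    marker left the volume by that distance; zero means it stayed inside."""
    below = np.maximum(box_min - markers, 0.0)
    above = np.maximum(markers - box_max, 0.0)
    excursion = np.linalg.norm(below + above, axis=2)  # distance outside, per frame
    return excursion.max(axis=0)                       # worst case over the exercise

# Hypothetical exercise volume (m) and trajectories for 3 markers over 1000 frames.
box_min, box_max = np.array([-0.5, -0.5, 0.0]), np.array([0.5, 0.5, 2.0])
rng = np.random.default_rng(8)
markers = rng.uniform(-0.6, 0.6, size=(1000, 3, 3)) + np.array([0, 0, 1.0])
print("worst excursion per marker (m):", containment_report(markers, box_min, box_max))
```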
NASA Technical Reports Server (NTRS)
Jackson, Mariea Dunn; Dischinger, Charles; Stambolian, Damon; Henderson, Gena
2012-01-01
Spacecraft and launch vehicle ground processing activities require a variety of unique human activities. These activities are being documented in a primitive motion capture library. The library will be used by human factors engineering in the future to infuse real-to-life human activities into the CAD models to verify ground systems human factors requirements. As the primitive models are being developed for the library, the project has selected several current human factors issues to be addressed for the SLS and Orion launch systems. This paper explains how the motion capture of unique ground systems activities is being used to verify the human factors analysis requirements for ground systems used to process the SLS and Orion vehicles, and how the primitive models will be applied to future spacecraft and launch vehicle processing.
Postures and Motions Library Development for Verification of Ground Crew Human Factors Requirements
NASA Technical Reports Server (NTRS)
Stambolian, Damon; Henderson, Gena; Jackson, Mariea Dunn; Dischinger, Charles
2013-01-01
Spacecraft and launch vehicle ground processing activities require a variety of unique human activities. These activities are being documented in a primitive motion capture library. The library will be used by human factors engineering analysts to infuse real to life human activities into the CAD models to verify ground systems human factors requirements. As the primitive models are being developed for the library, the project has selected several current human factors issues to be addressed for the Space Launch System (SLS) and Orion launch systems. This paper explains how the motion capture of unique ground systems activities is being used to verify the human factors engineering requirements for ground systems used to process the SLS and Orion vehicles, and how the primitive models will be applied to future spacecraft and launch vehicle processing.
Capture by colour: evidence for dimension-specific singleton capture.
Harris, Anthony M; Becker, Stefanie I; Remington, Roger W
2015-10-01
Previous work on attentional capture has shown the attentional system to be quite flexible in the stimulus properties it can be set to respond to. Several different attentional "modes" have been identified. Feature search mode allows attention to be set for specific features of a target (e.g., red). Singleton detection mode sets attention to respond to any discrepant item ("singleton") in the display. Relational search sets attention for the relative properties of the target in relation to the distractors (e.g., redder, larger). Recently, a new attentional mode was proposed that sets attention to respond to any singleton within a particular feature dimension (e.g., colour; Folk & Anderson, 2010). We tested this proposal against the predictions of previously established attentional modes. In a spatial cueing paradigm, participants searched for a colour target that was randomly either red or green. The nature of the attentional control setting was probed by presenting an irrelevant singleton cue prior to the target display and assessing whether it attracted attention. In all experiments, the cues were red, green, blue, or a white stimulus rapidly rotated (motion cue). The results of three experiments support the existence of a "colour singleton set," finding that all colour cues captured attention strongly, while motion cues captured attention only weakly or not at all. Notably, we also found that capture by motion cues in search for colour targets was moderated by their frequency; rare motion cues captured attention (weakly), while frequent motion cues did not.
Biomechanics Analysis of Combat Sport (Silat) By Using Motion Capture System
NASA Astrophysics Data System (ADS)
Zulhilmi Kaharuddin, Muhammad; Badriah Khairu Razak, Siti; Ikram Kushairi, Muhammad; Syawal Abd. Rahman, Mohamed; An, Wee Chang; Ngali, Z.; Siswanto, W. A.; Salleh, S. M.; Yusup, E. M.
2017-01-01
‘Silat’ is a traditional Malay martial art that is practiced at both amateur and professional levels. The intensity of the motion spurs scientific research in biomechanics. The main purpose of this abstract is to present the biomechanics method used in the study of ‘silat’. Using a 3D depth camera motion capture system, two subjects performed ‘Jurus Satu’ in three repetitions each. One subject was set as the benchmark for the research. The videos were captured and their data processed using the 3D depth camera server system into 16 3D body joint coordinates, which were then transformed into displacement, velocity and acceleration components using Microsoft Excel for data calculation and Matlab software for simulation of the body. The resulting data serve as an input to differentiate the two subjects’ execution of ‘Jurus Satu’. Nine primary movements, together with five secondary movements, were observed visually frame by frame in the simulation to identify the exact frame in which each movement takes place. Further analysis differentiates the two subjects’ execution by referring to the mean and standard deviation of the joints for each stated parameter. The findings provide useful data on joint kinematic parameters, help to improve the execution of ‘Jurus Satu’, and demonstrate the process of learning a relatively unknown movement through the use of a motion capture system.
The 3D Human Motion Control Through Refined Video Gesture Annotation
NASA Astrophysics Data System (ADS)
Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.
In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as the remote controllers with motion sensing technology on the Nintendo Wii [1]. In particular, video-based human computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from cumbersome game controllers. Moreover, for communication between humans and computers, video-based HCI is crucial since it is intuitive, accessible, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge, and the achievable accuracy depends heavily on each subject's characteristics and on environmental noise. Recently, 3D motion-capture data have been used for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions of specific performances (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which a column corresponds to a sub-body part and a row represents a time frame of the capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, the 3D human motion-capture data matrix does not contain pixel values and is closer to the human level of semantics.
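The column-selection idea described above is easy to illustrate. The sketch below uses a hypothetical joint ordering and random data, not an actual VICON export; it only shows how a sub-body part's motion is obtained by slicing the corresponding columns.

```python
import numpy as np

# Hypothetical motion-capture matrix: rows are time frames, columns are the
# x/y/z coordinates of each tracked joint, in a fixed column order.
joints = ["hip", "knee_r", "ankle_r", "shoulder_r", "elbow_r", "wrist_r"]
rng = np.random.default_rng(9)
motion = rng.normal(size=(500, 3 * len(joints)))   # 500 frames

def joint_columns(name):
    """Column indices holding the x/y/z coordinates of one sub-body part."""
    i = joints.index(name)
    return slice(3 * i, 3 * i + 3)

# Extracting the right-arm motion only, by selecting its columns.
right_arm = np.hstack([motion[:, joint_columns(j)]
                       for j in ("shoulder_r", "elbow_r", "wrist_r")])
print(right_arm.shape)   # (500, 9)
```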
Nearly automatic motion capture system for tracking octopus arm movements in 3D space.
Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar
2009-08-30
Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects.
A low cost PSD-based monocular motion capture system
NASA Astrophysics Data System (ADS)
Ryu, Young Kee; Oh, Choonsuk
2007-10-01
This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's PlayStation II. The system is compact, low-cost, and requires only a one-time calibration at the factory. The system includes a PSD (position sensitive detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. A micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The experimental results show that the proposed system offers compact size, low cost, easy installation, and frame rates high enough to be suitable for high-speed motion tracking in games.
NASA Astrophysics Data System (ADS)
Jebeli, Mahvash; Bilesan, Alireza; Arshi, Ahmadreza
2017-06-01
Currently available commercial motion capture systems are constrained by their space requirements and thus pose difficulties when used to develop kinematic descriptions of human movements within existing manufacturing and production cells. The Kinect sensor does not share these limitations, but it is not as accurate. The proposition made in this article is to adopt the Kinect sensor to facilitate the implementation of Health Engineering concepts in industrial environments. This article evaluates the accuracy of the Kinect sensor in providing three-dimensional kinematic data. The sensor is then utilized to assist in the modeling and simulation of worker performance within an industrial cell. For this purpose, Kinect 3D data were compared to those of a Vicon motion capture system in a gait analysis laboratory. Results indicated that the Kinect sensor exhibited a coefficient of determination of 0.9996 on the depth axis, 0.9849 along the horizontal axis, and 0.2767 on the vertical axis. The results demonstrate the competence of the Kinect sensor for use in industrial environments.
Biomechanical ToolKit: Open-source framework to visualize and process biomechanical data.
Barre, Arnaud; Armand, Stéphane
2014-04-01
The C3D file format is widely used in the biomechanical field by companies and laboratories to store motion capture system data. However, few software packages can visualize and modify the entirety of the data in a C3D file. Our objective was to develop an open-source, multi-platform framework to read, write, modify and visualize data from any motion analysis system using the standard (C3D) and proprietary file formats (used by many companies producing motion capture systems). The Biomechanical ToolKit (BTK) was developed to provide cost-effective and efficient tools for the biomechanical community to easily deal with motion analysis data. A large panel of operations is available to read, modify and process data through a C++ API, bindings for high-level languages (Matlab, Octave, and Python), and a standalone application (Mokka). All these tools are open-source and cross-platform and run on all major operating systems (Windows, Linux, MacOS X). Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
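A typical use of the Python bindings is reading a C3D acquisition and pulling one marker trajectory. The call names below follow BTK's documented Python API (which mirrors the C++ class names) but should be checked against the installed version; the file path and marker label are hypothetical.

```python
import btk  # Biomechanical ToolKit Python bindings

# Read a C3D file (hypothetical path) and pull one marker trajectory.
reader = btk.btkAcquisitionFileReader()
reader.SetFilename("gait_trial.c3d")
reader.Update()
acq = reader.GetOutput()

print("point frequency (Hz):", acq.GetPointFrequency())
print("frames:", acq.GetPointFrameNumber())

# GetValues() returns an (n_frames x 3) array of marker coordinates.
lasi = acq.GetPoint("LASI").GetValues()
print("LASI first frame:", lasi[0])
```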
Validation of the Leap Motion Controller using markered motion capture technology.
Smeragliuolo, Anna H; Hill, N Jeremy; Disla, Luis; Putrino, David
2016-06-14
The Leap Motion Controller (LMC) is a low-cost, markerless motion capture device that tracks hand, wrist and forearm position. Integration of this technology into healthcare applications has begun to occur rapidly, making validation of the LMC's data output an important research goal. Here, we perform a detailed evaluation of the kinematic data output from the LMC, and validate this output against gold-standard, markered motion capture technology. We instructed subjects to perform three clinically-relevant wrist (flexion/extension, radial/ulnar deviation) and forearm (pronation/supination) movements. The movements were simultaneously tracked using both the LMC and a marker-based motion capture system from Motion Analysis Corporation (MAC). Adjusting for known inconsistencies in the LMC sampling frequency, we compared simultaneously acquired LMC and MAC data by performing Pearson's correlation (r) and root mean square error (RMSE). Wrist flexion/extension and radial/ulnar deviation showed good overall agreement (r=0.95; RMSE=11.6°, and r=0.92; RMSE=12.4°, respectively) with the MAC system. However, when tracking forearm pronation/supination, there were serious inconsistencies in reported joint angles (r=0.79; RMSE=38.4°). Hand posture significantly influenced the quality of wrist deviation (P<0.005) and forearm supination/pronation (P<0.001), but not wrist flexion/extension (P=0.29). We conclude that the LMC is capable of providing data that are clinically meaningful for wrist flexion/extension, and perhaps wrist deviation. It cannot yet return clinically meaningful data for measuring forearm pronation/supination. Future studies should continue to validate the LMC as updated versions of their software are developed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sato, Nahoko; Nunome, Hiroyuki; Ikegami, Yasuo
2015-02-01
In hip-hop dance contests, a procedure for evaluating performances has not been clearly defined, and objective criteria for evaluation are necessary. It is assumed that most hip-hop dance techniques have common motion characteristics by which judges determine the dancer's skill level. This study aimed to extract motion characteristics that may be linked to higher evaluations by judges. Ten expert and 12 nonexpert dancers performed basic rhythmic movements at a rate of 100 beats per minute. Their movements were captured using a motion capture system, and eight judges evaluated the performances. Four kinematic parameters, including the amplitude of the body motions and the phase delay, which indicates the phase difference between two joint angles, were calculated. The two groups showed no significant differences in terms of the amplitudes of the body motions. In contrast, the phase delay between the head motion and the other body parts' motions of expert dancers who received higher scores from the judges, which was approximately a quarter cycle, produced a loop-shaped motion of the head. It is suggested that this slight phase delay was related to the judges' evaluations and that these findings may help in constructing an objective evaluation system.
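The phase-delay measure between two body segments can be approximated with a cross-correlation sketch. The signals below are idealized sinusoids at the paper's tempo, not captured dance data, and the lag-to-phase conversion assumes a known beat period.

```python
import numpy as np

def phase_delay_deg(signal_a, signal_b, beat_period_frames):
    """Phase difference (deg of one beat cycle) by which signal_a lags signal_b.

    Estimated from the lag that maximizes the cross-correlation; a delay of
    about 90 deg corresponds to the quarter-cycle lag described above."""
    a = signal_a - signal_a.mean()
    b = signal_b - signal_b.mean()
    xcorr = np.correlate(a, b, mode="full")
    lag = np.argmax(xcorr) - (len(b) - 1)
    return 360.0 * lag / beat_period_frames

# Hypothetical vertical head and chest motion at 100 beats/min, captured at 100 Hz.
fps, bpm = 100, 100
beat_period = fps * 60 / bpm                             # frames per beat
t = np.arange(0, 10, 1 / fps)
chest = np.sin(2 * np.pi * (bpm / 60) * t)
head = np.sin(2 * np.pi * (bpm / 60) * t - np.pi / 2)    # quarter-cycle behind
print("phase delay (deg):", phase_delay_deg(head, chest, beat_period))
```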
Method and System for Producing Full Motion Media to Display on a Spherical Surface
NASA Technical Reports Server (NTRS)
Starobin, Michael A. (Inventor)
2015-01-01
A method and system for producing full motion media for display on a spherical surface is described. The method may include selecting a subject of full motion media for display on a spherical surface. The method may then include capturing the selected subject as full motion media (e.g., full motion video) in a rectilinear domain. The method may then include processing the full motion media in the rectilinear domain for display on a spherical surface, such as by orienting the full motion media, adding rotation to the full motion media, processing edges of the full motion media, and/or distorting the full motion media in the rectilinear domain for instance. After processing the full motion media, the method may additionally include providing the processed full motion media to a spherical projection system, such as a Science on a Sphere system.
Miranda, Daniel L; Rainbow, Michael J; Crisco, Joseph J; Fleming, Braden C
2012-01-01
Jumping and cutting activities are investigated in many laboratories attempting to better understand the biomechanics associated with non-contact ACL injury. Optical motion capture is widely used; however, it is subject to soft tissue artifact (STA). Biplanar videoradiography offers a unique approach to collecting skeletal motion without STA. The goal of this study was to compare how STA affects the six-degree-of-freedom motion of the femur and tibia during a jump-cut maneuver associated with non-contact ACL injury. Ten volunteers performed a jump-cut maneuver while their landing leg was imaged using optical motion capture (OMC) and biplanar videoradiography. The within-bone motion differences were compared using anatomical coordinate systems for the femur and tibia, respectively. The knee joint kinematic measurements were compared during two periods: before and after ground contact. Over the entire activity, the within-bone motion differences between the two motion capture techniques were significantly lower for the tibia than the femur for two of the rotational axes (flexion/extension, internal/external) and the origin. The OMC and biplanar videoradiography knee joint kinematics were in best agreement before landing. Kinematic deviations between the two techniques increased significantly after contact. This study provides information on the kinematic discrepancies between OMC and biplanar videoradiography that can be used to optimize methods employing both technologies for studying dynamic in vivo knee kinematics and kinetics during a jump-cut maneuver. PMID:23084785
Sun, Jie; Li, Zhengdong; Pan, Shaoyou; Feng, Hao; Shao, Yu; Liu, Ningguo; Huang, Ping; Zou, Donghua; Chen, Yijiu
2018-05-01
The aim of the present study was to develop an improved method, using MADYMO multi-body simulation software combined with an optimization method and three-dimensional (3D) motion capture, for identifying the pre-impact conditions of a cyclist (walking or cycling) involved in a vehicle-bicycle accident. First, a 3D motion capture system was used to analyze coupled motions of a volunteer while walking and cycling. The motion capture results were used to define the posture of the human model during walking and cycling simulations. Then, cyclist, bicycle and vehicle models were developed. Pre-impact parameters of the models were treated as unknown design variables. Finally, a multi-objective genetic algorithm, the nondominated sorting genetic algorithm II, was used to find optimal solutions. The objective function values for the walking scenario were significantly lower than those for the cycling scenario; thus, the cyclist was more likely to have been walking with the bicycle than riding it. In the most closely matched result found, all observed contact points matched and the injury parameters correlated well with the real injuries sustained by the cyclist. Based on the real accident reconstruction, the present study indicates that MADYMO multi-body simulation software, combined with an optimization method and 3D motion capture, can be used to identify the pre-impact conditions of a cyclist involved in a vehicle-bicycle accident. Copyright © 2018. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth
2016-01-01
Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike on the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, the Exercise Physiology and Countermeasures (ExPC) project and National Space Biomedical Research Institute (NSBRI) funded researchers by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have motion capture systems available for measuring subject kinematics. Onboard the parabolic flight aircraft, it is not practical to utilize traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed utilizing open source computer vision code with commercial off the shelf (COTS) video camera hardware. While the system's accuracy is lower than that of laboratory setups, it provides a means to produce quantitative comparative kinematic data. Additionally, data such as the required exercise volume for small spaces such as the Orion capsule can be determined. METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera three-dimensional reconstruction. Utilizing OpenCV, via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces using commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove geometric distortion of the lens and sensor (specific to each individual camera). A set of high-contrast markers was placed on the exercising subject (safety also necessitated that they be soft in case they became detached during parabolic flight); small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration, in which the camera scenes are swept simultaneously, was also performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in a side-by-side comparison with a traditional motion capture system, and also on a parabolic flight.
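The multi-camera 3D reconstruction step can be sketched with OpenCV's triangulation routine. The camera intrinsics, pose and marker coordinates below are hypothetical stand-ins for the calibrated parameters described above; cv2.Rodrigues and cv2.triangulatePoints are standard OpenCV calls.

```python
import numpy as np
import cv2

# Hypothetical calibrated setup: 3x4 projection matrices for two cameras
# (intrinsics x extrinsics), as produced by the calibration steps above.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                     # camera 1 at origin
R = cv2.Rodrigues(np.array([[0.0], [np.deg2rad(20.0)], [0.0]]))[0]    # camera 2 rotated 20 deg
t = np.array([[-0.5], [0.0], [0.1]])
P2 = K @ np.hstack([R, t])

# A true 3D marker position and its pixel projections in both images.
X = np.array([[0.1], [0.2], [2.0], [1.0]])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]

# cv2.triangulatePoints returns homogeneous 4xN points; divide by the last row.
Xh = cv2.triangulatePoints(P1, P2, x1, x2)
print("reconstructed:", (Xh[:3] / Xh[3]).ravel())
```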
Knippenberg, Els; Verbrugghe, Jonas; Lamers, Ilse; Palmaers, Steven; Timmermans, Annick; Spooren, Annemie
2017-06-24
Client-centred task-oriented training is important in neurological rehabilitation but is time consuming and costly in clinical practice. The use of technology, especially motion capture systems (MCS), which are low cost and easy to apply in clinical practice, may be used to support this kind of training, but knowledge and evidence of their use for training is scarce. The present review aims to investigate 1) which motion capture systems are used as training devices in neurological rehabilitation, 2) how they are applied, 3) in which target population, 4) what the content of the training is, and 5) what the efficacy of training with MCS is. A computerised systematic literature review was conducted in four databases (PubMed, Cinahl, Cochrane Database and IEEE). The following MeSH terms and key words were used: Motion, Movement, Detection, Capture, Kinect, Rehabilitation, Nervous System Diseases, Multiple Sclerosis, Stroke, Spinal Cord, Parkinson Disease, Cerebral Palsy and Traumatic Brain Injury. The Van Tulder Quality assessment was used to score the methodological quality of the selected studies. The descriptive analysis is reported by MCS, target population, training parameters and training efficacy. Eighteen studies were selected (mean Van Tulder score = 8.06 ± 3.67). Based on methodological quality, six studies were selected for analysis of training efficacy. The most commonly used MCS was the Microsoft Kinect, and training was mostly conducted in upper limb stroke rehabilitation. Training programs varied in intensity, frequency and content. None of the studies reported an individualised training program based on a client-centred approach. Motion capture systems are training devices with potential in neurological rehabilitation to increase motivation during training and may assist improvement on one or more International Classification of Functioning, Disability and Health (ICF) levels. Although client-centred task-oriented training is important in neurological rehabilitation, the client-centred approach was not included. Future technological developments should take up the challenge to combine MCS with the principles of a client-centred task-oriented approach and prove efficacy using randomised controlled trials with long-term follow-up. Prospero registration number: 42016035582.
A novel teaching system for industrial robots.
Lin, Hsien-I; Lin, Yu-Hsiang
2014-03-27
The most important tool for controlling an industrial robotic arm is a teach pendant, which controls the robotic arm movement in work spaces and accomplishes teaching tasks. A good teaching tool should be easy to operate and able to complete teaching tasks rapidly and effortlessly. In this study, a new teaching system is proposed for enabling users to operate robotic arms and accomplish teaching tasks easily. The proposed teaching system consists of the teach pen, optical markers on the pen, a motion capture system, and the pen tip estimation algorithm. With the marker positions captured by the motion capture system, the pose of the teach pen is accurately calculated by the pen tip algorithm and used to control the robot tool frame. In addition, Fitts' Law is adopted to verify the usefulness of this new system, and the results show that the system provides high accuracy, excellent operation performance, and a stable error rate. In addition, the system maintains superior performance, even when users work on platforms with different inclination angles.
A Novel Teaching System for Industrial Robots
Lin, Hsien-I; Lin, Yu-Hsiang
2014-01-01
The most important tool for controlling an industrial robotic arm is a teach pendant, which controls the robotic arm movement in work spaces and accomplishes teaching tasks. A good teaching tool should be easy to operate and able to complete teaching tasks rapidly and effortlessly. In this study, a new teaching system is proposed for enabling users to operate robotic arms and accomplish teaching tasks easily. The proposed teaching system consists of the teach pen, optical markers on the pen, a motion capture system, and the pen tip estimation algorithm. With the marker positions captured by the motion capture system, the pose of the teach pen is accurately calculated by the pen tip algorithm and used to control the robot tool frame. In addition, Fitts' Law is adopted to verify the usefulness of this new system, and the results show that the system provides high accuracy, excellent operation performance, and a stable error rate. In addition, the system maintains superior performance, even when users work on platforms with different inclination angles. PMID:24681669
Motion data classification on the basis of dynamic time warping with a cloud point distance measure
NASA Astrophysics Data System (ADS)
Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad
2016-06-01
The paper deals with the problem of classification of model-free motion data. A nearest-neighbour classifier is proposed, based on comparisons performed by a Dynamic Time Warping transform with a cloud point distance measure. The classification utilizes both specific gait features, reflected by the movements of subsequent skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification challenge problem is taken into consideration. A motion capture database containing data of 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. What is more, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
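As an illustration of the classification scheme described in the abstract above, the following Python sketch (not the authors' code; the per-frame point-cloud distance and the nearest-neighbour wrapper are simplified assumptions) shows dynamic time warping over sequences of 3D joint clouds.

```python
# A minimal sketch of nearest-neighbour classification with dynamic time warping,
# where each "frame" is a small cloud of 3D joint positions and the frame-to-frame
# cost is a symmetric mean nearest-point distance between the two clouds.
import numpy as np

def cloud_distance(a, b):
    """Symmetric mean distance between point clouds a (Na,3) and b (Nb,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def dtw_distance(seq_a, seq_b):
    """Classic DTW over two sequences of point clouds."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = cloud_distance(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nearest_neighbour_label(query, gallery):
    """gallery is a list of (sequence, subject_id) pairs; returns the closest subject_id."""
    return min(gallery, key=lambda item: dtw_distance(query, item[0]))[1]
```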
Fixation not required: characterizing oculomotor attention capture for looming stimuli.
Lewis, Joanna E; Neider, Mark B
2015-10-01
A stimulus moving toward us, such as a ball being thrown in our direction or a vehicle braking suddenly in front of ours, often represents a stimulus that requires a rapid response. Using a visual search task in which target and distractor items were systematically associated with a looming object, we explored whether this sort of looming motion captures attention, the nature of such capture using eye movement measures (overt/covert), and the extent to which such capture effects are more closely tied to motion onset or the motion itself. We replicated previous findings indicating that looming motion induces response time benefits and costs during visual search (Lin, Franconeri, & Enns, Psychological Science, 19(7): 686-693, 2008). These differences in response times were independent of fixation, indicating that these capture effects did not necessitate overt attentional shifts to a looming object for search benefits or costs to occur. Interestingly, we found no differences in capture benefits and costs associated with differences in looming motion type. Combined, our results suggest that capture effects associated with looming motion are more likely subserved by covert attentional mechanisms rather than overt mechanisms, and attention capture for looming motion is likely related to motion itself rather than the onset of motion.
Accuracy of Jump-Mat Systems for Measuring Jump Height.
Pueo, Basilio; Lipinska, Patrycja; Jiménez-Olmedo, José M; Zmijewski, Piotr; Hopkins, Will G
2017-08-01
Vertical-jump tests are commonly used to evaluate lower-limb power of athletes and nonathletes. Several types of equipment are available for this purpose. To compare the error of measurement of 2 jump-mat systems (Chronojump-Boscosystem and Globus Ergo Tester) with that of a motion-capture system as a criterion and to determine the modifying effect of foot length on jump height. Thirty-one young adult men alternated 4 countermovement jumps with 4 squat jumps. Mean jump height and standard deviations representing technical error of measurement arising from each device and variability arising from the subjects themselves were estimated with a novel mixed model and evaluated via standardization and magnitude-based inference. The jump-mat systems produced nearly identical measures of jump height (differences in means and in technical errors of measurement ≤1 mm). Countermovement and squat-jump height were both 13.6 cm higher with motion capture (90% confidence limits ±0.3 cm), but this very large difference was reduced to small unclear differences when adjusted to a foot length of zero. Variability in countermovement and squat-jump height arising from the subjects was small (1.1 and 1.5 cm, respectively, 90% confidence limits ±0.3 cm); technical error of motion capture was similar in magnitude (1.7 and 1.6 cm, ±0.3 and ±0.4 cm), and that of the jump mats was similar or smaller (1.2 and 0.3 cm, ±0.5 and ±0.9 cm). The jump-mat systems provide trustworthy measurements for monitoring changes in jump height. Foot length can explain the substantially higher jump height observed with motion capture.
3D Measurement of Forearm and Upper Arm during Throwing Motion using Body Mounted Sensor
NASA Astrophysics Data System (ADS)
Koda, Hideharu; Sagawa, Koichi; Kuroshima, Kouta; Tsukamoto, Toshiaki; Urita, Kazutaka; Ishibashi, Yasuyuki
The aim of this study is to propose a measurement method for the three-dimensional (3D) movement of the forearm and upper arm during the pitching motion of baseball using inertial sensors, without requiring careful sensor installation. Although high-accuracy measurement of sports motion is currently achieved using optical motion capture systems, these have some disadvantages, such as camera calibration and limitation of the measurement place. In contrast, the proposed method for 3D measurement of pitching motion using body-mounted sensors provides the trajectory and orientation of the upper arm by integrating the acceleration and angular velocity measured on the upper limb. The trajectory of the forearm is derived so that the elbow joint axis of the forearm corresponds to that of the upper arm. The spatial relation between the upper limb and the sensor system is obtained by performing predetermined movements of the upper limb and utilizing angular velocity and gravitational acceleration. The integration error is corrected so that the estimated final position, velocity, and posture of the upper limb agree with the actual ones. Experimental results for the measurement of pitching motion show that the trajectories of the shoulder, elbow, and wrist estimated by the proposed method are highly correlated with those from the motion capture system, with an estimation error of about 10%.
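The end-point correction described above can be illustrated with a simplified sketch; the linear drift-removal step and the function names below are illustrative assumptions, not the authors' algorithm.

```python
# A minimal sketch of drift correction after double integration: integrate acceleration
# to velocity and position, then distribute the end-point error linearly over the trial
# so the estimated final position agrees with the known final position.
import numpy as np

def integrate_with_endpoint_fix(acc, dt, final_pos):
    """acc: (T,3) acceleration samples in the world frame; final_pos: known (3,) end point."""
    vel = np.cumsum(acc * dt, axis=0)
    pos = np.cumsum(vel * dt, axis=0)
    t = np.linspace(0.0, 1.0, len(pos))[:, None]     # 0 at start of trial, 1 at end
    return pos - t * (pos[-1] - final_pos)           # remove linearly growing drift
```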
Hall, Emily A; Docherty, Carrie L
2017-07-01
To determine the concurrent validity of standard clinical outcome measures compared to a laboratory outcome measure while performing the weight-bearing lunge test (WBLT). Cross-sectional study. Fifty participants performed the WBLT to determine dorsiflexion ROM using four different measurement techniques: dorsiflexion angle with a digital inclinometer at 15 cm distal to the tibial tuberosity (°), dorsiflexion angle with the inclinometer at the tibial tuberosity (°), maximum lunge distance (cm), and dorsiflexion angle using a 2D motion capture system (°). Outcome measures were recorded concurrently during each trial. To establish concurrent validity, Pearson product-moment correlation coefficients (r) were computed, comparing each dependent variable to the 2D motion capture analysis (identified as the reference standard). A higher correlation indicates stronger concurrent validity. There was a high correlation between each measurement technique and the reference standard. Specifically, the correlation between the inclinometer placement 15 cm below the tibial tuberosity (44.9°±5.5°) and the motion capture angle (27.0°±6.0°) was r=0.76 (p=0.001), between the inclinometer placement at the tibial tuberosity (39.0°±4.6°) and the motion capture angle r=0.71 (p=0.001), and between the distance-from-the-wall clinical measure (10.3±3.0 cm) and the motion capture angle r=0.74 (p=0.001). This study determined that the clinical measures used during the WBLT have a high correlation with the reference standard for assessing dorsiflexion range of motion. Therefore, obtaining maximum lunge distance and inclinometer angles are both valid assessments during the weight-bearing lunge test. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Expressive facial animation synthesis by learning speech coarticulation and expression spaces.
Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth
2006-01-01
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
Fast instantaneous center of rotation estimation algorithm for a skid-steered robot
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2015-05-01
Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimation of the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projecting the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm's quality, comparing the trajectories estimated by the algorithm with data from the motion capture system.
Motion onset does not capture attention when subsequent motion is "smooth".
Sunny, Meera Mary; von Mühlenen, Adrian
2011-12-01
Previous research on the attentional effects of moving objects has shown that motion per se does not capture attention. However, in later studies it was argued that the onset of motion does capture attention. Here, we show that this motion-onset effect critically depends on motion jerkiness--that is, the rate at which the moving stimulus is refreshed. Experiment 1 used search displays with a static, a motion-onset, and an abrupt-onset stimulus, while systematically varying the refresh rate of the moving stimulus. The results showed that motion onset only captures attention when subsequent motion is jerky (8 and 17 Hz), not when it is smooth (33 and 100 Hz). Experiment 2 replaced motion onset with continuous motion, showing that motion jerkiness does not affect how continuous motion is processed. These findings do not support accounts that assume a special role for motion onset, but they are in line with the more general unique-event account.
GN/C translation and rotation control parameters for AR/C (category 2)
NASA Technical Reports Server (NTRS)
Henderson, David M.
1991-01-01
Detailed analysis of the Automatic Rendezvous and Capture problem indicates a need for three different regions of mathematical description for the GN&C algorithms: (1) multi-vehicle orbital mechanics to the rendezvous interface point, i.e., within 100 n.; (2) relative motion solutions (such as Clohessy-Wiltshire type) from the far-field to the near-field interface, i.e., within 1 nm; and (3) close proximity motion, the near-field motion where the relative differences in the gravitational and orbit inertial accelerations can be neglected from the equations of motion. This paper defines the reference coordinate frames and control parameters necessary to model the relative motion and attitude of spacecraft in the close proximity of another space system (Regions 2 and 3) during the Automatic Rendezvous and Capture phase of an orbit operation.
Data Fusion Based on Optical Technology for Observation of Human Manipulation
NASA Astrophysics Data System (ADS)
Falco, Pietro; De Maria, Giuseppe; Natale, Ciro; Pirozzi, Salvatore
2012-01-01
The adoption of human observation is becoming more and more frequent within imitation learning and programming by demonstration approaches (PbD) to robot programming. For robotic systems equipped with anthropomorphic hands, the observation phase is very challenging and no ultimate solution exists. This work proposes a novel mechatronic approach to the observation of human hand motion during manipulation tasks. The strategy is based on the combined use of an optical motion capture system and a low-cost data glove equipped with novel joint angle sensors, based on optoelectronic technology. The combination of the two information sources is obtained through a sensor fusion algorithm based on the extended Kalman filter (EKF) suitably modified to tackle the problem of marker occlusions, typical of optical motion capture systems. This approach requires a kinematic model of the human hand. Another key contribution of this work is a new method to calibrate this model.
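The occlusion-handling idea mentioned above can be sketched with an ordinary linear Kalman filter; this is a simplified stand-in for the authors' extended Kalman filter, and the constant-velocity model and noise values below are placeholder assumptions.

```python
# A minimal sketch of occlusion handling: run a constant-velocity Kalman filter per
# optical marker and simply skip the measurement update whenever the motion capture
# system reports the marker as occluded (NaN position).
import numpy as np

def track_marker(measurements, dt=0.01, q=1e-3, r=1e-4):
    F = np.eye(6); F[:3, 3:] = dt * np.eye(3)        # constant-velocity state transition
    H = np.hstack([np.eye(3), np.zeros((3, 3))])     # we observe position only
    Q, R = q * np.eye(6), r * np.eye(3)
    x, P = np.zeros(6), np.eye(6)
    track = []
    for z in measurements:                           # z is a 3-vector, or NaNs if occluded
        x, P = F @ x, F @ P @ F.T + Q                # predict
        if not np.any(np.isnan(z)):                  # update only when the marker is visible
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(6) - K @ H) @ P
        track.append(x[:3].copy())
    return np.array(track)
```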
2008-07-02
CAPE CANAVERAL, Fla. – NYIT MOCAP (Motion Capture) team Project Manager Jon Squitieri attaches a retro reflective marker to a motion capture suit worn by a technician who will be assembling the Orion Crew Module mockup. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.
Optimal Configuration of Human Motion Tracking Systems: A Systems Engineering Approach
NASA Technical Reports Server (NTRS)
Henderson, Steve
2005-01-01
Human motion tracking systems represent a crucial technology in the area of modeling and simulation. These systems, which allow engineers to capture human motion for study or replication in virtual environments, have broad applications in several research disciplines including human engineering, robotics, and psychology. These systems are based on several sensing paradigms, including electro-magnetic, infrared, and visual recognition. Each of these paradigms requires specialized environments and hardware configurations to optimize performance of the human motion tracking system. Ideally, these systems are used in a laboratory or other facility that was designed to accommodate the particular sensing technology. For example, electromagnetic systems are highly vulnerable to interference from metallic objects, and should be used in a specialized lab free of metal components.
Multi-Sensor Methods for Mobile Radar Motion Capture and Compensation
NASA Astrophysics Data System (ADS)
Nakata, Robert
Remote sensing has many applications, including surveying and mapping, geophysics exploration, military surveillance, search and rescue, and counter-terrorism operations. Remote sensor systems typically use visible-light imaging, infrared, or radar sensors. Camera-based image sensors can provide high spatial resolution but are limited to line-of-sight capture during daylight. Infrared sensors have lower resolution but can operate during darkness. Radar sensors can provide high-resolution motion measurements, even when obscured by weather, clouds, and smoke, and can penetrate walls and collapsed structures constructed with non-metallic materials up to 1 m to 2 m in depth, depending on the wavelength and transmitter power level. However, any platform motion will degrade the target signal of interest. In this dissertation, we investigate alternative methodologies to capture platform motion, including a Body Area Network (BAN) that does not require external fixed-location sensors, allowing full mobility of the user. We also investigated platform stabilization and motion compensation techniques to reduce and remove the signal distortion introduced by the platform motion. We evaluated secondary ultrasonic and radar sensors to stabilize the platform, resulting in an average 5 dB improvement in Signal to Interference Ratio (SIR). We also implemented a Digital Signal Processing (DSP) motion compensation algorithm that improved the SIR by 18 dB on average. These techniques could be deployed on a quadcopter platform and enable the detection of respiratory motion using an onboard radar sensor.
NASA Technical Reports Server (NTRS)
Pope, Alan T. (Inventor); Stephens, Chad L. (Inventor); Habowski, Tyler (Inventor)
2017-01-01
Method for physiologically modulating videogames and simulations includes utilizing input from a motion-sensing video game system and input from a physiological signal acquisition device. The inputs from the physiological signal sensors are utilized to change the response of a user's avatar to inputs from the motion-sensing sensors. The motion-sensing system comprises a 3D sensor system having full-body 3D motion capture of a user's body. This arrangement encourages health-enhancing physiological self-regulation skills or therapeutic amplification of healthful physiological characteristics. The system provides increased motivation for users to utilize biofeedback as may be desired for treatment of various conditions.
Construction of a patient observation system using KINECT™
NASA Astrophysics Data System (ADS)
Miyaura, Kazunori; Kumazaki, Yu; Fukushima, Chika; Kato, Shingo; Saitoh, Hidetoshi
2014-03-01
Improvement in the positional accuracy of irradiation is expected by capturing patient motion (intra-fractional error) during irradiation. The present study reports the construction of a patient observation system using Microsoft® KINECT™. By tracking movement, we made it possible to add a depth component to the acquired position coordinates and to display three-axis (X, Y, and Z) movement. Moreover, the developed system can display a graph constructed from the coordinate positions at each time interval. Using the developed system, an observer can easily visualize patient movement. When a body phantom was moved a known distance in the X, Y, and Z directions, good agreement was observed along each axis. We built a patient observation system which captures a patient's motion using KINECT™.
Motion capture for human motion measuring by using single camera with triangle markers
NASA Astrophysics Data System (ADS)
Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi
2005-12-01
This study aims to realize motion capture for measuring 3D human motions by using a single camera. Although motion capture using multiple cameras is widely used in the sports, medical, and engineering fields, an optical motion capture method with one camera has not been established. In this paper, the authors achieve 3D motion capture using one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers for which the length of each side is known. The camera calibration methods provide a 3D coordinate transformation parameter and a lens distortion parameter using the modified DLT method. The triangle markers enable calculation of the depth coordinate in the camera coordinate system. Experiments on 3D position measurement using the MMC in a measurement volume of 2 m per side showed that the average error in the measured centroid of a triangle marker was less than 2 mm. Compared with a conventional motion capture method using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by putting a triangle marker on each human joint, the MMC was able to capture a walking motion, a standing-up motion, and a bending and stretching motion. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate the position of a marker by measuring its velocity was proposed in order to improve the accuracy of the MMC.
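One way to see how a triangle of known side lengths yields depth from a single calibrated camera is to treat each marker as a perspective-three-point (P3P) problem. The sketch below is not the authors' MMC algorithm; it assumes an OpenCV build that provides cv2.solveP3P and uses a placeholder 60 mm equilateral marker.

```python
# A minimal sketch: the three known vertex positions of the triangle marker define a
# P3P problem, so a single calibrated view gives the marker's full 3D pose (and depth).
import cv2
import numpy as np

# Placeholder triangle geometry in its own frame (equilateral, 60 mm sides, in metres).
OBJECT_POINTS = np.array([[0.00, 0.000, 0.0],
                          [0.06, 0.000, 0.0],
                          [0.03, 0.052, 0.0]], dtype=np.float32)

def triangle_pose(image_points, camera_matrix, dist_coeffs):
    """image_points: 3x2 pixel coordinates of the detected triangle vertices."""
    n, rvecs, tvecs = cv2.solveP3P(OBJECT_POINTS, image_points.astype(np.float32),
                                   camera_matrix, dist_coeffs, cv2.SOLVEPNP_P3P)
    # P3P can return up to four candidate poses; a real system would keep the one in
    # front of the camera with the smallest reprojection error. Here we return them all.
    return [(cv2.Rodrigues(r)[0], t) for r, t in zip(rvecs, tvecs)]
```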
Commercial Motion Sensor Based Low-Cost and Convenient Interactive Treadmill.
Kim, Jonghyun; Gravunder, Andrew; Park, Hyung-Soon
2015-09-17
Interactive treadmills were developed to improve the simulation of overground walking when compared to conventional treadmills. However, currently available interactive treadmills are expensive and inconvenient, which limits their use. We propose a low-cost and convenient version of the interactive treadmill that does not require expensive equipment and a complicated setup. As a substitute for high-cost sensors, such as motion capture systems, a low-cost motion sensor was used to recognize the subject's intention for speed changing. Moreover, the sensor enables the subject to make a convenient and safe stop using gesture recognition. For further cost reduction, the novel interactive treadmill was based on an inexpensive treadmill platform and a novel high-level speed control scheme was applied to maximize performance for simulating overground walking. Pilot tests with ten healthy subjects were conducted and results demonstrated that the proposed treadmill achieves similar performance to a typical, costly, interactive treadmill that contains a motion capture system and an instrumented treadmill, while providing a convenient and safe method for stopping.
A new position measurement system using a motion-capture camera for wind tunnel tests.
Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok
2013-09-13
Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements.
A New Position Measurement System Using a Motion-Capture Camera for Wind Tunnel Tests
Park, Hyo Seon; Kim, Ji Young; Kim, Jin Gi; Choi, Se Woon; Kim, Yousok
2013-01-01
Considering the characteristics of wind tunnel tests, a position measurement system that can minimize the effects on the flow of simulated wind must be established. In this study, a motion-capture camera was used to measure the displacement responses of structures in a wind tunnel test, and the applicability of the system was tested. A motion-capture system (MCS) could output 3D coordinates using two-dimensional image coordinates obtained from the camera. Furthermore, this remote sensing system had some flexibility regarding lab installation because of its ability to measure at relatively long distances from the target structures. In this study, we performed wind tunnel tests on a pylon specimen and compared the measured responses of the MCS with the displacements measured with a laser displacement sensor (LDS). The results of the comparison revealed that the time-history displacement measurements from the MCS slightly exceeded those of the LDS. In addition, we confirmed the measuring reliability of the MCS by identifying the dynamic properties (natural frequency, damping ratio, and mode shape) of the test specimen using system identification methods (frequency domain decomposition, FDD). By comparing the mode shape obtained using the aforementioned methods with that obtained using the LDS, we also confirmed that the MCS could construct a more accurate mode shape (bending-deflection mode shape) with the 3D measurements. PMID:24064600
Inertial Measurement Units for Clinical Movement Analysis: Reliability and Concurrent Validity
Nicholas, Kevin; Sparkes, Valerie; Sheeran, Liba; Davies, Jennifer L
2018-01-01
The aim of this study was to investigate the reliability and concurrent validity of a commercially available Xsens MVN BIOMECH inertial-sensor-based motion capture system during clinically relevant functional activities. A clinician with no prior experience of motion capture technologies and an experienced clinical movement scientist each assessed 26 healthy participants within each of two sessions using a camera-based motion capture system and the MVN BIOMECH system. Participants performed overground walking, squatting, and jumping. Sessions were separated by 4 ± 3 days. Reliability was evaluated using intraclass correlation coefficient and standard error of measurement, and validity was evaluated using the coefficient of multiple correlation and the linear fit method. Day-to-day reliability was generally fair-to-excellent in all three planes for hip, knee, and ankle joint angles in all three tasks. Within-day (between-rater) reliability was fair-to-excellent in all three planes during walking and squatting, and poor-to-high during jumping. Validity was excellent in the sagittal plane for hip, knee, and ankle joint angles in all three tasks and acceptable in frontal and transverse planes in squat and jump activity across joints. Our results suggest that the MVN BIOMECH system can be used by a clinician to quantify lower-limb joint angles in clinically relevant movements. PMID:29495600
Efficient Generation of Dancing Animation Synchronizing with Music Based on Meta Motion Graphs
NASA Astrophysics Data System (ADS)
Xu, Jianfeng; Takagi, Koichi; Sakazawa, Shigeyuki
This paper presents a system for the automatic generation of dancing animation that is synchronized with a piece of music by re-using motion capture data. Basically, the dancing motion is synthesized according to the rhythm and intensity features of the music. For this purpose, we propose a novel meta motion graph structure to embed the necessary features, including both rhythm and intensity, which is constructed on the motion capture database beforehand. In this paper, we consider two scenarios, for non-streaming music and streaming music, where global search and local search are required respectively. In the former case, once a piece of music is input, an efficient dynamic programming algorithm can be employed to globally search for a best path in the meta motion graph, where an objective function is properly designed by measuring the quality of beat synchronization, intensity matching, and motion smoothness. In the latter case, the input music is stored in a buffer in a streaming mode, and an efficient search method is presented for a certain amount of music data (called a segment) in the buffer with the same objective function, resulting in a segment-based search approach. For streaming applications, we define an additional property in the above meta motion graph to deal with the unpredictable future music, which guarantees that there is some motion to match the unknown remaining music. A user study with 60 subjects in total demonstrates that our system outperforms state-of-the-art techniques in both scenarios. Furthermore, our system greatly improves the synthesis speed (the maximal speedup is more than 500 times), which is essential for mobile applications. We have implemented our system on commercially available smartphones and confirmed that it works well on these mobile phones.
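The global dynamic-programming search over the meta motion graph can be sketched as a Viterbi-style recursion; the cost matrices below (per-segment beat/intensity node costs and pairwise transition costs) are assumed inputs for illustration, not the authors' implementation.

```python
# A minimal sketch of the global search: pick one motion segment per music segment so
# that the summed node cost (beat/intensity mismatch) plus transition cost (motion
# smoothness between consecutive segments) is minimal.
import numpy as np

def best_path(node_cost, trans_cost):
    """
    node_cost : (T, N) cost of assigning motion segment n to music segment t
    trans_cost: (N, N) cost of concatenating motion segment i before segment j
    Returns the minimal-cost sequence of motion segment indices, one per music segment.
    """
    T, N = node_cost.shape
    D = np.full((T, N), np.inf)
    back = np.zeros((T, N), dtype=int)
    D[0] = node_cost[0]
    for t in range(1, T):
        total = D[t - 1][:, None] + trans_cost + node_cost[t][None, :]
        back[t] = np.argmin(total, axis=0)
        D[t] = np.min(total, axis=0)
    path = [int(np.argmin(D[-1]))]
    for t in range(T - 1, 0, -1):                # backtrace from the last music segment
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```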
ERIC Educational Resources Information Center
Umino, Bin; Longstaff, Jeffrey Scott; Soga, Asako
2009-01-01
This paper reports on "Web3D dance composer" for ballet e-learning. Elementary "petit allegro" ballet steps were enumerated in collaboration with ballet teachers, digitally acquired through 3D motion capture systems, and categorised into families and sub-families. Digital data was manipulated into virtual reality modelling language (VRML) and fit…
Alert Response to Motion Onset in the Retina
Chen, Eric Y.; Marre, Olivier; Fisher, Clark; Schwartz, Greg; Levy, Joshua; da Silveira, Rava Azeredo
2013-01-01
Previous studies have shown that motion onset is very effective at capturing attention and is more salient than smooth motion. Here, we find that this salience ranking is present already in the firing rate of retinal ganglion cells. By stimulating the retina with a bar that appears, stays still, and then starts moving, we demonstrate that a subset of salamander retinal ganglion cells, fast OFF cells, responds significantly more strongly to motion onset than to smooth motion. We refer to this phenomenon as an alert response to motion onset. We develop a computational model that predicts the time-varying firing rate of ganglion cells responding to the appearance, onset, and smooth motion of a bar. This model, termed the adaptive cascade model, consists of a ganglion cell that receives input from a layer of bipolar cells, represented by individual rectified subunits. Additionally, both the bipolar and ganglion cells have separate contrast gain control mechanisms. This model captured the responses to our different motion stimuli over a wide range of contrasts, speeds, and locations. The alert response to motion onset, together with its computational model, introduces a new mechanism of sophisticated motion processing that occurs early in the visual system. PMID:23283327
Jia, Rui; Monk, Paul; Murray, David; Noble, J Alison; Mellon, Stephen
2017-09-06
Optoelectronic motion capture systems are widely employed to measure the movement of human joints. However, there can be a significant discrepancy between the data obtained by a motion capture system (MCS) and the actual movement of the underlying bony structures, which is attributed to soft tissue artefact. In this paper, a computer-aided tracking and motion analysis with ultrasound (CAT & MAUS) system with an augmented globally optimal registration algorithm is presented to dynamically track the underlying bony structure during movement. The augmented registration part of CAT & MAUS was validated with a high system accuracy of 80%. The Euclidean distance between the marker-based bony landmark and the bony landmark tracked by CAT & MAUS was calculated to quantify the measurement error of an MCS caused by soft tissue artefact during movement. The average Euclidean distance between the target bony landmark measured by the CAT & MAUS system and that measured by the MCS alone varied from 8.32 mm to 16.87 mm during gait. This indicates the discrepancy between the MCS-measured bony landmark and the actual underlying bony landmark. Moreover, Procrustes analysis was applied to demonstrate that CAT & MAUS reduces the deformation of the body segment shape modelled by markers during motion. The augmented CAT & MAUS system shows its potential to dynamically detect and locate actual underlying bony landmarks, which reduces the MCS measurement error caused by soft tissue artefact during movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
Active eye-tracking for an adaptive optics scanning laser ophthalmoscope
Sheehy, Christy K.; Tiruveedhula, Pavan; Sabesan, Ramkumar; Roorda, Austin
2015-01-01
We demonstrate a system that combines a tracking scanning laser ophthalmoscope (TSLO) and an adaptive optics scanning laser ophthalmoscope (AOSLO) system resulting in both optical (hardware) and digital (software) eye-tracking capabilities. The hybrid system employs the TSLO for active eye-tracking at a rate up to 960 Hz for real-time stabilization of the AOSLO system. AOSLO videos with active eye-tracking signals showed, at most, an amplitude of motion of 0.20 arcminutes for horizontal motion and 0.14 arcminutes for vertical motion. Subsequent real-time digital stabilization limited residual motion to an average of only 0.06 arcminutes (a 95% reduction). By correcting for high amplitude, low frequency drifts of the eye, the active TSLO eye-tracking system enabled the AOSLO system to capture high-resolution retinal images over a larger range of motion than previously possible with just the AOSLO imaging system alone. PMID:26203370
Smart Sensor-Based Motion Detection System for Hand Movement Training in Open Surgery.
Sun, Xinyao; Byrns, Simon; Cheng, Irene; Zheng, Bin; Basu, Anup
2017-02-01
We introduce a smart sensor-based motion detection technique for objective measurement and assessment of surgical dexterity among users at different experience levels. The goal is to allow trainees to evaluate their performance based on a reference model shared through communication technology, e.g., the Internet, without the physical presence of an evaluating surgeon. While in the current implementation we used a Leap Motion Controller to obtain motion data for analysis, our technique can be applied to motion data captured by other smart sensors, e.g., OptiTrack. To differentiate motions captured from different participants, measurement and assessment in our approach are achieved using two strategies: (1) low level descriptive statistical analysis, and (2) Hidden Markov Model (HMM) classification. Based on our surgical knot tying task experiment, we can conclude that finger motions generated from users with different surgical dexterity, e.g., expert and novice performers, display differences in path length, number of movements and task completion time. In order to validate the discriminatory ability of HMM for classifying different movement patterns, a non-surgical task was included in our analysis. Experimental results demonstrate that our approach had 100 % accuracy in discriminating between expert and novice performances. Our proposed motion analysis technique applied to open surgical procedures is a promising step towards the development of objective computer-assisted assessment and training systems.
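The HMM-based classification strategy in the preceding abstract can be sketched as two generative models scored against a new trial; the snippet below is not the authors' code and assumes the third-party hmmlearn package and placeholder per-frame feature arrays.

```python
# A minimal sketch of two-class HMM classification: train one HMM on expert trials and
# one on novice trials, then label a new trial by whichever model gives it the higher
# log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed third-party dependency

def train_hmm(trials, n_states=5):
    """trials: list of (T_i, D) arrays of per-frame fingertip motion features."""
    X = np.vstack(trials)
    lengths = [len(t) for t in trials]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def classify(trial, expert_hmm, novice_hmm):
    """Return the label of the model under which the trial is more likely."""
    return "expert" if expert_hmm.score(trial) > novice_hmm.score(trial) else "novice"
```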
Ubiquitous human upper-limb motion estimation using wearable sensors.
Zhang, Zhi-Qiang; Wong, Wai-Choong; Wu, Jian-Kang
2011-07-01
Human motion capture technologies have been widely used in a wide spectrum of applications, including interactive gaming and learning, animation, film special effects, health care, navigation, and so on. Existing human motion capture techniques, which use structured multiple high-resolution cameras in a dedicated studio, are complicated and expensive. With the rapid development of microsensors-on-chip, human motion capture using wearable microsensors has become an active research topic. Because of its agility of movement, upper-limb motion estimation has been regarded as the most difficult problem in human motion capture. In this paper, we take the upper limb as our research subject and propose a novel ubiquitous upper-limb motion estimation algorithm, which concentrates on modeling the relationship between upper-arm movement and forearm movement. A link structure with 5 degrees of freedom (DOF) is proposed to model the human upper-limb skeleton structure. Parameters are defined according to the Denavit-Hartenberg convention, forward kinematics equations are derived, and an unscented Kalman filter is deployed to estimate the defined parameters. The experimental results have shown that the proposed upper-limb motion capture and analysis algorithm outperforms other fusion methods and provides accurate results in comparison to the BTS optical motion tracker.
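The forward-kinematics step based on Denavit-Hartenberg parameters can be sketched as follows; the 5-DOF DH table and function names are placeholders for illustration, not the authors' calibrated upper-limb model.

```python
# A minimal sketch of forward kinematics for a 5-DOF revolute chain described by
# Denavit-Hartenberg parameters: chain the per-joint homogeneous transforms.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard DH homogeneous transform for one revolute joint."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_table):
    """dh_table: list of (d, a, alpha) per joint; joint_angles: 5 revolute angles (rad)."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T   # pose of the distal segment (e.g. wrist) relative to the proximal frame
```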
NASA Astrophysics Data System (ADS)
Dong, Gangqi; Zhu, Z. H.
2016-04-01
This paper proposes a new incremental inverse-kinematics-based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an extended Kalman filter (EKF) algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics, and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions in the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
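A minimal sketch of the incremental move toward the predicted end-effector position, with joint speed limits enforced, might look like the following; the damped-least-squares step and the helper names (jacobian_fn, qdot_max) are illustrative assumptions rather than the authors' controller.

```python
# A minimal sketch of one incremental inverse-kinematics step: map a small task-space
# error to a joint increment with a damped least-squares Jacobian inverse, then clamp
# the implied joint velocities to their limits before applying the step.
import numpy as np

def ik_step(q, jacobian_fn, x_current, x_desired, dt, qdot_max, damping=1e-2):
    J = jacobian_fn(q)                                  # task Jacobian (m x N) at joints q
    err = x_desired - x_current                         # incremental task-space error
    dq = J.T @ np.linalg.solve(J @ J.T + damping * np.eye(J.shape[0]), err)
    qdot = np.clip(dq / dt, -qdot_max, qdot_max)        # respect joint speed limits
    return q + qdot * dt
```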
Nonlinear finite element analysis of liquid sloshing in complex vehicle motion scenarios
NASA Astrophysics Data System (ADS)
Nicolsen, Brynne; Wang, Liang; Shabana, Ahmed
2017-09-01
The objective of this investigation is to develop a new total Lagrangian continuum-based liquid sloshing model that can be systematically integrated with multibody system (MBS) algorithms in order to allow for studying complex motion scenarios. The new approach allows for accurately capturing the effect of the sloshing forces during curve negotiation, rapid lane change, and accelerating and braking scenarios. In these motion scenarios, the liquid experiences large displacements and significant changes in shape that can be captured effectively using the finite element (FE) absolute nodal coordinate formulation (ANCF). ANCF elements are used in this investigation to describe complex mesh geometries, to capture the change in inertia due to the change in the fluid shape, and to accurately calculate the centrifugal forces, which for flexible bodies do not take the simple form used in rigid body dynamics. A penalty formulation is used to define the contact between the rigid tank walls and the fluid. A fully nonlinear MBS truck model that includes a suspension system and Pacejka's brush tire model is developed. Specified motion trajectories are used to examine the vehicle dynamics in three different scenarios - deceleration during straight-line motion, rapid lane change, and curve negotiation. It is demonstrated that the liquid sloshing changes the contact forces between the tires and the ground - increasing the forces on certain wheels and decreasing the forces on other wheels. In cases of extreme sloshing, this dynamic behavior can negatively impact the vehicle stability by increasing the possibility of wheel lift and vehicle rollover.
Sensor fusion of cameras and a laser for city-scale 3D reconstruction.
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-11-04
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.
A new 4-dimensional imaging system for jaw tracking.
Lauren, Mark
2014-01-01
A non-invasive 4D imaging system that produces high resolution time-based 3D surface data has been developed to capture jaw motion. Fluorescent microspheres are brushed onto both tooth and soft-tissue areas of the upper and lower arches to be imaged. An extraoral hand-held imaging device, operated about 12 cm from the mouth, captures a time-based set of perspective image triplets of the patch areas. Each triplet, containing both upper and lower arch data, is converted to a high-resolution 3D point mesh using photogrammetry, providing the instantaneous relative jaw position. Eight 3D positions per second are captured. Using one of the 3D frames as a reference, a 4D model can be constructed to describe the incremental free body motion of the mandible. The surface data produced by this system can be registered to conventional 3D models of the dentition, allowing them to be animated. Applications include integration into prosthetic CAD and CBCT data.
Design and development of an upper extremity motion capture system for a rehabilitation robot.
Nanda, Pooja; Smith, Alan; Gebregiorgis, Adey; Brown, Edward E
2009-01-01
Human robot interaction is a new and rapidly growing field and its application in the realm of rehabilitation and physical care is a major focus area of research worldwide. This paper discusses the development and implementation of a wireless motion capture system for the human arm which can be used for physical therapy or real-time control of a robotic arm, among many other potential applications. The system is comprised of a mechanical brace with rotary potentiometers inserted at the different joints to capture position data. It also contains surface electrodes which acquire electromyographic signals through the CleveMed BioRadio device. The brace interfaces with a software subsystem which displays real time data signals. The software includes a 3D arm model which imitates the actual movement of a subject's arm under testing. This project began as part of the Rochester Institute of Technology's Undergraduate Multidisciplinary Senior Design curriculum and has been integrated into the overall research objectives of the Biomechatronic Learning Laboratory.
Nichols, Julia K; O'Reilly, Oliver M
2017-03-01
Biomechanics software programs, such as Visual3D, Nexus, Cortex, and OpenSim, have the capability of generating several distinct component representations for joint moments and forces from motion capture data. These representations include those for orthonormal proximal and distal coordinate systems and a non-orthogonal joint coordinate system. In this article, a method is presented to address the challenging problem of evaluating and verifying the equivalence of these representations. The method accommodates the difficulty that there are two possible sets of non-orthogonal basis vectors that can be used to express a vector in the joint coordinate system and is illuminated using motion capture data from a drop vertical jump task. Copyright © 2016 Elsevier B.V. All rights reserved.
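The equivalence check between orthonormal and non-orthogonal joint-coordinate-system representations can be sketched as a change of basis; the snippet below is a generic illustration under the assumption that the three JCS axes are linearly independent, and is not tied to any of the named software packages.

```python
# A minimal sketch: resolve a joint moment expressed in the world (or segment) frame
# into components along the three non-orthogonal JCS axes, and reconstruct it again.
# Equivalence of two representations can be verified by comparing the reconstructions.
import numpy as np

def jcs_components(moment, e1, e2, e3):
    """Solve moment = c1*e1 + c2*e2 + c3*e3 for non-orthogonal (but independent) axes."""
    E = np.column_stack([e1, e2, e3])
    return np.linalg.solve(E, moment)

def reconstruct(components, e1, e2, e3):
    """Map JCS components back to the original frame; should reproduce the input moment."""
    return np.column_stack([e1, e2, e3]) @ components
```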
NASA Technical Reports Server (NTRS)
Searcy, Brittani
2017-01-01
Using virtual environments to assess complex large-scale human tasks provides timely and cost-effective results to evaluate designs and to reduce operational risks during assembly and integration of the Space Launch System (SLS). NASA's Marshall Space Flight Center (MSFC) uses a suite of tools to conduct integrated virtual analysis during the design phase of the SLS Program. Siemens Jack is a simulation tool that allows engineers to analyze human interaction with CAD designs by placing a digital human model into the environment to test different scenarios and assess the design's compliance to human factors requirements. Engineers at MSFC are using Jack in conjunction with motion capture and virtual reality systems in MSFC's Virtual Environments Lab (VEL). The VEL provides additional capability beyond standalone Jack to record and analyze a person performing a planned task to assemble the SLS at Kennedy Space Center (KSC). The VEL integrates the Vicon Blade motion capture system, Siemens Jack, Oculus Rift, and other virtual tools to perform human factors assessments. By using motion capture and virtual reality, a more accurate breakdown and understanding of how an operator will perform a task can be gained. By virtual analysis, engineers are able to determine whether a specific task can be safely performed by both a 5th-percentile (approx. 5 ft) female and a 95th-percentile (approx. 6 ft 1 in) male. In addition, the analysis will help identify any tools or other accommodations that may help complete the task. These assessments are critical for the safety of ground support engineers and for keeping launch operations on schedule. Motion capture allows engineers to save and examine human movements on a frame-by-frame basis, while virtual reality gives the actor (the person performing a task in the VEL) an immersive view of the task environment. This presentation will discuss the need for human factors for SLS and the benefits of analyzing tasks in NASA MSFC's VEL.
Dynamics analysis of microsphere in a dual-beam fiber-optic trap with transverse offset.
Chen, Xinlin; Xiao, Guangzong; Luo, Hui; Xiong, Wei; Yang, Kaiyong
2016-04-04
A comprehensive dynamics analysis of a microsphere in a dual-beam fiber-optic trap with transverse offset is presented. As the offset distance between the two counterpropagating beams increases, the motion type of the microsphere progresses from capture, to spiral motion, to orbital rotation, and finally to escape. We analyze the transformation process and mechanism of the four motion types based on the ray-optics approximation. Dynamic simulations show the existence of critical offset distances at which the motion type changes. The result is an important step toward explaining physical phenomena in a dual-beam fiber-optic trap with transverse offset, and is generally applicable to achieving controllable motion of microspheres in integrated systems, such as microfluidic systems and lab-on-a-chip systems.
2008-07-02
CAPE CANAVERAL, Fla. – Professor Peter Voci, NYIT MOCAP (Motion Capture) team director, (left) hands a component of the Orion Crew Module mockup to one of three technicians inside the mockup. The technicians wear motion capture suits. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.
NASA Astrophysics Data System (ADS)
Xu, Wenrui; Lai, Dong
2017-07-01
Recent observations of Kepler multiplanet systems have revealed a number of systems with planets very close to second-order mean motion resonances (MMRs, with period ratio 1 : 3, 3 : 5, etc.). We present an analytic study of resonance capture and its stability for planets migrating in gaseous discs. Resonance capture requires slow convergent migration of the planets, with sufficiently large eccentricity damping time-scale Te and small pre-resonance eccentricities. We quantify these requirements and find that they can be satisfied for super-Earths under protoplanetary disc conditions. For planets captured into resonance, an equilibrium state can be reached, in which eccentricity excitation due to resonant planet-planet interaction balances eccentricity damping due to planet-disc interaction. This 'captured' equilibrium can be overstable, leading to partial or permanent escape of the planets from the resonance. In general, the stability of the captured state depends on the inner to outer planet mass ratio q = m1/m2 and the ratio of the eccentricity damping times. The overstability growth time is of the order of Te, but can be much larger for systems close to the stability threshold. For low-mass planets undergoing type I (non-gap opening) migration, convergent migration requires q ≲ 1, while the stability of the capture requires q ≳ 1. These results suggest that planet pairs stably captured into second-order MMRs have comparable masses. This is in contrast to first-order MMRs, where a larger parameter space exists for stable resonance capture. We confirm and extend our analytical results with N-body simulations, and show that for overstable capture, the escape time from the MMR can be comparable to the time the planets spend migrating between resonances.
Musculoskeletal Simulation Model Generation from MRI Data Sets and Motion Capture Data
NASA Astrophysics Data System (ADS)
Schmid, Jérôme; Sandholm, Anders; Chung, François; Thalmann, Daniel; Delingette, Hervé; Magnenat-Thalmann, Nadia
Today computer models and computer simulations of the musculoskeletal system are widely used to study the mechanisms behind human gait and its disorders. The common way of creating musculoskeletal models is to use a generic musculoskeletal model based on data derived from anatomical and biomechanical studies of cadaverous specimens. To adapt this generic model to a specific subject, the usual approach is to scale it. This scaling has been reported to introduce several errors because it does not always account for subject-specific anatomical differences. As a result, a novel semi-automatic workflow is proposed that creates subject-specific musculoskeletal models from magnetic resonance imaging (MRI) data sets and motion capture data. Based on subject-specific medical data and a model-based automatic segmentation approach, an accurate modeling of the anatomy can be produced while avoiding the scaling operation. This anatomical model coupled with motion capture data, joint kinematics information, and muscle-tendon actuators is finally used to create a subject-specific musculoskeletal model.
3D kinematic measurement of human movement using low cost fish-eye cameras
NASA Astrophysics Data System (ADS)
Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.
2017-02-01
3D motion capture is difficult when the capturing is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach that uses two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they cannot be modelled well by a pinhole camera model, which makes it difficult to estimate depth. In this work, to restore the 3D coordinates, we use a unique calibration method for fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparison with results from a commercially available Vicon motion capture system.
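A hedged sketch of a fisheye-specific calibration step, using OpenCV's fisheye module as an assumed stand-in for the calibration method mentioned above (the checkerboard data layout and flags are illustrative, not the authors' pipeline):

```python
# A minimal sketch: calibrate a fisheye lens with OpenCV's dedicated fisheye model, then
# map detected marker pixels to undistorted normalised coordinates before triangulation.
import cv2
import numpy as np

def calibrate_fisheye(object_points, image_points, image_size):
    """
    object_points: list of (N,1,3) float64 checkerboard corner coordinates per view
    image_points : list of (N,1,2) float64 detected corner pixels per view
    """
    K = np.zeros((3, 3))
    D = np.zeros((4, 1))
    flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC + cv2.fisheye.CALIB_FIX_SKEW
    rms, K, D, _, _ = cv2.fisheye.calibrate(object_points, image_points,
                                            image_size, K, D, flags=flags)
    return K, D

def normalise_markers(pixels, K, D):
    """Map (N,1,2) marker pixel coordinates to undistorted normalised image coordinates."""
    return cv2.fisheye.undistortPoints(pixels.astype(np.float64), K, D)
```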
Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan
2016-11-15
Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, these motion capture tools suffer from a lack of accuracy in estimating joint angles, which could lead to incorrect data interpretation. In this study, we propose a real-time quaternion-based fusion scheme, based on the extended Kalman filter, between inertial and visual motion capture sensors, to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fused output shows better estimation than the inertial measurement unit and Kinect outputs alone. We noted a smaller error (3.96°) compared to the one obtained using inertial sensors (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future works, to our serious game for musculoskeletal rehabilitation.
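As a much-simplified stand-in for the extended-Kalman-filter fusion described above (not the authors' scheme), one can blend the inertial and Kinect orientation estimates by quaternion spherical linear interpolation with a fixed gain; the gain value is a placeholder assumption.

```python
# A minimal sketch of inertial/optical orientation blending: pull the drifting
# IMU-integrated quaternion toward the Kinect-derived quaternion by a fixed gain
# each frame, using spherical linear interpolation (slerp).
import numpy as np

def slerp(q0, q1, t):
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                       # take the short way around the 4D sphere
        q1, dot = -q1, -dot
    if dot > 0.9995:                    # nearly identical: fall back to normalised lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def fuse(q_imu, q_kinect, gain=0.3):
    """Correct the inertial orientation estimate toward the optical one by `gain`."""
    return slerp(q_imu, q_kinect, gain)
```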
Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.
2018-01-01
Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To comprehensively, quantitatively assess VMI, we developed a paradigm integrating virtual environments, motion-capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally-restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370
Toward an affordable and user-friendly visual motion capture system.
Bonnet, V; Sylla, N; Cherubini, A; Gonzáles, A; Azevedo Coste, C; Fraisse, P; Venture, G
2014-01-01
The present study aims at designing and evaluating a low-cost, simple, and portable system for arm joint angle estimation during grasping-like motions. The system is based on a single RGB-D camera and three customized markers. The automatically detected and tracked marker positions were used as inputs to an offline inverse kinematic process based on biomechanical constraints to reduce the effect of noise and handle marker occlusion. The method was validated on 4 subjects performing different motions. The joint angles were estimated both with the proposed low-cost system and with a stereophotogrammetric system. Comparative analysis shows good accuracy, with a high correlation coefficient (r = 0.92) and a low average RMS error (3.8 deg).
Motion-Capture-Enabled Software for Gestural Control of 3D Models
NASA Technical Reports Server (NTRS)
Norris, Jeffrey S.; Luo, Victor; Crockett, Thomas M.; Shams, Khawaja S.; Powell, Mark W.; Valderrama, Anthony
2012-01-01
Current state-of-the-art systems use general-purpose input devices such as a keyboard, mouse, or joystick that map to tasks in unintuitive ways. This software enables a person to intuitively control the position, size, and orientation of synthetic objects in a 3D virtual environment. It makes possible the simultaneous control of the 3D position, scale, and orientation of 3D objects using natural gestures. Enabling the control of 3D objects using a commercial motion-capture system allows for a natural mapping of the many degrees of freedom of the human body to the manipulation of the 3D objects. It reduces training time for this kind of task and eliminates the need to create an expensive, special-purpose controller.
Watanabe, Toshiki; Omata, Sadao; Odamura, Motoki; Okada, Masahumi; Nakamura, Yoshihiko; Yokoyama, Hitoshi
2006-11-01
This study aimed to evaluate our newly developed 3-dimensional digital motion-capture and reconstruction system in an animal experiment setting and to characterize quantitatively the three regional cardiac surface motions, in the left anterior descending artery, right coronary artery, and left circumflex artery, before and after stabilization using a stabilizer. Six pigs underwent a full sternotomy. Three tiny metallic markers (diameter 2 mm) coated with a reflective material were attached to three regional cardiac surfaces (left anterior descending, right coronary, and left circumflex coronary artery regions). These markers were captured by two high-speed digital video cameras (955 frames per second) as 2-dimensional coordinates and reconstructed to 3-dimensional data points (about 480 xyz-position data per second) by a newly developed computer program. The remaining motion after stabilization ranged from 0.4 to 1.01 mm at the left anterior descending, 0.91 to 1.52 mm at the right coronary artery, and 0.53 to 1.14 mm at the left circumflex regions. Significant differences before and after stabilization were found in maximum moving velocity (left anterior descending 456.7 +/- 178.7 vs 306.5 +/- 207.4 mm/s; right coronary artery 574.9 +/- 161.7 vs 446.9 +/- 170.7 mm/s; left circumflex 578.7 +/- 226.7 vs 398.9 +/- 192.6 mm/s; P < .0001) and maximum acceleration (left anterior descending 238.8 +/- 137.4 vs 169.4 +/- 132.7 m/s2; right coronary artery 315.0 +/- 123.9 vs 242.9 +/- 120.6 m/s2; left circumflex 307.9 +/- 151.0 vs 217.2 +/- 132.3 m/s2; P < .0001). This system is useful for precise quantification of heart surface movement. It helps us better understand the complexity of the heart, its motion, and the need for developing a better stabilizer for beating-heart surgery.
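The reconstruction step described above, turning matched 2-D marker coordinates from two calibrated cameras into 3-D points, is commonly done by linear triangulation. The sketch below is a generic direct-linear-transform triangulation, not the authors' program; the projection matrices P1 and P2 are assumed to come from a prior camera calibration.

```python
# Generic linear (DLT) triangulation of one marker seen by two calibrated cameras.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v) of the marker."""
    # Build the homogeneous linear system A X = 0 from the two projections.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # de-homogenise to (x, y, z)
```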
Hamiltonian Dynamics of Spider-Type Multirotor Rigid Bodies Systems
NASA Astrophysics Data System (ADS)
Doroshin, Anton V.
2010-03-01
This paper sets out to develop a spider-type multiple-rotor system which can be used for attitude control of spacecraft. The multirotor system contains a large number of rotor-equipped rays, so it was called a ``Spider-type System''; it can also be called a ``Rotary Hedgehog.'' These systems allow the use of spinups and captures of conjugate rotors to perform compound attitude motion of spacecraft. The paper describes a new method of spacecraft attitude reorientation and a new mathematical model of motion in Hamiltonian form. The Hamiltonian dynamics of the system is investigated with the help of Andoyer-Deprit canonical variables. These variables allow obtaining exact solutions for hetero- and homoclinic orbits in the phase space of the system motion, which are very important for qualitative analysis.
Geometric Brownian Motion with Tempered Stable Waiting Times
NASA Astrophysics Data System (ADS)
Gajda, Janusz; Wyłomańska, Agnieszka
2012-08-01
One of the earliest systems used for asset price description is the Black-Scholes model. It is based on geometric Brownian motion and was used as a tool for pricing various financial instruments. However, when it comes to data description, geometric Brownian motion is not capable of capturing many properties of present financial markets. One can name here, for instance, periods of constant values. Therefore we propose an alternative approach based on subordinated tempered stable geometric Brownian motion, which is a combination of the popular geometric Brownian motion and an inverse tempered stable subordinator. In this paper we introduce the mentioned process and present its main properties. We also propose an estimation procedure and calibrate the analyzed system to real data.
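For readers unfamiliar with the construction named above, a minimal sketch of the time-changed process follows; the drift convention and symbols are illustrative assumptions rather than the paper's notation.

```latex
% Sketch (illustrative notation): a geometric Brownian motion Y evaluated at the
% inverse S(t) of a tempered stable subordinator T, with T independent of W.
\[
  Y(t) = Y(0)\exp\!\Bigl(\bigl(\mu - \tfrac{\sigma^{2}}{2}\bigr)t + \sigma W(t)\Bigr),
  \qquad
  X(t) = Y\bigl(S(t)\bigr),
  \qquad
  S(t) = \inf\{\tau > 0 : T(\tau) > t\}.
\]
```

The flat stretches of the inverse subordinator S(t) are what reproduce the periods of constant values mentioned in the abstract.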
2008-07-02
CAPE CANAVERAL, Fla. –David Voci, NYIT MOCAP (Motion Capture) team co-director (seated at the workstation in the background) prepares to direct a motion capture session assisted by Kennedy Advanced Visualizations Environment staff led by Brad Lawrence (not pictured) and by Lora Ridgwell from United Space Alliance Human Factors (foreground, left). Ridgwell will help assemble the Orion Crew Module mockup. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collis, Scott; Protat, Alain; May, Peter T.
2013-08-01
Comparisons between direct measurements and modeled values of vertical air motions in precipitating systems are complicated by differences in temporal and spatial scales. On one hand, vertically profiling radars more directly measure the vertical air motion but do not adequately capture full storm dynamics. On the other hand, vertical air motions retrieved from two or more scanning Doppler radars capture the full storm dynamics but require model constraints that may not capture all updraft features because of inadequate sampling, resolution, numerical constraints, and the fact that the storm is evolving as it is scanned by the radars. To investigate the veracity of radar-based retrievals, which can be used to verify numerically modeled vertical air motions, this article presents several case studies from storm events around Darwin, Northern Territory, Australia, in which measurements from a dual-frequency radar profiler system and volumetric radar-based wind retrievals are compared. While a direct comparison was not possible because of instrumentation location, an indirect comparison shows promising results, with volume retrievals comparing well to those obtained from the profiling system. This prompted a statistical analysis of an extended active monsoon period during the Tropical Warm Pool International Cloud Experiment (TWP-ICE). Results show less vigorous deep convective cores, with maximum updraft velocities occurring at lower heights than some cloud-resolving modeling studies suggest.
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In conventional tool positioning techniques, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and image processing technique for motion measurement of a lathe tool from two-dimensional sequential images, captured using a charge coupled device camera with a resolution of 250 microns, is described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of the errors in lathe tool movement due to the machine vision system, calibration, environmental factors, etc. was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
NASA Astrophysics Data System (ADS)
Hayakawa, Tomohiko; Moko, Yushi; Morishita, Kenta; Ishikawa, Masatoshi
2018-04-01
In this paper, we propose a pixel-wise deblurring imaging (PDI) system based on active vision for compensation of the blur caused by high-speed one-dimensional motion between a camera and a target. The optical axis is controlled by back-and-forth motion of a galvanometer mirror to compensate for the motion. The high-spatial-resolution images captured by our system during high-speed motion are useful for efficient and precise visual inspection, such as visually judging abnormal parts of a tunnel surface to prevent accidents; hence, we applied the PDI system to structural health monitoring. By mounting the system onto a vehicle in a tunnel, we confirmed significant improvement in image quality for submillimeter black-and-white stripes and real tunnel-surface cracks at a speed of 100 km/h.
An optimal control strategy for two-dimensional motion camouflage with non-holonomic constraints.
Rañó, Iñaki
2012-07-01
Motion camouflage is a stealth behaviour observed both in hover-flies and in dragonflies. Existing controllers for mimicking motion camouflage generate this behaviour on an empirical basis or without considering the kinematic motion restrictions present in animal trajectories. This study summarises our formal contributions to posing the generation of motion camouflage as a non-linear optimal control problem. The dynamics of the system capture the kinematic restrictions on the motion of the agents, while the performance index ensures camouflage trajectories. An extensive set of simulations supports the technique, and a novel analysis of the obtained trajectories contributes to our understanding of possible mechanisms to obtain sensor-based motion camouflage, for instance, in mobile robots.
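A rough sketch of the two ingredients named above, written in illustrative notation not taken from the paper: unicycle-type (non-holonomic) kinematics of an agent, and the classical motion-camouflage condition that the shadower stays on the line joining a fixed reference point to the target.

```latex
% Illustrative formulation only. Unicycle-type (non-holonomic) kinematics of an agent:
\[
  \dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \omega,
\]
% and the classical motion-camouflage condition: the shadower's position r_p stays on
% the line through a fixed reference point r_0 and the target's position r_e,
\[
  \mathbf{r}_{p}(t) = \mathbf{r}_{0} + \lambda(t)\,\bigl(\mathbf{r}_{e}(t) - \mathbf{r}_{0}\bigr),
  \qquad \lambda(t) > 0 .
\]
```

An optimal control formulation can then penalise deviation from this condition in the performance index while the kinematic constraints enter through the system dynamics.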
Human Age Estimation Method Robust to Camera Sensor and/or Face Movement
Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung
2015-01-01
Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring is usually introduced into face images by the movement of the camera sensor and/or the movement of the face during image acquisition. Therefore, the facial features in captured images can be transformed according to the amount of motion, which causes performance degradation of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient for enhancing age estimation performance compared with systems that do not employ our method. PMID:26334282
Gleadhill, Sam; Lee, James Bruce; James, Daniel
2016-05-03
This research presented and validated a method of assessing postural changes during resistance exercise using inertial sensors. A simple lifting task was broken down into a series of well-defined tasks, which could be examined and measured in a controlled environment. The purpose of this research was to determine whether timing measures obtained from inertial sensor accelerometer outputs are able to provide accurate, quantifiable information on resistance exercise movement patterns. The aim was to complete a timing measure validation of inertial sensor outputs. Eleven participants completed five repetitions of 15 different deadlift variations. Participants were monitored with inertial sensors and an infrared three-dimensional motion capture system. Validation was undertaken using Will Hopkins' Typical Error of the Estimate, with a Pearson's correlation and a Bland-Altman Limits of Agreement analysis. Statistical validation measured the timing agreement during deadlifts between inertial sensor outputs and the motion capture system. Timing validation results demonstrated a Pearson's correlation of 0.9997, with trivial standardised error (0.026) and standardised bias (0.002). Inertial sensors can now be used in practical settings with as much confidence as motion capture systems for accelerometer timing measurements of resistance exercise. This research provides foundations for inertial sensors to be applied to qualitative activity recognition of resistance exercise and safe lifting practices. Copyright © 2016 Elsevier Ltd. All rights reserved.
Verification and compensation of respiratory motion using an ultrasound imaging system.
Chuang, Ho-Chiao; Hsu, Hsiao-Yu; Chiu, Wei-Hung; Tien, Der-Chi; Wu, Ren-Hong; Hsu, Chung-Hsien
2015-03-01
The purpose of this study was to determine if it is feasible to use ultrasound imaging as an aid for moving the treatment couch during diagnosis and treatment procedures associated with radiation therapy, in order to offset organ displacement caused by respiratory motion. A noninvasive ultrasound system was used to replace the C-arm device during diagnosis and treatment with the aims of reducing the x-ray radiation dose on the human body while simultaneously being able to monitor organ displacements. This study used a proposed respiratory compensating system combined with an ultrasound imaging system to monitor the compensation effect of respiratory motion. The accuracy of the compensation effect was verified by fluoroscopy, which means that fluoroscopy could be replaced so as to reduce unnecessary radiation dose on patients. A respiratory simulation system was used to simulate the respiratory motion of the human abdomen, and a strain gauge (respiratory signal acquisition device) was used to capture the simulated respiratory signals. The target displacements could be detected by an ultrasound probe and used as a reference for adjusting the gain value of the respiratory signal used by the respiratory compensating system. This ensured that the amplitude of the respiratory compensation signal was a faithful representation of the target displacement. The results show that performing respiratory compensation with the assistance of the ultrasound images reduced the compensation error of the respiratory compensating system to 0.81-2.92 mm, both for sine-wave input signals with amplitudes of 5, 10, and 15 mm and for human respiratory signals; this represented compensation of the respiratory motion by up to 92.48%. In addition, the respiratory signals of 10 patients were captured in clinical trials, while their diaphragm displacements were observed simultaneously using ultrasound. Using the respiratory compensating system to offset the diaphragm displacement resulted in compensation rates of 60%-84.4%. This study has shown that a respiratory compensating system combined with noninvasive ultrasound can provide real-time compensation of the respiratory motion of patients.
Accurate visible speech synthesis based on concatenating variable length motion capture data.
Ma, Jiyong; Cole, Ron; Pellom, Bryan; Ward, Wayne; Wise, Barbara
2006-01-01
We present a novel approach to synthesizing accurate visible speech based on searching and concatenating optimal variable-length units in a large corpus of motion capture data. Based on a set of visual prototypes selected on a source face and a corresponding set designated for a target face, we propose a machine learning technique to automatically map the facial motions observed on the source face to the target face. In order to model the long-distance coarticulation effects in visible speech, a large-scale corpus that covers the most common syllables in English was collected, annotated and analyzed. For any input text, a search algorithm to locate the optimal sequences of concatenated units for synthesis is described. A new algorithm to adapt lip motions from a generic 3D face model to a specific 3D face model is also proposed. A complete, end-to-end visible speech animation system is implemented based on the approach. This system is currently used in more than 60 kindergarten through third grade classrooms to teach students to read using a lifelike conversational animated agent. To evaluate the quality of the visible speech produced by the animation system, both subjective and objective evaluations were conducted. The evaluation results show that the proposed approach is accurate and powerful for visible speech synthesis.
Stinton, S K; Siebold, R; Freedberg, H; Jacobs, C; Branch, T P
2016-03-01
The purpose of this study was to: (1) determine whether a robotic tibial rotation device and an electromagnetic tracking system could accurately reproduce the clinical dial test at 30° of knee flexion; (2) compare rotation data captured at the footplates of the robotic device to tibial rotation data measured using an electromagnetic sensor on the proximal tibia. Thirty-two unilateral ACL-reconstructed patients were examined using a robotic tibial rotation device that mimicked the dial test. The data reported in this study is only from the healthy legs of these patients. Torque was applied through footplates and was measured using servomotors. Lower leg motion was measured at the foot using the motors. Tibial motion was also measured through an electromagnetic tracking system and a sensor on the proximal tibia. Load-deformation curves representing rotational motion of the foot and tibia were compared using Pearson's correlation coefficients. Off-axis motions including medial-lateral translation and anterior-posterior translation were also measured using the electromagnetic system. The robotic device and electromagnetic system were able to provide axial rotation data and translational data for the tibia during the dial test. Motion measured at the foot was not correlated to motion of the tibial tubercle in internal rotation or in external rotation. The position of the tibial tubercle was 26.9° ± 11.6° more internally rotated than the foot at torque 0 Nm. Medial-lateral translation and anterior-posterior translation were combined to show the path of the tubercle in the coronal plane during tibial rotation. The information captured during a manual dial test includes both rotation of the tibia and proximal tibia translation. All of this information can be captured using a robotic tibial axial rotation device with an electromagnetic tracking system. The pathway of the tibial tubercle during tibial axial rotation can provide additional information about knee instability without relying on side-to-side comparison between knees. The translation of the proximal tibia is important information that must be considered in addition to axial rotation of the tibia when performing a dial test whether done manually or with a robotic device. Instrumented foot position cannot provide the same information. IV.
Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis.
Pfister, Alexandra; West, Alexandre M; Bronner, Shaw; Noah, Jack Adam
2014-07-01
Biomechanical analysis is a powerful tool in the evaluation of movement dysfunction in orthopaedic and neurologic populations. Three-dimensional (3D) motion capture systems are widely used, accurate systems, but are costly and not available in many clinical settings. The Microsoft Kinect™ has the potential to be used as an alternative low-cost motion analysis tool. The purpose of this study was to assess concurrent validity of the Kinect™ with Brekel Kinect software in comparison to Vicon Nexus during sagittal plane gait kinematics. Twenty healthy adults (nine male, 11 female) were tracked while walking and jogging at three velocities on a treadmill. Concurrent hip and knee peak flexion and extension and stride timing measurements were compared between Vicon and Kinect™. Although Kinect measurements were representative of normal gait, the Kinect™ generally under-estimated joint flexion and over-estimated extension. Kinect™ and Vicon hip angular displacement correlation was very low and error was large. Kinect™ knee measurements were somewhat better than hip, but were not consistent enough for clinical assessment. Correlation between Kinect™ and Vicon stride timing was high and error was fairly small. Variability in Kinect™ measurements was smallest at the slowest velocity. The Kinect™ has basic motion capture capabilities and with some minor adjustments will be an acceptable tool to measure stride timing, but sophisticated advances in software and hardware are necessary to improve Kinect™ sensitivity before it can be implemented for clinical use.
Real-time animation software for customized training to use motor prosthetic systems.
Davoodi, Rahman; Loeb, Gerald E
2012-03-01
Research on control of human movement and development of tools for restoration and rehabilitation of movement after spinal cord injury and amputation can benefit greatly from software tools for creating precisely timed animation sequences of human movement. Despite its ability to create sophisticated animation and high quality rendering, existing animation software is not adapted for application to neural prostheses and rehabilitation of human movement. We have developed a software tool known as MSMS (MusculoSkeletal Modeling Software) that can be used to develop models of human or prosthetic limbs and the objects with which they interact and to animate their movement using motion data from a variety of offline and online sources. The motion data can be read from a motion file containing synthesized motion data or recordings from a motion capture system. Alternatively, motion data can be streamed online from a real-time motion capture system, a physics-based simulation program, or any program that can produce real-time motion data. Further, animation sequences of daily life activities can be constructed using the intuitive user interface of Microsoft's PowerPoint software. The latter allows expert and nonexpert users alike to assemble primitive movements into a complex motion sequence with precise timing by simply arranging the order of the slides and editing their properties in PowerPoint. The resulting motion sequence can be played back in an open-loop manner for demonstration and training or in closed-loop virtual reality environments where the timing and speed of animation depend on user inputs. These versatile animation utilities can be used in any application that requires precisely timed animations, but they are particularly suited for research and rehabilitation of movement disorders. MSMS's modeling and animation tools are routinely used in a number of research laboratories around the country to study the control of movement and to develop and test neural prostheses for patients with paralysis or amputations.
Hamiltonian Dynamics of Spider-Type Multirotor Rigid Bodies Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doroshin, Anton V.
2010-03-01
This paper sets out to develop a spider-type multiple-rotor system which can be used for attitude control of spacecraft. The multirotor system contains a large number of rotor-equipped rays, so it was called a 'Spider-type System'; it can also be called a 'Rotary Hedgehog'. These systems allow using spinups and captures of conjugate rotors to perform compound attitude motion of spacecraft. The paper describes a new method of spacecraft attitude reorientation and new mathematical model of motion in Hamilton form. Hamiltonian dynamics of the system is investigated with the help of Andoyer-Deprit canonical variables. These variables allow obtaining exact solutions for hetero- and homoclinic orbits in phase space of the system motion, which are very important for qualitative analysis.
Computational simulation of extravehicular activity dynamics during a satellite capture attempt.
Schaffner, G; Newman, D J; Robinson, S K
2000-01-01
A more quantitative approach to the analysis of astronaut extravehicular activity (EVA) tasks is needed because of their increasing complexity, particularly in preparation for the on-orbit assembly of the International Space Station. Existing useful EVA computer analyses produce either high-resolution three-dimensional computer images based on anthropometric representations or empirically derived predictions of astronaut strength based on lean body mass and the position and velocity of body joints but do not provide multibody dynamic analysis of EVA tasks. Our physics-based methodology helps fill the current gap in quantitative analysis of astronaut EVA by providing a multisegment human model and solving the equations of motion in a high-fidelity simulation of the system dynamics. The simulation work described here improves on the realism of previous efforts by including three-dimensional astronaut motion, incorporating joint stops to account for the physiological limits of range of motion, and incorporating use of constraint forces to model interaction with objects. To demonstrate the utility of this approach, the simulation is modeled on an actual EVA task, namely, the attempted capture of a spinning Intelsat VI satellite during STS-49 in May 1992. Repeated capture attempts by an EVA crewmember were unsuccessful because the capture bar could not be held in contact with the satellite long enough for the capture latches to fire and successfully retrieve the satellite.
NASA Astrophysics Data System (ADS)
Zhang, Xin; Liu, Jinguo
2018-07-01
Although many motion planning strategies for missions involving space robots capturing floating targets can be found in the literature, relatively little has been said about how to select the berth position where the spacecraft base hovers. In fact, the berth position is a flexible and controllable factor, and selecting a suitable berth position has a great impact on improving the efficiency of motion planning in the capture mission. Therefore, to make full use of the manoeuvrability of the space robot, this paper proposes a new viewpoint that utilizes the base berth position as an optimizable parameter to formulate a more comprehensive and effective motion planning strategy. Considering the dynamic coupling, the dynamic singularities, and the physical limitations of space robots, a unified motion planning framework based on forward kinematics and parameter optimization techniques is developed to convert the planning problem into a parameter optimization problem. To relax the strict grasping-position constraints in the capture mission, a new conception of a grasping area is proposed to greatly reduce the difficulty of the motion planning. Furthermore, by utilizing the penalty function method, a new concise objective function is constructed. Here, an intelligent algorithm, Particle Swarm Optimization (PSO), is used as the solver to determine the free parameters. Two capturing cases, i.e., capturing a two-dimensional (2D) planar target and capturing a three-dimensional (3D) spatial target, are studied under this framework. The corresponding simulation results demonstrate that the proposed method is more efficient and effective for planning capture missions.
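A generic particle swarm optimisation loop with a penalty-augmented objective, of the kind referred to above, is sketched below; the toy objective, bounds, and hyperparameters are placeholders rather than the paper's planning formulation.

```python
# Generic PSO over free parameters with a soft (penalty) constraint; illustrative only.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    def objective(p):
        cost = np.sum((p - 0.3) ** 2)                          # placeholder planning cost
        penalty = 100.0 * max(0.0, p[0] + p[1] - 1.0) ** 2     # soft constraint violation
        return cost + penalty
    best, val = pso(objective, dim=3)
    print(best, val)
```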
Resonant Capture and Tidal Evolution in Circumbinary Systems: Testing the Case of Kepler-38
NASA Astrophysics Data System (ADS)
Zoppetti, F. A.; Beaugé, C.; Leiva, A. M.
2018-04-01
Circumbinary planets are thought to form far from the central binary and migrate inwards by interactions with the circumbinary disk, ultimately stopping near their present location either by a planetary trap near the disk inner edge or by resonance capture. Here, we analyze the second possibility, presenting a detailed numerical study on the capture process, resonant dynamics and tidal evolution of circumbinary planets in high-order mean-motion resonances (MMRs). Planetary migration was modeled as an external acceleration in an N-body code, while tidal effects were incorporated with a weak-friction equilibrium tide model. As a working example we chose Kepler-38, a highly evolved system with a planet in the vicinity of the 5/1 MMR. Our simulations show that resonance capture is a high-probability event under a large range of system parameters, although several different resonant configurations are possible. We identified three possible outcomes: aligned librations, anti-aligned librations and chaotic solutions. All were found to be dynamically stable, even after the dissipation of the disk, for time-spans of the order of the system's age. We found that while tidal evolution decreases the binary's separation, the semimajor axis of the planet is driven outwards. Although the net effect is a secular increase in the mean-motion ratio, the system requires a planetary tidal parameter of the order of unity to reproduce the observed orbital configuration. The results presented here open an interesting outlook into the complex dynamics of high-order resonances in circumbinary systems.
Resonant capture and tidal evolution in circumbinary systems: testing the case of Kepler-38
NASA Astrophysics Data System (ADS)
Zoppetti, F. A.; Beaugé, C.; Leiva, A. M.
2018-07-01
Circumbinary planets are thought to form far from the central binary and migrate inwards by interactions with the circumbinary disc, ultimately stopping near their present location either by a planetary trap near the disc inner edge or by resonance capture. Here, we analyse the second possibility, presenting a detailed numerical study on the capture process, resonant dynamics, and tidal evolution of circumbinary planets in high-order mean-motion resonances (MMRs). Planetary migration was modelled as an external acceleration in an N-body code, while tidal effects were incorporated with a weak-friction equilibrium tide model. As a working example, we chose Kepler-38, a highly evolved system with a planet in the vicinity of the 5/1 MMR. Our simulations show that resonance capture is a high-probability event under a large range of system parameters, although several different resonant configurations are possible. We identified three possible outcomes: aligned librations, anti-aligned librations, and chaotic solutions. All were found to be dynamically stable, even after the dissipation of the disc, for time spans of the order of the system's age. We found that while tidal evolution decreases the binary's separation, the semimajor axis of the planet is driven outwards. Although the net effect is a secular increase in the mean-motion ratio, the system requires a planetary tidal parameter of the order of unity to reproduce the observed orbital configuration. The results presented here open an interesting outlook into the complex dynamics of high-order resonances in circumbinary systems.
Miyajima, Saori; Tanaka, Takayuki; Imamura, Yumeko; Kusaka, Takashi
2015-01-01
We estimate lumbar torque based on motion measurement using only three inertial sensors. First, human motion is measured by 6-axis motion tracking devices, each combining a 3-axis accelerometer and a 3-axis gyroscope, placed on the shank, thigh, and back. Next, the lumbar joint torque during the motion is estimated by kinematic musculoskeletal simulation. The conventional method for estimating joint torque uses full-body motion data measured by an optical motion capture system. However, in this research, joint torque is estimated by using only the three link angles of the body, thigh, and shank. The utility of our method was verified by experiments in which we measured the motion of bending the knee and waist simultaneously. As a result, we were able to estimate the lumbar joint torque from the measured motion.
Projectile Motion on an Inclined Misty Surface: I. Capturing and Analysing the Trajectory
ERIC Educational Resources Information Center
Ho, S. Y.; Foong, S. K.; Lim, C. H.; Lim, C. C.; Lin, K.; Kuppan, L.
2009-01-01
Projectile motion is usually the first non-uniform two-dimensional motion that students will encounter in a pre-university physics course. In this article, we introduce a novel technique for capturing the trajectory of projectile motion on an inclined Perspex plane. This is achieved by coating the Perspex with a thin layer of fine water droplets…
FuryExplorer: visual-interactive exploration of horse motion capture data
NASA Astrophysics Data System (ADS)
Wilhelm, Nils; Vögele, Anna; Zsoldos, Rebeka; Licka, Theresia; Krüger, Björn; Bernard, Jürgen
2015-01-01
The analysis of equine motion has a long tradition in human history. Equine biomechanics aims at detecting characteristics of horses indicative of good performance. In veterinary medicine especially, gait analysis plays an important role in diagnostics and in the emerging research on long-term effects of athletic exercise. More recently, the incorporation of motion capture technology has contributed to easier and faster analysis, with a trend from mere observation of horses towards the analysis of multivariate time-oriented data. However, due to the novelty of this topic being raised within an interdisciplinary context, there is as yet a lack of visual-interactive interfaces to facilitate time series data analysis and information discourse for the veterinary and biomechanics communities. In this design study, we bring visual analytics technology into the respective domains, which, to our best knowledge, has never been approached before. Based on requirements developed in the domain characterization phase, we present a visual-interactive system for the exploration of horse motion data. The system provides multiple views which enable domain experts to explore frequent poses and motions, but also to drill down to interesting subsets, possibly containing unexpected patterns. We show the applicability of the system in two exploratory use cases, one on the comparison of different gait motions, and one on the analysis of lameness recovery. Finally, we present the results of a summative user study conducted in the environment of the domain experts. The overall outcome was a significant improvement in effectiveness and efficiency in the analytical workflow of the domain experts.
Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.
Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen
2017-06-01
The article proposes a set of metrics for evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. Further, the quantitative metrics are classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, Fugl-Meyer Assessment, and similar. The metrics are evaluated for a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessment of the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
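Two of the metric families named above can be sketched as follows for time-aligned motion sequences; the array shapes, the diagonal-covariance Gaussian, and the preprocessing are illustrative assumptions and not taken from the article.

```python
# Model-less RMS distance and a simple model-based log-likelihood for a patient motion.
import numpy as np

def rms_distance(patient, reference):
    """patient, reference: (T, D) arrays of time-aligned joint coordinates."""
    return np.sqrt(np.mean(np.sum((patient - reference) ** 2, axis=1)))

def gaussian_log_likelihood(patient, reference_reps):
    """reference_reps: (N, T, D) array of N reference repetitions of the exercise."""
    flat = reference_reps.reshape(len(reference_reps), -1)
    mu = flat.mean(axis=0)                       # mean reference trajectory
    var = flat.var(axis=0) + 1e-6                # per-dimension variance (regularised)
    x = patient.ravel()
    # Diagonal-covariance Gaussian log-likelihood of the patient trajectory.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
```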
Prasad, Nikhil K; Coleman Wood, Krista A; Spinner, Robert J; Kaufman, Kenton R
The assessment of neuromuscular recovery after peripheral nerve surgery has typically been a subjective physical examination. The purpose of this report was to assess the value of gait analysis in documenting recovery quantitatively. A professional football player underwent gait analysis before and after surgery for a peroneal intraneural ganglion cyst causing a left-sided foot drop. Surface electromyography (SEMG) recording from surface electrodes and motion parameter acquisition from a computerized motion capture system consisting of 10 infrared cameras were performed simultaneously. A comparison between SEMG recordings before and after surgery showed a progression from disorganized activation in the left tibialis anterior and peroneus longus muscles to temporally appropriate activation for the phase of the gait cycle. Kinematic analysis of ankle motion planes showed resolution from a complete foot drop preoperatively to phase-appropriate dorsiflexion postoperatively. Gait analysis with dynamic SEMG and motion capture complements physical examination when assessing postoperative recovery in athletes.
Discomfort Evaluation of Truck Ingress/Egress Motions Based on Biomechanical Analysis
Choi, Nam-Chul; Lee, Sang Hun
2015-01-01
This paper presents a quantitative discomfort evaluation method based on biomechanical analysis results for human body movement, as well as its application to an assessment of the discomfort for truck ingress and egress. In this study, the motions of a human subject entering and exiting truck cabins with different types, numbers, and heights of footsteps were first measured using an optical motion capture system and load sensors. Next, the maximum voluntary contraction (MVC) ratios of the muscles were calculated through a biomechanical analysis of the musculoskeletal human model for the captured motion. Finally, the objective discomfort was evaluated using the proposed discomfort model based on the MVC ratios. To validate this new discomfort assessment method, human subject experiments were performed to investigate the subjective discomfort levels through a questionnaire for comparison with the objective discomfort levels. The validation results showed that the correlation between the objective and subjective discomforts was significant and could be described by a linear regression model. PMID:26067194
A Soft Sensor-Based Three-Dimensional (3-D) Finger Motion Measurement System
Park, Wookeun; Ro, Kyongkwan; Kim, Suin; Bae, Joonbum
2017-01-01
In this study, a soft sensor-based three-dimensional (3-D) finger motion measurement system is proposed. The sensors, made of the soft material Ecoflex, comprise embedded microchannels filled with a conductive liquid metal (EGaIn). The superior elasticity, light weight, and sensitivity of soft sensors allow them to be embedded in environments in which conventional sensors cannot. Complicated finger joints, such as the carpometacarpal (CMC) joint of the thumb, are modeled to specify the location of the sensors. Algorithms to decouple the signals from the soft sensors are proposed to extract the pure flexion, extension, abduction, and adduction joint angles. The performance of the proposed system and algorithms is verified by comparison with a camera-based motion capture system. PMID:28241414
Muscle forces analysis in the shoulder mechanism during wheelchair propulsion.
Lin, Hwai-Ting; Su, Fong-Chin; Wu, Hong-Wen; An, Kai-Nan
2004-01-01
This study combines an ergometric wheelchair, a six-camera video motion capture system and a prototype computer graphics based musculoskeletal model (CGMM) to predict shoulder joint loading, muscle contraction force per muscle and the sequence of muscular actions during wheelchair propulsion, and also to provide an animated computer graphics model of the relative interactions. Five healthy male subjects with no history of upper extremity injury participated. A conventional manual wheelchair was equipped with a six-component load cell to collect three-dimensional forces and moments experienced by the wheel, allowing real-time measurement of hand/rim force applied by subjects during normal wheelchair operation. An ExpertVision six-camera video motion capture system collected trajectory data of markers attached on anatomical positions. The CGMM was used to simulate and animate muscle action by using an optimization technique combining observed muscular motions with physiological constraints to estimate muscle contraction forces during wheelchair propulsion. The CGMM provides results that satisfactorily match the predictions of previous work, disregarding minor differences which presumably result from differing experimental conditions, measurement technologies and subjects. Specifically, the CGMM shows that the supraspinatus, infraspinatus, anterior deltoid, pectoralis major and biceps long head are the prime movers during the propulsion phase. The middle and posterior deltoid and supraspinatus muscles are responsible for arm return during the recovery phase. CGMM modelling shows that the rotator cuff and pectoralis major play an important role during wheelchair propulsion, confirming the known risk of injury for these muscles during wheelchair propulsion. The CGMM successfully transforms six-camera video motion capture data into a technically useful and visually interesting animated video model of the shoulder musculoskeletal system. The CGMM further yields accurate estimates of muscular forces during motion, indicating that this prototype modelling and analysis technique will aid in study, analysis and therapy of the mechanics and underlying pathomechanics involved in various musculoskeletal overuse syndromes.
Mjøsund, Hanne Leirbekk; Boyle, Eleanor; Kjaer, Per; Mieritz, Rune Mygind; Skallgård, Tue; Kent, Peter
2017-03-21
Wireless, wearable, inertial motion sensor technology introduces new possibilities for monitoring spinal motion and pain in people during their daily activities of work, rest and play. There are many types of these wireless devices currently available but the precision in measurement and the magnitude of measurement error from such devices is often unknown. This study investigated the concurrent validity of one inertial motion sensor system (ViMove) for its ability to measure lumbar inclination motion, compared with the Vicon motion capture system. To mimic the variability of movement patterns in a clinical population, a sample of 34 people were included - 18 with low back pain and 16 without low back pain. ViMove sensors were attached to each participant's skin at spinal levels T12 and S2, and Vicon surface markers were attached to the ViMove sensors. Three repetitions of end-range flexion inclination, extension inclination and lateral flexion inclination to both sides while standing were measured by both systems concurrently with short rest periods in between. Measurement agreement through the whole movement range was analysed using a multilevel mixed-effects regression model to calculate the root mean squared errors and the limits of agreement were calculated using the Bland Altman method. We calculated root mean squared errors (standard deviation) of 1.82° (±1.00°) in flexion inclination, 0.71° (±0.34°) in extension inclination, 0.77° (±0.24°) in right lateral flexion inclination and 0.98° (±0.69°) in left lateral flexion inclination. 95% limits of agreement ranged between -3.86° and 4.69° in flexion inclination, -2.15° and 1.91° in extension inclination, -2.37° and 2.05° in right lateral flexion inclination and -3.11° and 2.96° in left lateral flexion inclination. We found a clinically acceptable level of agreement between these two methods for measuring standing lumbar inclination motion in these two cardinal movement planes. Further research should investigate the ViMove system's ability to measure lumbar motion in more complex 3D functional movements and to measure changes of movement patterns related to treatment effects.
Markerless identification of key events in gait cycle using image flow.
Vishnoi, Nalini; Duric, Zoran; Gerber, Naomi Lynn
2012-01-01
Gait analysis has been an interesting area of research for several decades. In this paper, we propose image-flow-based methods to compute the motion and velocities of different body segments automatically, using a single inexpensive video camera. We then identify and extract different events of the gait cycle (double-support, mid-swing, toe-off and heel-strike) from video images. Experiments were conducted in which four walking subjects were captured from the sagittal plane. Automatic segmentation was performed to isolate the moving body from the background. The head excursion and the shank motion were then computed to identify the key frames corresponding to different events in the gait cycle. Our approach does not require calibrated cameras or special markers to capture movement. We have also compared our method with the Optotrak 3D motion capture system and found our results in good agreement with the Optotrak results. The development of our method has potential use in the markerless and unencumbered video capture of human locomotion. Monitoring gait in homes and communities provides a useful application for the aged and the disabled. Our method could potentially be used as an assessment tool to determine gait symmetry or to establish the normal gait pattern of an individual.
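The image-flow computation underlying the method above can be illustrated with a generic dense optical-flow estimate averaged over a body-segment region of interest. The OpenCV Farneback flow, the region-of-interest convention, and all parameter values are assumptions for illustration, not the authors' algorithm.

```python
# Mean image-flow velocity inside a region of interest between two consecutive frames.
import cv2
import numpy as np

def mean_flow(prev_gray, next_gray, roi):
    """Average (vx, vy) optical flow in pixels/frame inside roi = (x, y, w, h)."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    x, y, w, h = roi
    patch = flow[y:y + h, x:x + w]
    # Horizontal and vertical components averaged over the segment's region.
    return patch[..., 0].mean(), patch[..., 1].mean()
```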
A novel spinal kinematic analysis using X-ray imaging and vicon motion analysis: a case study.
Noh, Dong K; Lee, Nam G; You, Joshua H
2014-01-01
This study highlights a novel spinal kinematic analysis method and the feasibility of X-ray imaging measurements to accurately assess thoracic spine motion. The advanced X-ray Nash-Moe method and analysis were used to compute the segmental range of motion in thoracic vertebra pedicles in vivo. This Nash-Moe X-ray imaging method was compared with a standardized method using the Vicon 3-dimensional motion capture system. Linear regression analysis showed an excellent and significant correlation between the two methods (R² = 0.99, p < 0.05), suggesting that the analysis of spinal segmental range of motion using X-ray imaging measurements was accurate and comparable to the conventional 3-dimensional motion analysis system. Clinically, this novel finding is compelling evidence demonstrating that measurements with X-ray imaging are useful to accurately decipher pathological spinal alignment and movement impairments in idiopathic scoliosis (IS).
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques for static and dynamic load balancing in vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.
NASA Technical Reports Server (NTRS)
Gonda, Steve R. (Inventor); Tsao, Yow-Min D. (Inventor); Lee, Wenshan (Inventor)
2006-01-01
A gas-liquid separator uses a helical passageway to impart a spiral motion to a fluid passing therethrough. The centrifugal force generated by the spiraling motion urges the liquid component of the fluid radially outward which forces the gas component radially inward. The gas component is then separated through a gas-permeable, liquid-impervious membrane and discharged through a central passageway. A filter material captures target substances contained in the fluid.
Motion detection and compensation in infrared retinal image sequences.
Scharcanski, J; Schardosim, L R; Santos, D; Stuchi, A
2013-01-01
Infrared image data captured by non-mydriatic digital retinography systems are often used in the diagnosis and treatment of diabetic macular edema (DME). Infrared illumination is less aggressive to the patient's retina, and retinal studies can be carried out without pupil dilation. However, sequences of infrared eye fundus images of static scenes tend to present pixel intensity fluctuations over time, and noise and background illumination changes pose a challenge to most motion detection methods proposed in the literature. In this paper, we present a retinal motion detection method that is adaptive to background noise and illumination changes. Our experimental results indicate that this method is suitable for detecting retinal motion in infrared image sequences and for compensating the detected motion, which is relevant in retinal laser treatment systems for DME. Copyright © 2013 Elsevier Ltd. All rights reserved.
2017-04-01
The reporting of research in a manner that allows reproduction in subsequent investigations is important for scientific progress. Several details of the recent study by Patrizi et al., 'Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics', are absent from the published manuscript and make reproduction of findings impossible. As new and complex technologies with great promise for ergonomics develop, new but surmountable challenges for reporting investigations using these technologies in a reproducible manner arise. Practitioner Summary: As with traditional methods, scientific reporting of new and complex ergonomics technologies should be performed in a manner that allows reproduction in subsequent investigations and supports scientific advancement.
A low cost real-time motion tracking approach using webcam technology.
Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh
2015-02-05
Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. Copyright © 2014 Elsevier Ltd. All rights reserved.
A low cost real-time motion tracking approach using webcam technology
Krishnan, Chandramouli; Washabaugh, Edward P.; Seetharaman, Yogesh
2014-01-01
Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject’s limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. PMID:25555306
Quantitative evaluation of toothbrush and arm-joint motion during tooth brushing.
Inada, Emi; Saitoh, Issei; Yu, Yong; Tomiyama, Daisuke; Murakami, Daisuke; Takemoto, Yoshihiko; Morizono, Ken; Iwasaki, Tomonori; Iwase, Yoko; Yamasaki, Youichi
2015-07-01
It is very difficult for dental professionals to objectively assess the tooth brushing skill of patients, because an obvious index for assessing the brushing motion of patients has not been established. The purpose of this study was to quantitatively evaluate toothbrush and arm-joint motion during tooth brushing. Tooth brushing motion, performed by dental hygienists for 15 s, was captured using a motion-capture system that continuously calculates the three-dimensional coordinates of an object's motion relative to the floor. The dental hygienists performed the tooth brushing on the buccal and palatal sides of their right and left upper molars. The frequencies and power spectra of toothbrush motion and the joint angles of the shoulder, elbow, and wrist were calculated and analyzed statistically. The frequency of toothbrush motion was higher on the left side (both buccal and palatal areas) than on the right side. There were no significant differences among joint angle frequencies within each brushing area. The inter- and intra-individual variations of the power spectrum of the elbow flexion angle when brushing were smaller than for any of the other angles. This study quantitatively confirmed that dental hygienists have individually distinctive rhythms during tooth brushing. All arm joints moved synchronously during brushing, and tooth brushing motion was controlled by coordinated movement of the joints. The elbow generated an individual's frequency through a stabilizing movement. The shoulder and wrist control the hand motion, and the elbow generates the cyclic rhythm during tooth brushing.
Venkataraman, Vinay; Turaga, Pavan; Baran, Michael; Lehrer, Nicole; Du, Tingfang; Cheng, Long; Rikakis, Thanassis; Wolf, Steven L.
2016-01-01
In this paper, we propose a general framework for tuning component-level kinematic features using therapists’ overall impressions of movement quality, in the context of a Home-based Adaptive Mixed Reality Rehabilitation (HAMRR) system. We propose a linear combination of non-linear kinematic features to model wrist movement, and propose an approach to learn feature thresholds and weights using high-level labels of overall movement quality provided by a therapist. The kinematic features are chosen such that they correlate with the quality of wrist movements to clinical assessment scores. Further, the proposed features are designed to be reliably extracted from an inexpensive and portable motion capture system using a single reflective marker on the wrist. Using a dataset collected from ten stroke survivors, we demonstrate that the framework can be reliably used for movement quality assessment in HAMRR systems. The system is currently being deployed for large-scale evaluations, and will represent an increasingly important application area of motion capture and activity analysis. PMID:25438331
Vision-based system identification technique for building structures using a motion capture system
NASA Astrophysics Data System (ADS)
Oh, Byung Kwan; Hwang, Jin Woo; Kim, Yousok; Cho, Tongjun; Park, Hyo Seon
2015-11-01
This paper presents a new vision-based system identification (SI) technique for building structures by using a motion capture system (MCS). The MCS with outstanding capabilities for dynamic response measurements can provide gage-free measurements of vibrations through the convenient installation of multiple markers. In this technique, from the dynamic displacement responses measured by MCS, the dynamic characteristics (natural frequency, mode shape, and damping ratio) of building structures are extracted after the processes of converting the displacement from MCS to acceleration and conducting SI by frequency domain decomposition. A free vibration experiment on a three-story shear frame was conducted to validate the proposed technique. The SI results from the conventional accelerometer-based method were compared with those from the proposed technique and showed good agreement, which confirms the validity and applicability of the proposed vision-based SI technique for building structures. Furthermore, SI applying frequency domain decomposition directly to the MCS-measured displacements was also performed and showed results identical to those of the conventional SI method.
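A minimal sketch of the processing chain described in the abstract, converting MCS displacements to accelerations and applying frequency domain decomposition (FDD), is given below; the array shapes, differentiation scheme and spectral estimator are assumptions rather than the authors' exact implementation:

```python
# Minimal FDD sketch, assuming `disp` is an (n_samples, n_channels) array of
# MCS displacements sampled at fs Hz. Not the paper's exact processing.
import numpy as np
from scipy.signal import csd

def fdd(disp, fs, nperseg=1024):
    # Differentiate twice to obtain accelerations, as described in the abstract.
    acc = np.gradient(np.gradient(disp, axis=0), axis=0) * fs**2
    n_ch = acc.shape[1]
    # Build the cross-power spectral density matrix G(f) channel by channel.
    f, _ = csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[:, i], acc[:, j], fs=fs, nperseg=nperseg)
    # SVD at each frequency line: peaks of the first singular value indicate
    # natural frequencies; the corresponding singular vector approximates the mode shape.
    s1 = np.empty(len(f))
    modes = np.empty((len(f), n_ch), dtype=complex)
    for k in range(len(f)):
        U, S, _ = np.linalg.svd(G[k])
        s1[k], modes[k] = S[0], U[:, 0]
    return f, s1, modes
```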
NASA Astrophysics Data System (ADS)
Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc
2012-12-01
In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform makes it possible to train basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teachers' models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms, based on a template matching method, can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students to master the basics of dance figures.
Perceived shifts of flashed stimuli by visible and invisible object motion.
Watanabe, Katsumi; Sato, Takashi R; Shimojo, Shinsuke
2003-01-01
Perceived positions of flashed stimuli can be altered by motion signals in the visual field, a phenomenon known as position capture (Whitney and Cavanagh, 2000 Nature Neuroscience 3 954-959). We examined whether position capture of flashed stimuli depends on the spatial relationship between moving and flashed stimuli, and whether the phenomenal permanence of a moving object behind an occluding surface (tunnel effect; Michotte 1950 Acta Psychologica 7 293-322) can produce position capture. Observers saw two objects (circles) moving vertically in opposite directions, one in each visual hemifield. Two horizontal bars were simultaneously flashed at horizontally collinear positions with the fixation point at various timings. When the movement of the object was fully visible, the flashed bar appeared shifted in the motion direction of the circle. But this position-capture effect occurred only when the bar was presented ahead of or on the moving circle. Even when the motion trajectory was covered by an opaque surface and the bar was flashed after complete occlusion of the circle, the position-capture effect was still observed, though the positional asymmetry was less clear. These results show that movements of both visible and 'hidden' objects can modulate the perception of positions of flashed stimuli and suggest that a high-level representation of 'objects in motion' plays an important role in the position-capture effect.
Sigalov, G; Gendelman, O V; AL-Shudeifat, M A; Manevitch, L I; Vakakis, A F; Bergman, L A
2012-03-01
We show that nonlinear inertial coupling between a linear oscillator and an eccentric rotator can lead to very interesting interchanges between regular and chaotic dynamical behavior. Indeed, we show that this model demonstrates rather unusual behavior from the viewpoint of nonlinear dynamics. Specifically, at a discrete set of values of the total energy, the Hamiltonian system exhibits non-conventional nonlinear normal modes, whose shape is determined by phase locking of rotatory and oscillatory motions of the rotator at integer ratios of characteristic frequencies. Considering the weakly damped system, resonance capture of the dynamics into the vicinity of these modes brings about regular motion of the system. For energy levels far from these discrete values, the motion of the system is chaotic. Thus, the succession of resonance captures and escapes by a discrete set of the normal modes causes a sequence of transitions between regular and chaotic behavior, provided that the damping is sufficiently small. We begin from the Hamiltonian system and present a series of Poincaré sections manifesting the complex structure of the phase space of the considered system with inertial nonlinear coupling. Then an approximate analytical description is presented for the non-conventional nonlinear normal modes. We confirm the analytical results by numerical simulation and demonstrate the alternate transitions between regular and chaotic dynamics mentioned above. The origin of the chaotic behavior is also discussed.
The KIT Motion-Language Dataset.
Plappert, Matthias; Mandery, Christian; Asfour, Tamim
2016-12-01
Linking human motion and natural language is of great interest for the generation of semantic representations of human activities as well as for the generation of robot activities based on natural language input. However, although there have been years of research in this area, no standardized and openly available data set exists to support the development and evaluation of such systems. We, therefore, propose the Karlsruhe Institute of Technology (KIT) Motion-Language Dataset, which is large, open, and extensible. We aggregate data from multiple motion capture databases and include them in our data set using a unified representation that is independent of the capture system or marker set, making it easy to work with the data regardless of its origin. To obtain motion annotations in natural language, we apply a crowd-sourcing approach and a web-based tool that was specifically built for this purpose, the Motion Annotation Tool. We thoroughly document the annotation process itself and discuss gamification methods that we used to keep annotators motivated. We further propose a novel method, perplexity-based selection, which systematically selects motions for further annotation that are either under-represented in our data set or that have erroneous annotations. We show that our method mitigates the two aforementioned problems and ensures a systematic annotation process. We provide an in-depth analysis of the structure and contents of our resulting data set, which, as of October 10, 2016, contains 3911 motions with a total duration of 11.23 hours and 6278 annotations in natural language that contain 52,903 words. We believe this makes our data set an excellent choice that enables more transparent and comparable research in this important area.
Guess, Trent M; Razu, Swithin; Jahandar, Amirhossein; Skubic, Marjorie; Huo, Zhiyu
2017-04-01
The Microsoft Kinect is becoming a widely used tool for inexpensive, portable measurement of human motion, with the potential to support clinical assessments of performance and function. In this study, the relative osteokinematic Cardan joint angles of the hip and knee were calculated using the Kinect 2.0 skeletal tracker. The pelvis segments of the default skeletal model were reoriented and 3-dimensional joint angles were compared with a marker-based system during a drop vertical jump and a hip abduction motion. Good agreement between the Kinect and the marker-based system was found for knee (correlation coefficient = 0.96, cycle RMS error = 11°, peak flexion difference = 3°) and hip (correlation coefficient = 0.97, cycle RMS = 12°, peak flexion difference = 12°) flexion during the landing phase of the drop vertical jump and for hip abduction/adduction (correlation coefficient = 0.99, cycle RMS error = 7°, peak flexion difference = 8°) during isolated hip motion. Nonsagittal hip and knee angles did not correlate well for the drop vertical jump. When limited to activities in the optimal capture volume and with simple modifications to the skeletal model, the Kinect 2.0 skeletal tracker can provide limited 3-dimensional kinematic information of the lower limbs that may be useful for functional movement assessment.
Relative effects of posture and activity on human height estimation from surveillance footage.
Ramstrand, Nerrolyn; Ramstrand, Simon; Brolund, Per; Norell, Kristin; Bergström, Peter
2011-10-10
Height estimations based on security camera footage are often requested by law enforcement authorities. While valid and reliable techniques have been established to determine vertical distances from video frames, there is a discrepancy between a person's true static height and their height as measured when assuming different postures or when in motion (e.g., walking). The aim of the research presented in this report was to accurately record the height of subjects as they performed a variety of activities typically observed in security camera footage and compare results to height recorded using a standard height measuring device. Forty-six able-bodied adults participated in this study and were recorded using a 3D motion analysis system while performing eight different tasks. Height measurements captured using the 3D motion analysis system were compared to static height measurements in order to determine relative differences. It is anticipated that results presented in this report can be used by forensic image analysis experts as a basis for correcting height estimations of people captured on surveillance footage. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Synchronizing MIDI and wireless EEG measurements during natural piano performance.
Zamm, Anna; Palmer, Caroline; Bauer, Anna-Katharina R; Bleichner, Martin G; Demos, Alexander P; Debener, Stefan
2017-07-08
Although music performance has been widely studied in the behavioural sciences, less work has addressed the underlying neural mechanisms, perhaps due to technical difficulties in acquiring high-quality neural data during tasks requiring natural motion. The advent of wireless electroencephalography (EEG) presents a solution to this problem by allowing for neural measurement with minimal motion artefacts. In the current study, we provide the first validation of a mobile wireless EEG system for capturing the neural dynamics associated with piano performance. First, we propose a novel method for synchronously recording music performance and wireless mobile EEG. Second, we provide results of several timing tests that characterize the timing accuracy of our system. Finally, we report EEG time domain and frequency domain results from N=40 pianists demonstrating that wireless EEG data capture the unique temporal signatures of musicians' performances with fine-grained precision and accuracy. Taken together, we demonstrate that mobile wireless EEG can be used to measure the neural dynamics of piano performance with minimal motion constraints. This opens many new possibilities for investigating the brain mechanisms underlying music performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Modelling of the Human Knee Joint Supported by Active Orthosis
NASA Astrophysics Data System (ADS)
Musalimov, V.; Monahov, Y.; Tamre, M.; Rõbak, D.; Sivitski, A.; Aryassov, G.; Penkov, I.
2018-02-01
The article discusses motion of a healthy knee joint in the sagittal plane and motion of an injured knee joint supported by an active orthosis. A kinematic scheme of a mechanism for the simulation of knee joint motion is developed, and the motions of healthy and injured knee joints are modelled in Matlab. Angles between the links, which simulate the femur and tibia, are controlled by a Simulink model predictive control (MPC) block. The results of simulation have been compared with several samples of real motion of the human knee joint obtained from motion capture systems. On the basis of these analyses, and also of the analysis of the forces created in the human lower limbs during motion, an active smart orthosis is developed. The orthosis design was optimized to achieve an energy-saving system with adequate anatomical fit, the necessary reliability, ease of use and low cost. With the orthosis it is possible to unload the knee joint, and also partially or fully compensate for the muscle forces required for bending of the lower limb.
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
Chen, Y C; Lee, H J; Lin, K H
2015-08-01
Range of motion (ROM) is commonly used to assess a patient's joint function in physical therapy. Because motion capture systems are generally very expensive, physical therapists mostly use simple rulers to measure patients' joint angles in clinical diagnosis, which suffers from low accuracy, low reliability, and subjectivity. In this study we used color and depth image features from two low-cost Microsoft Kinect sensors to reconstruct 3D joint positions, and then calculated moveable joint angles to assess the ROM. A Gaussian background model is first used to segment the human body from the depth images. The 3D coordinates of the joints are reconstructed from both color and depth images. To track the location of joints throughout the sequence more precisely, we adopt the mean shift algorithm to find the center of voxels upon the joints. The two Kinect sensors are placed three meters apart, facing the subject. The joint moveable angles and the motion data are calculated from the positions of the joints frame by frame. To verify the results of our system, we take the results from a motion capture system called VICON as the gold standard. Our 150 test results showed that the deviation of joint moveable angles between those obtained by VICON and our system is about 4 to 8 degrees in six different upper limb exercises, which is acceptable in a clinical environment.
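As an illustration of the final step described above, a joint angle can be computed from three reconstructed 3D joint centres per frame; this sketch assumes the joint positions are already available and is not the authors' code:

```python
# Sketch: elbow (or knee) angle from three reconstructed 3D joint centres.
# The Kinect/mean-shift reconstruction itself is not shown; joint positions
# are assumed to be given in metres.
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` between the two adjacent segments."""
    u = np.asarray(proximal) - np.asarray(joint)
    v = np.asarray(distal) - np.asarray(joint)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Example with hypothetical shoulder, elbow and wrist positions from one frame
print(joint_angle([0.0, 1.4, 0.0], [0.0, 1.1, 0.05], [0.0, 0.8, 0.0]))
```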
Evaluation of the Microsoft Kinect as a clinical assessment tool of body sway.
Yeung, L F; Cheng, Kenneth C; Fong, C H; Lee, Winson C C; Tong, Kai-Yu
2014-09-01
Total body center of mass (TBCM) is a useful kinematic measurement of body sway. However, expensive equipment and high technical requirement limit the use of motion capture systems in large-scale clinical settings. Center of pressure (CP) measurement obtained from force plates cannot accurately represent TBCM during large body sway movement. Microsoft Kinect is a rapidly developing, inexpensive, and portable posturographic device, which provides objective and quantitative measurement of TBCM sway. The purpose of this study was to evaluate Kinect as a clinical assessment tool for TBCM sway measurement. The performance of the Kinect system was compared with a Vicon motion capture system and a force plate. Ten healthy male subjects performed four upright quiet standing tasks: (1) eyes open (EOn), (2) eyes closed (ECn), (3) eyes open standing on foam (EOf), and (4) eyes closed standing on foam (ECf). Our results revealed that the Kinect system produced highly correlated measurement of TBCM sway (mean RMSE=4.38 mm; mean CORR=0.94 in Kinect-Vicon comparison), as well as comparable intra-session reliability to Vicon. However, the Kinect device consistently overestimated the 95% CL of sway by about 3mm. This offset could be due to the limited accuracy, resolution, and sensitivity of the Kinect sensors. The Kinect device was more accurate in the medial-lateral than in the anterior-posterior direction, and performed better than the force plate in more challenging balance tasks, such as (ECf) with larger TBCM sway. Overall, Kinect is a cost-effective alternative to a motion capture and force plate system for clinical assessment of TBCM sway. Copyright © 2014 Elsevier B.V. All rights reserved.
Lorenzetti, Silvio; Lamparter, Thomas; Lüthy, Fabian
2017-12-06
The velocity of a barbell can provide important insights on the performance of athletes during strength training. The aim of this work was to assess the validity and reliability of four simple measurement devices compared to 3D motion capture measurements during squatting. Nine participants were assessed when performing 2 × 5 traditional squats with a weight of 70% of the 1 repetition maximum and ballistic squats with a weight of 25 kg. Simultaneously, data was recorded from three linear position transducers (T-FORCE, Tendo Power and GymAware), an accelerometer based system (Myotest) and a 3D motion capture system (Vicon) as the Gold Standard. Correlations between the simple measurement devices and 3D motion capture were calculated for the mean and maximal velocity of the barbell, as well as the time to maximal velocity. The correlations were significant and very high during traditional squats (r = 0.932 to 0.990, p < 0.01) and significant and moderate to high during ballistic squats (r = 0.552 to 0.860, p < 0.01). The Myotest could only be used during the ballistic squats and was less accurate. All the linear position transducers were able to assess squat performance, particularly during traditional squats and especially in terms of mean velocity and time to maximal velocity.
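A minimal sketch of the outcome measures being compared (mean velocity, maximal velocity and time to maximal velocity) computed from a vertical barbell position trace; the variable names and sampling rate are assumptions, not the study's processing pipeline:

```python
# Sketch of barbell velocity metrics from a vertical position trace.
import numpy as np

def squat_velocity_metrics(z, fs):
    """z: vertical barbell position (m) over one concentric phase, sampled at fs Hz."""
    v = np.gradient(z) * fs                  # velocity in m/s
    t = np.arange(len(z)) / fs               # time stamps in s
    return {
        "mean_velocity": float(np.mean(v)),
        "max_velocity": float(np.max(v)),
        "time_to_max_velocity": float(t[int(np.argmax(v))]),
    }
```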
Verification and compensation of respiratory motion using an ultrasound imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chuang, Ho-Chiao, E-mail: hchuang@mail.ntut.edu.tw; Hsu, Hsiao-Yu; Chiu, Wei-Hung
Purpose: The purpose of this study was to determine if it is feasible to use ultrasound imaging as an aid for moving the treatment couch during diagnosis and treatment procedures associated with radiation therapy, in order to offset organ displacement caused by respiratory motion. A noninvasive ultrasound system was used to replace the C-arm device during diagnosis and treatment with the aims of reducing the x-ray radiation dose on the human body while simultaneously being able to monitor organ displacements. Methods: This study used a proposed respiratory compensating system combined with an ultrasound imaging system to monitor the compensation effect of respiratory motion. The accuracy of the compensation effect was verified by fluoroscopy, which means that fluoroscopy could be replaced so as to reduce unnecessary radiation dose on patients. A respiratory simulation system was used to simulate the respiratory motion of the human abdomen and a strain gauge (respiratory signal acquisition device) was used to capture the simulated respiratory signals. The target displacements could be detected by an ultrasound probe and used as a reference for adjusting the gain value of the respiratory signal used by the respiratory compensating system. This ensured that the amplitude of the respiratory compensation signal was a faithful representation of the target displacement. Results: The results show that performing respiratory compensation with the assistance of the ultrasound images reduced the compensation error of the respiratory compensating system to 0.81–2.92 mm, both for sine-wave input signals with amplitudes of 5, 10, and 15 mm, and human respiratory signals; this represented compensation of the respiratory motion by up to 92.48%. In addition, the respiratory signals of 10 patients were captured in clinical trials, while their diaphragm displacements were observed simultaneously using ultrasound. Using the respiratory compensating system to offset the diaphragm displacement resulted in compensation rates of 60%–84.4%. Conclusions: This study has shown that a respiratory compensating system combined with noninvasive ultrasound can provide real-time compensation of the respiratory motion of patients.
Automated video-based assessment of surgical skills for training and evaluation in medical schools.
Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Ploetz, Thomas; Clements, Mark A; Essa, Irfan
2016-09-01
Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All of these approaches, however, are still extremely time-consuming and involve human bias. In this paper, we present an automated system for surgical skills assessment by analyzing video data of surgical activities. We compare different techniques for video-based surgical skill evaluation. We use techniques that capture the motion information at a coarser granularity using symbols or words, extract motion dynamics using textural patterns in a frame kernel matrix, and analyze fine-grained motion information using frequency analysis. We were successfully able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective in capturing the skill relevant information in surgical videos. Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol-/word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity as demonstrated by our results on two challenging video datasets.
Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?
Wouda, Frank J.; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H.
2016-01-01
Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and average joint angle error of 7∘. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses was also investigated, where nearest neighbor search showed better performance for such disturbances. PMID:27983676
Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?
Wouda, Frank J; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H
2016-12-15
Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and average joint angle error of 7∘. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses was also investigated, where nearest neighbor search showed better performance for such disturbances.
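As an illustration of the lazy-learning variant, nearest-neighbour regression can map sparse orientation features to full-body poses; this sketch uses scikit-learn with placeholder feature and pose dimensionalities that are assumptions, not values taken from the paper:

```python
# Sketch of a nearest-neighbour ("lazy learning") mapping from sparse IMU
# orientation features to full-body pose parameters. Dimensionalities and
# training data are placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X_train = rng.standard_normal((5000, 20))    # assumed sparse orientation features (5 IMUs)
Y_train = rng.standard_normal((5000, 66))    # assumed full-body pose parameters

model = KNeighborsRegressor(n_neighbors=5, weights="distance")
model.fit(X_train, Y_train)

x_query = rng.standard_normal((1, 20))       # features from the live sensors
pose_estimate = model.predict(x_query)       # interpolated full-body pose
```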
NASA Astrophysics Data System (ADS)
Kidambi, Narayanan; Harne, Ryan L.; Wang, K. W.
2017-08-01
The remarkable versatility and adaptability of skeletal muscle that arises from the assembly of its nanoscale cross-bridges into micro-scale assemblies known as sarcomeres provides great inspiration for the development of advanced adaptive structures and material systems. Motivated by the capability of cross-bridges to capture elastic strain energy to improve the energetic efficiency of sudden movements and repeated motions, and by models of cross-bridge power stroke motions and sarcomere contractile behaviors that incorporate asymmetric, bistable potential energy landscapes, this research develops and studies modular mechanical structures that trap and store energy in higher-energy configurations. Modules exhibiting tailorable asymmetric bistability are first designed and fabricated, revealing how geometric parameters influence the asymmetry of the resulting double-well energy landscapes. These experimentally-observed characteristics are then investigated with numerical and analytical methods to characterize the dynamics of asymmetrically bistable modules. The assembly of such modules into greater structures generates complex, multi-well energy landscapes with stable system configurations exhibiting different quantities of stored elastic potential energy. Dynamic analyses illustrate the ability of these structures to capture a portion of the initial kinetic energy due to impulsive excitations as recoverable strain potential energy, and reveal how stiffness parameters, damping, and the presence of thermal noise in micro- and nano-scale applications influence energy capture behaviors. The insights gained could foster the development of advanced structural/material systems inspired by skeletal muscle, including actuators that effectively capture, store, and release energy, as well as adaptive, robust, and reusable armors and protective devices.
Handmade Task Tracking Applied to Cognitive Rehabilitation
Cogollor, José M.; Hughes, Charmayne; Ferre, Manuel; Rojo, Javier; Hermsdörfer, Joachim; Wing, Alan; Campo, Sandra
2012-01-01
This article presents research focused on tracking manual tasks that are applied in cognitive rehabilitation so as to analyze the movements of patients who suffer from Apraxia and Action Disorganization Syndrome (AADS). These patients find it difficult to execute Activities of Daily Living (ADL) because of memory loss, a reduced capacity to carry out sequential tasks, or an inability to associate objects with their functions. This contribution is developed from the work of Universidad Politécnica de Madrid and Technical University of Munich in collaboration with The University of Birmingham. The Kinect™ for Windows© device is used for this purpose. The data collected are compared to those from an ultrasonic motion capture system. The results indicate a moderate to strong correlation between signals. They also verify that the Kinect™ is suitable, inexpensive, and easy to implement as a motion-capture system for kinematic analysis of ADL. PMID:23202045
Averaging, passage through resonances, and capture into resonance in two-frequency systems
NASA Astrophysics Data System (ADS)
Neishtadt, A. I.
2014-10-01
Applying small perturbations to an integrable system leads to its slow evolution. For an approximate description of this evolution the classical averaging method prescribes averaging the rate of evolution over all the phases of the unperturbed motion. This simple recipe does not always produce correct results, because of resonances arising in the process of evolution. The phenomenon of capture into resonance consists in the system starting to evolve in such a way as to preserve the resonance property once it has arisen. This paper is concerned with application of the averaging method to a description of evolution in two-frequency systems. It is assumed that the trajectories of the averaged system intersect transversally the level surfaces of the frequency ratio and that certain other conditions of general position are satisfied. The rate of evolution is characterized by a small parameter ε. The main content of the paper is a proof of the following result: outside a set of initial data with measure of order √ε the averaging method describes the evolution to within O(√ε |ln ε|) for periods of time of order 1/ε. This estimate is sharp. The exceptional set of measure √ε contains the initial data for phase points captured into resonance. A description of the motion of such phase points is given, along with a survey of related results on averaging. Examples of capture into resonance are presented for some problems in the dynamics of charged particles. Several open problems are stated. Bibliography: 65 titles.
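For reference, a minimal statement of the averaging prescription referred to in the abstract, written in generic notation that is assumed here rather than taken from the paper:

```latex
% Two-frequency system in standard form (slow variables I, fast phases
% \varphi = (\varphi_1, \varphi_2)) and its averaged counterpart.
\dot I = \varepsilon f(I,\varphi), \qquad
\dot \varphi = \omega(I) + \varepsilon g(I,\varphi), \qquad
I \in \mathbb{R}^n,\ \varphi \in \mathbb{T}^2,
\\[4pt]
\dot J = \varepsilon \langle f \rangle(J), \qquad
\langle f \rangle(J) = \frac{1}{(2\pi)^2}\int_0^{2\pi}\!\!\int_0^{2\pi}
f(J,\varphi)\,d\varphi_1\,d\varphi_2 .
```

In this notation the cited result states that |I(t) − J(t)| = O(√ε |ln ε|) for times of order 1/ε, provided the initial data lie outside an exceptional set of measure of order √ε.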
Asynchronous beating of cilia enhances particle capture rate
NASA Astrophysics Data System (ADS)
Ding, Yang; Kanso, Eva
2014-11-01
Many aquatic micro-organisms use beating cilia to generate feeding currents and capture particles in surrounding fluids. One of the capture strategies is to "catch up" with particles when a cilium is beating towards the overall flow direction (effective stroke) and intercept particles on the downstream side of the cilium. Here, we developed a 3D computational model of a cilia band with prescribed motion in a viscous fluid and calculated the trajectories of particles with different sizes in the fluid. We found an optimal particle diameter that maximizes the capture rate. The flow field and particle motion indicate that the low capture rate of smaller particles is due to the laminar flow in the neighborhood of the cilia, whereas larger particles have to move above the cilia tips to get advected downstream, which decreases their capture rate. We then analyzed the effect of beating coordination between neighboring cilia on the capture rate. Interestingly, we found that asynchrony of the beating of the cilia can enhance the relative motion between a cilium and the particles near it and hence increase the capture rate.
Evaluation of the Microsoft Kinect for screening ACL injury.
Stone, Erik E; Butler, Michael; McRuer, Aaron; Gray, Aaron; Marks, Jeffrey; Skubic, Marjorie
2013-01-01
A study was conducted to evaluate the use of the skeletal model generated by the Microsoft Kinect SDK in capturing four biomechanical measures during the Drop Vertical Jump test. These measures, which include: knee valgus motion from initial contact to peak flexion, frontal plane knee angle at initial contact, frontal plane knee angle at peak flexion, and knee-to-ankle separation ratio at peak flexion, have proven to be useful in screening for future knee anterior cruciate ligament (ACL) injuries among female athletes. A marker-based Vicon motion capture system was used for ground truth. Results indicate that the Kinect skeletal model likely has acceptable accuracy for use as part of a screening tool to identify elevated risk for ACL injury.
Retell, James D; Becker, Stefanie I; Remington, Roger W
2016-01-01
An organism's survival depends on the ability to rapidly orient attention to unanticipated events in the world. Yet, the conditions needed to elicit such involuntary capture remain in doubt. Especially puzzling are spatial cueing experiments, which have consistently shown that involuntary shifts of attention to highly salient distractors are not determined by stimulus properties, but instead are contingent on attentional control settings induced by task demands. Do we always need to be set for an event to be captured by it, or is there a class of events that draw attention involuntarily even when unconnected to task goals? Recent results suggest that a task-irrelevant event will capture attention on first presentation, suggesting that salient stimuli that violate contextual expectations might automatically capture attention. Here, we investigated the role of contextual expectation by examining whether an irrelevant motion cue that was presented only rarely (∼3-6% of trials) would capture attention when observers had an active set for a specific target colour. The motion cue had no effect when presented frequently, but when rare produced a pattern of interference consistent with attentional capture. The critical dependence on the frequency with which the irrelevant motion singleton was presented is consistent with early theories of involuntary orienting to novel stimuli. We suggest that attention will be captured by salient stimuli that violate expectations, whereas top-down goals appear to modulate capture by stimuli that broadly conform to contextual expectations.
Dhont, Jennifer; Vandemeulebroucke, Jef; Burghelea, Manuela; Poels, Kenneth; Depuydt, Tom; Van Den Begin, Robbe; Jaudet, Cyril; Collen, Christine; Engels, Benedikt; Reynders, Truus; Boussaer, Marlies; Gevaert, Thierry; De Ridder, Mark; Verellen, Dirk
2018-02-01
To evaluate the short- and long-term variability of breathing-induced tumor motion. 3D tumor motion of 19 lung and 18 liver lesions captured over the course of an SBRT treatment was evaluated and compared to the motion on 4D-CT. An implanted fiducial could be used for unambiguous motion information. Fast orthogonal fluoroscopy (FF) sequences, included in the treatment workflow, were used to evaluate motion during treatment. Several motion parameters were compared between different FF sequences from the same fraction to evaluate the intrafraction variability. To assess interfraction variability, amplitude and hysteresis were compared between fractions and with the 3D tumor motion registered by 4D-CT. Population-based margins, necessary on top of the ITV to capture all motion variability, were calculated based on the motion captured during treatment. Baseline drift in the cranio-caudal (CC) or anterior-posterior (AP) direction is significant (i.e., >5 mm) for a large group of patients, in contrast to intrafraction amplitude and hysteresis variability. However, a correlation between intrafraction amplitude variability and mean motion amplitude was found (Pearson's correlation coefficient, r = 0.72, p < 10⁻⁴). Interfraction variability in amplitude is significant for 46% of all lesions. As such, 4D-CT accurately captures the motion during treatment for some fractions but not for all. Accounting for motion variability during treatment increases the PTV margins in all directions, most significantly in CC from 5 mm to 13.7 mm for lung and 8.0 mm for liver. Both short-term and day-to-day tumor motion variability can be significant, especially for lesions moving with amplitudes above 7 mm. Abandoning passive motion management strategies in favor of more active ones is advised. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Santos, C. Almeida; Costa, C. Oliveira; Batista, J.
2016-05-01
The paper describes a kinematic model-based solution to estimate simultaneously the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely of long deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming a smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the full motion of the structure (displacement and rotation) over time, helping to fulfil structural health monitoring requirements. Results related to the performance evaluation, obtained by numerical simulation and with real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced structure model to impose controlled motions. In both cases, the results obtained with a minimal setup, comprising only two cameras and four non-coplanar tracking points, showed highly accurate on-line camera calibration and full-motion estimation of the structure.
Takano, Wataru; Kusajima, Ikuo; Nakamura, Yoshihiko
2016-08-01
It is desirable for robots to be able to linguistically understand human actions during human-robot interactions. Previous research has developed frameworks for encoding human full body motion into model parameters and for classifying motion into specific categories. For full understanding, the motion categories need to be connected to natural language such that the robots can interpret human motions as linguistic expressions. This paper proposes a novel framework for integrating observation of human motion with that of natural language. This framework consists of two models; the first model statistically learns the relations between motions and their relevant words, and the second statistically learns sentence structures as word n-grams. Integration of these two models allows robots to generate sentences from human motions by searching for words relevant to the motion using the first model and then arranging these words in appropriate order using the second model. This makes it possible to generate the sentences most likely to correspond to the motion. The proposed framework was tested on human full body motion measured by an optical motion capture system. Descriptive sentences were manually attached to the motions, and the validity of the system was demonstrated. Copyright © 2016 Elsevier Ltd. All rights reserved.
Natural Interaction Based Online Military Boxing Learning System
ERIC Educational Resources Information Center
Yang, Chenglei; Wang, Lu; Sun, Bing; Yin, Xu; Wang, Xiaoting; Liu, Li; Lu, Lin
2013-01-01
Military boxing, a kind of Chinese martial arts, is widespread and health beneficial. In this paper, the authors introduce a military boxing learning system realized by 3D motion capture, Web3D and 3D interactive technologies. The interactions with the system are natural and intuitive. Users can observe and learn the details of each action of the…
Scalable sensing electronics towards a motion capture suit
NASA Astrophysics Data System (ADS)
Xu, Daniel; Gisby, Todd A.; Xie, Shane; Anderson, Iain A.
2013-04-01
Being able to accurately record body motion allows complex movements to be characterised and studied. This is especially important in the film or sport coaching industry. Unfortunately, the human body has over 600 skeletal muscles, giving rise to multiple degrees of freedom. In order to accurately capture motion such as hand gestures, elbow or knee flexion and extension, vast numbers of sensors are required. Dielectric elastomer (DE) sensors are an emerging class of electroactive polymer (EAP) that is soft, lightweight and compliant. These characteristics are ideal for a motion capture suit. One challenge is to design sensing electronics that can simultaneously measure multiple sensors. This paper describes a scalable capacitive sensing device that can measure up to 8 different sensors with an update rate of 20Hz.
Robot arm system for automatic satellite capture and berthing
NASA Technical Reports Server (NTRS)
Nishida, Shinichiro; Toriu, Hidetoshi; Hayashi, Masato; Kubo, Tomoaki; Miyata, Makoto
1994-01-01
Load control is one of the most important technologies for capturing and berthing free-flying satellites by a space robot arm because free-flying satellites have different motion rates. The performance of active compliance control techniques depends on the location of the force sensor and the arm's structural compliance. This paper presents a compliance control technique that accounts for the robot arm's structural elasticity, together with considerations for an end-effector appropriate for it.
Faber, G S; Chang, C C; Kingma, I; Dennerlein, J T; van Dieën, J H
2016-04-11
Inertial motion capture (IMC) systems have become increasingly popular for ambulatory movement analysis. However, few studies have attempted to use these measurement techniques to estimate kinetic variables, such as joint moments and ground reaction forces (GRFs). Therefore, we investigated the performance of a full-body ambulatory IMC system in estimating 3D L5/S1 moments and GRFs during symmetric, asymmetric and fast trunk bending, performed by nine male participants. Using an ambulatory IMC system (Xsens/MVN), L5/S1 moments were estimated based on the upper-body segment kinematics using a top-down inverse dynamics analysis, and GRFs were estimated based on full-body segment accelerations. As a reference, a laboratory measurement system was utilized: GRFs were measured with Kistler force plates (FPs), and L5/S1 moments were calculated using a bottom-up inverse dynamics model based on FP data and lower-body kinematics measured with an optical motion capture system (OMC). Correspondence between the OMC+FP and IMC systems was quantified by calculating root-mean-square errors (RMS errors) of moment/force time series and the interclass correlation (ICC) of the absolute peak moments/forces. Averaged over subjects, L5/S1 moment RMS errors remained below 10 Nm (about 5% of the peak extension moment) and 3D GRF RMS errors remained below 20 N (about 2% of the peak vertical force). ICCs were high for the peak L5/S1 extension moment (0.971) and vertical GRF (0.998). Due to lower amplitudes, smaller ICCs were found for the peak asymmetric L5/S1 moments (0.690-0.781) and horizontal GRFs (0.559-0.948). In conclusion, close correspondence was found between the ambulatory IMC-based and laboratory-based estimates of back load. Copyright © 2015 Elsevier Ltd. All rights reserved.
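A minimal sketch of the correspondence measures named above, namely the RMS error between an ambulatory (IMC) and a laboratory (OMC+FP) time series and the absolute peak values that enter the ICC analysis; the variable names are assumptions and the ICC computation itself is not reproduced:

```python
# Sketch of the time-series RMS error and absolute-peak extraction used to
# compare IMC estimates against the laboratory reference.
import numpy as np

def rms_error(imc, reference):
    """Root-mean-square error between two equally sampled time series."""
    imc, reference = np.asarray(imc), np.asarray(reference)
    return float(np.sqrt(np.mean((imc - reference) ** 2)))

def absolute_peak(signal):
    """Signed value of the sample with the largest magnitude (e.g. peak moment)."""
    signal = np.asarray(signal)
    return float(signal[np.argmax(np.abs(signal))])
```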
Xiao, Xiao; Li, Wei; Clawson, Corbin; Karvani, David; Sondag, Perceval; Hahn, James K
2018-01-01
The study aimed to develop a motion capture system that can track, visualize, and analyze the entire performance of self-injection with the auto-injector. Each of nine healthy subjects and 29 rheumatoid arthritic (RA) patients with different degrees of hand disability performed two simulated injections into an injection pad while six degrees of freedom (DOF) motions of the auto-injector and the injection pad were captured. We quantitatively measured the performance of the injection by calculating needle displacement from the motion trajectories. The max, mean, and SD of needle displacement were analyzed. Device acceptance and usability were evaluated by a survey questionnaire and independent observations of compliance with the device instructions for use (IFU). A total of 80 simulated injections were performed. Our results showed a similar level of performance among all the subjects with slightly larger, but not statistically significant, needle displacement in the RA group. In particular, no significant effects regarding previous experience in self-injection, grip method, pain in hand, and Cochin score in the RA group were found to have an impact on the mean needle displacement. Moreover, the analysis of needle displacement for different durations of injections indicated that most of the subjects reached their personal maximum displacement in 15 seconds and remained steady or exhibited a small amount of increase from 15 to 60 seconds. Device acceptance was high for most of the questions (i.e., >4; >80%) based on a 0-5-point scale or percentage of acceptance. The overall compliance with the device IFU was high for the first injection (96.05%) and reached 98.02% for the second injection. We demonstrated the feasibility of tracking the motions of injection to measure the performance of simulated self-injection. The comparisons of needle displacement showed that even RA patients with severe hand disability could properly perform self-injection with this auto-injector at a level similar to that of the healthy subjects. Finally, the observed high device acceptance and compliance with the device IFU suggest that the system is convenient and easy to use.
Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes
Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide
2017-01-01
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889
In-vivo confirmation of the use of the dart thrower's motion during activities of daily living.
Brigstocke, G H O; Hearnden, A; Holt, C; Whatling, G
2014-05-01
The dart thrower's motion is a wrist rotation along an oblique plane from radial extension to ulnar flexion. We report an in-vivo study to confirm the use of the dart thrower's motion during activities of daily living. Global wrist motion in ten volunteers was recorded using a three-dimensional optoelectronic motion capture system, in which digital infra-red cameras track the movement of retro-reflective marker clusters. Global wrist motion has been approximated to the dart thrower's motion when hammering a nail, throwing a ball, drinking from a glass, pouring from a jug and twisting the lid of a jar, but not when combing hair or manipulating buttons. The dart thrower's motion is the plane of global wrist motion used during most activities of daily living. Arthrodesis of the radiocarpal joint instead of the midcarpal joint will allow better wrist function during most activities of daily living by preserving the dart thrower's motion.
A three-dimensional autonomous nonlinear dynamical system modelling equatorial ocean flows
NASA Astrophysics Data System (ADS)
Ionescu-Kruse, Delia
2018-04-01
We investigate a nonlinear three-dimensional model for equatorial flows, finding exact solutions that capture the most relevant geophysical features: depth-dependent currents, poleward or equatorial surface drift and a vertical mixture of upward and downward motions.
Interaction of Perceptual Grouping and Crossmodal Temporal Capture in Tactile Apparent-Motion
Chen, Lihan; Shi, Zhuanghua; Müller, Hermann J.
2011-01-01
Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can “capture” visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and the event structure, modulating uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive, left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were shown for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from −75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs—one short (75 ms), one long (325 ms)—were compared. The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to that in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect and, instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects. PMID:21383834
Trajectory of coronary motion and its significance in robotic motion cancellation.
Cattin, Philippe; Dave, Hitendu; Grünenfelder, Jürg; Szekely, Gabor; Turina, Marko; Zünd, Gregor
2004-05-01
To characterize remaining coronary artery motion of beating pig hearts after stabilization with an 'Octopus' using an optical remote analysis technique. Three pigs (40, 60 and 65 kg) underwent full sternotomy after receiving general anesthesia. An 8-bit high speed black and white video camera (50 frames/s) coupled with a laser sensor (60 microm resolution) were used to capture heart wall motion in all three dimensions. Dopamine infusion was used to deliberately modulate cardiac contractility. Synchronized ECG, blood pressure, airway pressure and video data of the region around the first branching point of the left anterior descending (LAD) coronary artery after Octopus stabilization were captured for stretches of 8 s each. Several sequences of the same region were captured over a period of several minutes. Computerized off-line analysis allowed us to perform minute characterization of the heart wall motion. The movement of the points of interest on the LAD ranged from 0.22 to 0.81 mm in the lateral plane (x/y-axis) and 0.5-2.6 mm out of the plane (z-axis). Fast excursions (>50 microm/s in the lateral plane) occurred corresponding to the QRS complex and the T wave; while slow excursion phases (<50 microm/s in the lateral plane) were observed during the P wave and the ST segment. The trajectories of the points of interest during consecutive cardiac cycles as well as during cardiac cycles minutes apart remained comparable (the differences were negligible), provided the hemodynamics remained stable. Inotrope-induced changes in cardiac contractility influenced not only the maximum excursion, but also the shape of the trajectory. Normal positive pressure ventilation displacing the heart in the thoracic cage was evident by the displacement of the reference point of the trajectory. The movement of the coronary artery after stabilization appears to be still significant. Minute characterization of the trajectory of motion could provide the substrate for achieving motion cancellation for existing robotic systems. Velocity plots could also help improve gated cardiac imaging.
Sparse Coding of Natural Human Motion Yields Eigenmotions Consistent Across People
NASA Astrophysics Data System (ADS)
Thomik, Andreas; Faisal, A. Aldo
2015-03-01
Providing a precise mathematical description of the structure of natural human movement is a challenging problem. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of spatial and temporal structure of behaviour observed in daily life. In perception, the analysis of natural scenes has shown that sparse codes of such scenes are information theoretic efficient descriptors with direct neuronal correlates. Translating from perception to action, we identify a generative model of movement generation by the human motor system. Using wearable full-hand motion capture, we measure the digit movement of the human hand in daily life. We learn a dictionary of "eigenmotions" which we use for sparse encoding of the movement data. We show that the dictionaries are generally well preserved across subjects with small deviations accounting for individuality of the person and variability in tasks. Further, the dictionary elements represent motions which can naturally describe hand movements. Our findings suggest the motor system can compose complex movement behaviours out of the spatially and temporally sparse activation of "eigenmotion" neurons, and is consistent with data on grasp-type specificity of specialised neurons in the premotor cortex. Andreas is supported by the Luxemburg Research Fund (1229297).
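A minimal sketch of the sparse-coding step, learning a dictionary of "eigenmotions" from windows of hand motion data; scikit-learn's dictionary learner stands in for whatever solver the study used, and the data shapes and hyperparameters are assumptions:

```python
# Sketch: learn a motion dictionary by sparse coding. X would be a matrix of
# short windows of joint-angle trajectories; here it is random placeholder data.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 60))       # (n_windows, n_features), placeholder

learner = MiniBatchDictionaryLearning(
    n_components=30,                      # number of "eigenmotions" (assumed)
    alpha=1.0,                            # sparsity penalty (assumed)
    transform_algorithm="lasso_lars",
    random_state=0,
)
codes = learner.fit(X).transform(X)       # sparse activations of each eigenmotion
eigenmotions = learner.components_        # learned dictionary, one primitive per row
```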
Multimodal Speech Capture System for Speech Rehabilitation and Learning.
Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam
2017-11-01
Speech-language pathologists (SLPs) are trained to correct articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcome while patients speak. To assist SLPs in this task, we are presenting the multimodal speech capture system (MSCS) that records and displays kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. Collected speech modalities, tongue motion, lips gestures, and voice are visualized not only in real-time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components, and demonstrate its basic visualization capabilities by a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose, and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms to be applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods that are mostly subjective, and may vary from one SLP to another.
High-resolution Doppler model of the human gait
NASA Astrophysics Data System (ADS)
Geisheimer, Jonathan L.; Greneker, Eugene F., III; Marshall, William S.
2002-07-01
A high resolution Doppler model of the walking human was developed for analyzing the continuous wave (CW) radar gait signature. Data for twenty subjects were collected simultaneously using an infrared motion capture system along with a two channel 10.525 GHz CW radar. The motion capture system recorded three-dimensional coordinates of infrared markers placed on the body. These body marker coordinates were used as inputs to create the theoretical Doppler output using a model constructed in MATLAB. The outputs of the model are the simulated Doppler signals due to each of the major limbs and the thorax. An estimated radar cross section for each part of the body was assigned using the Lund & Browder chart of estimated body surface area. The resultant Doppler model was then compared with the actual recorded Doppler gait signature in the frequency domain using the spectrogram. Comparison of the two sets of data has revealed several identifiable biomechanical features in the radar gait signature due to leg and body motion. The result of the research shows that a wealth of information can be unlocked from the radar gait signature, which may be useful in security and biometric applications.
Two-particle microrheology of quasi-2D viscous systems.
Prasad, V; Koehler, S A; Weeks, Eric R
2006-10-27
We study the spatially correlated motions of colloidal particles in a quasi-2D system (human serum albumin protein molecules at an air-water interface) for different surface viscosities η_s. We observe a transition in the behavior of the correlated motion, from 2D interface dominated at high η_s to bulk fluid dependent at low η_s. The correlated motions can be scaled onto a master curve which captures the features of this transition. This master curve also characterizes the spatial dependence of the flow field of a viscous interface in response to a force. The scale factors used for the master curve allow for the calculation of the surface viscosity η_s that can be compared to one-particle measurements.
Dynamic Metasurface Aperture as Smart Around-the-Corner Motion Detector.
Del Hougne, Philipp; F Imani, Mohammadreza; Sleasman, Timothy; Gollub, Jonah N; Fink, Mathias; Lerosey, Geoffroy; Smith, David R
2018-04-25
Detecting and analysing motion is a key feature of Smart Homes and the connected sensor vision they embrace. At present, most motion sensors operate in line-of-sight Doppler shift schemes. Here, we propose an alternative approach suitable for indoor environments, which effectively constitute disordered cavities for radio frequency (RF) waves; we exploit the fundamental sensitivity of modes of such cavities to perturbations, caused here by moving objects. We establish experimentally three key features of our proposed system: (i) ability to capture the temporal variations of motion and discern information such as periodicity ("smart"), (ii) non line-of-sight motion detection, and (iii) single-frequency operation. Moreover, we explain theoretically and demonstrate experimentally that the use of dynamic metasurface apertures can substantially enhance the performance of RF motion detection. Potential applications include accurately detecting human presence and monitoring inhabitants' vital signs.
Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L
2009-06-15
Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To answer this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real-time and up to 100fps if video recordings are captured to be later analyzed off-line. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real-time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.
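The paper converts per-camera 2D marker coordinates into 3D positions with a quadratic fitting algorithm; as a simpler stand-in, the sketch below shows a plain linear least-squares (DLT-style) triangulation from two calibrated cameras. The projection matrices and the test point are made-up values, not the system's calibration.

```python
# Hedged stand-in for the 2D-to-3D step: linear least-squares triangulation.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3D point from pixel coordinates (u, v) seen in two cameras."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                                   # null-space vector, homogeneous point
    return X[:3] / X[3]

# Toy projection matrices (camera intrinsics/extrinsics are assumptions).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 3.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))             # ~ [0.2, -0.1, 3.0]
```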
Pupil Tracking for Real-Time Motion Corrected Anterior Segment Optical Coherence Tomography
Carrasco-Zevallos, Oscar M.; Nankivil, Derek; Viehland, Christian; Keller, Brenton; Izatt, Joseph A.
2016-01-01
Volumetric acquisition with anterior segment optical coherence tomography (ASOCT) is necessary to obtain accurate representations of the tissue structure and to account for asymmetries of the anterior eye anatomy. Additionally, recent interest in imaging of anterior segment vasculature and aqueous humor flow resulted in application of OCT angiography techniques to generate en face and 3D micro-vasculature maps of the anterior segment. Unfortunately, ASOCT structural and vasculature imaging systems do not capture volumes instantaneously and are subject to motion artifacts due to involuntary eye motion that may hinder their accuracy and repeatability. Several groups have demonstrated real-time tracking for motion-compensated in vivo OCT retinal imaging, but these techniques are not applicable in the anterior segment. In this work, we demonstrate a simple and low-cost pupil tracking system integrated into a custom swept-source OCT system for real-time motion-compensated anterior segment volumetric imaging. Pupil oculography hardware coaxial with the swept-source OCT system enabled fast detection and tracking of the pupil centroid. The pupil tracking ASOCT system with a field of view of 15 x 15 mm achieved diffraction-limited imaging over a lateral tracking range of +/- 2.5 mm and was able to correct eye motion at up to 22 Hz. Pupil tracking ASOCT offers a novel real-time motion compensation approach that may facilitate accurate and reproducible anterior segment imaging. PMID:27574800
Design and implementation of modular home security system with short messaging system
NASA Astrophysics Data System (ADS)
Budijono, Santoso; Andrianto, Jeffri; Axis Novradin Noor, Muhammad
2014-03-01
Today, in the 21st century, crime is increasing and everyone wants to secure the assets in their home. In this situation, users need a system with advanced technology so that they do not have to worry when they are away from home. The purpose of this design is therefore to provide a home security device that quickly sends information to the user's GSM (Global System for Mobile) mobile device using SMS (Short Messaging System) and can also be activated and deactivated by SMS. The modular design of this Home Security System makes its capabilities expandable by adding more sensors to the system. The hardware of this system has been designed using an ATmega328 microcontroller, a PIR (Passive Infra Red) motion sensor as the primary sensor for motion detection, a camera for capturing images, a GSM module for sending and receiving SMS, and a buzzer for the alarm. The software uses the Arduino IDE for the Arduino and PuTTY for testing the connection to and programming of the GSM module. This Home Security System can monitor the home area surrounding the PIR sensor, send SMS messages, save images captured by the camera, and sound the buzzer when trespassing in the surrounding area is detected by the PIR sensor. The Modular Home Security System has been tested and successfully detects human movement.
NASA Astrophysics Data System (ADS)
Thienphrapa, Paul; Ramachandran, Bharat; Elhawary, Haytham; Taylor, Russell H.; Popovic, Aleksandra
2012-02-01
Free moving bodies in the heart pose a serious health risk as they may be released in the arteries causing blood flow disruption. These bodies may be the result of various medical conditions and trauma. The conventional approach to removing these objects involves open surgery with sternotomy, the use of cardiopulmonary bypass, and a wide resection of the heart muscle. We advocate a minimally invasive surgical approach using a flexible robotic end effector guided by 3D transesophageal echocardiography. In a phantom study, we track a moving body in a beating heart using a modified normalized cross-correlation method, with mean RMS errors of 2.3 mm. We previously found the foreign body motion to be fast and abrupt, rendering infeasible a retrieval method based on direct tracking. We proposed a strategy based on guiding a robot to the most spatially probable location of the fragment and securing it upon its reentry to said location. To improve efficacy in the context of a robotic retrieval system, we extend this approach by exploring multiple candidate capture locations. Salient locations are identified based on spatial probability, dwell time, and visit frequency; secondary locations are also examined. Aggregate results indicate that the location of highest spatial probability (50% occupancy) is distinct from the longest-dwelled location (0.84 seconds). Such metrics are vital in informing the design of a retrieval system and capture strategies, and they can be computed intraoperatively to select the best capture location based on constraints such as workspace, time, and device manipulability. Given the complex nature of fragment motion, the ability to analyze multiple capture locations is a desirable capability in an interventional system.
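The capture-location metrics named above (spatial occupancy, dwell time, visit frequency) are straightforward to compute from a tracked fragment trajectory. The sketch below is a toy version under assumed values: the grid cell size, frame rate, and synthetic trajectory are not from the study.

```python
# Hedged sketch: occupancy, dwell time and visit count for a tracked fragment.
import numpy as np

rng = np.random.default_rng(1)
fs = 30.0                                        # tracking frame rate (assumed)
traj = np.cumsum(rng.standard_normal((3000, 3)) * 0.5, axis=0) % 20.0  # toy positions, mm

cell = 2.0                                       # grid cell size in mm (assumed)
idx = np.floor(traj / cell).astype(int)
cells, counts = np.unique(idx, axis=0, return_counts=True)
occupancy = counts / len(traj)                   # fraction of frames spent in each cell

# Dwell time: longest run of consecutive frames inside the most-occupied cell.
best = cells[np.argmax(counts)]
inside = np.all(idx == best, axis=1).astype(int)
runs = np.diff(np.flatnonzero(np.diff(np.r_[0, inside, 0])))[::2]
print("max occupancy: %.2f" % occupancy.max())
print("longest dwell at that cell: %.2f s" % (runs.max() / fs))
print("visit count:", runs.size)
```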
Quality control procedures for dynamic treatment delivery techniques involving couch motion.
Yu, Victoria Y; Fahimian, Benjamin P; Xing, Lei; Hristov, Dimitre H
2014-08-01
In this study, the authors introduce and demonstrate quality control procedures for evaluating the geometric and dosimetric fidelity of dynamic treatment delivery techniques involving treatment couch motion synchronous with gantry and multileaf collimator (MLC). Tests were designed to evaluate positional accuracy, velocity constancy and accuracy for dynamic couch motion under a realistic weight load. A test evaluating the geometric accuracy of the system in delivering treatments over complex dynamic trajectories was also devised. Custom XML scripts that control the Varian TrueBeam™ STx (Serial #3) axes in Developer Mode were written to implement the delivery sequences for the tests. Delivered dose patterns were captured with radiographic film or the electronic portal imaging device. The couch translational accuracy in dynamic treatment mode was 0.01 cm. Rotational accuracy was within 0.3°, with 0.04 cm displacement of the rotational axis. Dose intensity profiles capturing the velocity constancy and accuracy for translations and rotation exhibited standard deviation and maximum deviations below 3%. For complex delivery involving MLC and couch motions, the overall translational accuracy for reproducing programmed patterns was within 0.06 cm. The authors conclude that in Developer Mode, TrueBeam™ is capable of delivering dynamic treatment delivery techniques involving couch motion with good geometric and dosimetric fidelity.
Derivation of capture probabilities for the corotation eccentric mean motion resonances
NASA Astrophysics Data System (ADS)
El Moutamid, Maryame; Sicardy, Bruno; Renner, Stéfan
2017-08-01
We study in this paper the capture of a massless particle into an isolated, first-order corotation eccentric resonance (CER), in the framework of the planar, eccentric and restricted three-body problem near a m + 1: m mean motion commensurability (m integer). While capture into Lindblad eccentric resonances (where the perturber's orbit is circular) has been investigated years ago, capture into CER (where the perturber's orbit is elliptic) has not yet been investigated in detail. Here, we derive the generic equations of motion near a CER in the general case where both the perturber and the test particle migrate. We derive the probability of capture in that context, and we examine more closely two particular cases: (I) if only the perturber is migrating, capture is possible only if the migration is outward from the primary. Notably, the probability of capture is independent of the way the perturber migrates outward; (II) if only the test particle is migrating, then capture is possible only if the algebraic value of its migration rate is a decreasing function of orbital radius. In this case, the probability of capture is proportional to the radial gradient of migration. These results differ from the capture into Lindblad eccentric resonance (LER), where it is necessary that the orbits of the perturber and the test particle converge for capture to be possible.
Banach, Marzena; Wasilewska, Agnieszka; Dlugosz, Rafal; Pauk, Jolanta
2018-05-18
Due to the problem of aging societies, there is a need for smart buildings to monitor and support people with various disabilities, including rheumatoid arthritis. The aim of this paper is to elaborate on novel techniques for wireless motion capture systems for the monitoring and rehabilitation of disabled people for application in smart buildings. The proposed techniques are based on cross-verification of distance measurements between markers and transponders in an environment with highly variable parameters. To verify them, algorithms were developed that enable comprehensive investigation of a system with different numbers of transponders and varying ambient parameters (temperature and noise). Various linear and nonlinear filters were used to estimate the real positions of the markers. Several thousand tests were carried out for various system parameters and different marker locations. The results show that localization error may be reduced by as much as 90%. It was observed that repetition of measurement reduces localization error by as much as one order of magnitude. The proposed system, based on wireless techniques, offers high commercial potential. However, it requires extensive cooperation between teams, including hardware and software design, system modelling, and architectural design.
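The core estimation step, recovering a marker position from noisy marker-to-transponder distances and reducing error by repeating measurements, can be sketched as a least-squares trilateration. The transponder layout, noise level, and repetition counts below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: least-squares trilateration with measurement repetition.
import numpy as np
from scipy.optimize import least_squares

transponders = np.array([[0, 0, 3], [6, 0, 3], [0, 6, 3], [6, 6, 3], [3, 3, 0]], float)
marker_true = np.array([2.0, 4.0, 1.2])
rng = np.random.default_rng(2)

def estimate(n_repeats, sigma=0.05):
    d = np.linalg.norm(transponders - marker_true, axis=1)
    d_meas = d + rng.normal(0, sigma, size=(n_repeats, len(d)))
    d_avg = d_meas.mean(axis=0)                  # repetition averages down the noise
    res = least_squares(lambda p: np.linalg.norm(transponders - p, axis=1) - d_avg,
                        x0=np.array([3.0, 3.0, 1.5]))
    return np.linalg.norm(res.x - marker_true)

for n in (1, 10, 100):
    print(f"{n:3d} repeats -> localization error {estimate(n) * 1000:.1f} mm")
```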
A Single Camera Motion Capture System for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Okada, Ryuzo; Stenger, Björn
This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.
Capture of visual direction in dynamic vergence is reduced with flashed monocular lines.
Jaschinski, Wolfgang; Jainta, Stephanie; Schürer, Michael
2006-08-01
The visual direction of a continuously presented monocular object is captured by the visual direction of a closely adjacent binocular object, which questions the reliability of nonius lines for measuring vergence. This was shown by Erkelens, C. J., and van Ee, R. (1997a,b) [Capture of the visual direction: An unexpected phenomenon in binocular vision. Vision Research, 37, 1193-1196; Capture of the visual direction of monocular objects by adjacent binocular objects. Vision Research, 37, 1735-1745], who stimulated dynamic vergence by a counter-phase oscillation of two square random-dot patterns (one to each eye) that contained a smaller central dot-free gap (of variable width) with a vertical monocular line oscillating in phase with the random-dot pattern of the respective eye; subjects adjusted the motion-amplitude of the line until it was perceived as (nearly) stationary. With a continuously presented monocular line, we replicated capture of visual direction provided the dot-free gap was narrow: the adjusted motion-amplitude of the line was similar to the motion-amplitude of the random-dot pattern, although large vergence errors occurred. However, when we flashed the line for 67 ms at the moments of maximal and minimal disparity of the vergence stimulus, we found that the adjusted motion-amplitude of the line was smaller; thus, the capture effect appeared to be reduced with flashed nonius lines. Accordingly, we found that the objectively measured vergence gain was significantly correlated (r=0.8) with the motion-amplitude of the flashed monocular line when the separation between the line and the fusion contour was at least 32 min arc. In conclusion, if one wishes to estimate the dynamic vergence response with psychophysical methods, effects of capture of visual direction can be reduced by using flashed nonius lines.
Biomechanical analysis of the circular friction hand massage.
Ryu, Jeseong; Son, Jongsang; Ahn, Soonjae; Shin, Isu; Kim, Youngho
2015-01-01
A massage can be beneficial to relieve muscle tension on the neck and shoulder area. Various massage systems have been developed, but their motions are not uniform throughout different body parts nor specifically targeted to the neck and shoulder areas. Pressure pattern and finger movement trajectories of the circular friction hand massage on trapezius, levator scapulae, and deltoid muscles were determined to develop a massage system that can mimic the motion and the pressure of the circular friction massage. During the massage, finger movement trajectories were measured using a 3D motion capture system, and finger pressures were simultaneously obtained using a grip pressure sensor. Results showed that each muscle had different finger movement trajectory and pressure pattern. The trapezius muscle experienced a higher pressure, longer massage time (duration of pressurization), and larger pressure-time integral than the other muscles. These results could be useful to design a better massage system simulating human finger movements.
3D video-based deformation measurement of the pelvis bone under dynamic cyclic loading
2011-01-01
Background Dynamic three-dimensional (3D) deformation of the pelvic bones is a crucial factor in the successful design and longevity of complex orthopaedic oncological implants. The current solutions are often not very promising for the patient; thus it would be interesting to measure the dynamic 3D-deformation of the whole pelvic bone in order to get a more realistic dataset for a better implant design. We therefore hypothesized that it would be possible to combine a material testing machine with a 3D video motion capturing system, as used in clinical gait analysis, to measure the sub-millimetre deformation of a whole pelvis specimen. Method A pelvis specimen was placed in a standing position on a material testing machine. Passive reflective markers, traceable by the 3D video motion capturing system, were fixed to the bony surface of the pelvis specimen. While applying a dynamic sinusoidal load, the 3D-movement of the markers was recorded by the cameras and afterwards the 3D-deformation of the pelvis specimen was computed. The accuracy of the 3D-movement of the markers was verified against a 3D-displacement curve with a step function generated by a manually driven 3D micro-motion-stage. Results The resulting accuracy of the measurement system depended on the number of cameras tracking a marker. During the stationary phase of the calibration procedure, the noise level for a marker seen by two cameras was ± 0.036 mm, and ± 0.022 mm if tracked by six cameras. The detectable 3D-movement performed by the 3D-micro-motion-stage was smaller than the noise level of the 3D-video motion capturing system. Therefore the limiting factor of the setup was the noise level, which resulted in a measurement accuracy for the dynamic test setup of ± 0.036 mm. Conclusion This 3D test setup opens new possibilities in the dynamic testing of a wide range of materials, such as anatomical specimens, biomaterials, and their combinations. The resulting 3D-deformation dataset can be used for a better estimation of material characteristics of the underlying structures. This is an important factor in reliable biomechanical modelling and simulation as well as in the successful design of complex implants. PMID:21762533
Gyroscope-reduced inertial navigation system for flight vehicle motion estimation
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiao, Lu
2017-01-01
In this paper, a novel configuration of strategically distributed accelerometer sensors with the aid of one gyro to infer a flight vehicle's angular motion is presented. The MEMS accelerometer and gyro sensors are integrated to form a gyroscope-reduced inertial measurement unit (GR-IMU). The motivation for the gyro-aided accelerometer array is to obtain direct measurements of angular rates, which is an improvement over the traditional gyroscope-free inertial system that employs only direct measurements of specific force. Some technical issues regarding error calibration of the accelerometers and gyro in the GR-IMU are also addressed. The GR-IMU based inertial navigation system can be used to find a complete attitude solution for flight vehicle motion estimation. Results of numerical simulation are given to illustrate the effectiveness of the proposed configuration. The gyroscope-reduced inertial navigation system based on distributed accelerometer sensors can be developed into a cost-effective solution for a fast reaction, MEMS-based motion capture system. Future work will include the aid from external navigation references (e.g., GPS) to improve long-duration mission performance.
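The basic idea of inferring angular motion from an accelerometer array can be sketched with the rigid-body relation a_i = a_o + α × r_i + ω × (ω × r_i): with ω taken as known (e.g., from the single gyro), the angular acceleration α and the reference-point acceleration a_o follow from linear least squares over the array. The sensor layout and the true motion used to synthesize the readings below are assumptions, not the paper's configuration.

```python
# Hedged sketch: recover angular acceleration from a distributed accelerometer array.
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

r = np.array([[0.05, 0, 0], [0, 0.05, 0], [0, 0, 0.05], [-0.05, -0.05, 0]])  # sensor positions (m)
omega = np.array([0.4, -0.2, 1.0])               # rad/s, assumed known from the gyro
alpha_true = np.array([2.0, -1.0, 0.5])          # rad/s^2 (to be recovered)
a_o_true = np.array([0.1, 0.0, 9.81])

# Synthesize accelerometer readings from the rigid-body relation.
a = np.array([a_o_true + np.cross(alpha_true, ri) + np.cross(omega, np.cross(omega, ri))
              for ri in r])

# Stack the linear system [I, -skew(r_i)] [a_o; alpha] = a_i - omega x (omega x r_i).
A = np.vstack([np.hstack([np.eye(3), -skew(ri)]) for ri in r])
b = np.concatenate([ai - np.cross(omega, np.cross(omega, ri)) for ai, ri in zip(a, r)])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered angular acceleration:", x[3:])   # ~ alpha_true
```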
Computational modeling of magnetic nanoparticle targeting to stent surface under high gradient field
Wang, Shunqiang; Zhou, Yihua; Tan, Jifu; Xu, Jiang; Yang, Jie; Liu, Yaling
2014-01-01
A multi-physics model was developed to study the delivery of magnetic nanoparticles (MNPs) to the stent-implanted region under an external magnetic field. The model is firstly validated by experimental work in literature. Then, effects of external magnetic field strength, magnetic particle size, and flow velocity on MNPs’ targeting and binding have been analyzed through a parametric study. Two new dimensionless numbers were introduced to characterize relative effects of Brownian motion (BM), magnetic force induced particle motion, and convective blood flow on MNPs motion. It was found that larger magnetic field strength, bigger MNP size, and slower flow velocity increase the capture efficiency of MNPs. The distribution of captured MNPs on the vessel along axial and azimuthal directions was also discussed. Results showed that the MNPs density decreased exponentially along axial direction after one-dose injection while it was uniform along azimuthal direction in the whole stented region (averaged over all sections). For the beginning section of the stented region, the density ratio distribution of captured MNPs along azimuthal direction is center-symmetrical, corresponding to the center-symmetrical distribution of magnetic force in that section. Two different generation mechanisms are revealed to form four main attraction regions. These results could serve as guidelines to design a better magnetic drug delivery system. PMID:24653546
High-resolution motion-compensated imaging photoplethysmography for remote heart rate monitoring
NASA Astrophysics Data System (ADS)
Chung, Audrey; Wang, Xiao Yu; Amelard, Robert; Scharfenberger, Christian; Leong, Joanne; Kulinski, Jan; Wong, Alexander; Clausi, David A.
2015-03-01
We present a novel non-contact photoplethysmographic (PPG) imaging system based on high-resolution video recordings of ambient reflectance of human bodies that compensates for body motion and takes advantage of skin erythema fluctuations to improve measurement reliability for the purpose of remote heart rate monitoring. A single measurement location for recording the ambient reflectance is automatically identified on an individual, and the motion for the location is determined over time via measurement location tracking. Based on the determined motion information, motion-compensated reflectance measurements at different wavelengths for the measurement location can be acquired, thus providing more reliable measurements for the same location on the human over time. The reflectance measurement is used to determine skin erythema fluctuations over time, resulting in the capture of a PPG signal with a high signal-to-noise ratio. To test the efficacy of the proposed system, a set of experiments involving human motion in a front-facing position were performed under natural ambient light. The experimental results demonstrated that skin erythema fluctuations can achieve noticeably improved average accuracy in heart rate measurement when compared to previously proposed non-contact PPG imaging systems.
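The measurement chain described above (one reflectance sample per frame from the tracked region, removal of slow trends, and a spectral estimate of heart rate) can be sketched as below. The frame rate, band edges, and synthetic signal are assumptions; the actual system uses multi-wavelength erythema measurements rather than this toy trace.

```python
# Hedged sketch: heart-rate estimation from a tracked-ROI reflectance signal.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 60.0                                        # camera frame rate (assumed)
t = np.arange(0, 30, 1 / fs)
hr_true = 72 / 60.0                              # 72 bpm
roi_signal = (0.02 * np.sin(2 * np.pi * hr_true * t)         # cardiac component
              + 0.5 * np.sin(2 * np.pi * 0.1 * t)             # slow illumination/motion drift
              + 0.01 * np.random.default_rng(4).standard_normal(t.size))

b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="bandpass")
ppg = filtfilt(b, a, roi_signal - roi_signal.mean())

spectrum = np.abs(np.fft.rfft(ppg))
freqs = np.fft.rfftfreq(ppg.size, 1 / fs)
print("estimated heart rate: %.1f bpm" % (60 * freqs[np.argmax(spectrum)]))
```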
Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael
2013-09-01
Electromagnetic motion-tracking systems have the advantage of capturing the spatiotemporal kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electromagnetic sensor signals even at a distance of 18 cm. Our back projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.
Construction of exact constants of motion and effective models for many-body localized systems
NASA Astrophysics Data System (ADS)
Goihl, M.; Gluza, M.; Krumnow, C.; Eisert, J.
2018-04-01
One of the defining features of many-body localization is the presence of many quasilocal conserved quantities. These constants of motion constitute a cornerstone to an intuitive understanding of much of the phenomenology of many-body localized systems arising from effective Hamiltonians. They may be seen as local magnetization operators smeared out by a quasilocal unitary. However, accurately identifying such constants of motion remains a challenging problem. Current numerical constructions often capture the conserved operators only approximately, thus restricting a conclusive understanding of many-body localization. In this work, we use methods from the theory of quantum many-body systems out of equilibrium to establish an alternative approach for finding a complete set of exact constants of motion which are in addition guaranteed to represent Pauli-z operators. By this we are able to construct and investigate the proposed effective Hamiltonian using exact diagonalization. Hence, our work provides an important tool expected to further boost inquiries into the breakdown of transport due to quenched disorder.
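For reference, the effective Hamiltonians alluded to above are commonly written in the "l-bit" form of the many-body localization literature, with the exact constants of motion entering as dressed Pauli-z operators. This is the standard generic form, not necessarily the precise parametrization used in the paper:

```latex
H_{\mathrm{eff}} = \sum_i \xi_i\, \tau_i^{z}
  + \sum_{i<j} J_{ij}\, \tau_i^{z}\tau_j^{z}
  + \sum_{i<j<k} J_{ijk}\, \tau_i^{z}\tau_j^{z}\tau_k^{z} + \cdots,
\qquad [\tau_i^{z}, H_{\mathrm{eff}}] = 0 .
```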
Key features of hip hop dance motions affect evaluation by judges.
Sato, Nahoko; Nunome, Hiroyuki; Ikegami, Yasuo
2014-06-01
The evaluation of hip hop dancers presently lacks clearly defined criteria and is often dependent on the subjective impressions of judges. Our study objective was to extract hidden motion characteristics that could potentially distinguish the skill levels of hip hop dancers and to examine the relationship between performance kinematics and judging scores. Eleven expert, six nonexpert, and nine novice dancers participated in the study, where each performed the "wave" motion as an experimental task. The movements of their upper extremities were captured by a motion capture system, and several kinematic parameters including the propagation velocity of the wave were calculated. Twelve judges evaluated the performances of the dancers, and we compared the kinematic parameters of the three groups and examined the relationship between the judging scores and the kinematic parameters. We found the coefficient of variation of the propagation velocity to be significantly different among the groups (P < .01) and highly correlated with the judging scores (r = -0.800, P < .01). This revealed that the variation of propagation velocity was the most dominant variable representing the skill level of the dancers and that the smooth propagation of the wave was most closely related to the evaluation by judges.
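The headline metric, the coefficient of variation of the wave propagation velocity, is simple to compute per trial. The sketch below uses made-up velocity samples to contrast a smooth and an uneven wave; only the metric itself follows the description above.

```python
# Hedged sketch: coefficient of variation (CV) of wave propagation velocity.
import numpy as np

def cv(x):
    """Relative dispersion of the propagation velocity samples."""
    x = np.asarray(x, float)
    return x.std(ddof=1) / x.mean()

expert_velocity = [1.10, 1.05, 1.12, 1.08, 1.07]   # m/s, smooth propagation (toy values)
novice_velocity = [0.60, 1.40, 0.90, 1.70, 0.80]   # m/s, uneven propagation (toy values)
print("expert CV: %.2f   novice CV: %.2f" % (cv(expert_velocity), cv(novice_velocity)))
```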
ERIC Educational Resources Information Center
Su, Chung-Ho; Cheng, Ching-Hsue
2016-01-01
This study aims to explore the factors in a patient's rehabilitation achievement after a total knee replacement (TKR) patient exercises, using a PCA-ANFIS emotion model-based game rehabilitation system, which combines virtual reality (VR) and motion capture technology. The researchers combine a principal component analysis (PCA) and an adaptive…
Real-time physics-based 3D biped character animation using an inverted pendulum model.
Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee
2010-01-01
We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
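The velocity-driven tracking idea, computing joint torques from the error between desired and simulated joint angular velocities rather than from PD gains on angles, can be illustrated on a single toy joint. The gain, inertia, time step, and one-joint plant below are assumptions, not the paper's full-body controller.

```python
# Hedged sketch: velocity-driven joint-torque tracking on a one-DOF toy joint.
import numpy as np

dt, k_v, inertia = 1 / 120, 4.0, 0.05            # s, N*m*s/rad, kg*m^2 (assumed)
t = np.arange(0, 2, dt)
theta_des = 0.5 * np.sin(2 * np.pi * 1.0 * t)    # desired joint angle (e.g. adjusted mocap)
omega_des = np.gradient(theta_des, dt)           # desired joint angular velocity

theta, omega = 0.0, 0.0
log = []
for k in range(t.size):
    torque = k_v * (omega_des[k] - omega)        # velocity-driven control law
    omega += (torque / inertia) * dt             # forward dynamics of the toy joint
    theta += omega * dt
    log.append(theta)

print("RMS tracking error: %.4f rad" % np.sqrt(np.mean((np.array(log) - theta_des) ** 2)))
```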
Wang, Ao; Song, Qiang; Ji, Bingqiang; Yao, Qiang
2015-12-01
As a key mechanism of submicron particle capture in wet deposition and wet scrubbing processes, thermophoresis is influenced by the flow and temperature fields. Three-dimensional direct numerical simulations were conducted to quantify the characteristics of the flow and temperature fields around a droplet at three droplet Reynolds numbers (Re) that correspond to three typical boundary-layer-separation flows (steady axisymmetric, steady plane-symmetric, and unsteady plane-symmetric flows). The thermophoretic motion of submicron particles was simulated in these cases. Numerical results show that the motion of submicron particles around the droplet and the deposition distribution exhibit different characteristics under three typical flow forms. The motion patterns of particles are dependent on their initial positions in the upstream and flow forms. The patterns of particle motion and deposition are diversified as Re increases. The particle motion pattern, initial position of captured particles, and capture efficiency change periodically, especially during periodic vortex shedding. The key effects of flow forms on particle motion are the shape and stability of the wake behind the droplet. The drag force of fluid and the thermophoretic force in the wake contribute jointly to the deposition of submicron particles after the boundary-layer separation around a droplet.
A proposal for a new definition of the axial rotation angle of the shoulder joint.
Masuda, Tadashi; Ishida, Akimasa; Cao, Lili; Morita, Sadao
2008-02-01
The Euler/Cardan angles are commonly used to define the motions of the upper arm with respect to the trunk. This definition, however, has a problem in that the angles of both the horizontal flexion/extension and the axial rotation of the shoulder joint become unstable at the gimbal-lock positions. In this paper, a new definition of the axial rotation angle was proposed. The proposed angle was stable over the entire range of the shoulder motion. With the new definition, the neutral position of the axial rotation agreed with that in the conventional anatomy. The advantage of the new definition was demonstrated by measuring actual complex motions of the shoulder with a three-dimensional motion capture system.
Flies and humans share a motion estimation strategy that exploits natural scene statistics
Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.
2014-01-01
Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
Biofidelic Human Activity Modeling and Simulation with Large Variability
2014-11-25
A systematic approach was developed for biofidelic human activity modeling and simulation by using body scan data and motion capture data to...replicate a human activity in 3D space. Since technologies for simultaneously capturing human motion and dynamic shapes are not yet ready for practical use, a...that can replicate a human activity in 3D space with the true shape and true motion of a human. Using this approach, a model library was built to
Motion capture based identification of the human body inertial parameters.
Venture, Gentiane; Ayusawa, Ko; Nakamura, Yoshihiko
2008-01-01
Identification of body segment inertias, masses, and centers of mass provides important data for simulating, monitoring, and understanding the dynamics of motion, and for personalizing rehabilitation programs. This paper proposes an original method to identify the inertial parameters of the human body, making use of motion capture data and contact force measurements. It allows painless in-vivo estimation and monitoring of the inertial parameters. The method is described, and the experimental results obtained are then presented and discussed.
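The identification exploits the fact that the dynamics are linear in the inertial parameters, so they can be estimated by least squares from motion capture and contact forces. The sketch below is a much-reduced toy version that identifies only a total mass and a force-plate offset from a CoM acceleration trace and vertical ground reaction force; the paper identifies full segment-wise parameters, and all numerical values here are made up.

```python
# Hedged, reduced sketch: least-squares identification of inertial parameters.
import numpy as np

g, m_true, bias_true = 9.81, 68.0, 5.0
rng = np.random.default_rng(5)
t = np.arange(0, 10, 0.01)

a_z = 1.5 * np.sin(2 * np.pi * 0.8 * t)                              # vertical CoM acceleration
f_z = m_true * (a_z + g) + bias_true + rng.normal(0, 2.0, t.size)    # measured vertical GRF

# Regressor linear in the unknown parameters [mass, offset].
Y = np.column_stack([a_z + g, np.ones_like(t)])
params, *_ = np.linalg.lstsq(Y, f_z, rcond=None)
print("estimated mass: %.1f kg, plate offset: %.1f N" % tuple(params))
```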
Ostaszewski, Michal; Pauk, Jolanta
2018-05-16
Gait analysis is a useful tool medical staff use to support clinical decision making. There is still an urgent need to develop low-cost and unobtrusive mobile health monitoring systems. The goal of this study was twofold. Firstly, a wearable sensor system composed of plantar pressure insoles and wearable sensors for joint angle measurement was developed. Secondly, the accuracy of the system in the measurement of ground reaction forces and joint moments was examined. The measurements included joint angles and plantar pressure distribution. To validate the wearable sensor system and examine the effectiveness of the proposed method for gait analysis, an experimental study on ten volunteer subjects was conducted. The accuracy of measurement of ground reaction forces and joint moments was validated against the results obtained from a reference motion capture system. Ground reaction forces and joint moments measured by the wearable sensor system showed a root mean square error of 1% for min. GRF and 27.3% for knee extension moment. The correlation coefficient was over 0.9, in comparison with the stationary motion capture system. The study suggests that the wearable sensor system could be recommended both for research and clinical applications outside a typical gait laboratory.
G-DYN Multibody Dynamics Engine
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Broderick, Daniel
2011-01-01
G-DYN is a multi-body dynamic simulation software engine that automatically assembles and integrates equations of motion for arbitrarily connected multibody dynamic systems. The algorithm behind G-DYN is based on a primal-dual formulation of the dynamics that captures the position and velocity vectors (primal variables) of each body and the interaction forces (dual variables) between bodies, which are particularly useful for control and estimation analysis and synthesis. It also takes full advantage of the sparse matrix structure resulting from the system dynamics to numerically integrate the equations of motion efficiently. Furthermore, the dynamic model for each body can easily be replaced without re-deriving the overall equations of motion, and the assembly of the equations of motion is done automatically. G-DYN proved to be an essential software tool in the simulation of spacecraft systems used for small celestial body surface sampling, specifically in simulating touch-and-go (TAG) maneuvers of a robotic sampling system from a comet and asteroid. It is used extensively in validating mission concepts for small body sample return, such as the Comet Odyssey and Galahad New Frontiers proposals.
NASA Astrophysics Data System (ADS)
Duffy, M.; Richardson, T. J.; Craythorne, E.; Mallipeddi, R.; Coleman, A. J.
2014-02-01
A system has been developed to assess the feasibility of using motion tracking to enable pre-surgical margin mapping of basal cell carcinoma (BCC) in the clinic using optical coherence tomography (OCT). This system consists of a commercial OCT imaging system (the VivoSight 1500, MDL Ltd., Orpington, UK), which has been adapted to incorporate a webcam and a single-sensor electromagnetic positional tracking module (the Flock of Birds, Ascension Technology Corp, Vermont, USA). A supporting software interface has also been developed which allows positional data to be captured and projected onto a 2D dermoscopic image in real-time. Initial results using a stationary test phantom are encouraging, with maximum errors in the projected map in the order of 1-2mm. Initial clinical results were poor due to motion artefact, despite attempts to stabilise the patient. However, the authors present several suggested modifications that are expected to reduce the effects of motion artefact and improve the overall accuracy and clinical usability of the system.
A motion sensing-based framework for robotic manipulation.
Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing
2016-01-01
To date, outside of controlled environments, robots normally perform manipulation tasks in cooperation with humans. This pattern requires robot operators to have extensive technical training on varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through a novel and natural gesture interface, inspired us to adopt this user-friendly and straightforward mode of operation for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the actions of robots. For compatibility, a general hardware interface layer was also developed in the framework. Simulation and physical experiments have been conducted for preliminary validation. The results show that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.
Hidden marker position estimation during sit-to-stand with walker.
Yoon, Sang Ho; Jun, Hong Gul; Dan, Byung Ju; Jo, Byeong Rim; Min, Byung Hoon
2012-01-01
Motion capture analysis of the sit-to-stand task with an assistive device is hard to achieve due to occlusion of reflective markers. A previously developed robotic system, the Smart Mobile Walker, is used as an assistive device to perform motion capture analysis of the sit-to-stand task. All lower-limb markers except the hip markers are invisible throughout the whole session. The link-segment and regression method is applied to estimate the marker positions during sit-to-stand. Applying this new method, the lost marker positions are restored and the biomechanical evaluation of the sit-to-stand movement with the Smart Mobile Walker could be carried out. The accuracy of the marker position estimation is verified with normal sit-to-stand data from more than 30 clinical trials. Moreover, further research on improving the link-segment and regression method is addressed.
Chanpimol, Shane; Seamon, Bryant; Hernandez, Haniel; Harris-Love, Michael; Blackman, Marc R
2017-01-01
Motion capture virtual reality-based rehabilitation has become more common. However, therapists face challenges to the implementation of virtual reality (VR) in clinical settings. Use of motion capture technology such as the Xbox Kinect may provide a useful rehabilitation tool for the treatment of postural instability and cardiovascular deconditioning in individuals with chronic severe traumatic brain injury (TBI). The primary purpose of this study was to evaluate the effects of a Kinect-based VR intervention using commercially available motion capture games on balance outcomes for an individual with chronic TBI. The secondary purpose was to assess the feasibility of this intervention for eliciting cardiovascular adaptations. A single system experimental design (n = 1) was utilized, which included baseline, intervention, and retention phases. Repeated measures were used to evaluate the effects of an 8-week supervised exercise intervention using two Xbox One Kinect games. Balance was characterized using the dynamic gait index (DGI), functional reach test (FRT), and Limits of Stability (LOS) test on the NeuroCom Balance Master. The LOS assesses end-point excursion (EPE), maximal excursion (MXE), and directional control (DCL) during weight-shifting tasks. Cardiovascular and activity measures were characterized by heart rate at the end of exercise (HRe), total gameplay time (TAT), and time spent in a therapeutic heart rate (TTR) during the Kinect intervention. Chi-square and ANOVA testing were used to analyze the data. Dynamic balance, characterized by the DGI, increased during the intervention phase, χ²(1, N = 12) = 12, p = .001. Static balance, characterized by the FRT, showed no significant changes. The EPE increased during the intervention phase in the backward direction, χ²(1, N = 12) = 5.6, p = .02, and notable improvements of DCL were demonstrated in all directions. HRe (F(2,174) = 29.65, p < .001) and time in a TTR (F(2,12) = 4.19, p = .04) decreased over the course of the intervention phase. Use of a supervised Kinect-based program that incorporated commercial games improved dynamic balance for an individual post severe TBI. Additionally, moderate cardiovascular activity was achieved through motion capture gaming. Further studies appear warranted to determine the potential therapeutic utility of commercial VR games in this patient population. Clinicaltrial.gov ID - NCT02889289.
Store-and-feedforward adaptive gaming system for hand-finger motion tracking in telerehabilitation.
Lockery, Daniel; Peters, James F; Ramanna, Sheela; Shay, Barbara L; Szturm, Tony
2011-05-01
This paper presents a telerehabilitation system that encompasses a webcam and store-and-feedforward adaptive gaming system for tracking finger-hand movement of patients during local and remote therapy sessions. Gaming-event signals and webcam images are recorded as part of a gaming session and then forwarded to an online healthcare content management system (CMS) that separates incoming information into individual patient records. The CMS makes it possible for clinicians to log in remotely and review gathered data using online reports that are provided to help with signal and image analysis using various numerical measures and plotting functions. Signals from a 6 degree-of-freedom magnetic motion tracking system provide a basis for video-game sprite control. The MMT provides a path for motion signals between common objects manipulated by a patient and a computer game. During a therapy session, a webcam that captures images of the hand together with a number of performance metrics provides insight into the quality, efficiency, and skill of a patient.
A reduced basis method for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Vincent-Finley, Rachel Elisabeth
In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid body and large-scale motions, occur within a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use principal component analysis (PCA) to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity with respect to our simulation method is analogous to that observed in the standard MD simulation with simulations on the order of picoseconds.
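The trajectory-analysis step described above, flattening MD frames into coordinate vectors and extracting dominant collective motions with PCA, can be sketched with standard tools. The frame count, atom count, and the synthetic trajectory with one injected slow mode are assumptions used only to make the example self-contained.

```python
# Hedged sketch: PCA of a (synthetic) molecular dynamics trajectory.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
n_frames, n_atoms = 500, 120
slow_mode = np.sin(np.linspace(0, 6 * np.pi, n_frames))[:, None]
traj = (slow_mode * rng.standard_normal((1, 3 * n_atoms))       # one dominant collective motion
        + 0.1 * rng.standard_normal((n_frames, 3 * n_atoms)))   # fast, small-amplitude noise

pca = PCA(n_components=10)
reduced = pca.fit_transform(traj)                 # reduced coordinates for each frame
print("variance captured by first component: %.2f" % pca.explained_variance_ratio_[0])
print("reduced representation shape:", reduced.shape)
```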
Storage, retrieval, and edit of digital video using Motion JPEG
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Lee, D. H.
1994-04-01
In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, this system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video in a storage medium, an IBM Bus master SCSI adapter with cache is utilized. Efficacy of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension to Micro Channel bus masters. We show experimental results that the overall system can perform at compressed data rates of about 1.5 MBytes/second sustained and with sporadic peaks to about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats. This in turn permits creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward and slow motion playback. The proposed method can be extended for design of a video compression subsystem for a variety of personal computing systems.
Development of method for quantifying essential tremor using a small optical device.
Chen, Kai-Hsiang; Lin, Po-Chieh; Chen, Yu-Jung; Yang, Bing-Shiang; Lin, Chin-Hsien
2016-06-15
Clinical assessment scales are the most common means used by physicians to assess tremor severity. Some scientific tools that may be able to replace these scales to objectively assess the severity, such as accelerometers, digital tablets, electromyography (EMG) measurement devices, and motion capture cameras, are currently available. However, most of the operational modes of these tools are relatively complex or are only able to capture part of the clinical information; furthermore, using these tools is sometimes time consuming. Currently, there is no tool available for automatically quantifying tremor severity in clinical environments. We aimed to develop a rapid, objective, and quantitative system for measuring the severity of finger tremor using a small portable optical device (Leap Motion). A single test took 15s to conduct, and three algorithms were proposed to quantify the severity of finger tremor. The system was tested with four patients diagnosed with essential tremor. The proposed algorithms were able to quantify different characteristics of tremor in clinical environments, and could be used as references for future clinical assessments. A portable, easy-to-use, small-sized, and noncontact device (Leap Motion) was used to clinically detect and record finger movement, and three algorithms were proposed to describe tremor amplitudes. Copyright © 2016 Elsevier B.V. All rights reserved.
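One generic way to quantify tremor from a 15 s fingertip trace of the kind such a device records is to detrend the position signal and report the dominant tremor-band frequency and an RMS amplitude. The sketch below shows that idea under assumed values (sampling rate, tremor band, synthetic trace); it is not necessarily one of the three algorithms proposed in the paper.

```python
# Hedged sketch: generic tremor metrics from a fingertip position trace.
import numpy as np
from scipy.signal import detrend

fs = 100.0                                        # assumed sampling rate of the sensor
t = np.arange(0, 15, 1 / fs)
pos = (2.0 * np.sin(2 * np.pi * 5.5 * t) + 0.5 * t
       + np.random.default_rng(7).normal(0, 0.2, t.size))   # toy trace, mm

x = detrend(pos)                                  # remove slow voluntary drift
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, 1 / fs)
band = (freqs >= 3) & (freqs <= 12)               # typical tremor band (assumption)
peak = freqs[band][np.argmax(spectrum[band])]
print("tremor frequency: %.1f Hz, RMS amplitude: %.2f mm" % (peak, x.std()))
```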
ERIC Educational Resources Information Center
Hsu, Wen-Chun; Shih, Ju-Ling
2016-01-01
In this study, to learn the routine of Tantui, a branch of martial arts was taken as an object of research. Fitts' stages of motor learning and augmented reality (AR) were applied to a 3D mobile-assisted learning system for martial arts, which was characterized by free viewing angles. With the new system, learners could rotate the viewing angle of…
Estimating Physical Activity Energy Expenditure with the Kinect Sensor in an Exergaming Environment
Nathan, David; Huynh, Du Q.; Rubenson, Jonas; Rosenberg, Michael
2015-01-01
Active video games that require physical exertion during game play have been shown to confer health benefits. Typically, energy expended during game play is measured using devices attached to players, such as accelerometers, or portable gas analyzers. Since 2010, active video gaming technology incorporates marker-less motion capture devices to simulate human movement into game play. Using the Kinect Sensor and Microsoft SDK this research aimed to estimate the mechanical work performed by the human body and estimate subsequent metabolic energy using predictive algorithmic models. Nineteen University students participated in a repeated measures experiment performing four fundamental movements (arm swings, standing jumps, body-weight squats, and jumping jacks). Metabolic energy was captured using a Cortex Metamax 3B automated gas analysis system with mechanical movement captured by the combined motion data from two Kinect cameras. Estimations of the body segment properties, such as segment mass, length, centre of mass position, and radius of gyration, were calculated from the Zatsiorsky-Seluyanov's equations of de Leva, with adjustment made for posture cost. GPML toolbox implementation of the Gaussian Process Regression, a locally weighted k-Nearest Neighbour Regression, and a linear regression technique were evaluated for their performance on predicting the metabolic cost from new feature vectors. The experimental results show that Gaussian Process Regression outperformed the other two techniques by a small margin. This study demonstrated that physical activity energy expenditure during exercise, using the Kinect camera as a motion capture system, can be estimated from segmental mechanical work. Estimates for high-energy activities, such as standing jumps and jumping jacks, can be made accurately, but for low-energy activities, such as squatting, the posture of static poses should be considered as a contributing factor. When translated into the active video gaming environment, the results could be incorporated into game play to more accurately control the energy expenditure requirements. PMID:26000460
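The regression step, mapping feature vectors derived from segmental mechanical work to measured metabolic cost, can be sketched with a Gaussian process. Here scikit-learn stands in for the GPML toolbox mentioned above, and the features, targets, and kernel choice are illustrative assumptions.

```python
# Hedged sketch: Gaussian process regression from mechanical-work features to metabolic cost.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(8)
X = rng.uniform(0, 1, size=(150, 3))              # toy features: mechanical work terms
y = 50 * X[:, 0] + 20 * X[:, 1] ** 2 + rng.normal(0, 2, 150)  # toy metabolic cost (J)

gpr = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=0.5) + WhiteKernel(1.0),
                               normalize_y=True)
gpr.fit(X[:100], y[:100])
pred, std = gpr.predict(X[100:], return_std=True)
print("RMS prediction error: %.2f J" % np.sqrt(np.mean((pred - y[100:]) ** 2)))
```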
Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.
Felt, Wyatt; Chin, Khai Yi; Remy, C David
2017-09-01
This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.
A common framework for the analysis of complex motion? Standstill and capture illusions
Dürsteler, Max R.
2014-01-01
A series of illusions was created by presenting stimuli, which consisted of two overlapping surfaces each defined by textures of independent visual features (i.e., modulation of luminance, color, depth, etc.). When presented concurrently with a stationary 2-D luminance texture, observers often fail to perceive the motion of an overlapping stereoscopically defined depth-texture. This illusory motion standstill arises due to a failure to represent two independent surfaces (one for luminance and one for depth textures) and motion transparency (the ability to perceive motion of both surfaces simultaneously). Instead the stimulus is represented as a single non-transparent surface taking on the stationary nature of the luminance-defined texture. By contrast, if it is the 2D-luminance defined texture that is in motion, observers often perceive the stationary depth texture as also moving. In this latter case, the failure to represent the motion transparency of the two textures gives rise to illusory motion capture. Our past work demonstrated that the illusions of motion standstill and motion capture can occur for depth-textures that are rotating, or expanding/contracting, or else spiraling. Here I extend these findings to include stereo-shearing. More importantly, it is the motion (or lack thereof) of the luminance texture that determines how the motion of the depth will be perceived. This observation is strongly in favor of a single pathway for complex motion that operates on luminance-defined texture motion signals only. In addition, these complex motion illusions arise with chromatically-defined textures with smooth transitions between their colors. This suggests that with respect to color motion perception the complex motions' pathway is only able to accurately process signals from isoluminant colored textures with sharp transitions between colors, and/or moving at high speeds, which is conceivable if it relies on inputs from a hypothetical dual opponent color pathway. PMID:25566023
Childers, Walter Lee; Siebert, Steven
2016-12-01
Limb movement between the residuum and socket continues to be an underlying factor in limb health, prosthetic comfort, and gait performance, yet techniques to measure it remain underdeveloped. Develop a method to measure motion between the residual limb and a transtibial prosthetic socket. Single subject, repeated measures with mathematical modeling. The gait of a participant with transtibial amputation was recorded with a motion capture system using a marker set that included arrays on the anterior distal tibia and the lateral epicondyle of the femur. The proximal or distal translation, anterior or posterior translation, and angular movements were quantified. A Monte Carlo simulation based on the precision of the motion capture system and a model of the bone moving under the skin explored the technique's accuracy. Residual limb tissue stiffness was modeled as a linear spring based on data from Papaioannou et al. Residuum movement relative to the socket went through ranges of motion of approximately 30 mm, 18 mm, and 15°. Root mean squared errors were 5.47 mm, 1.86 mm, and 0.75° when considering the modeled bone-skin movement in the proximal or distal, anterior or posterior, and angular directions, respectively. The measured movement was greater than the root mean squared error, indicating that this method can measure motion between the residuum and socket. The ability to quantify movement between the residual limb and the prosthetic socket will improve prosthetic treatment through the evaluation of different prosthetic suspensions, socket designs, and motor control of the prosthetic interface. © The International Society for Prosthetics and Orthotics 2015.
NASA Astrophysics Data System (ADS)
Scopatz, Stephen D.; Mendez, Michael; Trent, Randall
2015-05-01
The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets or apparent moving targets creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons delivered in less than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide counterparts to MTF (resolution), SNR, and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (a measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development as well as to compare various systems by presenting the exact same scenes to the cameras in a repeatable way.
Kinematic discrimination of ataxia in horses is facilitated by blindfolding.
Olsen, E; FouchÉ, N; Jordan, H; Pfau, T; Piercy, R J
2018-03-01
Agreement among experienced clinicians is poor when assessing the presence and severity of ataxia, especially when signs are mild. Consequently, objective gait measurements might be beneficial for assessment of horses with neurological diseases. To assess diagnostic criteria using motion capture to measure variability in spatial gait-characteristics and swing duration derived from ataxic and non-ataxic horses, and to assess whether variability increases with blindfolding. Cross-sectional. A total of 21 horses underwent measurements in a gait laboratory and live neurological grading by multiple raters. In the gait laboratory, the horses were made to walk across a runway surrounded by a 12-camera motion capture system with a sample frequency of 240 Hz. They were made to walk normally and with a blindfold in at least three trials each. Displacements of reflective markers on the head, fetlock, hoof, fourth lumbar vertebra, tuber coxae and sacrum derived from three to four consecutive strides were processed, and descriptive statistics, receiver operating characteristic (ROC) analysis to determine the diagnostic sensitivity, specificity and area under the curve (AUC), and the correlation between median ataxia grade and gait parameters were determined. For horses with a median ataxia grade ≥2, the coefficient of variation for the location of maximum vertical displacement of the pelvic and thoracic distal limbs generated good diagnostic yield. The hooves of the thoracic limbs yielded an AUC of 0.81 with 64% sensitivity and 90% specificity. Blindfolding exacerbated the variation for ataxic horses compared to non-ataxic horses, with the hoof marker having an AUC of 0.89 with 82% sensitivity and 90% specificity. The low number of consecutive strides per horse obtained with motion capture could decrease diagnostic utility. Motion capture can objectively aid the assessment of horses with ataxia. Furthermore, blindfolding increases variation in distal pelvic limb kinematics, making it a useful clinical tool. © 2017 EVJ Ltd.
Neural network architecture for form and motion perception (Abstract Only)
NASA Astrophysics Data System (ADS)
Grossberg, Stephen
1991-08-01
Evidence is given for a new neural network theory of biological motion perception, a motion boundary contour system. This theory clarifies why parallel streams V1 → V2 and V1 → MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The motion boundary contour system consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a motion oriented contrast (MOC) filter, for preprocessing moving images; and a cooperative-competitive feedback (CC) loop, for generating emergent boundary segmentations of the filtered signals. The present work uses the MOC filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's laws; and dependence of motion strength on stimulus orientation and spatial frequency. These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90° whereas opposite directions differ by 180°, and why a cortical stream V1 → V2 → MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the motion boundary contour system design.
Evaluation of a video-based head motion tracking system for dedicated brain PET
NASA Astrophysics Data System (ADS)
Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.
2015-03-01
Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used for capturing video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with close to millimeter accuracy and can help preserve the resolution of brain PET images in the presence of movement.
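A common way to turn tracked facial points into a six-degree-of-freedom pose, shown here only as an illustrative sketch and not as the authors' actual algorithm, is a least-squares rigid (Kabsch/Procrustes) fit between a reference point set and the current point set. The function name and example points below are hypothetical.

```python
import numpy as np

def rigid_pose(ref_pts, cur_pts):
    """Least-squares rigid transform (R, t) mapping ref_pts -> cur_pts.

    ref_pts, cur_pts: (N, 3) arrays of corresponding 3D facial points.
    Kabsch algorithm: center both sets, take the SVD of the covariance,
    and correct for a possible reflection.
    """
    ref_c = ref_pts.mean(axis=0)
    cur_c = cur_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (cur_pts - cur_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # reflection correction
    R = Vt.T @ D @ U.T
    t = cur_c - R @ ref_c
    return R, t

# Hypothetical example: the head translated by 2 mm along x.
ref = np.random.rand(10, 3) * 100.0
cur = ref + np.array([2.0, 0.0, 0.0])
R, t = rigid_pose(ref, cur)
print(np.round(R, 3), np.round(t, 3))   # ~identity rotation, ~[2, 0, 0]
```

Repeating such a fit per video frame yields the head pose as a function of time, which can then feed a motion-corrected reconstruction.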
System for clinical photometric stereo endoscopy
NASA Astrophysics Data System (ADS)
Durr, Nicholas J.; González, Germán; Lim, Daryl; Traverso, Giovanni; Nishioka, Norman S.; Vakoc, Benjamin J.; Parot, Vicente
2014-02-01
Photometric stereo endoscopy is a technique that captures information about the high-spatial-frequency topography of the field of view simultaneously with a conventional color image. Here we describe a system that will enable photometric stereo endoscopy to be clinically evaluated in the large intestine of human patients. The clinical photometric stereo endoscopy system consists of a commercial gastroscope, a commercial video processor, an image capturing and processing unit, custom synchronization electronics, white light LEDs, a set of four fibers with diffusing tips, and an alignment cap. The custom pieces that come into contact with the patient are composed of biocompatible materials that can be sterilized before use. The components can then be assembled in the endoscopy suite before use. The resulting endoscope has the same outer diameter as a conventional colonoscope (14 mm), plugs into a commercial video processor, captures topography and color images at 15 Hz, and displays the conventional color image to the gastroenterologist in real-time. We show that this system can capture a color and topographical video in a tubular colon phantom, demonstrating robustness to complex geometries and motion. The reported system is suitable for in vivo evaluation of photometric stereo endoscopy in the human large intestine.
Joint Video Stitching and Stabilization from Moving Cameras.
Guo, Heng; Liu, Shuaicheng; He, Tong; Zhu, Shuyuan; Zeng, Bing; Gabbouj, Moncef
2016-09-08
In this paper, we extend image stitching to video stitching for videos that are captured of the same scene simultaneously by multiple moving cameras. In practice, videos captured under this circumstance often appear shaky. Directly applying image stitching methods to shaky videos often suffers from strong spatial and temporal artifacts. To solve this problem, we propose a unified framework in which video stitching and stabilization are performed jointly. Specifically, our system takes several overlapping videos as inputs. We estimate both inter motions (between different videos) and intra motions (between neighboring frames within a video). Then, we solve for an optimal virtual 2D camera path from all original paths. An enlarged field of view along the virtual path is finally obtained by a spatio-temporal optimization that takes both inter and intra motions into consideration. Two important components of this optimization are that (1) a grid-based tracking method is designed for improved robustness, which produces features that are distributed evenly within and across multiple views, and (2) a mesh-based motion model is adopted for the handling of scene parallax. Experimental results demonstrate the effectiveness of our approach on various consumer-level videos, and a plugin named "Video Stitcher" has been developed for Adobe After Effects CC 2015 to show the processed videos.
A stochastic approach to noise modeling for barometric altimeters.
Sabatini, Angelo Maria; Genovese, Vincenzo
2013-11-18
The question of whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes and prominent in long-time motion tracking; a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes and prominent in short-time motion tracking; and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive moving-average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise-stationary GM component and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters, used alone or in combination with whitening filters learnt from the ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
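As an illustrative sketch of the noise structure described above (not the authors' identification code), the following simulates a first-order Gauss-Markov component plus uncorrelated white noise and applies an M-point moving average. The correlation time, standard deviations, and sample rate are hypothetical values chosen for demonstration.

```python
import numpy as np

fs, minutes = 10.0, 2                        # hypothetical sample rate [Hz] and duration
n = int(fs * 60 * minutes)
tau, sigma_gm, sigma_wn = 30.0, 0.15, 0.05   # hypothetical GM time constant [s] and stds [m]

phi = np.exp(-1.0 / (fs * tau))              # AR(1) coefficient of the discretized GM process
rng = np.random.default_rng(0)

gm = np.zeros(n)
for k in range(1, n):
    # x[k] = phi * x[k-1] + driving noise scaled to keep stationary variance sigma_gm^2
    gm[k] = phi * gm[k - 1] + sigma_gm * np.sqrt(1 - phi**2) * rng.standard_normal()

noise = gm + sigma_wn * rng.standard_normal(n)   # GM component + uncorrelated component

M = 20                                           # M-point moving average filter
smoothed = np.convolve(noise, np.ones(M) / M, mode="same")
print(noise.std(), smoothed.std())
```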
Kroll, Alexandra; Haramagatti, Chandrashekara R.; Lipinski, Hans-Gerd; Wiemann, Martin
2017-01-01
Darkfield and confocal laser scanning microscopy both allow for a simultaneous observation of live cells and single nanoparticles. Accordingly, a characterization of nanoparticle uptake and intracellular mobility appears possible within living cells. Single particle tracking allows measurement of the size of a diffusing particle close to a cell. However, within the more complex environment of a cell’s cytoplasm, normal, confined, or anomalous diffusion together with directed motion may occur. In this work we present a method to automatically classify and segment single trajectories into their respective motion types. Single trajectories were found to contain more than one motion type. We have trained a random forest with 9 different features. The average error over all motion types for synthetic trajectories was 7.2%. The software was successfully applied to trajectories of positive controls for normal and constrained diffusion. Trajectories captured by nanoparticle tracking analysis served as a positive control for normal diffusion. Nanoparticles inserted into a diblock copolymer membrane were used to generate constrained diffusion. Finally, we segmented trajectories of diffusing (nano-)particles in V79 cells captured with both darkfield and confocal laser scanning microscopy. The software called “TraJClassifier” is freely available as an ImageJ/Fiji plugin via https://git.io/v6uz2. PMID:28107406
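A minimal sketch of the general approach, a random forest classifying per-trajectory features, is shown below using scikit-learn. The two toy features (log-log MSD slope and path straightness) and the synthetic trajectories are stand-ins for illustration; they are not the nine features or training data used by TraJClassifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def features(track):
    """Two toy features of a 2D track: MSD slope (log-log) and net/total displacement."""
    step = np.linalg.norm(np.diff(track, axis=0), axis=1)
    lags = np.arange(1, 10)
    msd = [np.mean(np.sum((track[l:] - track[:-l]) ** 2, axis=1)) for l in lags]
    slope = np.polyfit(np.log(lags), np.log(msd), 1)[0]
    straightness = np.linalg.norm(track[-1] - track[0]) / (step.sum() + 1e-12)
    return [slope, straightness]

def brownian(n=200):                 # normal diffusion
    return np.cumsum(rng.standard_normal((n, 2)), axis=0)

def directed(n=200):                 # diffusion plus constant drift
    return brownian(n) + np.outer(np.arange(n), [0.5, 0.0])

X = [features(brownian()) for _ in range(200)] + [features(directed()) for _ in range(200)]
y = [0] * 200 + [1] * 200
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([features(brownian()), features(directed())]))   # expect [0, 1]
```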
Motion detection using extended fractional Fourier transform and digital speckle photography.
Bhaduri, Basanta; Tay, C J; Quan, C; Sheppard, Colin J R
2010-05-24
Digital speckle photography is a useful tool for measuring the motion of optically rough surfaces from the speckle shift that takes place at the recording plane. A simple correlation based digital speckle photographic system has been proposed that implements two simultaneous optical extended fractional Fourier transforms (EFRTs) of different orders using only a single lens and detector to simultaneously detect both the magnitude and direction of translation and tilt by capturing only two frames: one before and another after the object motion. The dynamic range and sensitivity of the measurement can be varied readily by altering the position of the mirror/s used in the optical setup. Theoretical analysis and experiment results are presented.
Always chew your food: freshwater stingrays use mastication to process tough insect prey.
Kolmann, Matthew A; Welch, Kenneth C; Summers, Adam P; Lovejoy, Nathan R
2016-09-14
Chewing, characterized by shearing jaw motions and high-crowned molar teeth, is considered an evolutionary innovation that spurred dietary diversification and evolutionary radiation of mammals. Complex prey-processing behaviours have been thought to be lacking in fishes and other vertebrates, despite the fact that many of these animals feed on tough prey, like insects or even grasses. We investigated prey capture and processing in the insect-feeding freshwater stingray Potamotrygon motoro using high-speed videography. We find that Potamotrygon motoro uses asymmetrical motion of the jaws, effectively chewing, to dismantle insect prey. However, CT scanning suggests that this species has simple teeth. These findings suggest that in contrast to mammalian chewing, asymmetrical jaw action is sufficient for mastication in other vertebrates. We also determined that prey capture in these rays occurs through rapid uplift of the pectoral fins, sucking prey beneath the ray's body, thereby dissociating the jaws from a prey capture role. We suggest that the decoupling of prey capture and processing facilitated the evolution of a highly kinetic feeding apparatus in batoid fishes, giving these animals an ability to consume a wide variety of prey, including molluscs, fishes, aquatic insect larvae and crustaceans. We propose Potamotrygon as a model system for understanding evolutionary convergence of prey processing and chewing in vertebrates. © 2016 The Author(s).
Always chew your food: freshwater stingrays use mastication to process tough insect prey
Kolmann, Matthew A.; Welch, Kenneth C.; Summers, Adam P.; Lovejoy, Nathan R.
2016-01-01
Chewing, characterized by shearing jaw motions and high-crowned molar teeth, is considered an evolutionary innovation that spurred dietary diversification and evolutionary radiation of mammals. Complex prey-processing behaviours have been thought to be lacking in fishes and other vertebrates, despite the fact that many of these animals feed on tough prey, like insects or even grasses. We investigated prey capture and processing in the insect-feeding freshwater stingray Potamotrygon motoro using high-speed videography. We find that Potamotrygon motoro uses asymmetrical motion of the jaws, effectively chewing, to dismantle insect prey. However, CT scanning suggests that this species has simple teeth. These findings suggest that in contrast to mammalian chewing, asymmetrical jaw action is sufficient for mastication in other vertebrates. We also determined that prey capture in these rays occurs through rapid uplift of the pectoral fins, sucking prey beneath the ray's body, thereby dissociating the jaws from a prey capture role. We suggest that the decoupling of prey capture and processing facilitated the evolution of a highly kinetic feeding apparatus in batoid fishes, giving these animals an ability to consume a wide variety of prey, including molluscs, fishes, aquatic insect larvae and crustaceans. We propose Potamotrygon as a model system for understanding evolutionary convergence of prey processing and chewing in vertebrates. PMID:27629029
Stavrakakis, S; Guy, J H; Syranidis, I; Johnson, G R; Edwards, S A
2015-07-01
Gait profiles were investigated in a cohort of female pigs experiencing a lameness period prevalence of 29% over 17 months. Gait alterations before and during visually diagnosed lameness were evaluated to identify the best quantitative clinical lameness indicators and early predictors for lameness. Pre-breeding gilts (n = 84) were recruited to the study over a period of 6 months, underwent motion capture every 5 weeks and, depending on their age at entry to the study, were followed for up to three successive gestations. Animals were subjected to motion capture in each parity at 8 weeks of gestation and on the day of weaning (28 days postpartum). During kinematic motion capture, the pigs walked on the same concrete walkway and an array of infra-red cameras was used to collect three-dimensional coordinate data from reflective skin markers attached to head, trunk and limb anatomical landmarks. Of 24 pigs diagnosed with lameness, 19 had preclinical gait records, whilst 18 had a motion capture session while lame. Depending on availability, data from one or two preclinical motion capture sessions 1-11 months prior to lameness and from the day of lameness were analysed. Lameness was best detected and evaluated using relative spatiotemporal gait parameters, especially vertical head displacement and asymmetric stride phase timing. Irregularity in the step-to-stride length ratio was elevated (deviation ≥ 0.03) in young pigs which presented lameness in later life (odds ratio 7.2-10.8). Copyright © 2015 Elsevier Ltd. All rights reserved.
Three-dimensional finite element modelling of muscle forces during mastication.
Röhrle, Oliver; Pullan, Andrew J
2007-01-01
This paper presents a three-dimensional finite element model of human mastication. Specifically, an anatomically realistic model of the masseter muscles and associated bones is used to investigate the dynamics of chewing. A motion capture system is used to track the jaw motion of a subject chewing standard foods. The three-dimensional nonlinear deformations of the masseter muscles are calculated via the finite element method, using the jaw motion data as boundary conditions. Motion-driven muscle activation patterns and a transversely isotropic material law, defined in a muscle-fibre coordinate system, are used in the calculations. Time-force relationships are presented and analysed with respect to different tasks during mastication, e.g. opening, closing, and biting, and are also compared to a more traditional one-dimensional model. The results strongly suggest that, due to the complex arrangement of muscle force directions, modelling skeletal muscles as conventional one-dimensional lines of action might introduce a significant source of error.
Statistical data mining of streaming motion data for fall detection in assistive environments.
Tasoulis, S K; Doukas, C N; Maglogiannis, I; Plagianakos, V P
2011-01-01
The analysis of human motion data is interesting for the purpose of activity recognition or emergency event detection, especially in the case of elderly or disabled people living independently in their homes. Several techniques have been proposed for identifying such distress situations using motion, audio or video sensors either on the monitored subject (wearable sensors) or in the surrounding environment. The output of such sensors consists of data streams that require real-time recognition, especially in emergency situations, thus traditional classification approaches may not be applicable for immediate alarm triggering or fall prevention. This paper presents a statistical mining methodology that may be used for the specific problem of real-time fall detection. Visual data captured from the user's environment using overhead cameras, along with motion data collected from accelerometers on the subject's body, are fed to the fall detection system. The paper includes the details of the stream data mining methodology incorporated in the system along with an initial evaluation of the achieved accuracy in detecting falls.
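The abstract does not spell out the statistical stream-mining method itself; as a loosely related illustration only, the sketch below flags a candidate fall when a sliding-window statistic of the accelerometer magnitude exceeds a threshold while processing samples as a stream. The thresholds and data are hypothetical and much simpler than the paper's methodology.

```python
from collections import deque
import math

def stream_fall_detector(samples, fs=50, window_s=1.0, thresh_g=2.5):
    """Yield sample indices where the windowed peak acceleration magnitude exceeds thresh_g.

    samples: iterable of (ax, ay, az) in units of g; purely illustrative thresholds.
    """
    win = deque(maxlen=int(fs * window_s))
    for i, (ax, ay, az) in enumerate(samples):
        win.append(math.sqrt(ax * ax + ay * ay + az * az))
        if len(win) == win.maxlen and max(win) > thresh_g:
            yield i
            win.clear()                 # simple debounce after an alarm

# Hypothetical stream: quiet standing with one impact-like spike.
data = [(0.0, 0.0, 1.0)] * 200 + [(1.5, 0.5, 3.0)] + [(0.0, 0.0, 1.0)] * 200
print(list(stream_fall_detector(data)))   # -> [200]
```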
Large-amplitude nuclear motion formulated in terms of dissipation of quantum fluctuations
NASA Astrophysics Data System (ADS)
Kuzyakin, R. A.; Sargsyan, V. V.; Adamian, G. G.; Antonenko, N. V.
2017-01-01
The potential-barrier penetrability and quasi-stationary thermal-decay rate of a metastable state are formulated in terms of microscopic quantum diffusion. Apart from linear coupling in momentum between the collective and internal subsystems, the formalism embraces the more general case of linear couplings in both the momentum and the coordinates. The developed formalism is then used for describing the process of projectile-nucleus capture by a target nucleus at incident energies near and below the Coulomb barrier. The capture partial probability, which determines the cross section for formation of a dinuclear system, is derived in analytical form. The total and partial capture cross sections, mean and root-mean-square angular momenta of the formed dinuclear system, astrophysical S-factors, logarithmic derivatives, and barrier distributions are derived for various reactions. Also investigated are the effects of nuclear static deformation and neutron transfer between the interacting nuclei on the capture cross section and its isotopic dependence, and the entrance-channel effects on the capture process. The results of calculations for reactions involving both spherical and deformed nuclei are in good agreement with available experimental data.
Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian
2013-01-01
Background: Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods: The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurements to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings: Evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p < 0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation: Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motion. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions: Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use. While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to the conditions of operation. PMID:24260324
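A small sketch (not the study's analysis code) of how the relative orientation between two AHRS modules, and its error against a gold-standard measurement, can be computed with SciPy rotations; the orientations below are hypothetical.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical module orientations.
q_module_a = R.from_euler("xyz", [10.0, 0.0, 0.0], degrees=True)
q_module_b = R.from_euler("xyz", [10.0, 0.0, 5.0], degrees=True)

# Relative orientation of module B expressed in module A's frame.
rel = q_module_a.inv() * q_module_b
print("relative angle [deg]:", np.degrees(rel.magnitude()))

# Error against a gold-standard relative orientation (e.g. from an optical system).
rel_gold = R.from_euler("xyz", [0.0, 0.0, 5.2], degrees=True)
err = rel_gold.inv() * rel
print("relative accuracy error [deg]:", np.degrees(err.magnitude()))
```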
An Integrated Calibration Technique for Stereo Vision Systems (PREPRINT)
2010-03-01
An integrated calibration technique for stereo vision systems has been developed. To demonstrate and evaluate this calibration technique, multiple Wii Remotes (Wiimotes) from Nintendo were used to form stereo vision systems to perform 3D motion capture in real time. This integrated technique is a two-step process... (Wiimotes) used in Nintendo Wii games. Many researchers have successfully dealt with the problem of camera calibration by taking images from a 2D
Romero, Veronica; Amaral, Joseph; Fitzpatrick, Paula; Schmidt, R C; Duncan, Amie W; Richardson, Michael J
2017-04-01
Functionally stable and robust interpersonal motor coordination has been found to play an integral role in the effectiveness of social interactions. However, the motion-tracking equipment required to record and objectively measure the dynamic limb and body movements during social interaction has been very costly, cumbersome, and impractical within a non-clinical or non-laboratory setting. Here we examined whether three low-cost motion-tracking options (Microsoft Kinect skeletal tracking of either one limb or whole body and a video-based pixel change method) can be employed to investigate social motor coordination. Of particular interest was the degree to which these low-cost methods of motion tracking could be used to capture and index the coordination dynamics that occurred between a child and an experimenter for three simple social motor coordination tasks in comparison to a more expensive, laboratory-grade motion-tracking system (i.e., a Polhemus Latus system). Overall, the results demonstrated that these low-cost systems cannot substitute the Polhemus system in some tasks. However, the lower-cost Microsoft Kinect skeletal tracking and video pixel change methods were successfully able to index differences in social motor coordination in tasks that involved larger-scale, naturalistic whole body movements, which can be cumbersome and expensive to record with a Polhemus. However, we found the Kinect to be particularly vulnerable to occlusion and the pixel change method to movements that cross the video frame midline. Therefore, particular care needs to be taken in choosing the motion-tracking system that is best suited for the particular research.
Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design
2015-10-01
the study. This equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system, which was not part...also completed for relevant members of the study team. 4. The head-mounted camera setup has been established (a modified GoPro Hero 3 with external
2003-07-01
volunteer was asked to report wearing Battle Dress Uniform or Under Armor Undergarment) because the reflective markers used for motion capture needed to be...data collection sessions wearing Under Armor Undergarment, combat boots, integrated body armor and Scorpion helmet. Subjects were given time to
Computer-assisted sperm analysis (CASA): capabilities and potential developments.
Amann, Rupert P; Waberski, Dagmar
2014-01-01
Computer-assisted sperm analysis (CASA) systems have evolved over approximately 40 years, through advances in devices to capture the image from a microscope, huge increases in computational power concurrent with amazing reduction in size of computers, new computer languages, and updated/expanded software algorithms. Remarkably, basic concepts for identifying sperm and their motion patterns are little changed. Older and slower systems remain in use. Most major spermatology laboratories and semen processing facilities have a CASA system, but the extent of reliance thereon ranges widely. This review describes capabilities and limitations of present CASA technology used with boar, bull, and stallion sperm, followed by possible future developments. Each marketed system is different. Modern CASA systems can automatically view multiple fields in a shallow specimen chamber to capture strobe-like images of 500 to >2000 sperm, at 50 or 60 frames per second, in clear or complex extenders, and in <2 minutes, store information for ≥ 30 frames and provide summary data for each spermatozoon and the population. A few systems evaluate sperm morphology concurrent with motion. CASA cannot accurately predict 'fertility' that will be obtained with a semen sample or subject. However, when carefully validated, current CASA systems provide information important for quality assurance of semen planned for marketing, and for the understanding of the diversity of sperm responses to changes in the microenvironment in research. The four take-home messages from this review are: (1) animal species, extender or medium, specimen chamber, intensity of illumination, imaging hardware and software, instrument settings, technician, etc., all affect accuracy and precision of output values; (2) semen production facilities probably do not need a substantially different CASA system whereas biology laboratories would benefit from systems capable of imaging and tracking sperm in deep chambers for a flexible period of time; (3) software should enable grouping of individual sperm based on one or more attributes so outputs reflect subpopulations or clusters of similar sperm with unique properties; means or medians for the total population are insufficient; and (4) a field-use, portable CASA system for measuring one motion and two or three morphology attributes of individual sperm is needed for field theriogenologists or andrologists working with human sperm outside urban centers; appropriate hardware to capture images and process data apparently are available. Copyright © 2014 Elsevier Inc. All rights reserved.
The behavior of bouncing disks and pizza tossing
NASA Astrophysics Data System (ADS)
Liu, K.-C.; Friend, J.; Yeo, L.
2009-03-01
We investigate the dynamics of a disk bouncing on a vibrating platform - a variation of the classic bouncing ball problem - that captures the physics of pizza tossing and the operation of certain standing-wave ultrasonic motors (SWUMs). The system's dynamics explain why certain tossing motions are used by dough-toss performers for different tricks: a helical trajectory is used in single tosses because it maximizes energy efficiency and the dough's airborne rotational speed, while a semi-elliptical motion is used in multiple tosses because it is easier for maintaining dough rotation at the maximum rotational speed. The system's bifurcation diagram and basins of attraction also inform SWUM designers about the optimal design for high speed and minimal sensitivity to perturbation.
Data fusion of multiple kinect sensors for a rehabilitation system.
Huibin Du; Yiwen Zhao; Jianda Han; Zheng Wang; Guoli Song
2016-08-01
Kinect-like depth sensors have been widely used in rehabilitation systems. However, a single depth sensor copes poorly with limb occlusion, data loss, and data error, making it less reliable. This paper focuses on using two Kinect sensors and a data fusion method to solve these problems. First, the two Kinect sensors capture the motion data of the healthy arm of the hemiplegic patient; second, the data are merged using a Set-Membership Filter (SMF); then, the merged motion data are mirrored across the mid-plane; finally, the wearable robotic arm drives the patient's paralytic arm so that the patient can interactively and actively complete a variety of recovery actions prompted by computer-based 3D animation games.
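The mirroring step described above can be made concrete as a reflection of captured 3D joint positions across a plane defined by a point and a normal. The sketch below is a geometric illustration only, not the authors' SMF fusion or robot-control code, and the joint coordinates and plane are hypothetical.

```python
import numpy as np

def mirror_points(points, plane_point, plane_normal):
    """Reflect (N, 3) joint positions across the plane through plane_point
    with normal plane_normal: p' = p - 2 * ((p - p0) . n) * n."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = (points - plane_point) @ n            # signed distance of each joint to the plane
    return points - 2.0 * np.outer(d, n)

# Hypothetical mid-sagittal plane at x = 0 with normal along x.
healthy_arm = np.array([[0.30, 0.10, 1.20],   # shoulder
                        [0.45, 0.05, 0.95],   # elbow
                        [0.50, 0.20, 0.70]])  # wrist
mirrored = mirror_points(healthy_arm, plane_point=[0.0, 0.0, 0.0], plane_normal=[1.0, 0.0, 0.0])
print(mirrored)   # x coordinates flip sign, y and z unchanged
```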
NASA Astrophysics Data System (ADS)
Tseng, Yolanda D.; Wootton, Landon; Nyflot, Matthew; Apisarnthanarax, Smith; Rengan, Ramesh; Bloch, Charles; Sandison, George; St. James, Sara
2018-01-01
Four dimensional computed tomography (4DCT) scans are routinely used in radiation therapy to determine the internal treatment volume for targets that are moving (e.g. lung tumors). The use of these studies has allowed clinicians to create target volumes based upon the motion of the tumor during the imaging study. The purpose of this work is to determine if a target volume based on a single 4DCT scan at simulation is sufficient to capture thoracic motion. Phantom studies were performed to determine expected differences between volumes contoured on 4DCT scans and those on the evaluation CT scans (slow scans). Evaluation CT scans acquired during treatment of 11 patients were compared to the 4DCT scans used for treatment planning. The images were assessed to determine if the target remained within the target volume determined during the first 4DCT scan. A total of 55 slow scans were compared to the 11 planning 4DCT scans. Small differences were observed in phantom between the 4DCT volumes and the slow scan volumes, with a maximum of 2.9%, that can be attributed to minor differences in contouring and the ability of the 4DCT scan to adequately capture motion at the apex and base of the motion trajectory. Larger differences were observed in the patients studied, up to a maximum volume difference of 33.4%. These results demonstrate that a single 4DCT scan is not adequate to capture all thoracic motion throughout treatment.
Stochastic receding horizon control: application to an octopedal robot
NASA Astrophysics Data System (ADS)
Shah, Shridhar K.; Tanner, Herbert G.
2013-06-01
Miniature autonomous systems are being developed under ARL's Micro Autonomous Systems and Technology (MAST) program. These systems can only be fitted with a small-size processor, and their motion behavior is inherently uncertain due to manufacturing variation and platform-ground interactions. One way to capture this uncertainty is through a stochastic model. This paper deals with stochastic motion control design and implementation for MAST-specific eight-legged miniature crawling robots, which have been kinematically modeled as systems exhibiting the behavior of a Dubins car with stochastic noise. The control design takes the form of stochastic receding horizon control, and is implemented on a Gumstix Overo Fire COM with a 720 MHz processor and 512 MB RAM, weighing 5.5 g. The experimental results show the effectiveness of this control law for miniature autonomous systems perturbed by stochastic noise.
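To make the "Dubins car with stochastic noise" model concrete, here is a minimal Euler-Maruyama simulation sketch. The noise intensity, speed, and turn rate are hypothetical, and this is a kinematic illustration rather than the controller described in the paper.

```python
import numpy as np

def simulate_dubins(x0, v, turn_rate, sigma_theta, dt=0.01, steps=1000, seed=0):
    """Unicycle/Dubins kinematics with additive heading noise:
       x' = v cos(theta), y' = v sin(theta), theta' = u + noise."""
    rng = np.random.default_rng(seed)
    state = np.array(x0, dtype=float)            # [x, y, theta]
    traj = [state.copy()]
    for _ in range(steps):
        x, y, th = state
        state = state + dt * np.array([v * np.cos(th), v * np.sin(th), turn_rate])
        state[2] += sigma_theta * np.sqrt(dt) * rng.standard_normal()  # heading diffusion
        traj.append(state.copy())
    return np.array(traj)

traj = simulate_dubins([0.0, 0.0, 0.0], v=0.05, turn_rate=0.2, sigma_theta=0.3)
print(traj[-1])   # final pose perturbed by the stochastic heading noise
```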
NASA Astrophysics Data System (ADS)
Rafelski, Susanne M.; Keller, Lani C.; Alberts, Jonathan B.; Marshall, Wallace F.
2011-04-01
The degree to which diffusion contributes to positioning cellular structures is an open question. Here we investigate the question of whether diffusive motion of centrin granules would allow them to interact with the mother centriole. The role of centrin granules in centriole duplication remains unclear, but some proposed functions of these granules, for example providing pre-assembled centriole subunits, or acting as unstable 'pre-centrioles' that need to be captured by the mother centriole (La Terra et al 2005 J. Cell Biol. 168 713-22), require the centrin foci to reach the mother. To test whether diffusive motion could permit such interactions on the necessary time scale, we measured the motion of centrin-containing foci in living human U2OS cells. We found that these centrin foci display apparently diffusive undirected motion. Using the apparent diffusion constant obtained from these measurements, we calculated the time scale required for diffusive capture by the mother centriole and found that it would greatly exceed the time available in the cell cycle. We conclude that mechanisms invoking centrin foci capture by the mother, whether as a pre-centriole or as a source of components to support later assembly, would require a form of directed motility of centrin foci that has not yet been observed.
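The argument rests on comparing a diffusion-limited capture time with the cell-cycle duration. A sketch of that kind of estimate, using the Smoluchowski diffusion-limited capture rate with entirely hypothetical parameter values (not the measured values from the paper):

```python
import math

# Hypothetical values for illustration only.
D = 1e-3          # apparent diffusion constant of a centrin focus [um^2/s]
a = 0.2           # effective capture radius of the mother centriole [um]
V = 2000.0        # cytoplasmic search volume [um^3]

# Smoluchowski diffusion-limited capture rate k = 4*pi*D*a per target, so the
# mean time for one focus to find the target is roughly V / (4*pi*D*a).
t_capture = V / (4.0 * math.pi * D * a)
print(f"mean capture time ~ {t_capture:.2e} s ~ {t_capture / 3600.0:.1f} h")

t_cell_cycle = 24.0 * 3600.0     # ~24 h cycle, for comparison
print("exceeds cell cycle:", t_capture > t_cell_cycle)
```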
A Route to Chaotic Behavior of Single Neuron Exposed to External Electromagnetic Radiation.
Feng, Peihua; Wu, Ying; Zhang, Jiazhong
2017-01-01
Non-linear behaviors of a single neuron described by the Fitzhugh-Nagumo (FHN) neuron model, with external electromagnetic radiation considered, are investigated. It is discovered that, with external electromagnetic radiation in the form of a cosine function, mode selection of the membrane potential occurs among periodic, quasi-periodic, and chaotic motions as the frequency of the external transmembrane current, which is selected as a sinusoidal function, increases. When the frequency is small or large enough, periodic and quasi-periodic motions are captured alternately. Otherwise, when the frequency is in the interval 0.778 < ω < 2.208, chaotic motion characterizes the main behavior type. The mechanism of mode transition from quasi-periodic to chaotic motion is also observed when varying the amplitude of the external electromagnetic radiation. The frequency apparently plays a more important role in determining the system behavior.
A Route to Chaotic Behavior of Single Neuron Exposed to External Electromagnetic Radiation
Feng, Peihua; Wu, Ying; Zhang, Jiazhong
2017-01-01
Non-linear behaviors of a single neuron described by the Fitzhugh-Nagumo (FHN) neuron model, with external electromagnetic radiation considered, are investigated. It is discovered that, with external electromagnetic radiation in the form of a cosine function, mode selection of the membrane potential occurs among periodic, quasi-periodic, and chaotic motions as the frequency of the external transmembrane current, which is selected as a sinusoidal function, increases. When the frequency is small or large enough, periodic and quasi-periodic motions are captured alternately. Otherwise, when the frequency is in the interval 0.778 < ω < 2.208, chaotic motion characterizes the main behavior type. The mechanism of mode transition from quasi-periodic to chaotic motion is also observed when varying the amplitude of the external electromagnetic radiation. The frequency apparently plays a more important role in determining the system behavior. PMID:29089882
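A minimal numerical sketch of the kind of model described above: a FitzHugh-Nagumo neuron driven by a sinusoidal transmembrane current, with an additional cosine term standing in for the electromagnetic radiation. All parameter values are illustrative defaults, not the coefficients used in the paper.

```python
import numpy as np

def fhn(omega_drive=1.0, A_rad=0.1, omega_rad=0.5, dt=0.01, steps=200_000):
    """Forced FitzHugh-Nagumo model integrated with explicit Euler:
       v' = v - v^3/3 - w + I(t) + radiation(t),  w' = eps * (v + a - b * w)."""
    a, b, eps = 0.7, 0.8, 0.08
    v, w = -1.0, 1.0
    vs = np.empty(steps)
    for k in range(steps):
        t = k * dt
        I = 0.5 * np.sin(omega_drive * t)          # external transmembrane current
        rad = A_rad * np.cos(omega_rad * t)        # cosine "radiation" term
        dv = v - v**3 / 3.0 - w + I + rad
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        vs[k] = v
    return vs

v_trace = fhn(omega_drive=1.5)   # inspect v_trace for periodic vs. irregular behavior
print(v_trace[-5:])
```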
HelioTrope: An innovative and efficient prototype for solar power production
NASA Astrophysics Data System (ADS)
Papageorgiou, George; Maimaris, Athanasios; Hadjixenophontos, Savvas; Ioannou, Petros
2014-12-01
The solar energy alternative could provide us with all the energy we need, as it exists in vast quantities all around us. We only need to be innovative enough to improve the efficiency of our systems in capturing and converting solar energy into usable forms of power. In making a case for the solar energy alternative, we identify areas where efficiency can be improved so that solar energy can become a competitive energy source. This paper suggests an innovative approach to solar energy power production, which is manifested in a prototype given the name HelioTrope. The HelioTrope solar energy production prototype is tested on its capability to efficiently convert solar energy into electricity and other forms of energy for storage or direct use. HelioTrope involves an innovative Stirling engine design and a parabolic concentrating dish with a sun tracking system implementing a control algorithm to maximize the capture of solar energy. Further, it utilizes a patent developed by the authors in which a mechanism is designed for the transmission of reciprocating motion of variable amplitude into unidirectional circular motion. This is employed in our prototype for converting linear reciprocating motion into circular motion for electricity production, which gives a significant increase in efficiency and reduces maintenance costs. Preliminary calculations indicate that the HelioTrope approach constitutes a competitive solution for solar power production.
Using a motion capture system for spatial localization of EEG electrodes
Reis, Pedro M. R.; Lochmann, Matthias
2015-01-01
Electroencephalography (EEG) is often used in source analysis studies, in which the locations of cortex regions responsible for a signal are determined. For this to be possible, accurate positions of the electrodes on the scalp surface must be determined, otherwise errors in the source estimation will occur. Today, several methods for acquiring these positions exist, but they are often not satisfactorily accurate or take a long time to perform. Therefore, in this paper we describe a method capable of determining the positions accurately and quickly. This method uses an infrared light motion capture system (IR-MOCAP) with 8 cameras arranged around a human participant. It acquires 3D coordinates of each electrode and automatically labels them. Each electrode has a small reflector on top of it, thus allowing its detection by the cameras. We tested the accuracy of the presented method by acquiring the electrode positions on a rigid sphere model and comparing these with measurements from computed tomography (CT). The average Euclidean distance between the sphere model CT measurements and the presented method was 1.23 mm, with an average standard deviation of 0.51 mm. We also tested the method with a human participant. The measurement was performed quickly and all positions were captured. These results indicate that, with this method, it is possible to acquire electrode positions with minimal error and little time effort for the study participants and investigators. PMID:25941468
Human motion retrieval from hand-drawn sketch.
Chao, Min-Wen; Lin, Chao-Hung; Assa, Jackie; Lee, Tong-Yee
2012-05-01
The rapid growth of motion capture data increases the importance of motion retrieval. The majority of existing motion retrieval approaches are based on a labor-intensive step in which the user browses and selects a desired query motion clip from a large motion clip database. In this work, a novel sketching interface for defining the query is presented. This simple approach allows users to define the required motion by sketching several motion strokes over a drawn character, which requires less effort and extends the users’ expressiveness. To support the real-time interface, a specialized encoding of the motions and the hand-drawn query is required. Here, we introduce a novel hierarchical encoding scheme based on a set of orthonormal spherical harmonic (SH) basis functions, which provides a compact representation and avoids the CPU/processing-intensive stage of temporal alignment used by previous solutions. Experimental results show that the proposed approach retrieves motions well and is capable of retrieving logically and numerically similar motions, which is superior to previous approaches. A user study shows that the proposed system can be a useful tool for inputting motion queries once users are familiar with it. Finally, an application generating a 3D animation from a hand-drawn comic strip is demonstrated.
Computer-aided target tracking in motion analysis studies
NASA Astrophysics Data System (ADS)
Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.
1990-08-01
Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.
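The abstract does not detail the tracking algorithm itself; as a generic illustration of frame-by-frame fiducial tracking (not necessarily the system described), normalized cross-correlation template matching with OpenCV is a common choice. The synthetic frames below are hypothetical.

```python
import cv2
import numpy as np

def track_target(frames, template):
    """Locate a fiducial template in each grayscale frame by normalized
    cross-correlation; returns the (x, y) top-left corner of the best match per frame."""
    positions = []
    for frame in frames:
        res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        positions.append(max_loc)
    return positions

# Hypothetical synthetic frames: a bright 11x11 square moving to the right.
frames = []
for shift in range(0, 50, 10):
    img = np.zeros((200, 200), dtype=np.uint8)
    img[95:106, 50 + shift:61 + shift] = 255
    frames.append(img)

template = frames[0][90:111, 45:66].copy()   # 21x21 patch around the target
print(track_target(frames, template))        # x coordinate advances by ~10 per frame
```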
3D Human Motion Editing and Synthesis: A Survey
Wang, Xin; Chen, Qiudi; Wang, Wanliang
2014-01-01
The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395
Dynamical spreading of small bodies in 1:1 resonance with planets by the diurnal Yarkovsky effect
NASA Astrophysics Data System (ADS)
Wang, Xuefeng; Hou, Xiyun
2017-10-01
A simple model is introduced to describe the inherent dynamics of Trojans in the presence of the diurnal Yarkovsky effect. For different spin statuses, the orbital elements of the Trojans (mainly semimajor axis, eccentricity and inclination) undergo different variations. The variation rate is generally very small, but the total variation of the semimajor axis or the orbit eccentricity over the age of the Solar system may be large enough to send small Trojans out of the regular region (or, vice versa, to capture small bodies into the regular region). To verify the analytical analysis, we first carry out numerical simulations in a simple model, and then generalize these to two 'real' systems, namely the Sun-Jupiter system and the Sun-Earth system. In the Sun-Jupiter system, where the motion of Trojans is regular, the Yarkovsky effect gradually alters the libration width or the orbit eccentricity, forcing the Trojan to move from regular regions to chaotic regions, where chaos may eventually cause it to escape. In the Sun-Earth system, where the motion of Trojans is generally chaotic, our limited numerical simulations indicate that the Yarkovsky effect is negligible for Trojans of 100 m in size, and even for larger ones. The Yarkovsky effect on small bodies captured in other 1:1 resonance orbits is also briefly discussed.
Daluja, Sachin; Golenberg, Lavie; Cao, Alex; Pandya, Abhilash K; Auner, Gregory W; Klein, Michael D
2009-01-01
Robotic surgery has gradually gained acceptance due to its numerous advantages such as tremor filtration, increased dexterity and motion scaling. There remains, however, a significant scope for improvement, especially in the areas of surgeon-robot interface and autonomous procedures. Previous studies have attempted to identify factors affecting a surgeon's performance in a master-slave robotic system by tracking hand movements. These studies relied on conventional optical or magnetic tracking systems, making their use impracticable in the operating room. This study concentrated on building an intrinsic movement capture platform using microcontroller based hardware wired to a surgical robot. Software was developed to enable tracking and analysis of hand movements while surgical tasks were performed. Movement capture was applied towards automated movements of the robotic instruments. By emulating control signals, recorded surgical movements were replayed by the robot's end-effectors. Though this work uses a surgical robot as the platform, the ideas and concepts put forward are applicable to telerobotic systems in general.
Spin-orbit coupling for tidally evolving super-Earths
NASA Astrophysics Data System (ADS)
Rodríguez, A.; Callegari, N.; Michtchenko, T. A.; Hussmann, H.
2012-12-01
We investigate the spin behaviour of close-in rocky planets and the implications for their orbital evolution. Considering that the planet rotation evolves under simultaneous actions of the torque due to the equatorial deformation and the tidal torque, both raised by the central star, we analyse the possibility of temporary captures in spin-orbit resonances. The results of the numerical simulations of the exact equations of motions indicate that, whenever the planet rotation is trapped in a resonant motion, the orbital decay and the eccentricity damping are faster than the ones in which the rotation follows the so-called pseudo-synchronization. Analytical results obtained through the averaged equations of the spin-orbit problem show a good agreement with the numerical simulations. We apply the analysis to the cases of the recently discovered hot super-Earths Kepler-10 b, GJ 3634 b and 55 Cnc e. The simulated dynamical history of these systems indicates the possibility of capture in several spin-orbit resonances; particularly, GJ 3634 b and 55 Cnc e can currently evolve under a non-synchronous resonant motion for suitable values of the parameters. Moreover, 55 Cnc e may avoid a chaotic rotation behaviour by evolving towards synchronization through successive temporary resonant trappings.
NASA Astrophysics Data System (ADS)
Rahman, Nurul Hidayah Ab; Abdullah, Nurul Azma; Hamid, Isredza Rahmi A.; Wen, Chuah Chai; Jelani, Mohamad Shafiqur Rahman Mohd
2017-10-01
Closed-Circuit TV (CCTV) systems are one of the technologies in the surveillance field that address detection and monitoring by providing extra features such as email alerts or motion detection. However, detecting and alerting the admin in a CCTV system may be complicated by the need to integrate the main program with an external Application Programming Interface (API). In this study, a pixel processing algorithm is applied due to its efficiency, and SMS alerting is added as an alternative solution for users who have opted out of the email alert system or have no Internet connection. A CCTV system with SMS alert (CMDSA) was developed using an evolutionary prototyping methodology. The system interface was implemented using Microsoft Visual Studio, while the backend components, namely the database and code, were implemented using an SQLite database and the C# programming language, respectively. The main modules of CMDSA are motion detection, capturing and saving video, image processing, and Short Message Service (SMS) alert functions. As a result, the system is able to reduce processing time, making the detection process faster, reduce the space and memory used to run the program, and alert the system admin instantly.
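The detection step attributed above to a pixel processing algorithm is commonly realized by frame differencing. The sketch below is a minimal OpenCV illustration of that general idea in Python, not the CMDSA C# implementation; the camera index and thresholds are hypothetical.

```python
import cv2

def detect_motion(prev_gray, cur_gray, pixel_thresh=25, min_changed=500):
    """Return True if enough pixels changed between two grayscale frames."""
    diff = cv2.absdiff(prev_gray, cur_gray)
    _, mask = cv2.threshold(diff, pixel_thresh, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask) > min_changed

cap = cv2.VideoCapture(0)                       # hypothetical camera index
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if detect_motion(prev, gray):
        print("motion detected")                # here CMDSA would save video / send the SMS alert
    prev = gray
cap.release()
```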
Micro air vehicle motion tracking and aerodynamic modeling
NASA Astrophysics Data System (ADS)
Uhlig, Daniel V.
Aerodynamic performance of small-scale fixed-wing flight is not well understood, and flight data are needed to gain a better understanding of the aerodynamics of micro air vehicles (MAVs) flying at Reynolds numbers between 10,000 and 30,000. Experimental studies have shown the aerodynamic effects of low Reynolds number flow on wings and airfoils, but the amount of work that has been conducted is not extensive and mostly limited to tests in wind and water tunnels. In addition to wind and water tunnel testing, flight characteristics of aircraft can be gathered through flight testing. The small size and low weight of MAVs prevent the use of conventional on-board instrumentation systems, but motion tracking systems that use off-board triangulation can capture flight trajectories (position and attitude) of MAVs with minimal onboard instrumentation. Because captured motion trajectories include minute noise that depends on the aircraft size, the trajectory results were verified in this work using repeatability tests. From the captured glide trajectories, the aerodynamic characteristics of five unpowered aircraft were determined. Test results for the five MAVs showed the forces and moments acting on the aircraft throughout the test flights. In addition, the airspeed, angle of attack, and sideslip angle were also determined from the trajectories. Results for low angles of attack (less than approximately 20 deg) showed the lift, drag, and moment coefficients during nominal gliding flight. The results showed a lift curve that was linear until stall, with a slope generally less than finite-wing predictions. The drag curve was well described by a polar. The moment coefficients during the gliding flights were used to determine longitudinal and lateral stability derivatives. The neutral point, weather-vane stability and the dihedral effect showed some variation with different trim speeds (different angles of attack). In the gliding flights, the aerodynamic characteristics exhibited quasi-steady effects caused by small variations in the angle of attack. The quasi-steady effects, or small unsteady effects, caused variations in the aerodynamic characteristics (particularly incrementing the lift curve), and the magnitude of the influence depended on the angle-of-attack rate. In addition to nominal gliding flight, MAVs in general are capable of flying over a wide flight envelope including agile maneuvers such as perching, hovering, deep stall and maneuvering in confined spaces. From the captured motion trajectories, the aerodynamic characteristics during the numerous unsteady flights were gathered without the complexity required for unsteady wind tunnel tests. Experimental results for the MAVs show large flight envelopes that included high angles of attack (on the order of 90 deg) and high angular rates, and the aerodynamic coefficients had dynamic stall hysteresis loops and large values. From the large number of unsteady high angle-of-attack flights, an aerodynamic modeling method was developed and refined for unsteady MAV flight at high angles of attack. The method was based on a separation parameter that depended on the time history of the angle of attack and angle-of-attack rate. The separation parameter accounted for the time lag inherent in the longitudinal characteristics during dynamic maneuvers. The method was applied to three MAVs and showed general agreement with unsteady experimental results and with nominal gliding flight results.
The flight tests with the MAVs indicate that modern motion tracking systems are capable of capturing the flight trajectories, and the captured trajectories can be used to determine the aerodynamic characteristics. From the captured trajectories, low Reynolds number MAV flight is explored in both nominal gliding flight and unsteady high angle-of-attack flight. Building on the experimental results, a modeling method for the longitudinal characteristics is developed that is applicable to the full flight envelope.
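The thesis's exact formulation of the separation parameter is not given here; one common way to realize "a separation parameter that depends on the time history of the angle of attack and its rate" is a Goman-Khrabrov-style first-order lag, sketched below. The time constants, static curve, and stall angle are hypothetical placeholders, not the values identified in the work.

```python
import numpy as np

def separation_state(alpha, alpha_dot, dt, tau1=0.2, tau2=0.05, alpha_star=np.radians(15)):
    """Goman-Khrabrov-style lag: tau1 * dx/dt + x = x_static(alpha - tau2 * alpha_dot),
    where x in [0, 1] is an attached-flow fraction with a hypothetical static stall curve."""
    def x_static(a):
        return 1.0 / (1.0 + np.exp((a - alpha_star) / np.radians(2.0)))   # smooth stall break
    x = np.empty_like(alpha)
    x[0] = x_static(alpha[0])
    for k in range(1, len(alpha)):
        target = x_static(alpha[k] - tau2 * alpha_dot[k])
        x[k] = x[k - 1] + dt * (target - x[k - 1]) / tau1
    return x

t = np.arange(0.0, 2.0, 0.01)
alpha = np.radians(25) * np.sin(2 * np.pi * 1.0 * t)      # pitching through stall
alpha_dot = np.gradient(alpha, t)
x = separation_state(alpha, alpha_dot, dt=0.01)
print(x.min(), x.max())    # the lag produces hysteresis of x (and hence lift) versus alpha
```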
Animation control of surface motion capture.
Tejera, Margara; Casas, Dan; Hilton, Adrian
2013-12-01
Surface motion capture (SurfCap) of actor performance from multiple view video provides reconstruction of the natural nonrigid deformation of skin and clothing. This paper introduces techniques for interactive animation control of SurfCap sequences which allow the flexibility in editing and interactive manipulation associated with existing tools for animation from skeletal motion capture (MoCap). Laplacian mesh editing is extended using a basis model learned from SurfCap sequences to constrain the surface shape to reproduce natural deformation. Three novel approaches for animation control of SurfCap sequences, which exploit the constrained Laplacian mesh editing, are introduced: 1) space–time editing for interactive sequence manipulation; 2) skeleton-driven animation to achieve natural nonrigid surface deformation; and 3) hybrid combination of skeletal MoCap driven and SurfCap sequence to extend the range of movement. These approaches are combined with high-level parametric control of SurfCap sequences in a hybrid surface and skeleton-driven animation control framework to achieve natural surface deformation with an extended range of movement by exploiting existing MoCap archives. Evaluation of each approach and the integrated animation framework are presented on real SurfCap sequences for actors performing multiple motions with a variety of clothing styles. Results demonstrate that these techniques enable flexible control for interactive animation with the natural nonrigid surface dynamics of the captured performance and provide a powerful tool to extend current SurfCap databases by incorporating new motions from MoCap sequences.
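Laplacian mesh editing, which the paper extends with a basis learned from SurfCap sequences, starts from differential (Laplacian) coordinates of the mesh vertices. The sketch below computes uniform (umbrella-weighted) Laplacian coordinates for a toy tetrahedron; it is generic geometry code, not the SurfCap implementation.

```python
import numpy as np

def uniform_laplacian_coords(vertices, faces):
    """delta_i = v_i - mean of v_i's one-ring neighbors (uniform weights)."""
    n = len(vertices)
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    delta = np.zeros_like(vertices)
    for i, nb in enumerate(neighbors):
        delta[i] = vertices[i] - np.mean([vertices[j] for j in nb], axis=0)
    return delta

# Toy tetrahedron.
V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
F = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(uniform_laplacian_coords(V, F))
```

Editing then solves for new vertex positions that keep these differential coordinates (subject to user constraints), which is what preserves natural surface detail during manipulation.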
NESDI FY10 Year in Review Report: The Case For Success 2010
2010-01-01
36 CASE STUDY: Motion Assisted Environmental Enclosure for Capturing Paint Overspray in Dry Docks...and to outline a means to assess its environmental impact. 8. Motion Assisted Environmental Enclosure for Capturing Paint Overspray in Dry Docks...in dry docks. 9. Cleaning Solvents for the 21st Century. As part of the Department of Defense’s (DoD) response to eliminating the use of volatile
Oyama, Shintaro; Shimoda, Shingo; Alnajjar, Fady S K; Iwatsuki, Katsuyuki; Hoshiyama, Minoru; Tanaka, Hirotaka; Hirata, Hitoshi
2016-01-01
Background: For mechanically reconstructing human biomechanical function, intuitive proportional control and robustness to unexpected situations are required. In particular, creating a functional hand prosthesis is a typical challenge in the reconstruction of lost biomechanical function. Nevertheless, currently available control algorithms are still in the development phase. The most advanced algorithms for controlling multifunctional prostheses are machine learning and pattern recognition of myoelectric signals. Despite the increase in computational speed, these methods cannot avoid the need for conscious user effort and classification errors. The "Tacit Learning System" is a simple but novel adaptive control strategy that can self-adapt its posture to environmental changes. We introduced the strategy into prosthesis rotation control to reduce compensatory motion, and evaluated the system and its effects on the user. Methods: We conducted a non-randomized study involving eight prosthesis users who performed a bar relocation task with and without Tacit Learning System support. Hand piece and body motions were recorded continuously with goniometers, video, and a motion-capture system. Findings: A reduction in upper extremity rotatory compensation motion was observed during the relocation task in all participants. The estimated profile of total body energy consumption improved in five out of six participants. Interpretation: Our system rapidly accomplished nearly natural motion without unexpected errors. The Tacit Learning System not only adapts to human motions but also enhances the human ability to adapt to the system quickly, while the system amplifies the compensation generated by the residual limb. The concept can be extended to various situations for reconstructing lost functions that can be compensated.
Validation of an inertial measurement unit for the measurement of jump count and height.
MacDonald, Kerry; Bahr, Roald; Baltich, Jennifer; Whittaker, Jackie L; Meeuwisse, Willem H
2017-05-01
To validate the use of an inertial measurement unit (IMU) for the collection of total jump count and assess the validity of an IMU for the measurement of jump height against 3-D motion analysis. Cross sectional validation study. 3D motion-capture laboratory and field based settings. Thirteen elite adolescent volleyball players. Participants performed structured drills, played a 4 set volleyball match and performed twelve counter movement jumps. Jump counts from structured drills and match play were validated against visual count from recorded video. Jump height during the counter movement jumps was validated against concurrent 3-D motion-capture data. The IMU device captured more total jumps (1032) than visual inspection (977) during match play. During structured practice, device jump count sensitivity was strong (96.8%) while specificity was perfect (100%). The IMU underestimated jump height compared to 3D motion-capture with mean differences for maximal and submaximal jumps of 2.5 cm (95%CI: 1.3 to 3.8) and 4.1 cm (3.1-5.1), respectively. The IMU offers a valid measuring tool for jump count. Although the IMU underestimates maximal and submaximal jump height, our findings demonstrate its practical utility for field-based measurement of jump load. Copyright © 2016 Elsevier Ltd. All rights reserved.
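The abstract does not spell out how the IMU derives jump height. One common approach, shown purely as an assumed illustration, estimates height from flight time (h = g·t²/8) detected from the near-zero specific force the accelerometer reads while airborne; the threshold and detection logic below are placeholders, not the device's algorithm.

import numpy as np

# Illustrative flight-time jump-height estimate; threshold and event detection are assumptions.
G = 9.81  # m/s^2

def jump_height_from_flight_time(t, accel_vertical, free_fall_thresh=3.0):
    """Estimate jump height from the duration of near-free-fall vertical specific force."""
    in_flight = np.abs(accel_vertical) < free_fall_thresh   # accelerometer reads ~0 during flight
    if not in_flight.any():
        return 0.0
    idx = np.flatnonzero(in_flight)
    t_flight = t[idx[-1]] - t[idx[0]]
    return G * t_flight ** 2 / 8.0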
NASA Astrophysics Data System (ADS)
Kolesnikov, E. K.; Chernov, S. V.
2018-05-01
A detailed study of the conditions for the realization of the phenomena of magnetic and gravity capture (MGC) of nanoparticles (NP) injected into near-Earth space in circular orbits with altitudes and inclinations characteristic of the orbits of satellites of navigation systems (GLONASS, GPS, etc.) is carried out. Spherical aluminum oxide particles with radii from 4 to 100 nm were considered as injected particles. It was assumed that injection of NP is performed at various points of circular orbits with a height of 19130 km, an inclination angle to the equatorial plane equal to 64.8 degrees, and a longitude of the ascending node of 0, 120, and 240 degrees. Calculations of the motion of nanoparticles in near-Earth space were performed for conditions of low solar and geomagnetic activity. The results of numerical experiments show that, for all the considered spatial orientations of the orbit of the parent body (PB) of the NP, motion in the magnetic and gravitational capture mode with extremely long orbital existence times (more than two years) can be realized only for nanoparticles with radii in the narrow range from 8.6 to 10.2 nm.
Validation of a stereo camera system to quantify brain deformation due to breathing and pulsatility.
Faria, Carlos; Sadowsky, Ofri; Bicho, Estela; Ferrigno, Giancarlo; Joskowicz, Leo; Shoham, Moshe; Vivanti, Refael; De Momi, Elena
2014-11-01
A new stereo vision system is presented to quantify brain shift and pulsatility in open-skull neurosurgeries. The system features hardware- and software-synchronized image acquisition with timestamp embedding in the captured images, brain-surface-oriented feature detection, and a tracking subroutine robust to occlusions and outliers. A validation experiment for the stereo vision system was conducted against a gold-standard optical tracking system, Optotrak CERTUS. A static and dynamic analysis of the stereo camera tracking error was performed by tracking a customized object at different positions, orientations, and linear and angular speeds. The system is able to determine the position and orientation of an immobile object with a maximum error of 0.5 mm and 1.6° over the entire depth of field, and to track an object moving at up to 3 mm/s with a median error of 0.5 mm. Three stereo video acquisitions were recorded from a patient, immediately after the craniotomy. The cortical pulsatile motion was captured and is represented in the time and frequency domains. The amplitude of motion of the center of mass of the cloud of features was less than 0.8 mm. Three distinct peaks are identified in the fast Fourier transform analysis, related to sympathovagal balance, breathing, and blood pressure at 0.03-0.05, 0.2, and 1 Hz, respectively. The stereo vision system presented is a precise and robust system for measuring brain shift and pulsatility, with accuracy superior to that of other reported systems.
Kwon, Young-Hoo; Como, Christopher S; Singhal, Kunal; Lee, Sangwoo; Han, Ki Hoon
2012-06-01
The purposes of this study were (1) to determine the functional swing plane (FSP) of the clubhead and the motion planes (MPs) of the shoulder/arm points and (2) to assess planarity of the golf swing based on the FSP and the MPs. The swing motions of 14 male skilled golfers (mean handicap = -0.5 +/- 2.0) using three different clubs (driver, 5-iron, and pitching wedge) were captured by an optical motion capture system (250Hz). The FSP and MPs along with their slope/relative inclination and direction/direction of inclination were obtained using a new trajectory-plane fitting method. The slope and direction of the FSP revealed a significant club effect (p < 0.001). The relative inclination and direction of inclination of the MP showed significant point (p < 0.001) and club (p < 0.001) effects and interaction (p < 0.001). Maximum deviations of the points from the FSP revealed a significant point effect (p < 0.001) and point-club interaction (p < 0.001). It was concluded that skilled golfers exhibited well-defined and consistent FSP and MPs, and the shoulder/arm points moved on vastly different MPs and exhibited large deviations from the FSP. Skilled golfers in general exhibited semi-planar downswings with two distinct phases: a transition phase and a planar execution phase.
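The authors' trajectory-plane fitting method is not reproduced in the abstract; the sketch below shows only the generic idea of fitting a functional plane to a 3D trajectory by least squares (SVD) and measuring point deviations from it, with all details assumed rather than taken from the study.

import numpy as np

# Generic least-squares plane fit to a 3D marker trajectory, illustrating the idea
# of a functional swing plane; the authors' fitting method and their slope/direction
# definitions are not reproduced here.

def fit_plane(points):
    """Fit a plane to an (N, 3) array of points; return (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                      # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

def max_deviation_from_plane(points, centroid, normal):
    """Largest absolute point-to-plane distance, e.g. for a planarity assessment."""
    return np.abs((points - centroid) @ normal).max()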
Auditory Imagery Shapes Movement Timing and Kinematics: Evidence from a Musical Task
ERIC Educational Resources Information Center
Keller, Peter E.; Dalla Bella, Simone; Koch, Iring
2010-01-01
The role of anticipatory auditory imagery in music-like sequential action was investigated by examining timing accuracy and kinematics using a motion capture system. Musicians responded to metronomic pacing signals by producing three unpaced taps on three vertically aligned keys at the given tempo. Taps triggered tones in two out of three blocked…
ERIC Educational Resources Information Center
Dammeyer, Jesper; Koppe, Simo
2013-01-01
Research in social interaction and nonverbal communication among individuals with severe developmental disabilities also includes the study of body movements. Advances in analytical technology give new possibilities for measuring body movements more accurately and reliably. One such advance is the Qualisys Motion Capture System (QMCS), which…
Motor Impairment Evaluation for Upper Limb in Stroke Patients on the Basis of a Microsensor
ERIC Educational Resources Information Center
Huang, Shuai; Luo, Chun; Ye, Shiwei; Liu, Fei; Xie, Bin; Wang, Caifeng; Yang, Li; Huang, Zhen; Wu, Jiankang
2012-01-01
There has been an urgent need for an effective and efficient upper limb rehabilitation method for poststroke patients. We present a Micro-Sensor-based Upper Limb rehabilitation System for poststroke patients. The wearable motion capture units are attached to upper limb segments embedded in the fabric of garments. The body segment orientation…
Image-Aided Navigation Using Cooperative Binocular Stereopsis
2014-03-27
Global Positioning System ... IMU Inertial Measurement Unit ... an inertial measurement unit (IMU). This technique capitalizes on an IMU’s ability to capture quick motion and the ability of GPS to constrain long... the sensor-aided IMU framework. Visual sensors provide a number of benefits, such as low cost and weight. These sensors are also able to measure
Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR
Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington
2014-01-01
This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively. PMID:25207868
Motion field estimation for a dynamic scene using a 3D LiDAR.
Li, Qingquan; Zhang, Liang; Mao, Qingzhou; Zou, Qin; Zhang, Pin; Feng, Shaojun; Ochieng, Washington
2014-09-09
This paper proposes a novel motion field estimation method based on a 3D light detection and ranging (LiDAR) sensor for motion sensing for intelligent driverless vehicles and active collision avoidance systems. Unlike multiple target tracking methods, which estimate the motion state of detected targets, such as cars and pedestrians, motion field estimation regards the whole scene as a motion field in which each little element has its own motion state. Compared to multiple target tracking, segmentation errors and data association errors have much less significance in motion field estimation, making it more accurate and robust. This paper presents an intact 3D LiDAR-based motion field estimation method, including pre-processing, a theoretical framework for the motion field estimation problem and practical solutions. The 3D LiDAR measurements are first projected to small-scale polar grids, and then, after data association and Kalman filtering, the motion state of every moving grid is estimated. To reduce computing time, a fast data association algorithm is proposed. Furthermore, considering the spatial correlation of motion among neighboring grids, a novel spatial-smoothing algorithm is also presented to optimize the motion field. The experimental results using several data sets captured in different cities indicate that the proposed motion field estimation is able to run in real-time and performs robustly and effectively.
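As a rough illustration of the pre-processing step mentioned above (projecting LiDAR measurements to small-scale polar grids), the sketch below bins 3D returns into range-azimuth cells; the bin sizes and the cell representation are assumptions, not the paper's parameters.

import numpy as np

# Sketch of polar-grid binning of LiDAR returns; resolutions are illustrative assumptions.
def to_polar_grid(points_xyz, range_res=0.5, azim_res_deg=1.0, max_range=80.0):
    """Return a dict mapping (range_bin, azimuth_bin) -> indices of points in that cell."""
    x, y = points_xyz[:, 0], points_xyz[:, 1]
    rng = np.hypot(x, y)
    azim = np.degrees(np.arctan2(y, x)) % 360.0
    valid = rng < max_range
    cells = {}
    for idx in np.flatnonzero(valid):
        key = (int(rng[idx] // range_res), int(azim[idx] // azim_res_deg))
        cells.setdefault(key, []).append(idx)
    return cells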
Chen, Chia-Hsiung; Azari, David; Hu, Yu Hen; Lindstrom, Mary J.; Thelen, Darryl; Yen, Thomas Y.; Radwin, Robert G.
2015-01-01
Objective Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Background Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross correlation template-matching algorithm for tracking a region of interest on the upper extremities. Methods Ten participants performed a paced load transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N and 17.8 N). Speed and acceleration measured from 2D video were compared against ground truth measurements using 3D infrared motion capture. Results The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed, and 591 mm/s2 for acceleration, and less than 93 mm/s for speed and 656 mm/s2 for acceleration when camera pan and tilt were within ±30 degrees. Conclusion Single-camera 2D video had sufficient accuracy (< 100 mm/s) for evaluating HAL. Practitioner Summary This study demonstrated that 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Government Industrial Hygienists Threshold Limit Value® for repetitive motion when the camera is located within ±30 degrees off the plane of motion when compared against 3D motion capture for a simulated repetitive motion task. PMID:25978764
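A minimal sketch of region-of-interest tracking by normalized cross-correlation template matching, the general class of algorithm described above; the OpenCV call, the fixed template, and the pixel-to-millimetre conversion are illustrative assumptions rather than the authors' pipeline.

import cv2
import numpy as np

# Sketch: track an ROI by normalized cross-correlation, then convert motion to speed.
def track_roi(frames, roi, fps, mm_per_px):
    """frames: list of grayscale images; roi: (x, y, w, h) in the first frame."""
    x, y, w, h = roi
    template = frames[0][y:y + h, x:x + w]
    centers = []
    for frame in frames:
        res = cv2.matchTemplate(frame, template, cv2.TM_CCORR_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        centers.append((max_loc[0] + w / 2.0, max_loc[1] + h / 2.0))
    centers = np.asarray(centers) * mm_per_px
    speed = np.linalg.norm(np.diff(centers, axis=0), axis=1) * fps   # mm/s
    return centers, speed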
A review of vision-based motion analysis in sport.
Barris, Sian; Button, Chris
2008-01-01
Efforts at player motion tracking have traditionally involved a range of data collection techniques from live observation to post-event video analysis where player movement patterns are manually recorded and categorized to determine performance effectiveness. Due to the considerable time required to manually collect and analyse such data, research has tended to focus only on small numbers of players within predefined playing areas. Whilst notational analysis is a convenient, practical and typically inexpensive technique, the validity and reliability of the process can vary depending on a number of factors, including how many observers are used, their experience, and the quality of their viewing perspective. Undoubtedly the application of automated tracking technology to team sports has been hampered because of inadequate video and computational facilities available at sports venues. However, the complex nature of movement inherent to many physical activities also represents a significant hurdle to overcome. Athletes tend to exhibit quick and agile movements, with many unpredictable changes in direction and also frequent collisions with other players. Each of these characteristics of player behaviour violates the assumptions of smooth movement on which computer tracking algorithms are typically based. Systems such as TRAKUS, SoccerMan, TRAKPERFORMANCE, Pfinder and Prozone all provide extrinsic feedback information to coaches and athletes. However, commercial tracking systems still require a fair amount of operator intervention to process the data after capture and are often limited by the restricted capture environments that can be used and the necessity for individuals to wear tracking devices. Whilst some online tracking systems alleviate the requirements of manual tracking, to our knowledge a completely automated system suitable for sports performance is not yet commercially available. Automatic motion tracking has been used successfully in other domains outside of elite sport performance, notably for surveillance in the military and security industry where automatic recognition of moving objects is achievable because identification of the objects is not necessary. The current challenge is to obtain appropriate video sequences that can robustly identify and label people over time, in a cluttered environment containing multiple interacting people. This problem is often compounded by the quality of video capture, the relative size and occlusion frequency of people, and also changes in illumination. Potential applications of an automated motion detection system are offered, such as: planning tactics and strategies; measuring team organisation; providing meaningful kinematic feedback; and objective measures of intervention effectiveness in team sports, which could benefit coaches, players, and sports scientists.
Human motion behavior while interacting with an industrial robot.
Bortot, Dino; Ding, Hao; Antonopolous, Alexandros; Bengler, Klaus
2012-01-01
Human workers and industrial robots both have specific strengths within industrial production. Advantageously, they complement each other perfectly, which leads to the development of human-robot interaction (HRI) applications. Bringing humans and robots together in the same workspace may lead to collisions, and avoiding such collisions is a central safety requirement. It can be realized with various sensor systems, all of which decelerate the robot when the distance to the human decreases alarmingly and apply an emergency stop when the distance becomes too small. As a consequence, the efficiency of the overall system suffers because the robot has high idle times. Optimized path-planning algorithms have to be developed to avoid this. The following study investigates human motion behavior in the proximity of an industrial robot. Three different kinds of encounters between the two entities under three robot speed levels are prompted. A motion tracking system is used to capture the motions. Results show that humans keep an average distance of about 0.5 m from the robot when the encounter occurs. The approach to the workbenches is influenced by the robot in 10 of 15 cases. Furthermore, an increase in participants' walking velocity with higher robot velocities is observed.
An instrumented spatial linkage for measuring knee joint kinematics.
Rosvold, Joshua M; Atarod, Mohammad; Frank, Cyril B; Shrive, Nigel G
2016-01-01
In this study, the design and development of a highly accurate instrumented spatial linkage (ISL) for kinematic analysis of the ovine stifle joint is described. The ovine knee is a promising biomechanical model of the human knee joint. The ISL consists of six digital rotational encoders providing six degrees of freedom (6-DOF) to its motion. The ISL makes use of the complete and parametrically continuous (CPC) kinematic modeling method to describe the kinematic relationship between encoder readings and the relative positions and orientation of its two ends. The CPC method is useful when calibrating the ISL, because a small change in parameters corresponds to a small change in calculated positions and orientations and thus a smaller optimization error, compared to other kinematic models. The ISL is attached rigidly to the femur and the tibia for motion capture, and the CPC kinematic model is then employed to transform the angle sensor readings to relative motion of the two ends of the linkage, and thereby, the stifle joint motion. The positional accuracy of the ISL after calibration and optimization was 0.3 ± 0.2 mm (mean ± standard deviation). The ISL was also evaluated dynamically to ensure that accurate results were maintained, and achieved an accuracy of 0.1 mm. Compared to traditional motion capture methods, this system provides increased accuracy, reduced processing time, and ease of use. Future work will focus on the application of the ISL to ovine gait and the determination of in vivo joint motions and tissue loads. Accurate measurement of knee joint kinematics is essential in understanding injury mechanisms and developing potential preventive or treatment strategies. Copyright © 2015 Elsevier B.V. All rights reserved.
Charbonnier, Caecilia; Kolo, Frank C; Duthon, Victoria B; Magnenat-Thalmann, Nadia; Becker, Christoph D; Hoffmeyer, Pierre; Menetrey, Jacques
2011-03-01
Early hip osteoarthritis in dancers could be explained by femoroacetabular impingements. However, there is a lack of validated noninvasive methods and dynamic studies to ascertain impingement during motion. Moreover, it is unknown whether the femoral head and acetabulum are congruent in typical dancing positions. The practice of some dancing movements could cause a loss of hip joint congruence and recurrent impingements, which could lead to early osteoarthritis. Descriptive laboratory study. Eleven pairs of female dancers' hips were motion captured with an optical tracking system while performing 6 different dancing movements. The resulting computed motions were applied to patient-specific hip joint 3-dimensional models based on magnetic resonance images. While visualizing the dancer's hip in motion, the authors detected impingements using computer-assisted techniques. The range of motion and congruence of the hip joint were also quantified in those 6 recorded dancing movements. The frequency of impingement and subluxation varied with the type of movement. Four dancing movements (développé à la seconde, grand écart facial, grand écart latéral, and grand plié) seem to induce significant stress in the hip joint, according to the observed high frequency of impingement and amount of subluxation. The femoroacetabular translations were high (range, 0.93 to 6.35 mm). For almost all movements, the computed zones of impingement were mainly located in the superior or posterosuperior quadrant of the acetabulum, which was consistent with the radiologically diagnosed damaged zones in the labrum. All dancers' hips were morphologically normal. Impingements and subluxations are frequently observed in typical ballet movements, causing cartilage hypercompression. These movements should be limited in frequency. The present study indicates that some dancing movements could damage the hip joint, which could lead to early osteoarthritis.
Moving Object Detection on a Vehicle Mounted Back-Up Camera
Kim, Dong-Sun; Kwon, Jinsan
2015-01-01
In the detection of moving objects from vision sources, one usually assumes that the scene has been captured by stationary cameras. In the case of backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle’s movement, resulting in ego-motion in the background. This results in mixed motion in the scene and makes it difficult to distinguish between the target objects and background motions. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will lead to many false-positive detection results. In this paper, we suggest a procedure to be used with the traditional moving object detection methods that relaxes the stationary-camera restriction by introducing additional steps before and after the detection. We also describe the implementation of the algorithm on an FPGA platform. The target application of this approach is road vehicles’ rear-view camera systems. PMID:26712761
NASA Technical Reports Server (NTRS)
Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.
2017-01-01
The Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP) and the Behavioral Health and Performance (BHP) Element are conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to crew for required activities, as well as layout and accommodations within the volume. NASA needs methods to unobtrusively collect NHV data without impacting crew time. Data required include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments such methods exist, yet many are obtrusive and require significant post-processing. Examples used in terrestrial settings include infrared (IR) retro-reflective marker-based motion capture, GPS sensor tracking, inertial tracking, and multi-camera methods. Due to the constraints of space operations, many such methods are infeasible. Inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. However, multiple technologies have not yet been applied to space operations for these purposes. Two of these are 3D Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems that allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).
Kinect system in home-based cardiovascular rehabilitation.
Vieira, Ágata; Gabriel, Joaquim; Melo, Cristina; Machado, Jorge
2017-01-01
Cardiovascular diseases lead to a high consumption of financial resources. An important part of the recovery process is the cardiovascular rehabilitation. This study aimed to present a new cardiovascular rehabilitation system to 11 outpatients with coronary artery disease from a Hospital in Porto, Portugal, later collecting their opinions. This system is based on a virtual reality game system, using the Kinect sensor while performing an exercise protocol which is integrated in a home-based cardiovascular rehabilitation programme, with a duration of 6 months and at the maintenance phase. The participants responded to a questionnaire asking for their opinion about the system. The results demonstrated that 91% of the participants (n = 10) enjoyed the artwork, while 100% (n = 11) agreed on the importance and usefulness of the automatic counting of the number of repetitions, moreover 64% (n = 7) reported motivation to continue performing the programme after the end of the study, and 100% (n = 11) recognized Kinect as an instrument with potential to be an asset in cardiovascular rehabilitation. Criticisms included limitations in motion capture and gesture recognition, 91% (n = 10), and the lack of home space, 27% (n = 3). According to the participants' opinions, the Kinect has the potential to be used in cardiovascular rehabilitation; however, several technical details require improvement, particularly regarding the motion capture and gesture recognition.
Optical holography applications for the zero-g Atmospheric Cloud Physics Laboratory
NASA Technical Reports Server (NTRS)
Kurtz, R. L.
1974-01-01
A complete description of holography is provided, both for the time-dependent case of moving scene holography and for the time-independent case of stationary holography. Further, a specific holographic arrangement is presented for application to the detection of particle size distribution in an atmospheric simulation cloud chamber. In this chamber, particle growth rate is investigated; therefore, the proposed holographic system must capture continuous particle motion in real time. Such a system is described.
Real-time seismic monitoring of instrumented hospital buildings
Kalkan, Erol; Fletcher, Jon Peter B.; Leith, William S.; McCarthy, William S.; Banga, Krishna
2012-01-01
In collaboration with the Department of Veterans Affairs (VA), the U.S. Geological Survey's National Strong Motion Project has recently installed sophisticated seismic monitoring systems to monitor the structural health of two hospital buildings at the Memphis VA Medical Center in Tennessee. The monitoring systems in the Bed Tower and Spinal Cord Injury buildings combine sensing technologies with an on-site computer to capture and analyze seismic performance of buildings in near-real time.
Seo, Joonho; Koizumi, Norihiro; Funamoto, Takakazu; Sugita, Naohiko; Yoshinaka, Kiyoshi; Nomiya, Akira; Homma, Yukio; Matsumoto, Yoichiro; Mitsuishi, Mamoru
2011-06-01
Applying ultrasound (US)-guided high-intensity focused ultrasound (HIFU) therapy for kidney tumours is currently very difficult, due to the unclearly observed tumour area and renal motion induced by human respiration. In this research, we propose new methods by which to track the indistinct tumour area and to compensate the respiratory tumour motion for US-guided HIFU treatment. For tracking indistinct tumour areas, we detect the US speckle change created by HIFU irradiation. In other words, HIFU thermal ablation can coagulate tissue in the tumour area and an intraoperatively created coagulated lesion (CL) is used as a spatial landmark for US visual tracking. Specifically, the condensation algorithm was applied to robust and real-time CL speckle pattern tracking in the sequence of US images. Moreover, biplanar US imaging was used to locate the three-dimensional position of the CL, and a three-actuator system drives the end-effector to compensate for the motion. Finally, we tested the proposed method by using a newly devised phantom model that enables both visual tracking and a thermal response by HIFU irradiation. In the experiment, after generation of the CL in the phantom kidney, the end-effector successfully synchronized with the phantom motion, which was modelled by the captured motion data for the human kidney. The accuracy of the motion compensation was evaluated by the error between the end-effector and the respiratory motion, the RMS error of which was approximately 2 mm. This research shows that a HIFU-induced CL provides a very good landmark for target motion tracking. By using the CL tracking method, target motion compensation can be realized in the US-guided robotic HIFU system. Copyright © 2011 John Wiley & Sons, Ltd.
Children's Understanding of Large-Scale Mapping Tasks: An Analysis of Talk, Drawings, and Gesture
ERIC Educational Resources Information Center
Kotsopoulos, Donna; Cordy, Michelle; Langemeyer, Melanie
2015-01-01
This research examined how children represent motion in large-scale mapping tasks that we referred to as "motion maps". The underlying mathematical content was transformational geometry. In total, 19 children, 8- to 10-year-old, created motion maps and captured their motion maps with accompanying verbal description digitally. Analysis of…
Multimodal transport and dispersion of organelles in narrow tubular cells
NASA Astrophysics Data System (ADS)
Mogre, Saurabh S.; Koslover, Elena F.
2018-04-01
Intracellular components explore the cytoplasm via active motor-driven transport in conjunction with passive diffusion. We model the motion of organelles in narrow tubular cells using analytical techniques and numerical simulations to study the efficiency of different transport modes in achieving various cellular objectives. Our model describes length and time scales over which each transport mode dominates organelle motion, along with various metrics to quantify exploration of intracellular space. For organelles that search for a specific target, we obtain the average capture time for given transport parameters and show that diffusion and active motion contribute to target capture in the biologically relevant regime. Because many organelles have been found to tether to microtubules when not engaged in active motion, we study the interplay between immobilization due to tethering and increased probability of active transport. We derive parameter-dependent conditions under which tethering enhances long-range transport and improves the target capture time. These results shed light on the optimization of intracellular transport machinery and provide experimentally testable predictions for the effects of transport regulation mechanisms such as tethering.
Motion cues that make an impression: Predicting perceived personality by minimal motion information.
Koppensteiner, Markus
2013-11-01
The current study presents a methodology to analyze first impressions on the basis of minimal motion information. In order to test the applicability of the approach, brief silent video clips of 40 speakers were presented to independent observers (i.e., observers who did not know the speakers), who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on the speakers' forehead, one shoulder, and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and openness to variation in motion direction. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior might be done on the basis of a small set of landmarks that seem to capture important parts of relevant nonverbal information.
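As an illustration of how such minimal motion information can be summarized, the sketch below computes simple descriptors (overall activity and variation in motion direction) from a few tracked landmarks; the exact feature definitions used in the study are not given in the abstract, so these are assumptions.

import numpy as np

# Sketch of simple motion descriptors from landmark trajectories; definitions are assumed.
def motion_descriptors(landmarks, fps):
    """landmarks: array of shape (frames, n_points, 2) with pixel coordinates."""
    vel = np.diff(landmarks, axis=0) * fps                 # per-landmark velocity
    speed = np.linalg.norm(vel, axis=-1)
    activity = speed.mean()                                # overall activity
    direction = np.arctan2(vel[..., 1], vel[..., 0])
    direction_var = np.var(np.unwrap(direction, axis=0))   # variation in motion direction
    return {"activity": activity, "direction_variation": direction_var}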
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Y; Rahimi, A; Sawant, A
Purpose: Active breathing control (ABC) has been used to reduce treatment margin due to respiratory organ motion by enforcing temporary breath-holds. However, in practice, even if the ABC device indicates constant lung volume during breath-hold, the patient may still exhibit minor chest motion. Consequently, therapists are given a false sense of security that the patient is immobilized. This study aims at quantifying such motion during ABC breath-holds by monitoring the patient chest motion using a surface photogrammetry system, VisionRT. Methods: A female patient with breast cancer was selected to evaluate chest motion during ABC breath-holds. During the entire course of treatment, the patient’s chest surface was monitored by a surface photogrammetry system, VisionRT. Specifically, a user-defined region-of-interest (ROI) on the chest surface was selected for the system to track at a rate of ∼3 Hz. The surface motion was estimated by rigid image registration between the current ROI image captured and a reference image. The translational and rotational displacements computed were saved in a log file. Results: A total of 20 fractions of radiation treatment were monitored by VisionRT. After removing noisy data, we obtained chest motion of 79 breath-hold sessions. Mean chest motion in AP direction during breath-holds is 1.31 mm with 0.62 mm standard deviation. Of the 79 sessions, the patient exhibited motion ranging from 0–1 mm (30 sessions), 1–2 mm (37 sessions), 2–3 mm (11 sessions) and >3 mm (1 session). Conclusion: Contrary to popular assumptions, the patient is not completely still during ABC breath-hold sessions. In this particular case studied, the patient exhibited chest motion over 2 mm in 14 out of 79 breath-holds. Underestimating treatment margin for radiation therapy with ABC could reduce treatment effectiveness due to geometric miss or overdose of critical organs. The senior author receives research funding from NIH, VisionRT, Varian Medical Systems and Elekta.
Designing berthing mechanisms for international compatibility
NASA Technical Reports Server (NTRS)
Winch, John; Gonzalez-Vallejo, Juan J.
1991-01-01
The paper examines the technological issues regarding common berthing interfaces for the Space Station Freedom and pressurized modules from U.S., European, and Japanese space programs. The development of the common berthing mechanism (CBM) is based on common requirements concerning specifications, launch environments, and the unique requirements of ESA's Man-Tended Free Flyer. The berthing mechanism is composed of an active and a passive half, a remote manipulator system, 4 capture-latch assemblies, 16 structural bolts, and a pressure gage to verify equalization. Extensive graphic and verbal descriptions of each element are presented emphasizing the capture-latch motion and powered-bolt operation. The support systems to complete the interface are listed, and the manufacturing requirements for consistent fabrication are discussed to ensure effective international development.
Systematics of capture and fusion dynamics in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui
2017-03-01
We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to effectively take into account the effects of couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is of an asymmetric Gaussian form. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections, together with experimental cross sections of 220 reaction systems with 182 ⩽ Z_P Z_T ⩽ 1640, are tabulated. The results show that the ECC model, together with the empirical formulas for the parameters of the barrier distribution, works quite well in the energy region around the Coulomb barrier. This ECC model can provide predictions of capture cross sections for the synthesis of superheavy nuclei as well as valuable information on capture and fusion dynamics.
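The abstract does not reproduce the model's formulas. For orientation only, an ECC-type calculation with an asymmetric Gaussian barrier distribution generally takes a form like the following, where the normalization, the widths, and the single-barrier cross section shown are generic choices rather than the paper's exact parameterization:

f(B) = N \exp\!\left[-\frac{(B-B_m)^2}{2\Delta_1^2}\right] \ (B \le B_m), \qquad f(B) = N \exp\!\left[-\frac{(B-B_m)^2}{2\Delta_2^2}\right] \ (B > B_m), \qquad \int f(B)\,dB = 1,

\sigma_{\mathrm{cap}}(E) = \int f(B)\,\sigma(E,B)\,dB, \qquad \sigma(E,B) = \pi R_B^2\left(1-\frac{B}{E}\right) \ \text{for } E \ge B, \ 0 \ \text{otherwise},

so the capture cross section is a barrier-weighted average of single-barrier cross sections, with the asymmetric widths Δ1 and Δ2 encoding the couplings and transfer channels described above.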
The Role of Near-Fault Relief in Creating and Maintaining Strike-Slip Landscape Features
NASA Astrophysics Data System (ADS)
Harbert, S.; Duvall, A. R.; Tucker, G. E.
2016-12-01
Geomorphic landforms, such as shutter ridges, offset river terraces, and deflected stream channels, are often used to assess the activity and slip rates of strike-slip faults. However, in some systems, such as parts of the Marlborough Fault System (South Island, NZ), an active strike-slip fault does not leave a strong landscape signature. Here we explore the factors that dampen or enhance the landscape signature of strike-slip faulting using the Channel-Hillslope Integrated Landscape Development model (CHILD). We focus on variables affecting the length of channel offsets, which enhance the signature of strike-slip motion, and the frequency of stream captures, which eliminate offsets and reduce this signature. We model a strike-slip fault that passes through a mountain ridge, offsetting streams that drain across this fault. We use this setup to test the response of channel offset length and capture frequency to fault characteristics, such as slip rate and ratio of lateral to vertical motion, and to landscape characteristics, such as relief contrasts controlled by erodibility. Our experiments show that relief downhill of the fault, whether generated by differential uplift across the fault or by an erodibility contrast, has the strongest effect on offset length and capture frequency. This relief creates shutter ridges, which block and divert streams while being advected along a fault. Shutter ridges and the streams they divert have long been recognized as markers of strike-slip motion. Our results show specifically that the height of shutter ridges is most responsible for the degree to which they create long channel offsets by preventing stream captures. We compare these results to landscape metrics in the Marlborough Fault System, where shutter ridges are common and often lithologically controlled. We compare shutter ridge length and height to channel offset length in order to assess the influence of relief on offset channel features in a real landscape. Based on our model and field results, we conclude that vertical relief is important for generating and preserving offset features that are viewed as characteristic of a strike-slip fault. Therefore, the geomorphic expression of a fault may be dependent on characteristics of the surrounding landscape rather than primarily a function of the nature of slip on the fault.
A respiratory compensating system: design and performance evaluation.
Chuang, Ho-Chiao; Huang, Ding-Yang; Tien, Der-Chi; Wu, Ren-Hong; Hsu, Chung-Hsien
2014-05-08
This study proposes a respiratory compensating system which is mounted on the top of the treatment couch for reverse motion, opposite from the direction of the targets (diaphragm and hemostatic clip), in order to offset organ displacement generated by respiratory motion. Traditionally, in the treatment of cancer patients, doctors must increase the field size for radiation therapy of tumors because organs move with respiratory motion, which causes radiation-induced inflammation on the normal tissues (organ at risk (OAR)) while killing cancer cells, and thereby reducing the patient's quality of life. This study uses a strain gauge as a respiratory signal capture device to obtain abdominal respiratory signals, together with a proposed respiratory simulation system (RSS) and the respiratory compensating system, to investigate how to offset the organ displacement caused by respiratory movement and to evaluate the compensation effect. This study verifies the effect of the respiratory compensating system in offsetting the target displacement using two methods. The first method uses linac (medical linear accelerator) to irradiate a 300 cGy dose on the EBT film (GAFCHROMIC EBT film). The second method uses a strain gauge to capture the patients' respiratory signals, while using fluoroscopy to observe in vivo targets, such as a diaphragm, to enable the respiratory compensating system to offset the displacements of targets in the superior-inferior (SI) direction. Testing results show that the RSS position error is approximately 0.45 ~ 1.42 mm, while the respiratory compensating system position error is approximately 0.48 ~ 1.42 mm. From the EBT film profiles based on different input to the RSS, the results suggest that when the input respiratory signals of the RSS are sine wave signals, the average dose (%) in the target area is improved by 1.4% ~ 24.4%, and improved in the 95% isodose area by 15.3% ~ 76.9% after compensation. If the respiratory signals input into the RSS are actual human respiratory signals, the average dose (%) in the target area is improved by 31.8% ~ 67.7%, and improved in the 95% isodose area by 15.3% ~ 86.4% (the above rates of improvements will increase with increasing respiratory motion displacement) after compensation. The experimental results from the second method suggested that about 67.3% ~ 82.5% of the displacement can be offset. In addition, the gamma passing rate after compensation can be improved to 100% only when the displacement of the respiratory motion is within 10 ~ 30 mm. This study proves that the proposed system can contribute to the compensation of organ displacement caused by respiratory motion, enabling physicians to use lower doses and smaller field sizes in the treatment of tumors of cancer patients.
A respiratory compensating system: design and performance evaluation
Huang, Ding‐Yang; Tien, Der‐Chi; Wu, Ren‐Hong; Hsu, Chung‐Hsien
2014-01-01
This study proposes a respiratory compensating system which is mounted on the top of the treatment couch for reverse motion, opposite from the direction of the targets (diaphragm and hemostatic clip), in order to offset organ displacement generated by respiratory motion. Traditionally, in the treatment of cancer patients, doctors must increase the field size for radiation therapy of tumors because organs move with respiratory motion, which causes radiation-induced inflammation on the normal tissues (organ at risk (OAR)) while killing cancer cells, and thereby reducing the patient's quality of life. This study uses a strain gauge as a respiratory signal capture device to obtain abdominal respiratory signals, together with a proposed respiratory simulation system (RSS) and the respiratory compensating system, to investigate how to offset the organ displacement caused by respiratory movement and to evaluate the compensation effect. This study verifies the effect of the respiratory compensating system in offsetting the target displacement using two methods. The first method uses linac (medical linear accelerator) to irradiate a 300 cGy dose on the EBT film (GAFCHROMIC EBT film). The second method uses a strain gauge to capture the patients' respiratory signals, while using fluoroscopy to observe in vivo targets, such as a diaphragm, to enable the respiratory compensating system to offset the displacements of targets in the superior-inferior (SI) direction. Testing results show that the RSS position error is approximately 0.45 ~ 1.42 mm, while the respiratory compensating system position error is approximately 0.48 ~ 1.42 mm. From the EBT film profiles based on different input to the RSS, the results suggest that when the input respiratory signals of the RSS are sine wave signals, the average dose (%) in the target area is improved by 1.4% ~ 24.4%, and improved in the 95% isodose area by 15.3% ~ 76.9% after compensation. If the respiratory signals input into the RSS are actual human respiratory signals, the average dose (%) in the target area is improved by 31.8% ~ 67.7%, and improved in the 95% isodose area by 15.3% ~ 86.4% (the above rates of improvements will increase with increasing respiratory motion displacement) after compensation. The experimental results from the second method suggested that about 67.3% ~ 82.5% of the displacement can be offset. In addition, the gamma passing rate after compensation can be improved to 100% only when the displacement of the respiratory motion is within 10 ~ 30 mm. This study proves that the proposed system can contribute to the compensation of organ displacement caused by respiratory motion, enabling physicians to use lower doses and smaller field sizes in the treatment of tumors of cancer patients. PACS numbers: 87.19.Wx; 87.55.Km. PMID:24892345
Flocking and self-defense: experiments and simulations of avian mobbing
NASA Astrophysics Data System (ADS)
Kane, Suzanne Amador
2011-03-01
We have performed motion capture studies in the field of avian mobbing, in which flocks of prey birds harass predatory birds. Our empirical studies cover both field observations of mobbing occurring in mid-air, where both predator and prey are in flight, and an experimental system using actual prey birds and simulated predator "perch and wait" strategies. To model our results and establish the effectiveness of mobbing flight paths at minimizing risk of capture while optimizing predator harassment, we have performed computer simulations using the actual measured trajectories of mobbing prey birds combined with model predator trajectories. To accurately simulate predator motion, we also measured raptor acceleration and flight dynamics, as well as prey-pursuit strategies. These experiments and theoretical studies were all performed with undergraduate research assistants in a liberal arts college setting. This work illustrates how biological physics provides undergraduate research projects well-suited to the abilities of physics majors with interdisciplinary science interests and diverse backgrounds.
Combining EEG, MIDI, and motion capture techniques for investigating musical performance.
Maidhof, Clemens; Kästner, Torsten; Makkonen, Tommi
2014-03-01
This article describes a setup for the simultaneous recording of electrophysiological data (EEG), musical data (MIDI), and three-dimensional movement data. Previously, each of these three different kinds of measurements, conducted sequentially, has been proven to provide important information about different aspects of music performance as an example of a demanding multisensory motor skill. With the method described here, it is possible to record brain-related activity and movement data simultaneously, with accurate timing resolution and at relatively low costs. EEG and MIDI data were synchronized with a modified version of the FTAP software, sending synchronization signals to the EEG recording device simultaneously with keypress events. Similarly, a motion capture system sent synchronization signals simultaneously with each recorded frame. The setup can be used for studies investigating cognitive and motor processes during music performance and music-like tasks--for example, in the domains of motor control, learning, music therapy, or musical emotions. Thus, this setup offers a promising possibility of a more behaviorally driven analysis of brain activity.
Flexcam Image Capture Viewing and Spot Tracking
NASA Technical Reports Server (NTRS)
Rao, Shanti
2008-01-01
Flexcam software was designed to allow continuous monitoring of the mechanical deformation of the telescope structure at Palomar Observatory. Flexcam allows the user to watch the motion of a star with a low-cost astronomical camera, to measure the motion of the star on the image plane, and to feed this data back into the telescope's control system. This automatic interaction between the camera and a user interface facilitates integration and testing. Flexcam is a CCD image capture and analysis tool for the ST-402 camera from Santa Barbara Instruments Group (SBIG). This program will automatically take a dark exposure and then continuously display corrected images. The image size, bit depth, magnification, exposure time, resolution, and filter are always displayed on the title bar. Flexcam locates the brightest pixel and then computes the centroid position of the pixels falling in a box around that pixel. This tool continuously writes the centroid position to a network file that can be used by other instruments.
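A minimal numpy sketch of the spot-centroid step described above (find the brightest pixel in a dark-corrected frame, then take the intensity-weighted centroid of a box around it); the box size is an assumption, not Flexcam's setting.

import numpy as np

# Sketch of brightest-pixel detection followed by an intensity-weighted box centroid.
def spot_centroid(image, box=15):
    """Return the (x, y) centroid of the brightest spot in a 2D image array."""
    iy, ix = np.unravel_index(np.argmax(image), image.shape)
    half = box // 2
    y0, y1 = max(iy - half, 0), min(iy + half + 1, image.shape[0])
    x0, x1 = max(ix - half, 0), min(ix + half + 1, image.shape[1])
    window = image[y0:y1, x0:x1].astype(float)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    total = window.sum()
    return (xs * window).sum() / total, (ys * window).sum() / total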
Neural dynamics of motion processing and speed discrimination.
Chey, J; Grossberg, S; Mingolla, E
1998-09-01
A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size-speed correlation can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
NASA Astrophysics Data System (ADS)
Liu, Dalong; Ballard, John R.; Haritonova, Alyona; Choi, Jeungwan; Bischof, John; Ebbini, Emad S.
2012-10-01
An integrated system employing real-time ultrasound thermography and strain imaging to monitor tissue response to phased-array heating patterns has been developed. The imaging system is implemented on a commercially available scanner (SonixRP) at frame rates > 500 fps with limited frame sizes covering the vicinity of the HIFU focal spot. These frame rates are sufficient to capture tissue motion and deformation even in the vicinity of large arteries. With the high temporal and spatial resolution of our strain imaging system, we are able to capture and separate tissue strains due to natural motion (breathing and pulsation) from HIFU-induced strains (thermal and mechanical). We have collected in vivo strain imaging data during sub-therapeutic and therapeutic HIFU exposures in swine and rat models. A 3.5-MHz phased array was used to generate sinusoidally-modulated pHIFU beams at different intensity levels and durations near blood vessels of different sizes (e.g., femoral vessels in the swine and rat models). The results show that our approach is capable of characterizing the thermal and mechanical tissue response to sub-therapeutic pHIFU beams. For therapeutic pHIFU beams, the approach is still capable of localizing the therapeutic beam, but the results at the focal spot are complicated by bubble generation.
Branson, B G; Abnos, R M; Simmer-Beck, M L; King, G W; Siddicky, S F
2018-01-01
Motion analysis has great potential for quantitatively evaluating dental operator posture and the impact of interventions such as magnification loupes on posture and the subsequent development of musculoskeletal disorders. This study sought to determine the feasibility of motion capture technology for measurement of dental operator posture and to examine the impact that different styles of magnification loupes had on dental operator posture. Forward and lateral head flexion were measured for two different operators while they completed a periodontal probing procedure. Each was measured while wearing magnification loupes (flip-up, FL, and through-the-lens, TTL) and basic safety lenses. Both operators exhibited reduced forward flexion range of motion (ROM) when using loupes (TTL or FL) compared to the baseline lenses (BL). In contrast to forward flexion, no consistent trends were observed for lateral flexion between subjects. The researchers can report that it is possible to measure dental operator posture using motion capture technology. More study is needed to determine which type of magnification loupes (FL or TTL) is superior in improving dental operator posture. Some evidence was found suggesting that the quality of operator posture may be related more to the use of magnification loupes than to the specific type of lenses worn.
Spherical Coordinate Systems for Streamlining Suited Mobility Analysis
NASA Technical Reports Server (NTRS)
Benson, Elizabeth; Cowley, Matthew; Harvill, Lauren; Rajulu, Sudhakar
2015-01-01
Introduction: When describing human motion, biomechanists generally report joint angles in terms of Euler angle rotation sequences. However, there are known limitations in using this method to describe complex motions such as the shoulder joint during a baseball pitch. Euler angle notation uses a series of three rotations about an axis where each rotation is dependent upon the preceding rotation. As such, the Euler angles need to be regarded as a set to get accurate angle information. Unfortunately, it is often difficult to visualize and understand these complex motion representations. It has been shown that using a spherical coordinate system allows Anthropometry and Biomechanics Facility (ABF) personnel to increase their ability to transmit important human mobility data to engineers, in a format that is readily understandable and directly translatable to their design efforts. Objectives: The goal of this project was to use innovative analysis and visualization techniques to aid in the examination and comprehension of complex motions. Methods: This project consisted of a series of small sub-projects, meant to validate and verify a new method before it was implemented in the ABF's data analysis practices. A mechanical test rig was built and tracked in 3D using an optical motion capture system. Its position and orientation were reported in both Euler and spherical reference systems. In the second phase of the project, the ABF estimated the error inherent in a spherical coordinate system, and evaluated how this error would vary within the reference frame. This stage also involved expanding a kinematic model of the shoulder to include the rest of the joints of the body. The third stage of the project involved creating visualization methods to assist in interpreting motion in a spherical frame. These visualization methods will be incorporated in a tool to evaluate a database of suited mobility data, which is currently in development. Results: Initial results demonstrated that a spherical coordinate system is helpful in describing and visualizing the motion of a space suit. The system is particularly useful in describing the motion of the shoulder, where multiple degrees of freedom can lead to very complex motion paths.
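For illustration of the spherical representation discussed above, the sketch below reports a limb segment's pointing direction as azimuth and elevation angles instead of an Euler sequence; the axis conventions and zero directions are assumptions, not the ABF's definitions.

import numpy as np

# Sketch: describe a segment direction with spherical angles (azimuth, elevation).
def direction_to_spherical(v):
    """v: 3-vector from proximal to distal joint centre (e.g. shoulder to elbow)."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    azimuth = np.degrees(np.arctan2(v[1], v[0]))     # angle in the x-y plane
    elevation = np.degrees(np.arcsin(v[2]))          # angle above the x-y plane
    return azimuth, elevation

print(direction_to_spherical([0.5, 0.5, 0.707]))     # roughly (45.0, 45.0)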
Hwang, Alex D.; Peli, Eli
2014-01-01
Watching 3D content using a stereoscopic display may cause various discomforting symptoms, including eye strain, blurred vision, double vision, and motion sickness. Numerous studies have reported motion-sickness-like symptoms during stereoscopic viewing, but no causal linkage between specific aspects of the presentation and the induced discomfort has been explicitly proposed. Here, we describe several causes, in which stereoscopic capture, display, and viewing differ from natural viewing resulting in static and, importantly, dynamic distortions that conflict with the expected stability and rigidity of the real world. This analysis provides a basis for suggested changes to display systems that may alleviate the symptoms, and suggestions for future studies to determine the relative contribution of the various effects to the unpleasant symptoms. PMID:26034562
Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions
NASA Astrophysics Data System (ADS)
Khoury, Mehdi; Liu, Honghai
This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.
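A minimal sketch of the core idea behind Fuzzy Gaussian Inference as summarized above, assuming a single joint-angle feature per stance: each stance's training samples define a Gaussian whose peak-normalized curve serves as a fuzzy membership function. The feature, stance names, and data are illustrative, and the evolutionary rule base is not shown.

```python
import numpy as np

def gaussian_membership(samples):
    """Fit a Gaussian to training samples and return a fuzzy membership function
    (peak normalized to 1), a simplified reading of Fuzzy Gaussian Inference."""
    mu, sigma = np.mean(samples), np.std(samples) + 1e-9
    return lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Toy data: elbow angles (degrees) captured for two boxing stances
rng = np.random.default_rng(0)
jab_angles = rng.normal(160, 8, 200)
guard_angles = rng.normal(90, 10, 200)
memberships = {"jab": gaussian_membership(jab_angles),
               "guard": gaussian_membership(guard_angles)}

query = 150.0  # elbow angle from a new motion capture frame
print({stance: float(m(query)) for stance, m in memberships.items()})
# The stance with the highest membership is the (single-feature) classification.
```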
Estimation of Ground Reaction Forces and Moments During Gait Using Only Inertial Motion Capture
Karatsidis, Angelos; Bellusci, Giovanni; Schepers, H. Martin; de Zee, Mark; Andersen, Michael S.; Veltink, Peter H.
2016-01-01
Ground reaction forces and moments (GRF&M) are important measures used as input in biomechanical analysis to estimate joint kinetics, which often are used to infer information for many musculoskeletal diseases. Their assessment is conventionally achieved using laboratory-based equipment that cannot be applied in daily life monitoring. In this study, we propose a method to predict GRF&M during walking, using exclusively kinematic information from fully-ambulatory inertial motion capture (IMC). From the equations of motion, we derive the total external forces and moments. Then, we solve the indeterminacy problem during double stance using a distribution algorithm based on a smooth transition assumption. The agreement between the IMC-predicted and reference GRF&M was categorized over normal walking speed as excellent for the vertical (ρ = 0.992, rRMSE = 5.3%), anterior (ρ = 0.965, rRMSE = 9.4%) and sagittal (ρ = 0.933, rRMSE = 12.4%) GRF&M components and as strong for the lateral (ρ = 0.862, rRMSE = 13.1%), frontal (ρ = 0.710, rRMSE = 29.6%), and transverse GRF&M (ρ = 0.826, rRMSE = 18.2%). Sensitivity analysis was performed on the effect of the cut-off frequency used in the filtering of the input kinematics, as well as the threshold velocities for the gait event detection algorithm. This study was the first to use only inertial motion capture to estimate 3D GRF&M during gait, providing comparable accuracy with optical motion capture prediction. This approach enables applications that require estimation of the kinetics during walking outside the gait laboratory. PMID:28042857
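The total external force described above follows from Newton's second law summed over body segments, with the double-stance indeterminacy resolved by a smooth transition between feet. The sketch below illustrates both steps under stated assumptions: a smoothstep ramp stands in for the paper's transition function, and the masses and accelerations are made up.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def total_external_force(segment_masses, segment_accelerations):
    """Newton's second law summed over body segments: F_ext = sum m_i * (a_i - g)."""
    F = np.zeros(3)
    for m, a in zip(segment_masses, segment_accelerations):
        F += m * (np.asarray(a, float) - GRAVITY)
    return F

def smooth_transition_weight(t, t_hs, t_to):
    """Fraction of the total force assigned to the leading foot during double
    stance, ramping smoothly from 0 at heel strike (t_hs) to 1 at toe off (t_to)."""
    s = np.clip((t - t_hs) / (t_to - t_hs), 0.0, 1.0)
    return 3 * s**2 - 2 * s**3  # smoothstep ramp (illustrative choice)

masses = [40.0, 10.0, 10.0]            # illustrative trunk + two legs (kg)
accels = [[0.1, 0.0, 0.2]] * 3         # segment COM accelerations (m/s^2)
F = total_external_force(masses, accels)
w = smooth_transition_weight(t=0.05, t_hs=0.0, t_to=0.12)
print("leading foot GRF:", w * F, "trailing foot GRF:", (1 - w) * F)
```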
Post-capture vibration suppression of spacecraft via a bio-inspired isolation system
NASA Astrophysics Data System (ADS)
Dai, Honghua; Jing, Xingjian; Wang, Yu; Yue, Xiaokui; Yuan, Jianping
2018-05-01
Inspired by the smooth motions of a running kangaroo, a bio-inspired quadrilateral shape (BIQS) structure is proposed to suppress the vibrations of a free-floating spacecraft subject to periodic or impulsive forces, which may be encountered during on-orbit servicing missions. In particular, the BIQS structure is installed between the satellite platform and the capture mechanism. The dynamical model of the BIQS isolation system, i.e. a BIQS structure connecting the platform and the capture mechanism at each side, is established by Lagrange's equations to simulate the post-capture dynamical responses. The BIQS system suffering an impulsive force is dealt with by means of a modified version of Lagrange's equations. Furthermore, the classical harmonic balance method is used to solve the nonlinear dynamical system subject to periodic forces, while for the case under impulsive forces the numerical integration method is adopted. Due to the weightless environment in space, the present BIQS system is essentially an under-constrained dynamical system with one of its natural frequencies being identical to zero. The effects of system parameters, such as the number of layers in BIQS, stiffness, assembly angle, rod length, damping coefficient, masses of satellite platform and capture mechanism, on the isolation performance of the present system are thoroughly investigated. In addition, comparisons between the isolation performances of the presently proposed BIQS isolator and the conventional spring-mass-damper (SMD) isolator are conducted to demonstrate the advantages of the present isolator. Numerical simulations show that the BIQS system has a much better performance than the SMD system under either periodic or impulsive forces. Overall, the present BIQS isolator offers a highly efficient passive way for vibration suppressions of free-floating spacecraft.
Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Fan, Rukun; Geng, Weidong
We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music to motion mapping relationship embodied in example dance motions along with those motions' accompanying background music. A key step in our method is to train a music to motion matching quality rating function through learning the music to motion mapping relationship exhibited in synchronized music and dance motion data, which were captured from professional human dance performance. To generate an optimal sequence of dance motion segments to match with a piece of music, we introduce a constraint-based dynamic programming procedure. This procedure considers both music to motion matching quality and visual smoothness of a resultant dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, through which we can execute the dynamic programming process in parallel, resulting in significant speedup. To evaluate the effectiveness of our method, we quantitatively compare the dance motions synthesized by our method with motion synthesis results by several peer methods using the motions captured from professional human dancers' performance as the gold standard. We also conducted several medium-scale user studies to explore how perceptually our dance motion synthesis method can outperform existing methods in synthesizing dance motions to match with a piece of music. These user studies produced very positive results on our music-driven dance motion synthesis experiments for several Asian dance genres, confirming the advantages of our method.
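A minimal sketch of the kind of constraint-based dynamic programming described above: choose one motion segment per music segment so that matching quality minus transition (smoothness) cost is maximized. The scoring functions, weights, and data here are placeholders, not the authors' learned rating function.

```python
import numpy as np

def synthesize(match_score, smooth_cost, n_music, n_motion, alpha=1.0):
    """Pick one motion segment per music segment, maximizing the sum of matching
    scores minus alpha times the sum of transition (smoothness) costs."""
    best = np.full((n_music, n_motion), -np.inf)   # best cumulative score
    back = np.zeros((n_music, n_motion), dtype=int)  # backpointers
    best[0] = [match_score(0, j) for j in range(n_motion)]
    for i in range(1, n_music):
        for j in range(n_motion):
            cand = best[i - 1] - alpha * np.array(
                [smooth_cost(k, j) for k in range(n_motion)])
            back[i, j] = int(np.argmax(cand))
            best[i, j] = cand[back[i, j]] + match_score(i, j)
    seq = [int(np.argmax(best[-1]))]
    for i in range(n_music - 1, 0, -1):
        seq.append(back[i, seq[-1]])
    return seq[::-1]

# Toy scoring: random matching quality; transitions penalized by index distance.
rng = np.random.default_rng(0)
M = rng.random((5, 4))   # 5 music segments x 4 candidate motion segments
print(synthesize(lambda i, j: M[i, j], lambda k, j: abs(k - j), 5, 4))
```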
Scaled Jump in Gravity-Reduced Virtual Environments.
Kim, MyoungGon; Cho, Sunglk; Tran, Tanh Quang; Kim, Seong-Pil; Kwon, Ohung; Han, JungHyun
2017-04-01
The reduced gravity experienced on lunar or Martian surfaces can be simulated on the earth using a cable-driven system, where the cable lifts a person to reduce his or her weight. This paper presents a novel cable-driven system designed for the purpose. It is integrated with a head-mounted display and a motion capture system. Focusing on jump motion within the system, this paper proposes to scale the jump and reports experiments performed to quantify the extent to which a jump can be scaled without the discrepancy between physical and virtual jumps being noticed by the user. With the tolerable range of scaling computed from these experiments, an application named retargeted jump is developed, where a user can jump up onto virtual objects while physically jumping on the real-world flat floor. The core techniques presented in this paper can be extended to develop extreme-sport simulators such as parasailing and skydiving.
Multi-modal gesture recognition using integrated model of motion, audio and video
NASA Astrophysics Data System (ADS)
Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko
2015-07-01
Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With increasing motion sensor development, multiple data sources have become available, which has led to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system can recognize observed gestures by using the three models. Recognition results from the three models are integrated by using the proposed framework and the output becomes the final result. The motion and audio models are learned by using Hidden Markov Models. Random Forest, which is the video classifier, is used to learn the video model. In the experiments to test the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying the feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on the dataset provided by the competition organizer of MMGRC, a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of the three models achieves the highest recognition rate. This improvement in recognition accuracy means that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology to understand human actions of daily life more precisely.
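The integration step can be pictured as a late fusion of per-modality class scores. The sketch below is an illustrative weighted-sum fusion, not the paper's exact framework; the gesture names, scores, and weights are assumptions.

```python
def fuse(scores_by_modality, weights=None):
    """Late fusion: combine normalized per-class scores from several modalities
    by a weighted sum and return the winning gesture class with the fused scores."""
    weights = weights or {m: 1.0 for m in scores_by_modality}
    classes = list(next(iter(scores_by_modality.values())))
    fused = {c: 0.0 for c in classes}
    for modality, scores in scores_by_modality.items():
        total = sum(scores.values()) or 1.0
        for c in classes:
            fused[c] += weights[modality] * scores[c] / total
    return max(fused, key=fused.get), fused

scores = {
    "motion": {"wave": 0.7, "point": 0.2, "clap": 0.1},   # e.g. HMM likelihoods
    "audio":  {"wave": 0.3, "point": 0.3, "clap": 0.4},   # e.g. HMM likelihoods
    "video":  {"wave": 0.6, "point": 0.3, "clap": 0.1},   # e.g. Random Forest votes
}
print(fuse(scores))
```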
Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid.
Sumida, Iori; Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko
2016-03-08
Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film-based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers' abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one-dimensional motion in craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers' breathing patterns, the mean tracking error range was 0.78-1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient.
NASA Astrophysics Data System (ADS)
Xie, Pingping; Joyce, Robert; Wu, Shaorong
2015-04-01
As reported at the EGU General Assembly of 2014, a prototype system was developed for the second generation CMORPH to produce global analyses of 30-min precipitation on a 0.05° lat/lon grid over the entire globe from pole to pole through integration of information from satellite observations as well as numerical model simulations. The second generation CMORPH is built upon the Kalman Filter based CMORPH algorithm of Joyce and Xie (2011). Inputs to the system include rainfall and snowfall rate retrievals from passive microwave (PMW) measurements aboard all available low earth orbit (LEO) satellites, precipitation estimates derived from infrared (IR) observations of geostationary (GEO) as well as LEO platforms, and precipitation simulations from numerical global models. Key to the success of the 2nd generation CMORPH, among a couple of other elements, are the development of a LEO-IR based precipitation estimation to fill in the polar gaps and objectively analyzed cloud motion vectors to capture the cloud movements of various spatial scales over the entire globe. In this presentation, we report our recent work on the refinement of these two important algorithm components. The prototype algorithm for the LEO IR precipitation estimation is refined to achieve improved quantitative accuracy and consistency with PMW retrievals. AVHRR IR TBB data from all LEO satellites are first remapped to a 0.05° lat/lon grid over the entire globe at a 30-min interval. Temporally and spatially co-located data pairs of the LEO TBB and inter-calibrated combined satellite PMW retrievals (MWCOMB) are then collected to construct tables. Precipitation at a grid box is derived from the TBB through matching the PDF tables for the TBB and the MWCOMB. This procedure is implemented for different seasons, latitude bands and underlying surface types to account for the variations in the cloud-precipitation relationship. Meanwhile, a sub-system is developed to construct analyzed fields of cloud motion vectors from the GEO/LEO IR based precipitation estimates and the CFS Reanalysis (CFSR) precipitation fields. Motion vectors are first derived separately from the satellite IR based precipitation estimates and the CFSR precipitation fields. These individually derived motion vectors are then combined through a 2D-VAR technique to form an analyzed field of cloud motion vectors over the entire globe. The error function is chosen experimentally to best reflect the performance of the satellite IR based estimates and the CFSR in capturing the movements of precipitating cloud systems over different regions and for different seasons. Quantitative experiments are conducted to optimize the LEO IR based precipitation estimation technique and the 2D-VAR based motion vector analysis system. Detailed results will be reported at the EGU.
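The PDF-matching step described above can be illustrated with a simple quantile-matching table between IR brightness temperature (TBB) and co-located PMW rain rate; the sketch below uses synthetic data and assumes colder cloud tops correspond to heavier rain.

```python
import numpy as np

def build_lookup(tbb_samples, pmw_rain_samples, n_quantiles=100):
    """PDF/quantile matching: the q-th coldest TBB is mapped to the q-th heaviest
    co-located PMW rain rate, yielding a monotone TBB -> rain-rate lookup table."""
    q = np.linspace(0, 100, n_quantiles)
    tbb_q = np.percentile(tbb_samples, q)               # ascending TBB (K)
    rain_q = np.percentile(pmw_rain_samples, 100 - q)   # descending rain rate
    return tbb_q, rain_q

def estimate_rain(tbb, tbb_q, rain_q):
    """Look up a rain rate for each TBB value by interpolating in the table."""
    return np.interp(tbb, tbb_q, rain_q)

# Synthetic co-located pairs: colder brightness temperature -> heavier rain
rng = np.random.default_rng(1)
tbb = rng.uniform(190, 260, 5000)                                   # K
rain = np.maximum(0, (240 - tbb) * 0.1 + rng.normal(0, 0.3, tbb.size))  # mm/h
tbb_q, rain_q = build_lookup(tbb, rain)
print(estimate_rain(np.array([200.0, 230.0, 255.0]), tbb_q, rain_q))
```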
Motion dazzle and camouflage as distinct anti-predator defenses.
Stevens, Martin; Searle, W Tom L; Seymour, Jenny E; Marshall, Kate L A; Ruxton, Graeme D
2011-11-25
Camouflage patterns that hinder detection and/or recognition by antagonists are widely studied in both human and animal contexts. Patterns of contrasting stripes that purportedly degrade an observer's ability to judge the speed and direction of moving prey ('motion dazzle') are, however, rarely investigated. This is despite motion dazzle having been fundamental to the appearance of warships in both world wars and often postulated as the selective agent leading to repeated patterns on many animals (such as zebra and many fish, snake, and invertebrate species). Such patterns often appear conspicuous, suggesting that protection while moving by motion dazzle might impair camouflage when stationary. However, the relationship between motion dazzle and camouflage is unclear because disruptive camouflage relies on high-contrast markings. In this study, we used a computer game with human subjects detecting and capturing either moving or stationary targets with different patterns, in order to provide the first empirical exploration of the interaction of these two protective coloration mechanisms. Moving targets with stripes were caught significantly less often and missed more often than targets with camouflage patterns. However, when stationary, targets with camouflage markings were captured less often and caused more false detections than those with striped patterns, which were readily detected. Our study provides the clearest evidence to date that some patterns inhibit the capture of moving targets, but that camouflage and motion dazzle are not complementary strategies. Therefore, the specific coloration that evolves in animals will depend on how the life history and ontogeny of each species influence the trade-off between the costs and benefits of motion dazzle and camouflage.
Niechwiej-Szwedo, Ewa; Gonzalez, David; Nouredanesh, Mina; Tung, James
2018-01-01
Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because motion capture equipment used for data collection is not easily portable and expensive. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from Optotrak and LMC is zero. In experiment 2, 15 participants performed a Fitts' type aiming task in order to assess whether the LMC is capable of assessing a well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of a reaching, grasping, and placement task in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the 3 experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s. The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged between 2-5 cm. Although the LMC system is a low-cost, highly portable system, which could facilitate collection of kinematic data outside of the traditional laboratory settings, the temporal and spatial errors may limit the use of the device in some settings.
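A minimal sketch of the Bland-Altman agreement analysis used above: mean bias and 95% limits of agreement between paired measurements from the two systems. The data are synthetic and only loosely echo the reported error magnitudes.

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Bland-Altman agreement statistics between two measurement systems:
    mean bias and 95% limits of agreement (bias +/- 1.96 SD of the differences)."""
    a, b = np.asarray(measure_a, float), np.asarray(measure_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Synthetic movement times (ms): criterion system vs. markerless device
rng = np.random.default_rng(2)
optotrak = rng.normal(550, 60, 30)
lmc = optotrak + rng.normal(40, 44, 30)   # offset/spread loosely echo the abstract
print(bland_altman(lmc, optotrak))        # (bias, lower LoA, upper LoA)
```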
Gonzalez, David; Nouredanesh, Mina; Tung, James
2018-01-01
Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because motion capture equipment used for data collection is not easily portable and expensive. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from Optotrak and LMC is zero. In experiment 2, 15 participants performed a Fitts’ type aiming task in order to assess whether the LMC is capable of assessing a well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of a reaching, grasping, and placement task in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the 3 experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s. The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged between 2–5 cm. Although the LMC system is a low-cost, highly portable system, which could facilitate collection of kinematic data outside of the traditional laboratory settings, the temporal and spatial errors may limit the use of the device in some settings. PMID:29529064
A Marker-less Monitoring System for Movement Analysis of Infants Using Video Images
NASA Astrophysics Data System (ADS)
Shima, Keisuke; Osawa, Yuko; Bu, Nan; Tsuji, Tokuo; Tsuji, Toshio; Ishii, Idaku; Matsuda, Hiroshi; Orito, Kensuke; Ikeda, Tomoaki; Noda, Shunichi
This paper proposes a marker-less motion measurement and analysis system for infants. This system calculates eight types of evaluation indices related to the movement of an infant such as “amount of body motion” and “activity of body” from binary images that are extracted from video images using the background difference and frame difference. Thus, medical doctors can intuitively understand the movements of infants without long-term observations, and this may be helpful in supporting their diagnoses and detecting disabilities and diseases in the early stages. The distinctive feature of this system is that the movements of infants can be measured without using any markers for motion capture and thus it is expected that the natural and inherent tendencies of infants can be analyzed and evaluated. In this paper, the evaluation indices and features of movements between full-term infants (FTIs) and low birth weight infants (LBWIs) are compared using the developed prototype. We found that the amount of body motion and symmetry of upper and lower body movements of LBWIs became lower than those of FTIs. The difference between the movements of FTIs and LBWIs can be evaluated using the proposed system.
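An index such as "amount of body motion" can be illustrated by thresholding frame differences into binary images and counting changed pixels, as sketched below with a synthetic image sequence; the threshold and normalization are assumptions, not the system's exact definitions.

```python
import numpy as np

def amount_of_body_motion(frames, threshold=15):
    """A simple 'amount of body motion' index: for each consecutive pair of
    grayscale frames, threshold the absolute frame difference into a binary
    image and report the fraction of changed pixels."""
    frames = np.asarray(frames, dtype=np.int16)   # avoid uint8 wraparound
    index = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        moving = np.abs(cur - prev) > threshold
        index.append(moving.mean())
    return np.array(index)

# Synthetic video: a bright 'limb' patch shifts by one pixel each frame
frames = np.zeros((10, 64, 64), dtype=np.uint8)
for t in range(10):
    frames[t, 20:30, 10 + t:20 + t] = 200
print(amount_of_body_motion(frames))
```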
Relationships between clubshaft motions and clubface orientation during the golf swing.
Takagi, Tokio; Yokozawa, Toshiharu; Inaba, Yuki; Matsuda, Yuji; Shiraki, Hitoshi
2017-09-01
Since clubface orientation at impact affects ball direction and ball spin, the ability to control clubface orientation is one of the most important skills for golfers. This study presents a new method to describe clubface orientation as a function of the clubshaft motions (i.e., swing plane orientation, clubshaft angle in the swing plane, and clubshaft rolling angle) during a golf swing and investigates the relationships between the clubshaft motions and clubface orientation at impact. The club motion data of driver shots were collected from eight skilled golfers using a three-dimensional motion capture system. The degrees of influence of the clubshaft motions on the clubface orientation were investigated using sensitivity analysis. The sensitivity analysis revealed that the swing plane horizontal angle affected the clubface horizontal angle to an extent of 100%, that the clubshaft angle in the swing plane affected both the clubface vertical and horizontal angles to extents of 74 and 68%, respectively, and that the clubshaft rolling angle affected both the clubface vertical and horizontal angles to extents of -67 and 75%, respectively. Since the method presented here relates clubface orientation to clubshaft motions, it is useful for understanding the clubface control of a golfer.
Multibody dynamics driving GNC and system design in tethered nets for active debris removal
NASA Astrophysics Data System (ADS)
Benvenuto, Riccardo; Lavagna, Michèle; Salvi, Samuele
2016-07-01
Debris removal in Earth orbits is an urgent issue to be faced for space exploitation durability. Among different techniques, tethered nets present appealing benefits and some open points to fix. Both are discussed in the paper, supported by the use of a multibody dynamics tool. With respect to other proposed capture mechanisms, tethered-net solutions are characterised by a safer capturing distance, a passive angular momentum damping effect and the highest flexibility to unknown shape, material and attitude of the target to interface with. They also remove the need to treat alignment of the centre of gravity with the thrust axis as a constraint, as is required for any rigid-link solution. Furthermore, the introduction of a closing thread around the net perimeter ensures safer and more reliable grasping and holding. In the paper, a six-degrees-of-freedom multibody dynamics simulator is presented: it was developed at Politecnico di Milano - Department of Aerospace Science and Technologies - and it is able to describe the orbital and attitude dynamics of tethered-net systems and end-bodies during different phases, with great flexibility in dealing with different topologies and configurations. Critical phases such as impact and wrapping are analysed by simulation to address the controllability of the tethered stack. It is shown how contact modelling is fundamental to describing the coupled dynamics: it is demonstrated, as a major novel contribution, how friction between the net and a tumbling target allows reducing its angular motion, stabilizing the system and allowing safer towing operations. Moreover, the so-called tethered space tug is analysed: after capture, the two objects, one passive and one active, are connected by the tethered-net flexible link, and the motion of the system is excited by the active spacecraft's thrusters. The prevention of critical modes during this phase, by means of a closed-loop control synthesis, is shown. Finally, the connection between flexible dynamics and capture system design is highlighted, giving engineering answers to the most challenging open points and leading toward a ready-to-fly solution.
Soft Snakes: Construction, Locomotion, and Control
NASA Astrophysics Data System (ADS)
Branyan, Callie; Courier, Taylor; Fleming, Chloe; Remaley, Jacquelin; Hatton, Ross; Menguc, Yigit
We fabricated modular bidirectional silicone pneumatic actuators to build a soft snake robot, applying geometric models of serpenoid swimmers to identify theoretically optimal gaits to realize serpentine locomotion. With the introduction of magnetic connections and elliptical cross-sections in fiber-reinforced modules, we can vary the number of continuum segments in the snake body to achieve more supple serpentine motion in a granular media. The performance of these gaits is observed using a motion capture system and efficiency is assessed in terms of pressure input and net displacement. These gaits are optimized using our geometric soap-bubble method of gait optimization, demonstrating the applicability of this tool to soft robot control and coordination.
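The serpenoid gait underlying such geometric models prescribes body curvature as a travelling wave. The sketch below samples that wave at each module to produce bending commands; the amplitude, frequencies, and module count are illustrative rather than the optimized gait parameters.

```python
import numpy as np

def serpenoid_curvature(s, t, amplitude=1.0, spatial_freq=2 * np.pi,
                        temporal_freq=2 * np.pi):
    """Serpenoid gait: body curvature as a travelling wave kappa(s, t),
    with s the normalized position along the body and t the time."""
    return amplitude * np.sin(spatial_freq * s - temporal_freq * t)

def module_setpoints(n_modules, t, **kwargs):
    """Sample the curvature wave at the centre of each pneumatic module to get
    per-module bending commands (the sign picks the left or right chamber)."""
    s = (np.arange(n_modules) + 0.5) / n_modules
    return serpenoid_curvature(s, t, **kwargs)

print(module_setpoints(5, t=0.0))
print(module_setpoints(5, t=0.25))
```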
Shultz, Rebecca; Jenkyn, Thomas
2012-01-01
Measuring individual foot joint motions requires a multi-segment foot model, even when the subject is wearing a shoe. Each foot segment must be tracked with at least three skin-mounted markers, but for these markers to be visible to an optical motion capture system, holes or 'windows' must be cut into the structure of the shoe. The holes must be sufficiently large to avoid interfering with the markers, but small enough that they do not compromise the shoe's structural integrity. The objective of this study was to determine the maximum size of hole that could be cut into a running shoe upper without significantly compromising its structural integrity or changing the kinematics of the foot within the shoe. Three shoe designs were tested: (1) neutral cushioning, (2) motion control and (3) stability shoes. Holes were cut progressively larger, with four sizes tested in all. Foot joint motions were measured: (1) hindfoot with respect to midfoot in the frontal plane, (2) forefoot twist with respect to midfoot in the frontal plane, (3) the height-to-length ratio of the medial longitudinal arch and (4) the hallux angle with respect to the first metatarsal in the sagittal plane. A single subject performed level walking at her preferred pace in each of the three shoes with ten repetitions for each hole size. The largest hole that did not disrupt shoe integrity was an oval of 1.7 cm × 2.5 cm. The smallest shoe deformations were seen with the motion control shoe. The least change in foot joint motion was forefoot twist in both the neutral shoe and stability shoe for any size hole. This study demonstrates that for a hole smaller than this size, optical motion capture with a cluster-based multi-segment foot model is feasible for measuring foot-in-shoe kinematics in vivo. Copyright © 2011. Published by Elsevier Ltd.
Identifying compensatory movement patterns in the upper extremity using a wearable sensor system.
Ranganathan, Rajiv; Wang, Rui; Dong, Bo; Biswas, Subir
2017-11-30
Movement impairments such as those due to stroke often result in the nervous system adopting atypical movements to compensate for movement deficits. Monitoring these compensatory patterns is critical for improving functional outcomes during rehabilitation. The purpose of this study was to test the feasibility and validity of a wearable sensor system for detecting compensatory trunk kinematics during activities of daily living. Participants with no history of neurological impairments performed reaching and manipulation tasks with their upper extremity, and their movements were recorded by a wearable sensor system and validated using a motion capture system. Compensatory movements of the trunk were induced using a brace that limited range of motion at the elbow. Our results showed that the elbow brace elicited compensatory movements of the trunk during reaching tasks but not manipulation tasks, and that a wearable sensor system with two sensors could reliably classify compensatory movements (~90% accuracy). These results show the potential of the wearable system to assess and monitor compensatory movements outside of a lab setting.
Ward, Brodie J; Thornton, Ashleigh; Lay, Brendan; Rosenberg, Michael
2017-01-01
Fundamental movement skill (FMS) assessment remains an important tool in classifying individuals' level of FMS proficiency. The collection of FMS performances for assessment and monitoring has remained unchanged over the last few decades, but new motion capture technologies offer opportunities to automate this process. To achieve this, a greater understanding of the human process of movement skill assessment is required. The authors present the rationale and protocols of a project in which they aim to investigate the visual search patterns and information extraction employed by human assessors during FMS assessment, as well as the implementation of the Kinect system for FMS capture.
Post-Newtonian Circular Restricted 3-Body Problem: Schwarzschild primaries
NASA Astrophysics Data System (ADS)
Dubeibe, F. L.; Lora-Clavijo, F. D.; González, G. A.
2017-07-01
The restricted three-body problem (RTBP) has been extensively studied to investigate the stability of the solar system, extra-solar subsystems, asteroid capture, and the dynamics of two massive black holes orbited by a sun. In the present work, we study the stability of the planar circular restricted three-body problem in the context of post-Newtonian approximations. First of all, we review the results obtained from the post-Newtonian equations of motion calculated in the framework of the Einstein-Infeld-Hoffmann formalism (EIH). Then, using the Fodor-Hoenselaers-Perjés formalism (FHP), we have performed an expansion of the gravitational potential for two primaries, deriving a new system of equations of motion which, unlike the EIH approach, preserves the Jacobi integral of motion. Additionally, we have obtained approximate expressions for the Lagrange points in terms of a mass parameter μ, where it is found that the deviations from the classical regime are larger for the FHP than for the EIH equations.
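For reference, the classical (Newtonian) planar circular RTBP equations in the rotating frame and the Jacobi integral, whose post-Newtonian analogue the FHP-based equations are said to preserve; this is the standard textbook form, not the paper's post-Newtonian system.

```latex
% Classical planar circular RTBP in the rotating (synodic) frame, mass parameter \mu,
% primaries fixed at (-\mu, 0) and (1-\mu, 0), distances r_1 and r_2 to the test particle:
\begin{align}
  \ddot{x} - 2\dot{y} &= \frac{\partial \Omega}{\partial x}, \qquad
  \ddot{y} + 2\dot{x} = \frac{\partial \Omega}{\partial y}, \\
  \Omega(x,y) &= \frac{1}{2}\left(x^{2}+y^{2}\right)
     + \frac{1-\mu}{r_{1}} + \frac{\mu}{r_{2}}.
\end{align}
% Jacobi integral, conserved along the motion:
\begin{equation}
  C_{J} = 2\,\Omega(x,y) - \dot{x}^{2} - \dot{y}^{2}.
\end{equation}
```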
Chaotic behavior in the locomotion of Amoeba proteus.
Miyoshi, H; Kagawa, Y; Tsuchiya, Y
2001-01-01
The locomotion of Amoeba proteus has been investigated by algorithms evaluating correlation dimension and Lyapunov spectrum developed in the field of nonlinear science. It is presumed by these parameters whether the random behavior of the system is stochastic or deterministic. For the analysis of the nonlinear parameters, n-dimensional time-delayed vectors have been reconstructed from a time series of periphery and area of A. proteus images captured with a charge-coupled-device camera, which characterize its random motion. The correlation dimension analyzed has shown the random motion of A. proteus is subjected only to 3-4 macrovariables, though the system is a complex system composed of many degrees of freedom. Furthermore, the analysis of the Lyapunov spectrum has shown its largest exponent takes positive values. These results indicate the random behavior of A. proteus is chaotic and deterministic motion on an attractor with low dimension. It may be important for the elucidation of the cell locomotion to take account of nonlinear interactions among a small number of dynamics such as the sol-gel transformation, the cytoplasmic streaming, and the relating chemical reaction occurring in the cell.
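Correlation-dimension analysis of this kind starts from a time-delay embedding of the scalar series (e.g., the cell periphery signal) and the Grassberger-Procaccia correlation sum; the sketch below illustrates both on a toy signal and is not the authors' exact algorithm or parameter choice.

```python
import numpy as np

def delay_embed(series, dim, delay):
    """Reconstruct n-dimensional time-delayed vectors from a scalar time series."""
    series = np.asarray(series, float)
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[i * delay: i * delay + n] for i in range(dim)])

def correlation_sum(embedded, radius):
    """Fraction of vector pairs closer than `radius` (Grassberger-Procaccia C(r));
    the log-log slope of C(r) versus r estimates the correlation dimension."""
    d = np.linalg.norm(embedded[:, None, :] - embedded[None, :, :], axis=-1)
    iu = np.triu_indices(len(embedded), k=1)
    return np.mean(d[iu] < radius)

# Toy series standing in for the measured cell-periphery signal
t = np.linspace(0, 40, 400)
signal = np.sin(t) + 0.5 * np.sin(2.3 * t)
X = delay_embed(signal, dim=4, delay=5)
print(correlation_sum(X, radius=0.5))
```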
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Y; Zhu, X; Zheng, D
Purpose: Tracking the surrogate placed on the patient's skin surface sometimes leads to problematic signals for certain patients, such as shallow breathers. This in turn impairs the 4D CT image quality and dosimetric accuracy. In this pilot study, we explored the feasibility of monitoring human breathing motion by integrating the breathing sound signal with surface surrogates. Methods: The breathing sound signals were acquired through a microphone attached adjacent to the volunteer's nostrils, and the breathing curves were analyzed using a low-pass filter. Simultaneously, the Real-time Position Management™ (RPM) system from Varian was employed on a volunteer to monitor respiratory motion in both shallow and deep breathing modes. A similar experiment was performed using the Calypso system, with three beacons taped on the volunteer's abdominal region to capture breathing motion. The period of each breathing curve was calculated with autocorrelation functions. The coherence and consistency between breathing signals obtained with the different acquisition methods were examined. Results: Clear breathing patterns were revealed by the sound signal, which was coherent with the signals obtained from both the RPM system and the Calypso system. For shallow breathing, the periods of the breathing cycle were 3.00±0.19 sec (sound) and 3.00±0.21 sec (RPM); for deep breathing, the periods were 3.49±0.11 sec (sound) and 3.49±0.12 sec (RPM). Compared with the 4.54±0.66 sec period recorded by the Calypso system, the sound measured 4.64±0.54 sec. The additional signal from sound could supplement the surface monitoring and provide new parameters to model the hysteresis of lung motion. Conclusion: Our preliminary study shows that the breathing sound signal provides a way to evaluate respiratory motion comparable to the RPM system. Its instantaneous and robust characteristics make it suitable as either an independent or an auxiliary method to manage respiratory motion in radiotherapy.
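Estimating the breathing period from the autocorrelation function, as mentioned above, can be sketched as follows; the sampling rate, synthetic trace, and peak-picking rule are illustrative.

```python
import numpy as np

def breathing_period(signal, fs):
    """Estimate the dominant breathing period (seconds) from the first peak of
    the autocorrelation function at a non-zero lag."""
    x = np.asarray(signal, float) - np.mean(signal)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    trough = np.argmax(acf < 0)            # first lag where the ACF dips below zero
    lag = trough + np.argmax(acf[trough:])  # first dominant peak after the dip
    return lag / fs

fs = 25.0                                  # Hz, illustrative sampling rate
t = np.arange(0, 60, 1 / fs)
trace = np.sin(2 * np.pi * t / 3.0) + 0.05 * np.random.default_rng(3).normal(size=t.size)
print(breathing_period(trace, fs))         # expected to be close to 3 s
```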
Dimbwadyo-Terrer, Iris; Trincado-Alonso, Fernando; de Los Reyes-Guzmán, Ana; Aznar, Miguel A; Alcubilla, Cesar; Pérez-Nombela, Soraya; Del Ama-Espinosa, Antonio; Polonio-López, Begoña; Gil-Agudo, Ángel
2016-08-01
Purpose state: The aim of this preliminary study was to test a data glove, CyberTouch™, combined with a virtual reality (VR) environment, for using in therapeutic training of reaching movements after spinal cord injury (SCI). Nine patients with thoracic SCI were selected to perform a pilot study by comparing two treatments: patients in the intervention group (IG) conducted a VR training based on the use of a data glove, CyberTouch™ for 2 weeks, while patients in the control group (CG) only underwent the traditional rehabilitation. Furthermore, two functional parameters were implemented in order to assess patient's performance of the sessions: normalized trajectory lengths and repeatability. Although no statistical significance was found, the data glove group seemed to obtain clinical changes in the muscle balance (MB) and functional parameters, and in the dexterity, coordination and fine grip tests. Moreover, every patient showed variations in at least one of the functional parameters, either along Y-axis trajectory or Z-axis trajectory. This study might be a step forward for the investigation of new uses of motion capture systems in neurorehabilitation, making it possible to train activities of daily living (ADLs) in motivational environments while measuring objectively the patient's functional evolution. Implications for Rehabilitation Key findings: A motion capture application based on a data glove is presented, for being used as a virtual reality tool for rehabilitation. This application has provided objective data about patient's functional performance. What the study has added: (1) This study allows to open new areas of research based on the use of different motion capture systems as rehabilitation tools, making it possible to train Activities of Daily Living in motivational environments. (2) Furthermore, this study could be a contribution for the development of clinical protocols to identify which types of patients will benefit most from the VR treatments, which interfaces are more suitable to be used in neurorehabilitation, and what types of virtual exercises will work best.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhou, S; Cai, W; Hurwitz, M
Purpose: We develop a method to generate time varying volumetric images (3D fluoroscopic images) using patient-specific motion models derived from four-dimensional cone-beam CT (4DCBCT). Methods: Motion models are derived by selecting one 4DCBCT phase as a reference image, and registering the remaining images to it. Principal component analysis (PCA) is performed on the resultant displacement vector fields (DVFs) to create a reduced set of PCA eigenvectors that capture the majority of respiratory motion. 3D fluoroscopic images are generated by optimizing the weights of the PCA eigenvectors iteratively through comparison of measured cone-beam projections and simulated projections generated from the motion model. This method was applied to images from five lung-cancer patients. The spatial accuracy of this method is evaluated by comparing landmark positions in the 3D fluoroscopic images to manually defined ground truth positions in the patient cone-beam projections. Results: 4DCBCT motion models were shown to accurately generate 3D fluoroscopic images when the patient cone-beam projections contained clearly visible structures moving with respiration (e.g., the diaphragm). When no moving anatomical structure was clearly visible in the projections, the 3D fluoroscopic images generated did not capture breathing deformations, and reverted to the reference image. For the subset of 3D fluoroscopic images generated from projections with visibly moving anatomy, the average tumor localization error and the 95th percentile were 1.6 mm and 3.1 mm respectively. Conclusion: This study showed that 4DCBCT-based 3D fluoroscopic images can accurately capture respiratory deformations in a patient dataset, so long as the cone-beam projections used contain visible structures that move with respiration. For clinical implementation of 3D fluoroscopic imaging for treatment verification, an imaging field of view (FOV) that contains visible structures moving with respiration should be selected. If no other appropriate structures are visible, the images should include the diaphragm. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc, Palo Alto, CA.
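The PCA motion model described above can be sketched as follows: DVFs from the registered phases are decomposed into a mean plus leading eigenvectors, and a 3D fluoroscopic frame's deformation is the mean plus a weighted sum of those eigenvectors. The projection-matching optimization of the weights is omitted here, and the data are synthetic.

```python
import numpy as np

def build_pca_motion_model(dvfs, n_components=2):
    """dvfs: array (n_phases, n_voxels*3) of displacement vector fields relative
    to the reference phase. Returns the mean DVF and the leading PCA eigenvectors."""
    mean = dvfs.mean(axis=0)
    centered = dvfs - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def synthesize_dvf(mean, eigvecs, weights):
    """One frame's deformation = mean DVF + sum_k w_k * eigvec_k. In the full
    method the weights are found by matching simulated to measured cone-beam
    projections; here they are simply supplied."""
    return mean + np.asarray(weights) @ eigvecs

rng = np.random.default_rng(4)
phases = rng.normal(size=(10, 300))        # 10 breathing phases, toy flattened DVFs
mean, eigvecs = build_pca_motion_model(phases)
print(synthesize_dvf(mean, eigvecs, [1.2, -0.4]).shape)
```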
Tuning strain of granular matter by basal assisted Couette shear
NASA Astrophysics Data System (ADS)
Zhao, Yiqiu; Barés, Jonathan; Zheng, Hu; Behringer, Robert
2017-06-01
We present a novel Couette shear apparatus capable of generating programmable azimuthal strain inside 2D granular matter under Couette shear. The apparatus consists of 21 independently movable concentric rings and two boundary wheels with frictional racks. This makes it possible to quasistatically shear the granular matter not only from the boundaries but also from the bottom. We show that, by specifying the collective motion of wheels and rings, the apparatus successfully generates the desired strain profile inside the sample granular system, which is composed of about 2000 photoelastic disks. The motion and stress of each particle is captured by an imaging system utilizing reflective photoelasticimetry. This apparatus provides a novel method to investigate shear jamming properties of granular matter with different interior strain profiles and unlimited strain amplitudes.
Tommasino, Paolo; Campolo, Domenico
2017-01-01
A major challenge in robotics and computational neuroscience concerns the posture/movement problem in the presence of kinematic redundancy. We recently addressed this issue using a principled approach which, in conjunction with nonlinear inverse optimization, allowed capturing postural strategies such as Donders' law. In this work, after presenting this general model and specifying it as an extension of the Passive Motion Paradigm, we show how, once fitted to capture experimental postural strategies, the model is actually able to also predict movements. More specifically, the passive motion paradigm embeds two main intrinsic components: joint damping and joint stiffness. In previous work we showed that joint stiffness is responsible for static postures and, in this sense, its parameters are regressed to fit to experimental postural strategies. Here, we show how joint damping, in particular its anisotropy, directly affects task-space movements. Rather than using damping parameters to fit a posteriori task-space motions, we make the a priori hypothesis that damping is proportional to stiffness. This remarkably allows a postural-fitted model to also capture dynamic performance such as curvature and hysteresis of task-space trajectories during wrist pointing tasks, confirming and extending previous findings in literature. PMID:29249954
Ochiai, Tetsuji; Mushiake, Hajime; Tanji, Jun
2005-07-01
The ventral premotor cortex (PMv) has been implicated in the visual guidance of movement. To examine whether neuronal activity in the PMv is involved in controlling the direction of motion of a visual image of the hand or the actual movement of the hand, we trained a monkey to capture a target that was presented on a video display using the same side of its hand as was displayed on the video display. We found that PMv neurons predominantly exhibited premovement activity that reflected the image motion to be controlled, rather than the physical motion of the hand. We also found that the activity of half of such direction-selective PMv neurons depended on which side (left versus right) of the video image of the hand was used to capture the target. Furthermore, this selectivity for a portion of the hand was not affected by changing the starting position of the hand movement. These findings suggest that PMv neurons play a crucial role in determining which part of the body moves in which direction, at least under conditions in which a visual image of a limb is used to guide limb movements.
Ma, Yingliang; Paterson, Helena M; Pollick, Frank E
2006-02-01
We present the methods that were used in capturing a library of human movements for use in computer-animated displays of human movement. The library is an attempt to systematically tap into and represent the wide range of personal properties, such as identity, gender, and emotion, that are available in a person's movements. The movements from a total of 30 nonprofessional actors (15 of them female) were captured while they performed walking, knocking, lifting, and throwing actions, as well as their combination in angry, happy, neutral, and sad affective styles. From the raw motion capture data, a library of 4,080 movements was obtained, using techniques based on Character Studio (plug-ins for 3D Studio MAX, AutoDesk, Inc.), MATLAB (The MathWorks, Inc.), or a combination of these two. For the knocking, lifting, and throwing actions, 10 repetitions of the simple action unit were obtained for each affect, and for the other actions, two longer movement recordings were obtained for each affect. We discuss the potential use of the library for computational and behavioral analyses of movement variability, of human character animation, and of how gender, emotion, and identity are encoded and decoded from human movement.
Motion cues that make an impression
Koppensteiner, Markus
2013-01-01
The current study presents a methodology to analyze first impressions on the basis of minimal motion information. In order to test the applicability of the approach brief silent video clips of 40 speakers were presented to independent observers (i.e., did not know speakers) who rated them on measures of the Big Five personality traits. The body movements of the speakers were then captured by placing landmarks on the speakers' forehead, one shoulder and the hands. Analysis revealed that observers ascribe extraversion to variations in the speakers' overall activity, emotional stability to the movements' relative velocity, and variation in motion direction to openness. Although ratings of openness and conscientiousness were related to biographical data of the speakers (i.e., measures of career progress), measures of body motion failed to provide similar results. In conclusion, analysis of motion behavior might be done on the basis of a small set of landmarks that seem to capture important parts of relevant nonverbal information. PMID:24223432
Ricci, L; Formica, D; Tamilia, E; Taffoni, F; Sparaci, L; Capirci, O; Guglielmelli, E
2013-01-01
Motion capture based on magneto-inertial sensors is a technology enabling data collection in unstructured environments, allowing "out of the lab" motion analysis. This technology is a good candidate for motion analysis of children thanks to its reduced weight and size, as well as the use of wireless communication, which have improved its wearability and reduced its obtrusiveness. A key issue in the application of such technology for motion analysis is its calibration, i.e. a process that allows mapping orientation information from each sensor to a physiological reference frame. To date, even though there are several calibration procedures available for adults, no specific calibration procedures have been developed for children. This work addresses this specific issue by presenting a calibration procedure for motion capture of the thorax and upper limbs in healthy children. Reported results suggest comparable performance with similar studies on adults and emphasize some critical issues, opening the way to further improvements.
Modeling human behaviors and reactions under dangerous environment.
Kang, J; Wright, D K; Qin, S F; Zhao, Y
2005-01-01
This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers in real-time in virtual environments. The development of the system includes: classification of the conscious/subconscious behaviors and reactions of different people; capturing different motion postures by the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling character's perceptions, modeling character's decision making, modeling character's movements, modeling character's interaction with environment and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, and safety planning in chemical factories, the design of buildings, airplanes, ships and trains. Currently, human motion modeling can be realized through established technology, whereas integrating perception and intelligence into a virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence, the accurate modeling of human vision, smell, touch and hearing, and the diversity and effects of emotion and personality in decision making. There are three types of software platforms which could be employed to realize the motion and intelligence within one system, and their advantages and disadvantages are discussed.
Sudo, S; Ohtomo, T; Otsuka, K
2015-08-01
We achieved a highly sensitive method for observing the motion of colloidal particles in a flowing suspension using a self-mixing laser Doppler velocimeter (LDV) comprising a laser-diode-pumped thin-slice solid-state laser and a simple photodiode. We describe the measurement method and the optical system of the self-mixing LDV for real-time measurements of the motion of colloidal particles. For a condensed solution, when the light scattered from the particles is reinjected into the solid-state laser, the laser output is modulated in intensity by the reinjected laser light. Thus, we can capture the motion of colloidal particles from the spectrum of the modulated laser output. For a diluted solution, when the relaxation oscillation frequency coincides with the Doppler shift frequency, fd, which is related to the average velocity of the particles, the spectrum reflecting the motion of the colloidal particles is enhanced by the resonant excitation of relaxation oscillations. Then, the spectral peak reflecting the motion of colloidal particles appears at 2×fd. The spectrum reflecting the motion of colloidal particles in a flowing diluted solution can be measured with high sensitivity, owing to the enhancement of the spectrum by the thin-slice solid-state laser.
Sato, Nahoko; Nunome, Hiroyuki; Ikegami, Yasuo
2016-06-01
In hip-hop dance, the elements of motion that discriminate the skill levels of dancers and that influence the evaluations by judges have not been clearly identified. This study set out to extract these motion characteristics from the side-step movements of hip-hop dancing. Eight expert and eight non-expert dancers performed side-step movements, which were recorded using a motion capture system. Nine experienced judges evaluated the dancers' performances. Several parameters, including the range of motion (ROM) of the joint angles (neck, trunk, hip, knee, and face inclination) and phase delays between these angular motions were calculated. A quarter-cycle phase delay between the neck motion and other body parts, seen only in the expert dancers, is highlighted as an element that can distinguish dancers' skill levels. This feature of the expert dancers resulted in a larger ROM during the face inclination than that for the non-expert dancers. In addition, the experts exhibited a bottom-to-top segmental sequence in the horizontal direction while the non-experts did not demonstrate any such sequential motion. Of these kinematic parameters, only the ROM of the face inclination was highly correlated to the judging score and is regarded as being the most appealing element of the side-step movement.
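The quarter-cycle phase delay between body parts can be quantified with cross-correlation, as sketched below on synthetic neck and trunk angle traces; the sampling rate and side-step tempo are assumptions.

```python
import numpy as np

def phase_delay_cycles(angle_a, angle_b, fs, cycle_period):
    """Phase delay of signal b relative to signal a, in fractions of a movement
    cycle, from the lag maximizing their cross-correlation."""
    a = np.asarray(angle_a, float) - np.mean(angle_a)
    b = np.asarray(angle_b, float) - np.mean(angle_b)
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)           # lag in samples
    return (lag / fs) / cycle_period

fs, period = 100.0, 1.0                             # Hz, s (illustrative tempo)
t = np.arange(0, 5, 1 / fs)
trunk = np.sin(2 * np.pi * t / period)
neck = np.sin(2 * np.pi * (t - 0.25 * period) / period)   # quarter-cycle later
print(phase_delay_cycles(trunk, neck, fs, period))        # approx. 0.25
```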
Thoracic respiratory motion estimation from MRI using a statistical model and a 2-D image navigator.
King, A P; Buerger, C; Tsoumpas, C; Marsden, P K; Schaeffter, T
2012-01-01
Respiratory motion models have potential application for estimating and correcting the effects of motion in a wide range of applications, for example in PET-MR imaging. Given that motion cycles caused by breathing are only approximately repeatable, an important quality of such models is their ability to capture and estimate the intra- and inter-cycle variability of the motion. In this paper we propose and describe a technique for free-form nonrigid respiratory motion correction in the thorax. Our model is based on a principal component analysis of the motion states encountered during different breathing patterns, and is formed from motion estimates made from dynamic 3-D MRI data. We apply our model using a data-driven technique based on a 2-D MRI image navigator. Unlike most previously reported work in the literature, our approach is able to capture both intra- and inter-cycle motion variability. In addition, the 2-D image navigator can be used to estimate how applicable the current motion model is, and hence report when more imaging data is required to update the model. We also use the motion model to decide on the best positioning for the image navigator. We validate our approach using MRI data acquired from 10 volunteers and demonstrate improvements of up to 40.5% over other reported motion modelling approaches, which corresponds to 61% of the overall respiratory motion present. Finally we demonstrate one potential application of our technique: MRI-based motion correction of real-time PET data for simultaneous PET-MRI acquisition. Copyright © 2011 Elsevier B.V. All rights reserved.
Yen, Po-Yin; Kelley, Marjorie; Lopetegui, Marcelo; Rosado, Amber L.; Migliore, Elaina M.; Chipps, Esther M.; Buck, Jacalyn
2016-01-01
A fundamental understanding of multitasking within nursing workflow is important in today’s dynamic and complex healthcare environment. We conducted a time motion study to understand nursing workflow, specifically multitasking and task switching activities. We used TimeCaT, a comprehensive electronic time capture tool, to capture observational data. We established inter-observer reliability prior to data collection. We completed 56 hours of observation of 10 registered nurses. We found, on average, nurses had 124 communications and 208 hands-on tasks per 4-hour block of time. They multitasked (having communication and hands-on tasks simultaneously) 131 times, representing 39.48% of all times; the total multitasking duration ranges from 14.6 minutes to 109 minutes, 44.98 minutes (18.63%) on average. We also reviewed workflow visualization to uncover the multitasking events. Our study design and methods provide a practical and reliable approach to conducting and analyzing time motion studies from both quantitative and qualitative perspectives. PMID:28269924
Purkayastha, Sagar N; Byrne, Michael D; O'Malley, Marcia K
2012-01-01
Gaming controllers are attractive devices for research due to their onboard sensing capabilities and low-cost. However, a proper quantitative analysis regarding their suitability for use in motion capture, rehabilitation and as input devices for teleoperation and gesture recognition has yet to be conducted. In this paper, a detailed analysis of the sensors of two of these controllers, the Nintendo Wiimote and the Sony Playstation 3 Sixaxis, is presented. The acceleration and angular velocity data from the sensors of these controllers were compared and correlated with computed acceleration and angular velocity data derived from a high resolution encoder. The results show high correlation between the sensor data from the controllers and the computed data derived from the position data of the encoder. From these results, it can be inferred that the Wiimote is more consistent and better suited for motion capture applications and as an input device than the Sixaxis. The applications of the findings are discussed with respect to potential research ventures.
Yen, Po-Yin; Kelley, Marjorie; Lopetegui, Marcelo; Rosado, Amber L; Migliore, Elaina M; Chipps, Esther M; Buck, Jacalyn
2016-01-01
A fundamental understanding of multitasking within nursing workflow is important in today's dynamic and complex healthcare environment. We conducted a time motion study to understand nursing workflow, specifically multitasking and task switching activities. We used TimeCaT, a comprehensive electronic time capture tool, to capture observational data. We established inter-observer reliability prior to data collection. We completed 56 hours of observation of 10 registered nurses. We found, on average, nurses had 124 communications and 208 hands-on tasks per 4-hour block of time. They multitasked (performing communication and hands-on tasks simultaneously) 131 times, representing 39.48% of all tasks; the total multitasking duration ranged from 14.6 minutes to 109 minutes, averaging 44.98 minutes (18.63% of the 4-hour block). We also reviewed workflow visualization to uncover the multitasking events. Our study design and methods provide a practical and reliable approach to conducting and analyzing time motion studies from both quantitative and qualitative perspectives.
NASA Technical Reports Server (NTRS)
Vos, Gordon A.; Fink, Patrick; Ngo, Phong H.; Morency, Richard; Simon, Cory; Williams, Robert E.; Perez, Lance C.
2015-01-01
The Space Human Factors and Habitability (SHFH) Element within the Human Research Program (HRP), in collaboration with the Behavioral Health and Performance (BHP) Element, is conducting research regarding Net Habitable Volume (NHV), the internal volume within a spacecraft or habitat that is available to the crew for required activities, as well as layout and accommodations within that volume. NASA is looking for innovative methods to unobtrusively collect NHV data without impacting crew time. The data required include metrics such as location and orientation of crew, volume used to complete tasks, internal translation paths, flow of work, and task completion times. In less constrained environments, methods for collecting such data exist, yet many are obtrusive and require significant post-processing. Example technologies used in terrestrial settings include infrared (IR) retro-reflective marker-based motion capture, GPS sensor tracking, inertial tracking, and multiple-camera filmography. Due to the constraints of space operations, however, many such methods are infeasible; for example, inertial tracking systems typically rely upon a gravity vector to normalize sensor readings, and traditional IR systems are large and require extensive calibration. Other technologies, however, have not yet been applied to space operations for these explicit purposes. Two of these are 3-Dimensional Radio Frequency Identification Real-Time Localization Systems (3D RFID-RTLS) and depth imaging systems that allow for 3D motion capture and volumetric scanning (such as those using IR-depth cameras like the Microsoft Kinect, or Light Detection and Ranging / Light-Radar systems, referred to as LIDAR).
Mukherjee, Ramtanu; Ghosh, Sanchita; Gupta, Bharat; Chakravarty, Tapas
2018-01-22
The effectiveness of any remote healthcare monitoring system depends on how accurate, patient-friendly, versatile, and cost-effective its measurements are. There has always been a huge demand for a long-term, noninvasive, remote blood pressure (BP) measurement system that could be used worldwide in the remote healthcare industry. Thus, noninvasive continuous BP measurement and remote monitoring have become an emerging area in the remote healthcare industry. Photoplethysmography-based (PPG) BP measurement is a continuous, unobtrusive, patient-friendly, and cost-effective solution. However, BP measurements through PPG sensors are not very reliable or accurate due to some major limitations such as pressure disturbance, motion artifacts, and variations in human skin tone. A novel reflective PPG sensor has been developed to eliminate the abovementioned pressure disturbance and motion artifacts during BP measurement. Considering the variations of human skin tone across demographics, a novel algorithm has been developed to make the BP measurement accurate and reliable. The training dataset captured 186 subjects' data and the trial dataset captured another 102 new subjects' data. The overall accuracy achieved by the proposed method is nearly 98%, demonstrating its efficacy. The developed BP monitoring system is accurate, reliable, cost-effective, handy, and user friendly. It is also expected that this system will be useful for monitoring the BP of infants, elderly people, patients with wounds or burn injuries, or those in an intensive care unit environment.
The weaker effects of First-order mean motion resonances in intermediate inclinations
NASA Astrophysics Data System (ADS)
Chen, YuanYuan; Quillen, Alice C.; Ma, Yuehua
2017-10-01
During planetary migration, a planet or planetesimal can be captured into a low-order mean motion resonance with another planet. Using a second-order expansion of the disturbing function in eccentricity and inclination, we explore the sensitivity of the capture probability of first-order mean motion resonances to orbital inclination. We find that second-order inclination contributions affect the resonance strengths, reducing them at intermediate inclinations of around 10-40° for the major first-order resonances. We also integrated Hamilton's equations with arbitrary initial arguments, and mapped how resonance capture probabilities vary with orbital inclination for different resonances and different particle or planetary eccentricities. Inclination ranges where resonances are weaker generally appear where the resonance strengths are low, around 10-40°. These weaker ranges disappear for higher particle eccentricities (≳0.05) or planetary eccentricities (≳0.05). The weaker ranges imply that intermediate-inclination objects are less likely to be disturbed by or captured into first-order resonances, so a larger fraction of them would have entered the chaotic region around Neptune than low-inclination objects during the epoch of Neptune's outward migration. This makes high-inclination particles more likely to be captured as Neptune Trojans, which might account for the unexpectedly high fraction of high-inclination Neptune Trojans.
A study of emergency American football helmet removal techniques.
Swartz, Erik E; Mihalik, Jason P; Decoster, Laura C; Hernandez, Adam E
2012-09-01
The purpose was to compare head kinematics between the Eject Helmet Removal System and manual football helmet removal. This quasi-experimental study was conducted in a controlled laboratory setting. Thirty-two certified athletic trainers (sex, 19 male and 13 female; age, 33 ± 10 years; height, 175 ± 12 cm; mass, 86 ± 20 kg) removed a football helmet from a healthy model under 2 conditions: manual helmet removal and Eject system helmet removal. A 6-camera motion capture system recorded 3-dimensional head position. Our outcome measures consisted of the average angular velocity and acceleration of the head in each movement plane (sagittal, frontal, and transverse), the resultant angular velocity and acceleration, and total motion. Paired-samples t tests compared each variable across the 2 techniques. Manual helmet removal elicited greater average angular velocity in the sagittal and transverse planes and greater resultant angular velocity compared with the Eject system. No differences were observed in average angular acceleration in any single plane of movement; however, the resultant angular acceleration was greater during manual helmet removal. The Eject Helmet Removal System induced greater total head motion. Although the Eject system created more motion at the head, removing a helmet manually resulted in more sudden perturbations as identified by resultant velocity and acceleration of the head. The implications of these findings relate to the care of all cervical spine-injured patients in emergency medical settings, particularly in scenarios where helmet removal is necessary. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, Yu; Hu, Sijung; Azorin-Peris, Vicente; Greenwald, Stephen; Chambers, Jonathon; Zhu, Yisheng
2011-07-01
With the advance of computer and photonics technology, imaging photoplethysmography [(PPG), iPPG] can provide comfortable and comprehensive assessment over a wide range of anatomical locations. However, motion artifact is a major drawback in current iPPG systems, particularly in the context of clinical assessment. To overcome this issue, a new artifact-reduction method consisting of planar motion compensation and blind source separation is introduced in this study. The performance of the iPPG system was evaluated through the measurement of cardiac pulse in the hand from 12 subjects before and after 5 min of cycling exercise. Also, a 12-min continuous recording protocol consisting of repeated exercises was taken from a single volunteer. The physiological parameters (i.e., heart rate, respiration rate), derived from the images captured by the iPPG system, exhibit functional characteristics comparable to conventional contact PPG sensors. Continuous recordings from the iPPG system reveal that heart and respiration rates can be successfully tracked with the artifact reduction method even in high-intensity physical exercise situations. The outcome from this study thereby leads to a new avenue for noncontact sensing of vital signs and remote physiological assessment, with clear applications in triage and sports training.
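One way to realize the blind-source-separation step mentioned above is independent component analysis applied to region-averaged colour-channel traces; the sketch below uses FastICA on synthetic signals and keeps the component with the strongest power in a plausible cardiac band. It is a generic illustration, not the authors' artifact-reduction pipeline.

```python
# Generic blind-source-separation step for iPPG: FastICA on region-averaged
# colour-channel traces, keeping the component with the most power in a
# plausible cardiac band. Synthetic signals; not the authors' pipeline.
import numpy as np
from sklearn.decomposition import FastICA

fs = 30.0                                      # assumed camera frame rate (Hz)
t = np.arange(0, 30, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)            # ~72 bpm cardiac source
artifact = np.sin(2 * np.pi * 0.25 * t)        # slow motion/respiration artifact
rng = np.random.default_rng(2)

# Three channel traces (e.g. R, G, B averaged over a skin region of interest).
mixing = rng.normal(size=(3, 2))
channels = np.column_stack([pulse, artifact]) @ mixing.T + 0.05 * rng.normal(size=(t.size, 3))

sources = FastICA(n_components=2, random_state=0).fit_transform(channels)

# Pick the source with the largest spectral peak in 0.7-3 Hz (42-180 bpm).
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 3.0)
powers = [np.abs(np.fft.rfft(s))[band].max() for s in sources.T]
best = sources[:, int(np.argmax(powers))]
peak_hz = freqs[band][np.argmax(np.abs(np.fft.rfft(best))[band])]
print(f"estimated heart rate: {60 * peak_hz:.0f} bpm")
```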
Development of esMOCA Biomechanic, Motion Capture Instrumentation for Biomechanics Analysis
NASA Astrophysics Data System (ADS)
Arendra, A.; Akhmad, S.
2018-01-01
This study aims to build motion capture instruments using inertial measurement unit sensors to assist in the analysis of biomechanics. The sensors used are an accelerometer and a gyroscope. Sensor orientation is estimated by digital motion processing in each sensor node. There are nine sensor nodes attached to the upper limbs. The sensors are connected to the PC via a wireless sensor network. Kinematic and inverse dynamic models of the upper limb were developed in Simulink SimMechanics. The kinematic model receives streaming data from the sensor nodes mounted on the limbs. The output of the kinematic model is the pose of each limb segment, which is visualized on a display. The inverse dynamic model outputs the reaction force and reaction moment of each joint based on the limb motion input. Validation of the Simulink model against a mathematical mechanical analysis showed results that did not differ significantly.
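The abstract does not detail the on-node orientation filter, so the sketch below shows a common stand-in, a complementary filter that blends integrated gyroscope rate with accelerometer tilt; the sampling rate, noise levels, and blending factor are assumptions.

```python
# A minimal complementary filter that fuses gyroscope rate and accelerometer
# tilt into an angle estimate (illustrative; the esMOCA nodes use on-chip
# digital motion processing whose internals are not described in the abstract).
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98, angle0=0.0):
    """Blend integrated gyro rate (deg/s) with accelerometer tilt (deg)."""
    angle, angles = angle0, []
    for w, a in zip(gyro_rate, accel_angle):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        angles.append(angle)
    return np.array(angles)

dt = 0.01                                        # assumed 100 Hz sampling
t = np.arange(0, 5, dt)
true_angle = 30 * np.sin(2 * np.pi * 0.5 * t)
rng = np.random.default_rng(3)
gyro = np.gradient(true_angle, dt) + rng.normal(0, 2, t.size)    # noisy rate
accel = true_angle + rng.normal(0, 5, t.size)                    # noisy tilt

estimate = complementary_filter(gyro, accel, dt, angle0=true_angle[0])
print(f"RMS error: {np.sqrt(np.mean((estimate - true_angle) ** 2)):.2f} deg")
```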
Using automatic generation of Labanotation to protect folk dance
NASA Astrophysics Data System (ADS)
Wang, Jiaji; Miao, Zhenjiang; Guo, Hao; Zhou, Ziming; Wu, Hao
2017-01-01
Labanotation uses symbols to describe human motion and is an effective means of protecting folk dance. We use motion capture data to automatically generate Labanotation. First, we convert the motion capture data of the biovision hierarchy file into three-dimensional coordinate data. Second, we divide human motion into element movements. Finally, we analyze each movement and find the corresponding notation. Our work has been supervised by an expert in Labanotation to ensure the correctness of the results. At present, the work deals with a subset of symbols in Labanotation that correspond to several basic movements. Labanotation contains many symbols and several new symbols may be introduced for improvement in the future. We will refine our work to handle more symbols. The automatic generation of Labanotation can greatly improve the work efficiency of documenting movements. Thus, our work will significantly contribute to the protection of folk dance and other action arts.
Elastic network model of learned maintained contacts to predict protein motion
Putz, Ines
2017-01-01
We present a novel elastic network model, lmcENM, to determine protein motion even for localized functional motions that involve substantial changes in the protein’s contact topology. Existing elastic network models assume that the contact topology remains unchanged throughout the motion and are thus most appropriate to simulate highly collective function-related movements. lmcENM uses machine learning to differentiate breaking from maintained contacts. We show that lmcENM accurately captures functional transitions unexplained by the classical ENM and three reference ENM variants, while preserving the simplicity of classical ENM. We demonstrate the effectiveness of our approach on a large set of proteins covering different motion types. Our results suggest that accurately predicting a “deformation-invariant” contact topology offers a promising route to increase the general applicability of ENMs. We also find that to correctly predict this contact topology a combination of several features seems to be relevant which may vary slightly depending on the protein. Additionally, we present case studies of two biologically interesting systems, Ferric Citrate membrane transporter FecA and Arachidonate 15-Lipoxygenase. PMID:28854238
Beil, Jonas; Marquardt, Charlotte; Asfour, Tamim
2017-07-01
Kinematic compatibility is of paramount importance in wearable robotic and exoskeleton design. Misalignments between exoskeletons and anatomical joints of the human body result in interaction forces which make wearing the exoskeleton uncomfortable and even dangerous for the human. In this paper we present a kinematically compatible design of an exoskeleton hip to reduce kinematic incompatibilities, so called macro- and micro-misalignments, between the human's and exoskeleton's joint axes, which are caused by inter-subject variability and articulation. The resulting design consists of five revolute, three prismatic and one ball joint. Design parameters such as range of motion and joint velocities are calculated based on the analysis of human motion data acquired by motion capture systems. We show that the resulting design is capable of self-aligning to the human hip joint in all three anatomical planes during operation and can be adapted along the dorsoventral and mediolateral axis prior to operation. Calculation of the forward kinematics and FEM-simulation considering kinematic and musculoskeletal constraints proved sufficient mobility and stiffness of the system regarding the range of motion, angular velocity and torque admissibility needed to provide 50 % assistance for an 80 kg person.
Modeling moving systems with RELAP5-3D
Mesina, G. L.; Aumiller, David L.; Buschman, Francis X.; ...
2015-12-04
RELAP5-3D is typically used to model stationary, land-based reactors. However, it can also model reactors in other inertial and accelerating frames of reference. By changing the magnitude of the gravitational vector through user input, RELAP5-3D can model reactors on a space station or the moon. The field equations have also been modified to model reactors in a non-inertial frame, such as occur in land-based reactors during earthquakes or onboard spacecraft. Transient body forces affect fluid flow in thermal-fluid machinery aboard accelerating crafts during rotational and translational accelerations. It is useful to express the equations of fluid motion in the accelerating frame of reference attached to the moving craft. However, careful treatment of the rotational and translational kinematics is required to accurately capture the physics of the fluid motion. Correlations for flow at angles between horizontal and vertical are generated via interpolation where no experimental studies or data exist. The equations for three-dimensional fluid motion in a non-inertial frame of reference are developed. As a result, two different systems for describing rotational motion are presented, user input is discussed, and an example is given.
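For orientation, the sketch below evaluates the standard fictitious accelerations (Coriolis, centrifugal, Euler and frame-translation terms) acting on a fluid parcel in a rotating, translating frame; it is textbook kinematics rather than RELAP5-3D source code, and all numerical values are invented.

```python
# Fictitious accelerations felt by a fluid parcel in a rotating, translating
# frame (textbook kinematics, not RELAP5-3D code; all values are invented).
import numpy as np

def fictitious_acceleration(omega, domega_dt, a_frame, r, v):
    """Acceleration terms added to the momentum equation in a non-inertial frame."""
    coriolis = -2.0 * np.cross(omega, v)
    centrifugal = -np.cross(omega, np.cross(omega, r))
    euler = -np.cross(domega_dt, r)
    return coriolis + centrifugal + euler - a_frame

omega = np.array([0.0, 0.0, 0.5])       # frame angular velocity (rad/s)
domega_dt = np.array([0.0, 0.0, 0.01])  # frame angular acceleration (rad/s^2)
a_frame = np.array([0.2, 0.0, 0.0])     # frame translational acceleration (m/s^2)
r = np.array([1.0, 0.5, 0.0])           # parcel position in the frame (m)
v = np.array([0.0, 1.0, 0.0])           # parcel velocity in the frame (m/s)

print(fictitious_acceleration(omega, domega_dt, a_frame, r, v))
```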
Technical skills measurement based on a cyber-physical system for endovascular surgery simulation.
Tercero, Carlos; Kodama, Hirokatsu; Shi, Chaoyang; Ooe, Katsutoshi; Ikeda, Seiichi; Fukuda, Toshio; Arai, Fumihito; Negoro, Makoto; Kwon, Guiryong; Najdovski, Zoran
2013-09-01
Quantification of medical skills is a challenge, particularly simulator-based training. In the case of endovascular intervention, it is desirable that a simulator accurately recreates the morphology and mechanical characteristics of the vasculature while enabling scoring. For this purpose, we propose a cyber-physical system composed of optical sensors for a catheter's body motion encoding, a magnetic tracker for motion capture of an operator's hands, and opto-mechatronic sensors for measuring the interaction of the catheter tip with the vasculature model wall. Two pilot studies were conducted for measuring technical skills, one for distinguishing novices from experts and the other for measuring unnecessary motion. The proficiency levels were measurable between expert and novice and also between individual novice users. The results enabled scoring of the user's proficiency level, using sensitivity, reaction time, time to complete a task and respect for tissue integrity as evaluation criteria. Additionally, unnecessary motion was also measurable. The development of cyber-physical simulators for other domains of medicine depend on the study of photoelastic materials for human tissue modelling, and enables quantitative evaluation of skills using surgical instruments and a realistic representation of human tissue. Copyright © 2012 John Wiley & Sons, Ltd.
Ting, Lai-Lei; Chuang, Ho-Chiao; Liao, Ai-Ho; Kuo, Chia-Chun; Yu, Hsiao-Wei; Zhou, Yi-Liang; Tien, Der-Chi; Jeng, Shiu-Chen; Chiou, Jeng-Fong
2018-05-01
This study proposed a respiratory motion compensation system (RMCS) combined with an ultrasound image tracking algorithm (UITA) to compensate for respiration-induced tumor motion during radiotherapy, and to address the problem of inaccurate radiation dose delivery caused by respiratory movement. This study used an ultrasound imaging system to monitor respiratory movements, combined with the proposed UITA and RMCS, for tracking and compensation of the respiratory motion. Respiratory motion compensation was performed using prerecorded human respiratory motion signals and also sinusoidal signals. A linear accelerator was used to deliver radiation doses to GAFchromic EBT3 dosimetry film, and the conformity index (CI), root-mean-square error, compensation rate (CR), and planning target volume (PTV) were used to evaluate the tracking and compensation performance of the proposed system. Human respiratory pattern signals were captured using the UITA and compensated by the RMCS, which yielded CR values of 34-78%. In addition, the maximum coronal area of the PTV ranged from 85.53 mm² to 351.11 mm² (uncompensated), which was reduced to between 17.72 mm² and 66.17 mm² after compensation, an area reduction ratio of up to 90%. In real-time monitoring of the respiration compensation state, the CI values for the 85% and 90% isodose areas increased to 0.7 and 0.68, respectively. The proposed UITA and RMCS can reduce the movement of the tracked target relative to the LINAC in radiation therapy, thereby reducing the required size of the PTV margin and increasing the effect of the radiation dose received by the treatment target. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
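A minimal sketch of how a root-mean-square error and a compensation rate can be computed from target traces before and after compensation is given below; the traces are synthetic and the exact CR definition used by the authors is not reproduced.

```python
# Root-mean-square error and a simple compensation rate computed from a target
# trace before and after compensation (synthetic traces; the authors' exact CR
# definition may differ).
import numpy as np

t = np.linspace(0, 60, 600)
uncompensated = 10 * np.sin(2 * np.pi * t / 4)          # respiration-induced motion (mm)
residual = 3 * np.sin(2 * np.pi * t / 4 + 0.3)          # motion left after compensation

rmse_uncomp = np.sqrt(np.mean(uncompensated ** 2))
rmse_comp = np.sqrt(np.mean(residual ** 2))
compensation_rate = 100 * (1 - rmse_comp / rmse_uncomp)
print(f"RMSE before: {rmse_uncomp:.2f} mm, after: {rmse_comp:.2f} mm, CR ~ {compensation_rate:.0f}%")
```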
Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid
Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko
2016-01-01
Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film‐based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers’ abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one‐dimensional motion in craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers’ breathing patterns, the mean tracking error range was 0.78‐1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient. PACS number(s): 87.55.D‐, 87.55.km, 87.55.Qr, 87.56.Fc PMID:27074474
Simultaneous estimation of human and exoskeleton motion: A simplified protocol.
Alvarez, M T; Torricelli, D; Del-Ama, A J; Pinto, D; Gonzalez-Vargas, J; Moreno, J C; Gil-Agudo, A; Pons, J L
2017-07-01
Adequate benchmarking procedures in the area of wearable robots is gaining importance in order to compare different devices on a quantitative basis, improve them and support the standardization and regulation procedures. Performance assessment usually focuses on the execution of locomotion tasks, and is mostly based on kinematic-related measures. Typical drawbacks of marker-based motion capture systems, gold standard for measure of human limb motion, become challenging when measuring limb kinematics, due to the concomitant presence of the robot. This work answers the question of how to reliably assess the subject's body motion by placing markers over the exoskeleton. Focusing on the ankle joint, the proposed methodology showed that it is possible to reconstruct the trajectory of the subject's joint by placing markers on the exoskeleton, although foot flexibility during walking can impact the reconstruction accuracy. More experiments are needed to confirm this hypothesis, and more subjects and walking conditions are needed to better characterize the errors of the proposed methodology, although our results are promising, indicating small errors.
Rotating bouncing disks, tossing pizza dough, and the behavior of ultrasonic motors.
Liu, Kuang-Chen; Friend, James; Yeo, Leslie
2009-10-01
Pizza tossing and certain forms of standing-wave ultrasonic motors (SWUMs) share a similar process for converting reciprocating input into continuous rotary motion. We show that the key features of this motion conversion process such as collision, separation and friction coupling are captured by the dynamics of a disk bouncing on a vibrating platform. The model shows that the linear or helical hand motions commonly used by pizza chefs and dough-toss performers for single tosses maximize energy efficiency and the dough's airborne rotational speed; on the other hand, the semielliptical hand motions used for multiple tosses make it easier to maintain dough rotation at the maximum speed. The system's bifurcation diagram and basins of attraction also provide a physical basis for understanding the peculiar behavior of SWUMs and provide a means to design them. The model is able to explain the apparently chaotic oscillations that occur in SWUMs and predict the observed trends in steady-state speed and stall torque as preload is increased.
Huo, Xueliang; Park, Hangue; Kim, Jeonghee; Ghovanloo, Maysam
2015-01-01
We are presenting a new wireless and wearable human computer interface called the dual-mode Tongue Drive System (dTDS), which is designed to allow people with severe disabilities to use computers more effectively with increased speed, flexibility, usability, and independence through their tongue motion and speech. The dTDS detects users’ tongue motion using a magnetic tracer and an array of magnetic sensors embedded in a compact and ergonomic wireless headset. It also captures the users’ voice wirelessly using a small microphone embedded in the same headset. Preliminary evaluation results based on 14 able-bodied subjects and three individuals with high level spinal cord injuries at level C3–C5 indicated that the dTDS headset, combined with a commercially available speech recognition (SR) software, can provide end users with significantly higher performance than either unimodal forms based on the tongue motion or speech alone, particularly in completing tasks that require both pointing and text entry. PMID:23475380
Motion generation of robotic surgical tasks: learning from expert demonstrations.
Reiley, Carol E; Plaku, Erion; Hager, Gregory D
2010-01-01
Robotic surgical assistants offer the possibility of automating portions of a task that are time consuming and tedious in order to reduce the cognitive workload of a surgeon. This paper proposes using programming by demonstration to build generative models and generate smooth trajectories that capture the underlying structure of the motion data recorded from expert demonstrations. Specifically, motion data from Intuitive Surgical's da Vinci Surgical System of a panel of expert surgeons performing three surgical tasks are recorded. The trials are decomposed into subtasks or surgemes, which are then temporally aligned through dynamic time warping. Next, a Gaussian Mixture Model (GMM) encodes the experts' underlying motion structure. Gaussian Mixture Regression (GMR) is then used to extract a smooth reference trajectory to reproduce a trajectory of the task. The approach is evaluated through an automated skill assessment measurement. Results suggest that this paper presents a means to (i) extract important features of the task, (ii) create a metric to evaluate robot imitative performance, and (iii) generate smoother trajectories for reproduction of three common medical tasks.
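The sketch below illustrates the GMM/GMR step on synthetic one-dimensional demonstrations: a Gaussian Mixture Model is fitted to (time, position) samples and Gaussian Mixture Regression extracts a smooth reference trajectory. It is a simplified stand-in for the multi-degree-of-freedom da Vinci data used in the paper.

```python
# GMM fit to (time, position) samples from several demonstrations, followed by
# Gaussian Mixture Regression to extract a smooth reference trajectory
# (synthetic 1-D data; the real system uses multi-DOF da Vinci motion data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 100)
demos = [np.sin(np.pi * t) + rng.normal(0, 0.03, t.size) for _ in range(5)]
data = np.column_stack([np.tile(t, 5), np.concatenate(demos)])   # columns: [time, position]

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(data)

def gmr(gmm, t_query):
    """Condition each Gaussian on time (dim 0) and blend the position means."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros_like(t_query)
    for i, tq in enumerate(t_query):
        # Responsibility of each component for this time value.
        resp = np.array([w * np.exp(-0.5 * (tq - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                         for w, m, c in zip(weights, means, covs)])
        resp /= resp.sum()
        # Conditional mean of position given time, per component.
        cond = np.array([m[1] + c[1, 0] / c[0, 0] * (tq - m[0])
                         for m, c in zip(means, covs)])
        out[i] = resp @ cond
    return out

reference = gmr(gmm, t)     # smooth trajectory for reproduction
print(reference.shape)
```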
HSDPA (3.5G)-based ubiquitous integrated biotelemetry system for emergency care.
Kang, Jaemin; Shin, Il Hyung; Koo, Yoonseo; Jung, Min Yang; Suh, Gil Joon; Kim, Hee Chan
2007-01-01
We have developed the second prototype of the Ubiquitous Integrated Biotelemetry System for Emergency Care (UIBSEC), which uses an HSDPA (High Speed Downlink Packet Access) modem and is intended to be used by emergency rescuers to get directions from medical doctors while providing emergency medical services to patients in an ambulance. Five vital bio-signal instrumentation modules have been implemented, which include noninvasive arterial blood pressure (NIBP), arterial oxygen saturation (SaO2), 6-channel electrocardiogram (ECG), blood glucose level, and body temperature; real-time motion pictures of the patient and GPS information are also taken. Measured patient data, captured motion pictures, and GPS information are then transferred to a doctor's PC through the HSDPA and TCP/IP networks using a stand-alone HSDPA modem. The most prominent feature of the developed system is that it is based on the HSDPA backbone networks now available in Korea, through which we will be able to establish a ubiquitous emergency healthcare service system.
Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study.
Al-Naji, Ali; Gibson, Kim; Lee, Sang-Heon; Chahl, Javaan
2017-02-03
The objective of this study was to design a non-invasive system for the observation of respiratory rates and detection of apnoea using analysis of real time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen from the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and frame subtraction technique were used to identify breathing movements by detecting rapid motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, safe, and of low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications.
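As a hedged illustration of the frame-subtraction idea, the sketch below thresholds the difference between consecutive frames to flag moving pixels; synthetic frames are used in place of the Kinect stream, and the magnification step is omitted.

```python
# Frame-subtraction sketch: flag pixels whose intensity changed by more than a
# threshold between consecutive frames (synthetic frames instead of a Kinect
# stream; motion magnification is omitted).
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=10):
    """Boolean mask of pixels that changed by more than `threshold` grey levels."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

rng = np.random.default_rng(5)
prev = rng.integers(0, 200, size=(120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 70:90] += 30          # simulated chest-region movement

mask = motion_mask(prev, curr)
print(f"moving pixels: {mask.sum()} of {mask.size}")
```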
Video Analysis of Rolling Cylinders
ERIC Educational Resources Information Center
Phommarach, S.; Wattanakasiwich, P.; Johnston, I.
2012-01-01
In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s[superscript -1], and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined…
Accuracy of an optical active-marker system to track the relative motion of rigid bodies.
Maletsky, Lorin P; Sun, Junyi; Morton, Nicholas A
2007-01-01
The measurement of relative motion between two moving bones is commonly accomplished for in vitro studies by attaching to each bone a series of either passive or active markers in a fixed orientation to create a rigid body (RB). This work determined the accuracy of motion between two RBs using an Optotrak optical motion capture system with active infrared LEDs. The stationary noise in the system was quantified by recording the apparent change in position with the RBs stationary and found to be 0.04 degrees and 0.03 mm. Incremental 10 degrees rotations and 10-mm translations were made using a more precise tool than the Optotrak. Increasing camera distance decreased the precision or increased the range of values observed for a set motion and increased the error in rotation or bias between the measured and actual rotation. The relative positions of the RBs with respect to the camera-viewing plane had a minimal effect on the kinematics and, therefore, for a given distance in the volume less than or close to the precalibrated camera distance, any motion was similarly reliable. For a typical operating set-up, a 10 degrees rotation showed a bias of 0.05 degrees and a 95% repeatability limit of 0.67 degrees. A 10-mm translation showed a bias of 0.03 mm and a 95% repeatability limit of 0.29 mm. To achieve a high level of accuracy it is important to keep the distance between the cameras and the markers near the distance the cameras are focused to during calibration.
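A short sketch of how repeated measurements of a nominal rotation can be summarized into a bias and a 95% repeatability limit is given below; the measurements are invented and the 1.96-sigma limit is an assumed definition, not necessarily the one used in the study.

```python
# Summarizing repeated measurements of a nominal 10-degree rotation into a bias
# and a 95% repeatability limit (invented values; the 1.96-sigma limit is an
# assumed definition).
import numpy as np

nominal = 10.0                                   # degrees, set by the precise tool
measured = np.array([10.02, 9.95, 10.08, 10.11, 9.97, 10.04, 10.06, 9.93])

bias = measured.mean() - nominal                 # systematic offset
repeatability_95 = 1.96 * measured.std(ddof=1)   # spread covering ~95% of trials
print(f"bias = {bias:.3f} deg, 95% repeatability limit = {repeatability_95:.3f} deg")
```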
Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach
NASA Astrophysics Data System (ADS)
Liu, Wenyang; Sawant, Amit; Ruan, Dan
2016-07-01
The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared-error. Our proposed method achieved consistent higher prediction accuracy (sub-millimeter) for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
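The sketch below traces the overall pipeline on synthetic data: kernel PCA maps high-dimensional states to a low-dimensional feature space, a simple autoregressive model predicts ahead in that space, and the prediction is mapped back to the original space. sklearn's built-in inverse transform is used as a stand-in for the fixed-point pre-image estimation described in the paper, and all data and model choices are illustrative assumptions.

```python
# Kernel-PCA subspace, one-step-ahead prediction in the feature space, and
# mapping back to the original space (sklearn's learned inverse transform
# stands in for fixed-point pre-image estimation; data and models are synthetic).
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
t = np.linspace(0, 20, 400)
# Synthetic "high-dimensional states": 50 surface points driven by breathing.
states = np.outer(np.sin(2 * np.pi * t / 4), rng.normal(size=50)) \
         + 0.02 * rng.normal(size=(t.size, 50))

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True, alpha=1e-3)
z = kpca.fit_transform(states)                   # low-dimensional features

# One-step-ahead predictor in feature space: z[k] -> z[k+1] (linear AR model).
model = LinearRegression().fit(z[:-1], z[1:])
z_next = model.predict(z[-1:])

predicted_state = kpca.inverse_transform(z_next)  # back to the original space
print(predicted_state.shape)                      # (1, 50)
```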
Lorenz, N D; Channon, S; Pettitt, R; Smirthwaite, P; Innes, J F
2015-01-01
Introduction of the Sirius® canine total elbow arthroplasty system, and presentation of the results of a passive range-of-motion analysis based on ex vivo kinematic studies pre- and post-implantation. Thoracic limbs (n = 4) of medium-sized dogs were harvested by forequarter amputation. Plain orthogonal radiographs of each limb were obtained pre- and post-implantation. Limbs were prepared by placement of external fixator pins and Kirschner wires into the humerus and radius. Each limb was secured into a custom-made box frame and retro-reflective markers were placed on the exposed ends of the pins and wires. Each elbow was manually moved through five ranges-of-motion manoeuvres. Data collected included six trials of i) full extension to full flexion and ii) pronation and supination in 90° flexion; a three-dimensional motion capture system was used to collect and analyse the data. The Sirius elbow prosthesis was subsequently implanted and the same measurements were repeated. Data sets were tested for normality. Paired t-tests were used for comparison of pre- and post-implantation motion parameters. Kinematic analysis showed that the range-of-motion (mean and SD) for flexion and extension pre-implantation was 115° ± 6 (range: 25° to 140°). The range-of-motion in the sagittal plane post-implantation was 90° ± 4 (range: 36° to 130°) and this reduction was significant (p = 0.0001). The ranges-of-motion (mean and SD) for supination and pronation at 90° were 50° ± 5, whereas the corresponding mean ranges-of-motion post-implantation were 38° ± 6 (p = 0.0188). Compared to a normal elbow, the range-of-motion was reduced. Post-implantation, supination and pronation range-of-motion was significantly reduced at 90° over pre-implantation values. These results provide valuable information regarding the effect of the Sirius system on ex vivo kinematics of the normal canine elbow joint. Further, this particular ex vivo model allowed for satisfactory and repeatable kinematic analysis.
Recurrent network dynamics reconciles visual motion segmentation and integration.
Medathati, N V Kartheek; Rankin, James; Meso, Andrew I; Kornprobst, Pierre; Masson, Guillaume S
2017-09-12
In sensory systems, a range of computational rules are presumed to be implemented by neuronal subpopulations with different tuning functions. For instance, in primate cortical area MT, different classes of direction-selective cells have been identified and related either to motion integration, segmentation or transparency. Still, how such different tuning properties are constructed is unclear. The dominant theoretical viewpoint based on a linear-nonlinear feed-forward cascade does not account for their complex temporal dynamics and their versatility when facing different input statistics. Here, we demonstrate that a recurrent network model of visual motion processing can reconcile these different properties. Using a ring network, we show how excitatory and inhibitory interactions can implement different computational rules such as vector averaging, winner-take-all or superposition. The model also captures ordered temporal transitions between these behaviors. In particular, depending on the inhibition regime the network can switch from motion integration to segmentation, thus being able to compute either a single pattern motion or to superpose multiple inputs as in motion transparency. We thus demonstrate that recurrent architectures can adaptively give rise to different cortical computational regimes depending upon the input statistics, from sensory flow integration to segmentation.
Varadarajan, Kartik M; Rubash, Harry E; Johnson, Todd; Li, Guoan
2009-10-01
In vitro systems provide a powerful means to evaluate the efficacy of total knee arthroplasty (TKA) in restoring normal knee kinematics. The Oxford knee rig (OKR) and the robotic knee testing system (RKTS) represent two systems that have been extensively used to study TKA biomechanics. Nonetheless, a frequently asked question is whether in vitro simulations can capture the in vivo behavior of the knee. Here, we compared the flexion-extension kinematics of intact knees and knees after TKA tested on the OKR and RKTS, to results of representative in vivo studies. The goal was to determine if the in vitro systems could capture the key kinematic features of knees in healthy subjects and TKA patients. Results showed that the RKTS and the OKR can replicate the femoral rollback and 'screw home' tibial rotation between 0 degrees and 30 degrees flexion seen in healthy subjects, and the reduced femoral rollback and absence of 'screw home' motion in TKA patients. The RKTS also replicated the overall internally rotated position of the tibia beyond 30 degrees flexion. However, ability of the OKR to replicate the internally rotated position of the knee beyond 30 degrees flexion was inconsistent. These data could aid in validation of new in vitro systems and physiologic interpretations of in vitro results.
Categorization of compensatory motions in transradial myoelectric prosthesis users.
Hussaini, Ali; Zinck, Arthur; Kyberd, Peter
2017-06-01
Prosthesis users perform various compensatory motions to accommodate for the loss of the hand and wrist as well as the reduced functionality of a prosthetic hand. The objective was to investigate the different compensation strategies performed by prosthesis users, using a comparative analysis. A total of 20 able-bodied subjects and 4 prosthesis users performed a set of bimanual activities. Movements of the trunk and head were recorded using a motion capture system and a digital video recorder. Clinical motion angles were calculated to assess the compensatory motions made by the prosthesis users. The video recording also assisted in visually identifying the compensations. Compensatory motions by the prosthesis users were evident in the tasks performed (slicing and stirring activities) as compared to the benchmark of able-bodied subjects. Compensations took the form of a measured increase in range of motion, an observed adoption of a new posture during task execution, and prepositioning of items in the workspace prior to initiating a given task. Compensatory motions were performed by prosthesis users during the selected tasks and can be categorized into three different types of compensations. Clinical relevance: Proper identification and classification of compensatory motions performed by prosthesis users into three distinct forms allows clinicians and researchers to accurately identify and quantify movement. It will assist in evaluating new prosthetic interventions by providing distinct terminology that is easily understood and can be shared between research institutions.
Real-time posture reconstruction for Microsoft Kinect.
Shum, Hubert P H; Ho, Edmond S L; Jiang, Yang; Takagi, Shu
2013-10-01
The recent advancement of motion recognition using Microsoft Kinect stimulates many new ideas in motion capture and virtual reality applications. Utilizing a pattern recognition algorithm, Kinect can determine the positions of different body parts from the user. However, due to the use of a single-depth camera, recognition accuracy drops significantly when the parts are occluded. This hugely limits the usability of applications that involve interaction with external objects, such as sport training or exercising systems. The problem becomes more critical when Kinect incorrectly perceives body parts. This is because applications have limited information about the recognition correctness, and using those parts to synthesize body postures would result in serious visual artifacts. In this paper, we propose a new method to reconstruct valid movement from incomplete and noisy postures captured by Kinect. We first design a set of measurements that objectively evaluates the degree of reliability on each tracked body part. By incorporating the reliability estimation into a motion database query during run time, we obtain a set of similar postures that are kinematically valid. These postures are used to construct a latent space, which is known as the natural posture space in our system, with local principle component analysis. We finally apply frame-based optimization in the space to synthesize a new posture that closely resembles the true user posture while satisfying kinematic constraints. Experimental results show that our method can significantly improve the quality of the recognized posture under severely occluded environments, such as a person exercising with a basketball or moving in a small room.
Neural dynamics of motion perception: direction fields, apertures, and resonant grouping.
Grossberg, S; Mingolla, E
1993-03-01
A neural network model of global motion segmentation by visual cortex is described. Called the motion boundary contour system (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyze how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The motion BCS describes how preprocessing of motion signals by a motion oriented contrast (MOC) filter is joined to long-range cooperative grouping mechanisms in a motion cooperative-competitive (MOCC) loop to control phenomena such as motion capture. The motion BCS is computed in parallel with the static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the motion BCS and the static BCS, specialized to process motion directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1-->MT and V1-->V2-->MT are made--notably, the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions of contrast. Interactions of model simple cells, complex cells, hyper-complex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.
Variational optical flow estimation for images with spectral and photometric sensor diversity
NASA Astrophysics Data System (ADS)
Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin
2015-03-01
Motion estimation of objects in image sequences is an essential computer vision task. To this end, optical flow methods compute pixel-level motion, with the purpose of providing low-level input to higher-level algorithms and applications. Robust flow estimation is crucial for the success of applications, which in turn depends on the quality of the captured image data. This work explores the use of sensor diversity in the image data within a framework for variational optical flow. In particular, a custom image sensor setup intended for vehicle applications is tested. Experimental results demonstrate the improved flow estimation performance when IR sensitivity or flash illumination is added to the system.
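For readers unfamiliar with dense optical flow, the sketch below runs a standard OpenCV dense method (Farneback) on two synthetic frames; it is a generic stand-in, not the variational, sensor-diverse formulation studied in the paper.

```python
# Dense optical flow with OpenCV's Farneback method on two synthetic frames
# (a generic stand-in, not the paper's variational, sensor-diverse formulation).
import numpy as np
import cv2

rng = np.random.default_rng(7)
frame1 = (rng.random((120, 160)) * 255).astype(np.uint8)
frame2 = np.roll(frame1, shift=2, axis=1)        # simulate 2-pixel horizontal motion

# Arguments: prev, next, flow, pyr_scale, levels, winsize, iterations,
# poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
print(f"median horizontal flow: {np.median(flow[..., 0]):.2f} px")
```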
Real-time image mosaicing for medical applications.
Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth
2007-01-01
In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.
Computer Graphics Animation for Objective Self-Evaluation.
Usui, Yoko; Sato, Katsumi; Watabe, Shinichi
2017-01-01
The increased number of students enrolling in dance classes in Japan has resulted in a shortage of qualified instructors, leaving classes to be taught by instructors who are not trained in dance. The authors developed a system specifically designed to help nonqualified dance instructors teach dance using motion capture and animation. The goal is to allow dancers to easily self-evaluate their own performances by comparing it to a standard example.
Evans, Kerrie; Horan, Sean A; Neal, Robert J; Barrett, Rod S; Mills, Peter M
2012-06-01
Field-based methods of evaluating three-dimensional (3D) swing kinematics offer coaches and researchers the opportunity to assess golfers in context-specific environments. The purpose of this study was to establish the inter-trial, between-tester, between-location, and between-day repeatability of thorax and pelvis kinematics during the downswing using an electromagnetic motion capture system. Two experienced testers measured swing kinematics in 20 golfers (handicap ≤14 strokes) on consecutive days in an indoor and outdoor location. Participants performed five swings with each of two clubs (five-iron and driver) at each test condition. Repeatability of 3D kinematic data was evaluated by computing the coefficient of multiple determination (CMD) and the systematic error (SE). With the exception of pelvis forward bend for between-day and between-tester conditions, CMDs exceeded 0.854 for all variables, indicating high levels of overall waveform repeatability across conditions. When repeatability was compared across conditions using MANOVA, the lowest CMDs and highest SEs were found for the between-tester and between-day conditions. The highest CMDs were for the inter-trial and between-location conditions. The absence of significant differences in CMDs between these two conditions supports this method of analysing pelvis and thorax kinematics in different environmental settings without unduly affecting repeatability.
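A simplified sketch of a coefficient-of-multiple-determination-style waveform repeatability measure is shown below: the variance of individual trials about the mean waveform relative to their variance about the grand mean. The published CMD includes degrees-of-freedom corrections that are omitted here, and the trial data are synthetic.

```python
# Simplified CMD-style repeatability: variance of trials about the mean
# waveform relative to variance about the grand mean (synthetic trials;
# degrees-of-freedom corrections of the published CMD are omitted).
import numpy as np

rng = np.random.default_rng(8)
phase = np.linspace(0, 1, 101)                         # normalized downswing time
mean_curve = 40 * np.sin(np.pi * phase)                # e.g. thorax rotation (deg)
trials = mean_curve + rng.normal(0, 1.5, size=(5, phase.size))

mean_waveform = trials.mean(axis=0)
grand_mean = trials.mean()
within = np.sum((trials - mean_waveform) ** 2)
total = np.sum((trials - grand_mean) ** 2)
cmd = 1 - within / total
print(f"CMD ~ {cmd:.3f}")
```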
Motion Artifact Quantification and Sensor Fusion for Unobtrusive Health Monitoring.
Hoog Antink, Christoph; Schulz, Florian; Leonhardt, Steffen; Walter, Marian
2017-12-25
Sensors integrated into objects of everyday life potentially allow unobtrusive health monitoring at home. However, since the coupling of sensors and subject is not as well-defined as compared to a clinical setting, the signal quality is much more variable and can be disturbed significantly by motion artifacts. One way of tackling this challenge is the combined evaluation of multiple channels via sensor fusion. For robust and accurate sensor fusion, analyzing the influence of motion on different modalities is crucial. In this work, a multimodal sensor setup integrated into an armchair is presented that combines capacitively coupled electrocardiography, reflective photoplethysmography, two high-frequency impedance sensors and two types of ballistocardiography sensors. To quantify motion artifacts, a motion protocol performed by healthy volunteers is recorded with a motion capture system, and reference sensors perform cardiorespiratory monitoring. The shape-based signal-to-noise ratio SNR_S is introduced and used to quantify the effect of motion on different sensing modalities. Based on this analysis, an optimal combination of sensors and fusion methodology is developed and evaluated. Using the proposed approach, beat-to-beat heart rate is estimated with a coverage of 99.5% and a mean absolute error of 7.9 ms on 425 min of data from seven volunteers in a proof-of-concept measurement scenario.
NASA Astrophysics Data System (ADS)
Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.
2017-03-01
Two systems of eye and hand gesture recognition are used to control mobile devices. Based on real-time video streaming captured from the device's camera, the first system recognizes the motion of the user's eyes and the second one detects static hand gestures. To avoid any confusion between natural and intentional movements, we developed a system to fuse the decisions coming from the eye and hand gesture recognition systems. The fusion phase was based on a decision tree approach. We conducted a study on 5 volunteers, and the results show that our system is robust and competitive.
A Multimedia, Augmented Reality Interactive System for the Application of a Guided School Tour
NASA Astrophysics Data System (ADS)
Lin, Ko-Chun; Huang, Sheng-Wen; Chu, Sheng-Kai; Su, Ming-Wei; Chen, Chia-Yen; Chen, Chi-Fa
The paper describes an implementation of a multimedia, augmented reality system used for a guided school tour. The aim of this work is to improve the level of interaction between a viewer and the system by means of augmented reality. In the implemented system, hand motions are captured via computer vision-based approaches and analyzed to extract representative actions which are used to interact with the system. In this manner, tactile peripheral hardware such as a keyboard and mouse can be eliminated. In addition, the proposed system also aims to reduce hardware-related costs and avoid health risks associated with contamination by contact in public areas.
Origin scenarios for the Kepler 36 planetary system
NASA Astrophysics Data System (ADS)
Quillen, Alice C.; Bodman, Eva; Moore, Alexander
2013-11-01
We explore scenarios for the origin of two different density planets in the Kepler 36 system in adjacent orbits near the 7:6 mean motion resonance. We find that fine tuning is required in the stochastic forcing amplitude, the migration rate and planet eccentricities to allow two convergently migrating planets to bypass mean motion resonances such as the 4:3, 5:4 and 6:5, and yet allow capture into the 7:6 resonance. Stochastic forcing can eject the system from resonance causing a collision between the planets, unless the disc causing migration and stochastic forcing is depleted soon after resonance capture. We explore a scenario with approximately Mars mass embryos originating exterior to the two planets and migrating inwards towards two planets. We find that gravitational interactions with embryos can nudge the system out of resonances. Numerical integrations with about a half dozen embryos can leave the two planets in the 7:6 resonance. Collisions between planets and embryos have a wide distribution of impact angles and velocities ranging from accretionary to disruptive. We find that impacts can occur at sufficiently high impact angle and velocity that the envelope of a planet could have been stripped, leaving behind a dense core. Some of our integrations show the two planets exchanging locations, allowing the outer planet that had experienced multiple collisions with embryos to become the innermost planet. A scenario involving gravitational interactions and collisions with embryos may account for both the proximity of the Kepler 36 planets and their large density contrast.
Shape Distributions of Nonlinear Dynamical Systems for Video-Based Inference.
Venkataraman, Vinay; Turaga, Pavan
2016-12-01
This paper presents a shape-theoretic framework for dynamical analysis of nonlinear dynamical systems which appear frequently in several video-based inference tasks. Traditional approaches to dynamical modeling have included linear and nonlinear methods with their respective drawbacks. A novel approach we propose is the use of descriptors of the shape of the dynamical attractor as a feature representation of the nature of the dynamics. The proposed framework has two main advantages over traditional approaches: (a) the representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and (b) the proposed features show stability under different time-series lengths where traditional dynamical invariants fail. We illustrate our idea using nonlinear dynamical models such as the Lorenz and Rossler systems, where our feature representations (shape distributions) support our hypothesis that the local shape of the reconstructed phase space can be used as a discriminative feature. Our experimental analyses on these models also indicate that the proposed framework shows stability for different time-series lengths, which is useful when the available number of samples is small or variable. The specific applications of interest in this paper are: 1) activity recognition using motion capture and RGBD sensors, 2) activity quality assessment for applications in stroke rehabilitation, and 3) dynamical scene classification. We provide experimental validation through action and gesture recognition experiments on motion capture and Kinect datasets. In all these scenarios, we show experimental evidence of the favorable properties of the proposed representation.
NASA Astrophysics Data System (ADS)
Ren, Silin; Jin, Xiao; Chan, Chung; Jian, Yiqiang; Mulnix, Tim; Liu, Chi; Carson, Richard E
2017-06-01
Data-driven respiratory gating techniques were developed to correct for respiratory motion in PET studies, without the help of external motion tracking systems. Due to the greatly increased image noise in gated reconstructions, it is desirable to develop a data-driven event-by-event respiratory motion correction method. In this study, using the Centroid-of-distribution (COD) algorithm, we established a data-driven event-by-event respiratory motion correction technique using TOF PET list-mode data, and investigated its performance by comparing with an external system-based correction method. Ten human scans with the pancreatic β-cell tracer 18F-FP-(+)-DTBZ were employed. Data-driven respiratory motions in superior-inferior (SI) and anterior-posterior (AP) directions were first determined by computing the centroid of all radioactive events during each short time frame with further processing. The Anzai belt system was employed to record respiratory motion in all studies. COD traces in both SI and AP directions were first compared with Anzai traces by computing the Pearson correlation coefficients. Then, respiratory gated reconstructions based on either COD or Anzai traces were performed to evaluate their relative performance in capturing respiratory motion. Finally, based on correlations of displacements of organ locations in all directions and COD information, continuous 3D internal organ motion in SI and AP directions was calculated based on COD traces to guide event-by-event respiratory motion correction in the MOLAR reconstruction framework. Continuous respiratory correction results based on COD were compared with that based on Anzai, and without motion correction. Data-driven COD traces showed a good correlation with Anzai in both SI and AP directions for the majority of studies, with correlation coefficients ranging from 63% to 89%. Based on the determined respiratory displacements of pancreas between end-expiration and end-inspiration from gated reconstructions, there was no significant difference between COD-based and Anzai-based methods. Finally, data-driven COD-based event-by-event respiratory motion correction yielded comparable results to that based on Anzai respiratory traces, in terms of contrast recovery and reduced motion-induced blur. Data-driven event-by-event respiratory motion correction using COD showed significant image quality improvement compared with reconstructions with no motion correction, and gave comparable results to the Anzai-based method.
Ren, Silin; Jin, Xiao; Chan, Chung; Jian, Yiqiang; Mulnix, Tim; Liu, Chi; Carson, Richard E
2017-06-21
Data-driven respiratory gating techniques were developed to correct for respiratory motion in PET studies, without the help of external motion tracking systems. Due to the greatly increased image noise in gated reconstructions, it is desirable to develop a data-driven event-by-event respiratory motion correction method. In this study, using the Centroid-of-distribution (COD) algorithm, we established a data-driven event-by-event respiratory motion correction technique using TOF PET list-mode data, and investigated its performance by comparing it with an external system-based correction method. Ten human scans with the pancreatic β-cell tracer 18F-FP-(+)-DTBZ were employed. Data-driven respiratory motions in superior-inferior (SI) and anterior-posterior (AP) directions were first determined by computing the centroid of all radioactive events during each short time frame with further processing. The Anzai belt system was employed to record respiratory motion in all studies. COD traces in both SI and AP directions were first compared with Anzai traces by computing the Pearson correlation coefficients. Then, respiratory gated reconstructions based on either COD or Anzai traces were performed to evaluate their relative performance in capturing respiratory motion. Finally, based on correlations of displacements of organ locations in all directions and COD information, continuous 3D internal organ motion in SI and AP directions was calculated based on COD traces to guide event-by-event respiratory motion correction in the MOLAR reconstruction framework. Continuous respiratory correction results based on COD were compared with those based on Anzai, and with no motion correction. Data-driven COD traces showed a good correlation with Anzai in both SI and AP directions for the majority of studies, with correlation coefficients ranging from 63% to 89%. Based on the determined respiratory displacements of the pancreas between end-expiration and end-inspiration from gated reconstructions, there was no significant difference between COD-based and Anzai-based methods. Finally, data-driven COD-based event-by-event respiratory motion correction yielded comparable results to those based on Anzai respiratory traces, in terms of contrast recovery and reduced motion-induced blur. Data-driven event-by-event respiratory motion correction using COD showed significant image quality improvement compared with reconstructions with no motion correction, and gave comparable results to the Anzai-based method.
Representing the thermal state in time-dependent density functional theory
Modine, N. A.; Hatcher, R. M.
2015-05-28
Classical molecular dynamics (MD) provides a powerful and widely used approach to determining thermodynamic properties by integrating the classical equations of motion of a system of atoms. Time-Dependent Density Functional Theory (TDDFT) provides a powerful and increasingly useful approach to integrating the quantum equations of motion for a system of electrons. TDDFT efficiently captures the unitary evolution of a many-electron state by mapping the system into a fictitious non-interacting system. In analogy to MD, one could imagine obtaining the thermodynamic properties of an electronic system from a TDDFT simulation in which the electrons are excited from their ground state by a time-dependent potential and then allowed to evolve freely in time while statistical data are captured from periodic snapshots of the system. For a variety of systems (e.g., many metals), the electrons reach an effective state of internal equilibrium due to electron-electron interactions on a time scale that is short compared to electron-phonon equilibration. During the initial time-evolution of such systems following electronic excitation, electron-phonon interactions should be negligible, and therefore, TDDFT should successfully capture the internal thermalization of the electrons. However, it is unclear how TDDFT represents the resulting thermal state. In particular, the thermal state is usually represented in quantum statistical mechanics as a mixed state, while the occupations of the TDDFT wave functions are fixed by the initial state in TDDFT. Two key questions involve (1) reformulating quantum statistical mechanics so that thermodynamic expectations can be obtained as an unweighted average over a set of many-body pure states and (2) constructing a family of non-interacting (single determinant) TDDFT states that approximate the required many-body states for the canonical ensemble. In Section II, we will address these questions by first demonstrating that thermodynamic expectations can be evaluated by averaging over certain many-body pure states, which we will call thermal states, and then constructing TDDFT states that approximate these thermal states. In Section III, we will present some numerical tests of the resulting theory, and in Section IV, we will summarize our main results and discuss some possible future directions for this work.
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
2008-07-02
CAPE CANAVERAL, Fla. – A United Space Alliance technician (right) hands off a component of the Orion Crew Module mockup to one of the other technicians inside the mockup. The technicians wear motion capture suits. The motion tracking aims to improve the efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup, which was created and built at the New York Institute of Technology by a team led by Prof. Peter Voci, MFA Director at the College of Arts and Sciences. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.
Gibbon, K C; Debuse, D; Caplan, N
2013-10-01
The aim of this study was to determine the kinematic differences between movements on a new exercise device (EX) that promotes a stable trunk over a moving, unstable base of support, and overground walking (OW). Sixteen male participants performed EX and OW trials while their movements were tracked using a 3D motion capture system. Trunk and pelvis range of motion (ROM) were similar between EX and OW in the sagittal and frontal planes, and reduced for EX in the transverse plane. The pelvis was tilted anteriorly, on average, by about 16° in EX compared to OW. Hip and knee ROM were reduced in EX compared to OW. The exercise device appears to promote similar or reduced lumbopelvic motion, compared to walking, which could contribute to more tonic activity of the local lumbopelvic musculature. Copyright © 2013 Elsevier Ltd. All rights reserved.
Teasing Apart Complex Motions using VideoPoint
NASA Astrophysics Data System (ADS)
Fischer, Mark
2002-10-01
Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object filmed by a camera that is moving and rotating in the same plane, will be discussed. Methods for extracting the desired object motion will be given, as well as suggestions for shooting more easily analyzable video clips.
Cannell, John; Jovic, Emelyn; Rathjen, Amy; Lane, Kylie; Tyson, Anna M; Callisaya, Michele L; Smith, Stuart T; Ahuja, Kiran Dk; Bird, Marie-Louise
2018-02-01
To compare the efficacy of novel interactive, motion capture-rehabilitation software to usual care stroke rehabilitation on physical function. Randomized controlled clinical trial. Two subacute hospital rehabilitation units in Australia. In all, 73 people less than six months after stroke with reduced mobility and clinician determined capacity to improve. Both groups received functional retraining and individualized programs for up to an hour, on weekdays for 8-40 sessions (dose matched). For the intervention group, this individualized program used motivating virtual reality rehabilitation and novel gesture controlled interactive motion capture software. For usual care, the individualized program was delivered in a group class on one unit and by rehabilitation assistant 1:1 on the other. Primary outcome was standing balance (functional reach). Secondary outcomes were lateral reach, step test, sitting balance, arm function, and walking. Participants (mean 22 days post-stroke) attended mean 14 sessions. Both groups improved (mean (95% confidence interval)) on primary outcome functional reach (usual care 3.3 (0.6 to 5.9), intervention 4.1 (-3.0 to 5.0) cm) with no difference between groups ( P = 0.69) on this or any secondary measures. No differences between the rehabilitation units were seen except in lateral reach (less affected side) ( P = 0.04). No adverse events were recorded during therapy. Interactive, motion capture rehabilitation for inpatients post stroke produced functional improvements that were similar to those achieved by usual care stroke rehabilitation, safely delivered by either a physical therapist or a rehabilitation assistant.
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-09-01
To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
Duran, Cassidy; Estrada, Sean; O'Malley, Marcia; Lumsden, Alan B; Bismuth, Jean
2015-02-01
Endovascular robotics systems, now approved for clinical use in the United States and Europe, are seeing rapid growth in interest. Determining who has sufficient expertise for safe and effective clinical use remains elusive. Our aim was to analyze performance on a robotic platform to determine what defines an expert user. During three sessions, 21 subjects with a range of endovascular expertise and endovascular robotic experience (novices <2 hours to moderate-extensive experience with >20 hours) performed four tasks on a training model. All participants completed a 2-hour training session on the robot by a certified instructor. Completion times, global rating scores, and motion metrics were collected to assess performance. Electromagnetic tracking was used to capture and to analyze catheter tip motion. Motion analysis was based on derivations of speed and position including spectral arc length and total number of submovements (inversely proportional to proficiency of motion) and duration of submovements (directly proportional to proficiency). Ninety-eight percent of competent subjects successfully completed the tasks within the given time, whereas 91% of noncompetent subjects were successful. There was no significant difference in completion times between competent and noncompetent users except for the posterior branch (151 s:105 s; P = .01). The competent users had more efficient motion as evidenced by statistically significant differences in the metrics of motion analysis. Users with >20 hours of experience performed significantly better than those newer to the system, independent of prior endovascular experience. This study demonstrates that motion-based metrics can differentiate novice from trained users of flexible robotics systems for basic endovascular tasks. Efficiency of catheter movement, consistency of performance, and learning curves may help identify users who are sufficiently trained for safe clinical use of the system. This work will help identify the learning curve and specific movements that translate to expert robotic navigation. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Assessment method of digital Chinese dance movements based on virtual reality technology
NASA Astrophysics Data System (ADS)
Feng, Wei; Shao, Shuyuan; Wang, Shumin
2008-03-01
Virtual reality has played an increasing role in such areas as medicine, architecture, aviation, engineering science and advertising. However, in the art fields, virtual reality is still in its infancy in the representation of human movements. Based on the techniques of motion capture and the reuse of motion capture data in a virtual reality environment, this paper presents an assessment method for quantitatively evaluating dancers' basic Arm Position movements in Chinese traditional dance. The data for quantifying traits of dance motions are defined and measured on dances performed by an expert and two beginners, with results indicating that they are beneficial for evaluating dance skills and distinctiveness, and that the proposed assessment method for digital Chinese dance movements based on virtual reality technology is valid and feasible.
The Hills are Alive: Dynamic Ridges and Valleys in a Strike-Slip Environment
NASA Astrophysics Data System (ADS)
Duvall, A. R.; Tucker, G. E.
2014-12-01
Strike-slip fault zones have long been known for characteristic landforms such as offset and deflected rivers, linear strike-parallel valleys, and shutter ridges. Despite their common presence, questions remain about the mechanics of how these landforms arise or how their form varies as a function of slip rate, geomorphic process, or material properties. We know even less about what happens far from the fault, in drainage basin headwaters, as a result of strike-slip motion. Here we explore the effects of horizontal fault slip rate, bedrock erodibility, and hillslope diffusivity on river catchments that drain across an active strike-slip fault using the CHILD landscape evolution model. Model calculations demonstrate that lateral fault motion induces a permanent state of landscape disequilibrium brought about by fault offset-generated river lengthening alternating with abrupt shortening due to stream capture. This cycle of shifting drainage patterns and base level change continues until fault motion ceases thus creating a perpetual state of transience unique to strike-slip systems. Our models also make the surprising prediction that, in some cases, hillslope ridges oriented perpendicular to the fault migrate laterally in conjunction with fault motion. Ridge migration happens when slip rate is slow enough and/or diffusion and river incision are fast enough that the hillslopes can respond to the disequilibrium brought about by strike-slip motion. In models with faster slip rates, stronger rocks or less-diffusive hillslopes, ridge mobility is limited or arrested despite the fact that the process of river lengthening and capture continues. Fast-slip cases also develop prominent steep fault-facing hillslope facets proximal to the fault valley and along-strike topographic profiles with reduced local relief between ridges and valleys. Our results demonstrate the dynamic nature of strike-slip landscapes that vary systematically with a ratio of bedrock erodibility (K) and hillslope diffusivity (D) to the rate of horizontal advection of topography (v). These results also reveal a potential set of recognizable geomorphic signatures within strike-slip systems that should be looked to as indicators of fault activity and/or material properties.
Effect of Dimension and Shape of Magnet on the Performance AC Generator with Translation Motion
NASA Astrophysics Data System (ADS)
Indriani, A.; Dimas, S.; Hendra
2018-02-01
The development of power plants using renewable energy sources is very rapid. Renewable energy sources include solar energy, wind energy, ocean wave energy and other sources. All of these renewable energy sources require a conversion device, or a change-of-motion system, to produce electrical energy. One such device is a generator, whose working principle is to convert motion (mechanical) energy into electrical energy using a rotary shaft, blades and other moving components. Generators come in several types, based on rotational motion or linear (translational) motion, and consist of components such as a rotor, a stator and an armature. The rotor and stator carry the magnets and winding coils that generate the electromotive force. The working principle of an AC generator with linear (translational) motion also follows Faraday's law: magnetic induction from a moving permanent magnet produces a changing magnetic flux, which is captured by the stator and converted into electrical energy. Linear-motion generators include linear induction machines, wound-field synchronous machines, and permanent-magnet synchronous machines [1]. The performance of a synchronous generator with translational motion is influenced by the magnet type, magnet shape, coil winding, magnet-coil spacing and other factors. This paper focuses on neodymium magnets with varying shapes, numbers of coil windings and magnet gap distances. The generator operates through a pneumatic mechanism (PLTGL) for power plant systems. Testing the performance of the translational-motion AC generator showed that the maximum voltage was 63 Volt for a winding-coil wire diameter of 0.15 mm, 13000 coil turns and a magnet distance of 20 mm. Regarding the effect of magnet shape, the maximum voltage occurred for a rectangular 30x20x5 mm magnet, at 4.64 Volt. Regarding the effect of winding-coil wire diameter, the voltage and power were 14.63 V and 17.82 W at a wire diameter of 0.7 mm with 1260 coil turns and a magnet distance of 25 mm.
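For the induction principle invoked above, a short numerical illustration of Faraday's law (all parameter values are illustrative placeholders, not the paper's measured configuration) shows how turn count, peak flux linkage, and oscillation frequency set the open-circuit EMF of a translational generator:

```python
import numpy as np

# Illustrative parameters only (not the paper's measured configuration).
N_turns = 13000            # coil turns, matching one tested winding
phi_peak = 2.4e-5          # peak flux linkage per turn (Wb), assumed
f_motion = 5.0             # oscillation frequency of the translating magnet (Hz), assumed

t = np.linspace(0.0, 1.0, 10_000)
phi = phi_peak * np.cos(2 * np.pi * f_motion * t)   # flux through one turn
emf = -N_turns * np.gradient(phi, t)                # Faraday's law: e = -N dPhi/dt
print(f"peak EMF = {np.abs(emf).max():.1f} V")      # about N * phi_peak * 2*pi*f, i.e. roughly 9.8 V here
```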
Early Detection of Infection in Pigs through an Online Monitoring System.
Martínez-Avilés, M; Fernández-Carrión, E; López García-Baones, J M; Sánchez-Vizcaíno, J M
2017-04-01
Late detection of emergency diseases causes significant economic losses for pig producers and governments. As the first signs of animal infection are usually fever and reduced motion that lead to reduced consumption of water and feed, we developed a novel smart system to monitor body temperature and motion in real time, facilitating the early detection of infectious diseases. In this study, carried out within the framework of the European Union research project Rapidia Field, we tested the smart system on 10 pigs experimentally infected with two doses of an attenuated strain of African swine fever. Biosensors and an accelerometer embedded in an eartag captured data before and after infection, and video cameras were used to monitor the animals 24 h per day. The results showed that in 8 of 9 cases, the monitoring system detected infection onset as an increase in body temperature and decrease in movement before or simultaneously with fever detection based on rectal temperature measurement, observation of clinical signs, the decrease in water consumption or positive qPCR detection of virus. In addition, this decrease in movement was reliably detected using automatic analysis of video images therefore providing an inexpensive alternative to direct motion measurement. The system can be set up to alert staff when high fever, reduced motion or both are detected in one or more animals. This system may be useful for monitoring sentinel herds in real time, considerably reducing the financial and logistical costs of periodic sampling and increasing the chances of early detection of infection. © 2015 Blackwell Verlag GmbH.
Fuzzy logic-based flight control system design
NASA Astrophysics Data System (ADS)
Nho, Kyungmoon
The application of fuzzy logic to aircraft motion control is studied in this dissertation. The self-tuning fuzzy techniques are developed by changing input scaling factors to obtain a robust fuzzy controller over a wide range of operating conditions and nonlinearities for a nonlinear aircraft model. It is demonstrated that the properly adjusted input scaling factors can meet the required performance and robustness in a fuzzy controller. For a simple demonstration of the easy design and control capability of a fuzzy controller, a proportional-derivative (PD) fuzzy control system is compared to the conventional controller for a simple dynamical system. This thesis also describes the design principles and stability analysis of fuzzy control systems by considering the key features of a fuzzy control system including the fuzzification, rule-base and defuzzification. The wing-rock motion of slender delta wings, a linear aircraft model and the six degree of freedom nonlinear aircraft dynamics are considered to illustrate several self-tuning methods employing change in input scaling factors. Finally, this dissertation is concluded with numerical simulation of glide-slope capture in windshear demonstrating the robustness of the fuzzy logic based flight control system.
Motion prediction of a non-cooperative space target
NASA Astrophysics Data System (ADS)
Zhou, Bang-Zhao; Cai, Guo-Ping; Liu, Yun-Meng; Liu, Pan
2018-01-01
Capturing a non-cooperative space target is a tremendously challenging research topic. Effective acquisition of motion information about the space target is the prerequisite for realizing target capture. In this paper, motion prediction of a free-floating non-cooperative target in space is studied and a motion prediction algorithm is proposed. In order to predict the motion of the free-floating non-cooperative target, the dynamic parameters of the target, such as its inertia, angular momentum and kinetic energy, must first be identified (estimated); the predicted motion of the target can then be acquired by substituting these identified parameters into the Euler's equations of the target. Accurate prediction needs precise identification. This paper presents an effective method to identify these dynamic parameters of a free-floating non-cooperative target. The method is based on two steps: (1) a rough estimate of the parameters is computed using motion observation data of the target, and (2) the best estimate of the parameters is found by an optimization method. In the optimization problem, the objective function is based on the difference between the observed and the predicted motion, and the interior-point method (IPM) is chosen as the optimization algorithm, which starts at the rough estimate obtained in the first step and finds a global minimum of the objective function with the guidance of the objective function's gradient. The IPM search for the global minimum is therefore fast, and an accurate identification can be obtained in time. The numerical results show that the proposed motion prediction algorithm is able to predict the motion of the target.
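Once the inertia ratios have been identified, the prediction stage amounts to propagating the torque-free Euler equations. The sketch below (Python/SciPy; the inertia values and initial tumble rate are placeholders, and the identification/optimization stage is omitted) illustrates that propagation:

```python
import numpy as np
from scipy.integrate import solve_ivp

def euler_rhs(t, w, inertia):
    """Torque-free Euler equations for the body-frame angular velocity w = (wx, wy, wz)."""
    Ix, Iy, Iz = inertia
    wx, wy, wz = w
    return [(Iy - Iz) * wy * wz / Ix,
            (Iz - Ix) * wz * wx / Iy,
            (Ix - Iy) * wx * wy / Iz]

def predict_motion(w0, inertia, t_end=60.0):
    """Propagate the target's rotational state once its inertia ratios have been identified."""
    return solve_ivp(euler_rhs, (0.0, t_end), w0, args=(inertia,),
                     dense_output=True, rtol=1e-9, atol=1e-12)

# Placeholder values standing in for the identified parameters and observed tumble rate.
sol = predict_motion(w0=[0.05, 0.20, 0.01], inertia=[1.0, 1.4, 1.8])
print(sol.sol(30.0))   # predicted body-frame angular velocity 30 s ahead (rad/s)
```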
Are recent empirical directivity models sufficient in capturing near-fault directivity effect?
NASA Astrophysics Data System (ADS)
Chen, Yen-Shin; Cotton, Fabrice; Pagani, Marco; Weatherill, Graeme; Reshi, Owais; Mai, Martin
2017-04-01
It has been widely observed that the ground motion variability in the near field can be significantly higher than that commonly reported in published GMPEs, and this has been suggested to be a consequence of directivity. To capture the spatial variation in ground motion amplitude and frequency caused by the near-fault directivity effect, several models for engineering applications have been developed using empirical data or, more recently, a combination of empirical and simulation data. Many research works have indicated that the large velocity pulses mainly observed in the near field are primarily related to slip heterogeneity (i.e., asperities), suggesting that slip heterogeneity is a more dominant controlling factor than the rupture velocity or source rise time function. The first generation of broadband directivity models for application in ground motion prediction does not account for heterogeneity of slip and rupture speed. With the increased availability of strong motion recordings (e.g., the NGA-West 2 database) in the near-fault region, directivity models moved from broadband to narrowband formulations to include the magnitude dependence of the period of the rupture directivity pulses, wherein the pulses are believed to be closely related to the heterogeneity of the slip distribution. After decades of directivity model development, does the latest generation of models, i.e. the one including narrowband directivity models, better capture the near-fault directivity effects, particularly in the presence of strong slip heterogeneity? To address this question, a set of simulated motions for an earthquake rupture scenario, with various kinematic slip models and hypocenter locations, is used as a basis for a comparison with the directivity models proposed by the NGA-West 2 project for application with ground motion prediction equations incorporating a narrowband directivity model. The aim of this research is to gain better insight into the accuracy of narrowband directivity models under conditions commonly encountered in the real world. Our preliminary result shows that empirical models including directivity factors better predict physics-based ground motions and their spatial variability than classical empirical models. However, the results clearly indicate that it is still a challenge for the directivity models to capture the strong directivity effect if a high level of slip heterogeneity is involved during the source rupture process.
Bilayer segmentation of webcam videos using tree-based classifiers.
Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan
2011-01-01
This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.
Deducing the reachable space from fingertip positions.
Hai-Trieu Pham; Pathirana, Pubudu N
2015-01-01
The reachable space of the hand has received significant interest in the past from relevant medical researchers and health professionals. The reachable space is often computed from the joint angles acquired from a motion capture system such as gloves or markers attached to each bone of the finger. However, the contact between the hand and the device can cause difficulties, particularly for hands with injuries, burns or certain dermatological conditions. This paper introduces an approach to find the reachable space of the hand in a non-contact measurement form utilizing the Leap Motion Controller. The approach is based on the analysis of each position in the motion path of the fingertip acquired by the Leap Motion Controller. For each position of the fingertip, the inverse kinematic problem is solved under the physiological multiple constraints of the human hand to find the set of all possible configurations of the three finger joints. Subsequently, all such sets are unified to form a set of all possible configurations specific to that motion. Finally, the reachable space is computed from the configurations corresponding to the complete extension and the complete flexion of the finger joint angles in this set.
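A simplified version of the idea, for a planar three-joint finger with assumed phalanx lengths, joint limits, and a common PIP-DIP coupling constraint (all illustrative, not the paper's hand model), is to sweep the constrained joint space and collect the fingertip positions as an approximation of the reachable space:

```python
import numpy as np

# Assumed planar finger model: phalanx lengths (mm) and flexion limits for MCP, PIP, DIP.
L = np.array([40.0, 25.0, 18.0])
LIMITS = [(0.0, np.deg2rad(90.0)), (0.0, np.deg2rad(100.0)), (0.0, np.deg2rad(80.0))]

def fingertip(thetas):
    """Planar forward kinematics: fingertip position for flexion angles (MCP, PIP, DIP)."""
    a = np.cumsum(thetas)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def reachable_space(n=30, coupling_tol=np.deg2rad(15.0)):
    """Sweep the constrained joint space and collect fingertip positions.
    A common physiological coupling (DIP about 2/3 of PIP) stands in for the paper's constraints."""
    grids = [np.linspace(lo, hi, n) for lo, hi in LIMITS]
    points = []
    for mcp in grids[0]:
        for pip in grids[1]:
            for dip in grids[2]:
                if abs(dip - 2.0 / 3.0 * pip) < coupling_tol:
                    points.append(fingertip([mcp, pip, dip]))
    return np.array(points)

cloud = reachable_space()
print(cloud.shape)   # point cloud approximating the fingertip's reachable space
```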
Magnetic domain wall creep and depinning: A scalar field model approach
NASA Astrophysics Data System (ADS)
Caballero, Nirvana B.; Ferrero, Ezequiel E.; Kolton, Alejandro B.; Curiale, Javier; Jeudy, Vincent; Bustingorry, Sebastian
2018-06-01
Magnetic domain wall motion is at the heart of new magnetoelectronic technologies and hence the need for a deeper understanding of domain wall dynamics in magnetic systems. In this context, numerical simulations using simple models can capture the main ingredients responsible for the complex observed domain wall behavior. We present a scalar field model for the magnetization dynamics of quasi-two-dimensional systems with a perpendicular easy axis of magnetization which allows a direct comparison with typical experimental protocols, used in polar magneto-optical Kerr effect microscopy experiments. We show that the thermally activated creep and depinning regimes of domain wall motion can be reached and the effect of different quenched disorder implementations can be assessed with the model. In particular, we show that the depinning field increases with the mean grain size of a Voronoi tessellation model for the disorder.
3D Holographic Observatory for Long-term Monitoring of Complex Behaviors in Drosophila
NASA Astrophysics Data System (ADS)
Kumar, S. Santosh; Sun, Yaning; Zou, Sige; Hong, Jiarong
2016-09-01
Drosophila is an excellent model organism for understanding cognitive function, aging and neurodegeneration in humans. The effects of aging and other long-term dynamics on behavior serve as important biomarkers for identifying such changes to the brain. In this regard, we present a new imaging technique for lifetime monitoring of Drosophila in 3D, at spatial and temporal resolutions capable of resolving the motion of limbs and wings, using holographic principles. The developed system is capable of monitoring and extracting various behavioral parameters, such as ethograms and spatial distributions, from a group of flies simultaneously. This technique can image complicated leg and wing motions of flies at a resolution that allows capturing specific landing responses from the same data set. Overall, this system provides a unique opportunity for high-throughput screening of behavioral changes in 3D over the long term in Drosophila.
Conformal piezoelectric energy harvesting and storage from motions of the heart, lung, and diaphragm
Dagdeviren, Canan; Yang, Byung Duk; Su, Yewang; Tran, Phat L.; Joe, Pauline; Anderson, Eric; Xia, Jing; Doraiswamy, Vijay; Dehdashti, Behrooz; Feng, Xue; Lu, Bingwei; Poston, Robert; Khalpey, Zain; Ghaffari, Roozbeh; Huang, Yonggang; Slepian, Marvin J.; Rogers, John A.
2014-01-01
Here, we report advanced materials and devices that enable high-efficiency mechanical-to-electrical energy conversion from the natural contractile and relaxation motions of the heart, lung, and diaphragm, demonstrated in several different animal models, each of which has organs with sizes that approach human scales. A cointegrated collection of such energy-harvesting elements with rectifiers and microbatteries provides an entire flexible system, capable of viable integration with the beating heart via medical sutures and operation with efficiencies of ∼2%. Additional experiments, computational models, and results in multilayer configurations capture the key behaviors, illuminate essential design aspects, and offer sufficient power outputs for operation of pacemakers, with or without battery assist. PMID:24449853
Lockhart, Thurmon E; Soangra, Rahul; Zhang, Jian; Wu, Xuefan
2013-01-01
Mobility characteristics associated with activity of daily living such as sitting down, lying down, rising up, and walking are considered to be important in maintaining functional independence and healthy life style especially for the growing elderly population. Characteristics of postural transitions such as sit-to-stand are widely used by clinicians as a physical indicator of health, and walking is used as an important mobility assessment tool. Many tools have been developed to assist in the assessment of functional levels and to detect a person's activities during daily life. These include questionnaires, observation, diaries, kinetic and kinematic systems, and validated functional tests. These measures are costly and time consuming, rely on subjective patient recall and may not accurately reflect functional ability in the patient's home. In order to provide a low-cost, objective assessment of functional ability, inertial measurement unit (IMU) using MEMS technology has been employed to ascertain ADLs. These measures facilitate long-term monitoring of activity of daily living using wearable sensors. IMU systems are desirable in monitoring human postures since they respond to both frequency and the intensity of movements and measure both dc (gravitational acceleration vector) and ac (acceleration due to body movement) components at a low cost. This has enabled the development of a small, lightweight, portable system that can be worn by a free-living subject without motion impediment - TEMPO (Technology Enabled Medical Precision Observation). Using this IMU system, we acquired indirect measures of biomechanical variables that can be used as an assessment of individual mobility characteristics with accuracy and recognition rates that are comparable to the modern motion capture systems. In this study, five subjects performed various ADLs and mobility measures such as posture transitions and gait characteristics were obtained. We developed a postural event detection and classification algorithm using denoised signals from a single wireless IMU placed at the sternum. The algorithm was further validated and verified with a motion capture system in a laboratory environment. Wavelet denoising highlighted postural events and transition durations that further provided clinical information on postural control and motor coordination. The presented method can be applied in real life ambulatory monitoring approaches for assessing the condition of the elderly.
Wavelet based automated postural event detection and activity classification with single IMU (TEMPO)
Lockhart, Thurmon E.; Soangra, Rahul; Zhang, Jian; Wu, Xuefang
2013-01-01
Mobility characteristics associated with activity of daily living such as sitting down, lying down, rising up, and walking are considered to be important in maintaining functional independence and healthy life style especially for the growing elderly population. Characteristics of postural transitions such as sit-to-stand are widely used by clinicians as a physical indicator of health, and walking is used as an important mobility assessment tool. Many tools have been developed to assist in the assessment of functional levels and to detect a person’s activities during daily life. These include questionnaires, observation, diaries, kinetic and kinematic systems, and validated functional tests. These measures are costly and time consuming, rely on subjective patient recall and may not accurately reflect functional ability in the patient’s home. In order to provide a low-cost, objective assessment of functional ability, inertial measurement unit (IMU) using MEMS technology has been employed to ascertain ADLs. These measures facilitate long-term monitoring of activity of daily living using wearable sensors. IMU system are desirable in monitoring human postures since they respond to both frequency and the intensity of movements and measure both dc (gravitational acceleration vector) and ac (acceleration due to body movement) components at a low cost. This has enabled the development of a small, lightweight, portable system that can be worn by a free-living subject without motion impediment - TEMPO. Using the TEMPO system, we acquired indirect measures of biomechanical variables that can be used as an assessment of individual mobility characteristics with accuracy and recognition rates that are comparable to the modern motion capture systems. In this study, five subjects performed various ADLs and mobility measures such as posture transitions and gait characteristics were obtained. We developed postural event detection and classification algorithm using denoised signals from single wireless inertial measurement unit (TEMPO) placed at sternum. The algorithm was further validated and verified with motion capture system in laboratory environment. Wavelet denoising highlighted postural events and transition durations that further provided clinical information on postural control and motor coordination. The presented method can be applied in real life ambulatory monitoring approaches for assessing condition of elderly. PMID:23686204
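A minimal sketch of the wavelet-denoising step described above (generic soft thresholding with a universal threshold and a simple rate-threshold event rule; the wavelet choice, thresholds, and synthetic trace are assumptions, not the TEMPO algorithm itself):

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=5):
    """Soft-threshold wavelet denoising of a 1-D IMU trace (universal threshold)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from the finest detail scale
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def detect_transitions(denoised, fs, thresh=0.5):
    """Flag candidate postural transitions where the denoised signal's rate of change is large."""
    rate = np.gradient(denoised) * fs
    return np.flatnonzero(np.abs(rate) > thresh) / fs          # event times in seconds

# Toy sternum-IMU trace: two sit-to-stand-like level shifts buried in sensor noise.
fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
raw = np.tanh(t - 10.0) + np.tanh(t - 20.0) + np.random.normal(0.0, 0.1, t.size)
events = detect_transitions(wavelet_denoise(raw), fs)
print(events.min(), events.max())   # event times cluster around the simulated transitions near 10 s and 20 s
```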
Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking
NASA Astrophysics Data System (ADS)
Antonya, C.
2017-12-01
Optical tracking of users and various technical systems is becoming more and more popular. It consists of analysing sequences of recorded images using video capturing devices and image processing algorithms. The returned data contain mainly point clouds, coordinates of markers or coordinates of points of interest. These data can be used for retrieving information related to the geometry of the objects, but also to extract parameters for the analytical model of the system, useful in a variety of computer-aided engineering simulations. The parameter identification of joints deals with the extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time series of the markers' positions. The least-squares method was used for fitting the data to different geometrical shapes (ellipse, circle, plane) and for obtaining the position and orientation of revolute joints.
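For the revolute-joint case, the core of the parameter identification is a least-squares circle fit to the marker trajectory (in 3D this follows a plane fit and projection). The planar sketch below uses the algebraic Kasa fit; the hinge location, radius, and noise level are illustrative:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit: solve x^2 + y^2 + a*x + b*y + c = 0."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    centre = np.array([-a / 2.0, -b / 2.0])
    radius = np.sqrt(centre @ centre - c)
    return centre, radius

# Noisy marker positions on a link swinging about an (unknown) hinge at (120, 80) mm, radius 65 mm.
rng = np.random.default_rng(1)
ang = rng.uniform(0.2, 2.4, 400)                               # a partial sweep of the joint
pts = np.column_stack([120.0 + 65.0 * np.cos(ang), 80.0 + 65.0 * np.sin(ang)])
pts += rng.normal(0.0, 0.5, pts.shape)
centre, radius = fit_circle(pts)
print(centre, radius)   # recovered joint centre and lever-arm length, close to (120, 80) and 65
```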
Mapping the stability field of Jupiter Trojans
NASA Technical Reports Server (NTRS)
Levison, H. F.; Shoemaker, E. M.; Wolfe, R. F.
1991-01-01
Jupiter Trojans are a remnant of outer solar system planetesimals captured into stable or quasistable libration about the 1:1 resonance with the mean motion of Jupiter. The observed swarms of Trojans may provide insight into the original mass of condensed solids in the zone from which the Jovian planets accumulated, provided that the mechanisms of capture can be understood. As the first step toward this understanding, the stability field of Trojans was mapped in the coordinates of proper eccentricity, e(sub p), and libration amplitude, D. To accomplish this mapping, the orbits of 100 particles with e(sub p) in the range of 0 to 0.8 and D in the range 0 to 140 deg were numerically integrated. Orbits of the Sun, the four Jovian planets, and the massless particles were integrated as a full N-body system, in a barycentric frame, using a fourth-order symplectic scheme.
SMART USE OF COMPUTER-AIDED SPERM ANALYSIS (CASA) TO CHARACTERIZE SPERM MOTION
Computer-aided sperm analysis (CASA) has evolved over the past fifteen years to provide an objective, practical means of measuring and characterizing the velocity and pattern of sperm motion. CASA instruments use video frame-grabber boards to capture multiple images of spermato...
NASA Has Joined America True's Design Mission for 2000
NASA Technical Reports Server (NTRS)
Steele, Gynelle C.
1999-01-01
Engineers at the NASA Lewis Research Center will support the America True design team led by America's Cup innovator Phil Kaiko. The joint effort between NASA and America True is encouraged by Mission HOME, the official public awareness campaign of the U.S. space community. NASA Lewis and America True have entered into a Space Act Agreement to focus on the interaction between the airfoil and the large deformation of the pretensioned sails and rigs, along with the dynamic motions related to the boat motions. This work will require a coupled fluid and structural simulation. Included in the simulation will be both a steady-state capability, to capture the quasi-static interactions between the air loads and sail geometry and the lift and drag on the boat, and a transient capability, to capture the sail/mast pumping effects resulting from hull motions.
Marker optimization for facial motion acquisition and deformation.
Le, Binh H; Zhu, Mingyang; Deng, Zhigang
2013-11-01
A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layout. Despite its wide range of potential applications, this problem has not yet been systematically explored to date. This paper describes an approach to compute optimized marker layouts for facial motion acquisition as the optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.
NASA Astrophysics Data System (ADS)
Chaudhary, Ujwal; Thompson, Bryant; Gonzalez, Jean; Jung, Young-Jin; Davis, Jennifer; Gonzalez, Patricia; Rice, Kyle; Bloyer, Martha; Elbaum, Leonard; Godavarty, Anuradha
2013-03-01
Cerebral palsy (CP) is a term that describes a group of motor impairment syndromes secondary to genetic and/or acquired disorders of the developing brain. In the current study, NIRS and motion capture were used simultaneously to correlate the brain's planning and execution activity with arm movement in healthy individuals. The prefrontal region of the brain is non-invasively imaged using a custom-built continuous-wave near infrared spectroscopy (NIRS) system. The kinematics of the arm movement during the studies is recorded using an infrared-based motion capture system, Qualisys. During the study, the subjects (over 18 years) performed 30 sec of arm movement followed by 30 sec of rest, five times, with both their dominant and non-dominant arm. The optical signal acquired from the NIRS system was processed to elucidate the activation and lateralization in the prefrontal region of participants. The preliminary results show a difference, in terms of change in optical response, between task and rest in healthy adults. Currently, simultaneous NIRS imaging and kinematics data are being acquired in healthy individuals and individuals with CP in order to correlate brain activity with arm movement in real time. The study has significant implications for elucidating, using NIRS, how the functional activity of the brain evolves as the physical movement of the arm evolves. Hence, the study has potential to inform the design of training and rehabilitation regimes for individuals with CP via kinematic monitoring and imaging of brain activity.
NASA Astrophysics Data System (ADS)
Shi, Zhong; Huang, Xuexiang; Hu, Tianjian; Tan, Qian; Hou, Yuzhuo
2016-10-01
Space teleoperation is an important space technology, and human-robot motion similarity can improve the flexibility and intuition of space teleoperation. This paper aims to obtain an appropriate kinematics mapping method of coupled Cartesian-joint space for space teleoperation. First, the coupled Cartesian-joint similarity principles concerning kinematics differences are defined. Then, a novel weighted augmented Jacobian matrix with a variable coefficient (WAJM-VC) method for kinematics mapping is proposed. The Jacobian matrix is augmented to achieve a global similarity of human-robot motion. A clamping weighted least norm scheme is introduced to achieve local optimizations, and the operating ratio coefficient is variable to pursue similarity in the elbow joint. Similarity in Cartesian space and the property of joint constraint satisfaction are analysed to determine the damping factor and clamping velocity. Finally, a teleoperation system based on human motion capture is established, and the experimental results indicate that the proposed WAJM-VC method can improve the flexibility and intuition of space teleoperation to complete complex space tasks.
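The WAJM-VC details are specific to the paper, but the underlying operation, a weighted and damped least-norm resolution of joint motion from a Cartesian error, can be sketched for a planar three-link arm as follows (link lengths, joint weights, and damping are illustrative assumptions):

```python
import numpy as np

L = np.array([0.4, 0.35, 0.25])    # assumed link lengths of a planar 3-link arm (m)

def fk(q):
    """Forward kinematics: end-effector position for joint angles q."""
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q):
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J

def weighted_dls_step(q, x_des, W=np.diag([1.0, 1.0, 2.0]), damping=0.05):
    """One weighted, damped least-norm step: heavily weighted joints move less,
    and the damping factor bounds the step near singularities."""
    J, Winv = jacobian(q), np.linalg.inv(W)
    e = x_des - fk(q)
    return q + Winv @ J.T @ np.linalg.solve(J @ Winv @ J.T + damping**2 * np.eye(2), e)

q = np.array([0.3, 0.4, 0.2])
for _ in range(50):                              # iterate toward a captured hand position
    q = weighted_dls_step(q, x_des=np.array([0.6, 0.5]))
print(fk(q))   # close to [0.6, 0.5] once converged
```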
Transitions between homogeneous phases of polar active liquids
NASA Astrophysics Data System (ADS)
Dauchot, Olivier; Nguyen Thu Lam, Khanh Dang; Schindler, Michael; EC2M Team; PCT Team
2015-03-01
Polar active liquids, composed of aligning self-propelled particles, exhibit large-scale collective motion. Simulations of Vicsek-like models of constant-speed point particles, aligning with their neighbors in the presence of noise, have revealed the existence of a transition towards a true long-range ordered polar-motion phase. Generically, the homogeneous polar state is unstable; non-linear propagative structures develop; and the transition is discontinuous. The long-range dynamics of these systems has been successfully captured using various schemes of kinetic theory. However, the complexity of the dynamics close to the transition has somewhat hindered more basic questions. Is there a simple way to predict the existence and the order of a transition to collective motion for a given microscopic dynamics? What would be the physically meaningful and relevant quantity to answer this question? Here, we tackle these questions, restricting ourselves to the study of the homogeneous phases of polar active liquids in the low-density limit, and obtain a very intuitive understanding of the conditions which particle interactions must satisfy to induce a transition towards collective motion.
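A minimal Vicsek-like update of the kind referenced above (box size, speed, radius, and noise amplitude are illustrative) shows how alignment within a radius plus angular noise produces a polar order parameter that can be tracked across the transition:

```python
import numpy as np

def vicsek_step(pos, theta, L=10.0, v=0.05, r=1.0, eta=0.3, rng=None):
    """One update of a Vicsek-like model: each particle adopts the mean heading of
    neighbours within radius r (periodic box of side L), plus angular noise of amplitude eta."""
    if rng is None:
        rng = np.random.default_rng()
    new_theta = np.empty_like(theta)
    for i in range(len(theta)):
        d = pos - pos[i]
        d -= L * np.round(d / L)                      # minimum-image convention
        nb = np.sum(d * d, axis=1) < r * r            # neighbours (includes the particle itself)
        mean = np.arctan2(np.sin(theta[nb]).mean(), np.cos(theta[nb]).mean())
        new_theta[i] = mean + eta * rng.uniform(-np.pi, np.pi)
    pos = (pos + v * np.column_stack([np.cos(new_theta), np.sin(new_theta)])) % L
    return pos, new_theta

def polarization(theta):
    """Polar order parameter: 0 for random headings, 1 for fully collective motion."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, (300, 2))
theta = rng.uniform(-np.pi, np.pi, 300)
for _ in range(200):
    pos, theta = vicsek_step(pos, theta, rng=rng)
print(polarization(theta))   # grows toward 1 at low noise / high density
```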
Kinematic parameters of signed verbs.
Malaia, Evie; Wilbur, Ronnie B; Milkovic, Marina
2013-10-01
Sign language users recruit physical properties of visual motion to convey linguistic information. Research on American Sign Language (ASL) indicates that signers systematically use kinematic features (e.g., velocity, deceleration) of dominant hand motion for distinguishing specific semantic properties of verb classes in production ( Malaia & Wilbur, 2012a) and process these distinctions as part of the phonological structure of these verb classes in comprehension ( Malaia, Ranaweera, Wilbur, & Talavage, 2012). These studies are driven by the event visibility hypothesis by Wilbur (2003), who proposed that such use of kinematic features should be universal to sign language (SL) by the grammaticalization of physics and geometry for linguistic purposes. In a prior motion capture study, Malaia and Wilbur (2012a) lent support for the event visibility hypothesis in ASL, but there has not been quantitative data from other SLs to test the generalization to other languages. The authors investigated the kinematic parameters of predicates in Croatian Sign Language ( Hrvatskom Znakovnom Jeziku [HZJ]). Kinematic features of verb signs were affected both by event structure of the predicate (semantics) and phrase position within the sentence (prosody). The data demonstrate that kinematic features of motion in HZJ verb signs are recruited to convey morphological and prosodic information. This is the first crosslinguistic motion capture confirmation that specific kinematic properties of articulator motion are grammaticalized in other SLs to express linguistic features.
Satellite attitude motion models for capture and retrieval investigations
NASA Technical Reports Server (NTRS)
Cochran, John E., Jr.; Lahr, Brian S.
1986-01-01
The primary purpose of this research is to provide mathematical models which may be used in the investigation of various aspects of the remote capture and retrieval of uncontrolled satellites. Emphasis has been placed on analytical models; however, to verify analytical solutions, numerical integration must be used. Also, for satellites of certain types, numerical integration may be the only practical or perhaps the only possible method of solution. First, to provide a basis for analytical and numerical work, uncontrolled satellites were categorized using criteria based on: (1) orbital motions, (2) external angular momenta, (3) internal angular momenta, (4) physical characteristics, and (5) the stability of their equilibrium states. Several analytical solutions for the attitude motions of satellite models were compiled, checked, corrected in some minor respects and their short-term prediction capabilities were investigated. Single-rigid-body, dual-spin and multi-rotor configurations are treated. To verify the analytical models and to see how the true motion of a satellite which is acted upon by environmental torques differs from its corresponding torque-free motion, a numerical simulation code was developed. This code contains a relatively general satellite model and models for gravity-gradient and aerodynamic torques. The spacecraft physical model for the code and the equations of motion are given. The two environmental torque models are described.
Cannell, John; Jovic, Emelyn; Rathjen, Amy; Lane, Kylie; Tyson, Anna M; Callisaya, Michele L; Smith, Stuart T; Ahuja, Kiran DK; Bird, Marie-Louise
2017-01-01
Objective: To compare the efficacy of novel interactive, motion capture-rehabilitation software to usual care stroke rehabilitation on physical function. Design: Randomized controlled clinical trial. Setting: Two subacute hospital rehabilitation units in Australia. Participants: In all, 73 people less than six months after stroke with reduced mobility and clinician determined capacity to improve. Interventions: Both groups received functional retraining and individualized programs for up to an hour, on weekdays for 8–40 sessions (dose matched). For the intervention group, this individualized program used motivating virtual reality rehabilitation and novel gesture controlled interactive motion capture software. For usual care, the individualized program was delivered in a group class on one unit and by rehabilitation assistant 1:1 on the other. Main measures: Primary outcome was standing balance (functional reach). Secondary outcomes were lateral reach, step test, sitting balance, arm function, and walking. Results: Participants (mean 22 days post-stroke) attended mean 14 sessions. Both groups improved (mean (95% confidence interval)) on primary outcome functional reach (usual care 3.3 (0.6 to 5.9), intervention 4.1 (−3.0 to 5.0) cm) with no difference between groups (P = 0.69) on this or any secondary measures. No differences between the rehabilitation units were seen except in lateral reach (less affected side) (P = 0.04). No adverse events were recorded during therapy. Conclusion: Interactive, motion capture rehabilitation for inpatients post stroke produced functional improvements that were similar to those achieved by usual care stroke rehabilitation, safely delivered by either a physical therapist or a rehabilitation assistant. PMID:28719977
High speed multiphoton imaging
NASA Astrophysics Data System (ADS)
Li, Yongxiao; Brustle, Anne; Gautam, Vini; Cockburn, Ian; Gillespie, Cathy; Gaus, Katharina; Lee, Woei Ming
2016-12-01
Intravital multiphoton microscopy has emerged as a powerful technique to visualize cellular processes in vivo. Real-time processes revealed through live imaging provide many opportunities to capture cellular activities in living animals. The typical parameters that determine the performance of multiphoton microscopy are speed, field of view, 3D imaging and imaging depth; many of these are important to acquiring data in vivo. Here, we provide a full exposition of a flexible, polygon-mirror-based high-speed laser-scanning multiphoton imaging system, built around a PCI-6110 card (National Instruments) and a high-speed analog frame grabber card (Matrox Solios eA/XA), which allows rapid adjustment of frame rates, i.e. 5 Hz to 50 Hz at 512 × 512 pixels. Furthermore, a motion correction algorithm is used to mitigate motion artifacts. A customized control software called Pscan 1.0 was developed for the system. This is followed by calibration of the imaging performance of the system and a series of quantitative in-vitro and in-vivo imaging experiments in neuronal tissues and mice.
Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study
Al-Naji, Ali; Gibson, Kim; Lee, Sang-Heon; Chahl, Javaan
2017-01-01
The objective of this study was to design a non-invasive system for the observation of respiratory rates and the detection of apnoea using analysis of real-time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen from the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and a frame subtraction technique were used to identify breathing movements by detecting rapid motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, safe and of low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications. PMID:28165382
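The frame-subtraction idea can be sketched as follows (synthetic frames stand in for the magnified Kinect sequences; the threshold, breathing band, and toy chest model are assumptions): count the pixels that differ from an end-expiration reference frame and read the respiratory rate off the dominant frequency of that count.

```python
import numpy as np

def breathing_signal(frames, reference, thresh=40):
    """Reference-frame subtraction: count pixels that differ from an end-expiration
    reference frame, a proxy for how far the chest/abdomen has risen."""
    diffs = np.abs(frames.astype(np.int16) - reference.astype(np.int16))
    return (diffs > thresh).sum(axis=(1, 2))

def rate_bpm(sig, fps):
    """Dominant frequency of the motion signal within a plausible breathing band."""
    sig = sig - sig.mean()
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs > 0.1) & (freqs < 1.5)             # 6 to 90 breaths per minute
    return freqs[band][np.argmax(spec[band])] * 60.0

# Synthetic stand-in for (magnified) frames: a chest patch whose height follows breathing at 0.4 Hz.
fps, secs, f_breath = 30, 20, 0.4
t = np.arange(fps * secs) / fps
frames = np.zeros((len(t), 64, 64), dtype=np.uint8)
for i, ti in enumerate(t):
    h = int(20 + 8 * (1 - np.cos(2 * np.pi * f_breath * ti)))   # chest rise above baseline
    frames[i, 10:10 + h, 16:48] = 200
reference = frames[0]                                            # end-expiration frame
print(rate_bpm(breathing_signal(frames, reference), fps))        # about 24 breaths/min
```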
4K x 2K pixel color video pickup system
NASA Astrophysics Data System (ADS)
Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou
1998-12-01
This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a high enough output data rate for super-high-definition images. The present study is an attempt to fill this gap. The authors solve the problem with a new imaging method in which four HDTV sensors are attached to a new color-separation optics so that their pixel sample patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no image sensors exist for such images.
Satellite capture as a restricted 2 + 2 body problem
NASA Astrophysics Data System (ADS)
Kanaan, Wafaa; Farrelly, David; Lanchares, Víctor
2018-04-01
A restricted 2 + 2 body problem is proposed as a possible mechanism to explain the capture of small bodies by a planet. In particular, we consider two primaries revolving in a circular mutual orbit and two small bodies of equal mass, neither of which affects the motion of the primaries. If the small bodies are temporarily captured in the Hill sphere of the smaller primary, they may get close enough to each other to exchange energy in such a way that one of them becomes permanently captured. Numerical simulations show that capture is possible for both prograde and retrograde orbits.
NASA Astrophysics Data System (ADS)
Beigi, Parmida; Salcudean, Tim; Rohling, Robert; Lessoway, Victoria A.; Ng, Gary C.
2015-03-01
This paper presents a new needle detection technique for ultrasound guided interventions based on the spectral properties of small displacements arising from hand tremour or intentional motion. In a block-based approach, the displacement map is computed for each block of interest versus a reference frame, using an optical flow technique. To compute the flow parameters, the Lucas-Kanade approach is used in a multiresolution and regularized form. A least-squares fit is used to estimate the flow parameters from the overdetermined system of spatial and temporal gradients. Lateral and axial components of the displacement are obtained for each block of interest at consecutive frames. Magnitude-squared spectral coherency is derived between the median displacements of the reference block and each block of interest, to determine the spectral correlation. In vivo images were obtained from the tissue near the abdominal aorta to capture the extreme intrinsic body motion, and insertion images were captured from a tissue-mimicking agar phantom. According to the analysis, both the involuntary and intentional movement of the needle produces coherent displacement with respect to a reference window near the insertion site. Intrinsic body motion also produces coherent displacement with respect to a reference window in the tissue; however, the coherency spectra of intrinsic and needle motion are spectrally distinguishable. Blocks with high spectral coherency at high frequencies are selected, estimating a channel for the needle trajectory. The needle trajectory is detected from the locally thresholded absolute displacement map within the initial estimate. Experimental results show an RMS localization accuracy of 1.0 mm, 0.7 mm, and 0.5 mm for hand tremour, vibrational and rotational needle movements, respectively.
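As a rough illustration of the spectral-coherency selection step, the sketch below computes magnitude-squared coherence between a reference block's displacement series and candidate blocks and keeps those coherent in a high-frequency (tremor) band. It assumes the displacement series have already been produced by an optical-flow stage; the thresholds, band edges and synthetic signals are illustrative, not the paper's values.

```python
import numpy as np
from scipy.signal import coherence

def coherent_blocks(ref_disp, block_disps, fs, f_low=5.0, c_min=0.8):
    """Keep the indices of blocks whose displacement series is spectrally
    coherent with the reference block somewhere above f_low (Hz), a rough
    proxy for the tremor-band coherency criterion.

    ref_disp    : (T,) median displacement of the reference block
    block_disps : (N, T) median displacements of the candidate blocks
    """
    keep = []
    for i, d in enumerate(block_disps):
        f, cxy = coherence(ref_disp, d, fs=fs, nperseg=256)
        high = f >= f_low
        if cxy[high].max() > c_min:
            keep.append(i)
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs, n = 60.0, 3000                                   # 50 s at 60 frames/s
    t = np.arange(n) / fs
    tremor = 0.2 * np.sin(2 * np.pi * 9.0 * t)           # ~9 Hz hand tremor
    ref = tremor + 0.02 * rng.standard_normal(n)         # reference block near insertion
    needle = tremor + 0.02 * rng.standard_normal(n)      # block on the needle shaft
    tissue = 0.2 * np.sin(2 * np.pi * 1.0 * t) + 0.02 * rng.standard_normal(n)
    print(coherent_blocks(ref, np.stack([needle, tissue]), fs))  # expected: [0]
```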
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-01-01
Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time. PMID:24007146
Spatial Attention and Audiovisual Interactions in Apparent Motion
ERIC Educational Resources Information Center
Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles
2007-01-01
In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogunmolu, O; Gans, N; Jiang, S
Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
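A minimal sketch of the closed-loop idea follows, assuming a simple first-order model of the bladder/head dynamics and an illustrative PI law; the gains, time constant and units are invented for the example and are not the authors' controller.

```python
import numpy as np

def regulate_head(target_mm, steps=800, dt=0.05, kp=0.8, ki=0.2, tau=1.5):
    """Toy position-based servo loop: the depth camera measures the head
    displacement, a PI law computes a valve command, and the bladder/head
    pair is modelled as a first-order lag with time constant `tau` (s).
    All gains, units and the plant model are illustrative."""
    pos, integ = 0.0, 0.0
    for _ in range(steps):
        error = target_mm - pos              # camera measurement vs. set-point
        integ += error * dt
        command = kp * error + ki * integ    # signed inflate/deflate request
        pos += dt * (command - pos) / tau    # first-order bladder/head response
    return pos

if __name__ == "__main__":
    print(round(regulate_head(20.0), 2))     # settles close to 20.0 mm
```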
Balance in non-hydrostatic rotating stratified turbulence
NASA Astrophysics Data System (ADS)
McKiver, William J.; Dritschel, David G.
It is now well established that two distinct types of motion occur in geophysical turbulence: slow motions associated with potential vorticity advection and fast oscillations due to inertia-gravity waves. When the flow evolves as if controlled by the advection of potential vorticity as a master variable, this is known as balance. In real geophysical flows, deviations from balance in the form of inertia-gravity waves are generally small when the Rossby number Ro is small and the ratio N/f of buoyancy to Coriolis frequencies is large, and they can be quantified against an optimal potential vorticity balance. A 'nonlinear quasi-geostrophic balance' procedure expands the equations of motion to second order in Rossby number but retains the exact (unexpanded) definition of potential vorticity. This proves crucial for obtaining an accurate estimate of balanced motions. In the analysis of rotating stratified turbulence at Rossby numbers up to O(1) and N/f ≫ 1, this procedure captures a significantly greater fraction of the underlying balance than standard (linear) quasi-geostrophic balance (which is based on the linearized equations about a state of rest). Nonlinear quasi-geostrophic balance also compares well with optimal potential vorticity balance, which captures the greatest fraction of the underlying balance overall. More fundamentally, the results of these analyses indicate that balance dominates in carefully initialized simulations of freely decaying rotating stratified turbulence up to O(1) Rossby numbers when N/f ≫ 1. The fluid motion exhibits important quasi-geostrophic features with, in particular, typical height-to-width scale ratios remaining comparable to f/N.
2006-03-01
strained, unusually tired, weak or out of breath (as cited in Townley, Hair, & Strong, 2005). The data used in these trials yielded tables of maximum...stress when lifting objects near the floor (Chaffin, Andersson, & Martin, 1999). Townley et al. (2005) quantified lifting hazards by using a two...Nachemson, A. (1986). Back injuries in industry: A retrospective study, I. Overview and cost analysis. SPINE, 11, 241-245. Townley, A.C., Hair
Integration of a Motion Capture System into a Spacecraft Simulator for Real-Time Attitude Control
2016-08-16
Attitude Control* Benjamin L. Reifler, University at Buffalo, Buffalo, New York; 1st Lt Dylan R. Penn, Air Force Research Laboratory, Kirtland Air Force...author was an intern at the Air Force Research Laboratory (AFRL) Space Vehicles Directorate. 1 DISTRIBUTION A. Approved for public release: distribution...expertise on this project. I would also like to thank the AFRL Scholars program for the opportunity to participate in this research. References [1
Human Engineering Modeling and Performance Lab Study Project
NASA Technical Reports Server (NTRS)
Oliva-Buisson, Yvette J.
2014-01-01
The HEMAP (Human Engineering Modeling and Performance) Lab is a joint effort between the Industrial and Human Engineering group and the KAVE (Kennedy Advanced Visualizations Environment) group. The lab consists of a sixteen-camera system that is used to capture human motions and operational tasks, through the use of a Velcro suit equipped with sensors, and then simulate these tasks in an ergonomic software package known as Jack. The Jack software is able to identify potential risk hazards.
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A conventional binocular vision imaging system, which has a small field of view, cannot reconstruct the 3-D shape of a dynamic object. We developed a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system, which has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and the results are accurate. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, so this work is of significance for measuring the 3-D morphology of moving objects.
Virtual performer: single camera 3D measuring system for interaction in virtual space
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-10-01
The authors developed interaction media systems in 3D virtual space. In these systems, the musician virtually plays an instrument such as a theremin in the virtual space, or the performer puts on a show using a virtual character such as a puppet. This interactive virtual media system consists of image capture, measurement of the performer's position, detection and recognition of motions, and synthesis of the video image using a personal computer. In this paper, we propose some applications of interaction media systems: a virtual musical instrument and a superimposed CG character. Moreover, this paper describes the method for measuring the positions of the performer, his/her head and both eyes using a single camera.
Filling gaps in visual motion for target capture
Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco
2015-01-01
A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637
Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models.
Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P H
2016-11-01
Depth sensor based 3D human motion estimation hardware such as the Kinect has made interactive applications more popular recently. However, it is still challenging to accurately recognize postures from a single depth camera due to the inherently noisy data derived from depth images and self-occluding actions performed by the user. In this paper, we propose a new real-time probabilistic framework to enhance the accuracy of live captured postures that belong to one of the action classes in the database. We adopt the Gaussian Process model as a prior to leverage the position data obtained from Kinect and a marker-based motion capture system. We also incorporate a temporal consistency term into the optimization framework to constrain the velocity variations between successive frames. To ensure that the reconstructed posture resembles the accurate parts of the observed posture, we embed a set of joint reliability measurements into the optimization framework. A major drawback of the Gaussian Process is its cubic learning complexity when dealing with a large database, due to the inversion of a covariance matrix. To solve the problem, we propose a new method based on a local mixture of Gaussian Processes, in which Gaussian Processes are defined in local regions of the state space. Due to the significantly decreased sample size in each local Gaussian Process, the learning time is greatly reduced. At the same time, the prediction speed is enhanced, as the weighted mean prediction for a given sample is determined by the nearby local models only. Our system also allows incrementally updating a specific local Gaussian Process in real time, which enhances the likelihood of adapting to run-time postures that are different from those in the database. Experimental results demonstrate that our system can generate high quality postures even under severe self-occlusion situations, which is beneficial for real-time applications such as motion-based gaming and sport training.
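A compact way to see the local-mixture idea: partition the training poses, fit one Gaussian Process per partition, and predict from the nearest local models only. The sketch below uses scikit-learn and invented data; the cluster count, kernel and distance-weighting rule are illustrative, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

class LocalGPMixture:
    """Local mixture of Gaussian Processes: partition the training data with
    k-means and train one GP per partition, so each covariance inversion stays
    small. Prediction uses only the GPs of the nearest clusters."""

    def __init__(self, n_clusters=4, n_nearest=2):
        self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.n_nearest = n_nearest
        self.gps = []

    def fit(self, X, y):
        labels = self.km.fit_predict(X)
        self.gps = []
        for k in range(self.km.n_clusters):
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
            gp.fit(X[labels == k], y[labels == k])
            self.gps.append(gp)
        return self

    def predict(self, X):
        # distance of each query to every cluster centre
        d = np.linalg.norm(X[:, None, :] - self.km.cluster_centers_[None], axis=2)
        out = np.zeros(len(X))
        for i, di in enumerate(d):
            near = np.argsort(di)[: self.n_nearest]
            w = 1.0 / (di[near] + 1e-9)               # inverse-distance weights
            preds = [self.gps[k].predict(X[i : i + 1])[0] for k in near]
            out[i] = np.dot(w, preds) / w.sum()
        return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(300, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)
    model = LocalGPMixture().fit(X, y)
    print(round(float(model.predict(np.array([[1.0, 0.0]]))[0]), 2))  # roughly sin(1) ≈ 0.84
```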
Reading, Stacey A; Prickett, Karel
2013-06-01
New-generation active videogames (AVGs) use motion-capture video cameras to connect a player's arm, leg, and body movements through three-dimensional space to on-screen activity. We sought to determine if the whole-body movements required to play an AVG elicited moderate-intensity physical activity (PA) in children. A secondary aim was to examine the utility of using accelerometry to measure the activity intensity of AVG play in this age group. The PA levels of boys (n=26) and girls (n=15) 5-12 years of age were measured by triaxial accelerometry (n=25) or accelerometry and indirect calorimetry (IC) (n=16) while playing the "Kinect Adventures!" videogame for the Xbox Kinect (Microsoft(®), Redmond, WA) gaming system. The experiment simulated a typical 20-minute in-home free-play gaming session. Using 10-second recording epochs, the average (mean±standard deviation) PA intensity over 20 minutes was 4.4±0.9, 3.2±0.7, and 3.3±0.6 metabolic equivalents (METs) when estimated by IC, vertical-axis accelerometry (Crouter et al. intermittent lifestyle equation for vertical axis counts/10 seconds [Cva2RM]), and vector-magnitude accelerometry (Crouter et al. intermittent lifestyle equation for vector magnitude counts/10 seconds [Cvm2RM]), respectively. In total, 16.9±3.2 (IC), 10.6±4.5 (Cva2RM), and 11.1±3.9 (Cvm2RM) minutes of game playing time were at a 3 MET intensity or higher. In this study, children played the Xbox Kinect AVG at moderate-intensity PA levels. The study also showed that current accelerometry-based methods underestimated the PA of AVG play compared with IC. With proper guidance and recommendations for use, video motion-capture AVG systems could reduce sedentary screen time and increase total daily moderate PA levels for children. Further study of these AVG systems is warranted.
A motion deblurring method with long/short exposure image pairs
NASA Astrophysics Data System (ADS)
Cui, Guangmang; Hua, Weiping; Zhao, Jufeng; Gong, Xiaoli; Zhu, Liyao
2018-01-01
In this paper, a motion deblurring method using long/short exposure image pairs is presented. The long/short exposure image pairs are captured for the same scene under different exposure times. The image pairs are treated as the input of the deblurring method, and the additional information is used to obtain a deblurred result with high image quality. First, a luminance equalization process is applied to the short-exposure image. The blur kernel is then estimated from the image pair under the maximum a posteriori (MAP) framework using a conjugate gradient algorithm. Next, an L0 image-smoothing-based denoising method is applied to the luminance-equalized image, and the final deblurred result is obtained with a gain-controlled residual image deconvolution process that uses the edge map as the gain map. Furthermore, a real experimental optical system was built to capture the image pairs in order to demonstrate the effectiveness of the proposed deblurring framework. The long/short image pairs are obtained under different exposure times and camera gain control. Experimental results show that the proposed method provides a superior deblurring result in both subjective and objective assessment compared with other deblurring approaches.
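The pair-based idea can be illustrated in a few lines: treat the (equalized) short exposure as a sharp but noisy reference, estimate the blur kernel by regularized division in the frequency domain, and deconvolve the long exposure. This is a simplified Wiener-style stand-in for the paper's MAP/L0 pipeline; the regularization constants and synthetic data are invented for the sketch.

```python
import numpy as np

def estimate_kernel(blurred, sharp, eps=1e-2):
    """Rough frequency-domain kernel estimate from a long/short exposure pair:
    treat the (luminance-equalized) short exposure as a sharp-but-noisy proxy
    and solve blurred ≈ kernel * sharp by regularized division."""
    B, S = np.fft.fft2(blurred), np.fft.fft2(sharp)
    return B * np.conj(S) / (np.abs(S) ** 2 + eps)      # kernel spectrum

def wiener_deconvolve(blurred, K, nsr=1e-2):
    """Wiener deconvolution of the long exposure given the kernel spectrum."""
    B = np.fft.fft2(blurred)
    X = np.conj(K) * B / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))
    kernel = np.zeros((64, 64)); kernel[0, :3] = 1 / 3.0   # short horizontal motion blur
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)))
    K = estimate_kernel(blurred, sharp + 0.01 * rng.standard_normal((64, 64)))
    restored = wiener_deconvolve(blurred, K)
    print(round(float(np.mean((blurred - sharp) ** 2)), 4),
          round(float(np.mean((restored - sharp) ** 2)), 4))  # restoration reduces the error
```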
Determining Underground Mining Work Postures Using Motion Capture and Digital Human Modeling
Lutz, Timothy J.; DuCarme, Joseph P.; Smith, Adam K.; Ambrose, Dean
2017-01-01
According to Mine Safety and Health Administration (MSHA) data, during 2008–2012 in the U.S., there were, on average, 65 lost-time accidents per year during routine mining and maintenance activities involving remote-controlled continuous mining machines (CMMs). To address this problem, the National Institute for Occupational Safety and Health (NIOSH) is currently investigating the implementation and integration of existing and emerging technologies in underground mines to provide automated, intelligent proximity detection (iPD) devices on CMMs. One research goal of NIOSH is to enhance the proximity detection system by improving its capability to track and determine the identity, position, and posture of multiple workers, and to selectively disable machine functions to keep workers and machine operators safe. The posture of the miner can determine the safe working distance from a CMM by way of the variation in the proximity detection magnetic field. NIOSH collected and analyzed motion capture data and calculated joint angles of the back, hips, and knees from various postures on 12 human subjects. The results of the analysis suggest that lower body postures can be identified by observing the changes in joint angles of the right hip, left hip, right knee, and left knee. PMID:28626796
Tilting Styx and Nix but not Uranus with a Spin-Precession-Mean-motion resonance
NASA Astrophysics Data System (ADS)
Quillen, Alice C.; Chen, Yuan-Yuan; Noyelles, Benoît; Loane, Santiago
2018-02-01
A Hamiltonian model is constructed for the spin axis of a planet perturbed by a nearby planet with both planets in orbit about a star. We expand the planet-planet gravitational potential perturbation to first order in orbital inclinations and eccentricities, finding terms describing spin resonances involving the spin precession rate and the two planetary mean motions. Convergent planetary migration allows the spinning planet to be captured into spin resonance. With initial obliquity near zero, the spin resonance can lift the planet's obliquity to near 90° or 180° depending upon whether the spin resonance is first or zeroth order in inclination. Past capture of Uranus into such a spin resonance could give an alternative non-collisional scenario accounting for Uranus's high obliquity. However, we find that the time spent in spin resonance must be so long that this scenario cannot be responsible for Uranus's high obliquity. Our model can be used to study spin resonance in satellite systems. Our Hamiltonian model explains how Styx and Nix can be tilted to high obliquity via outward migration of Charon, a phenomenon previously seen in numerical simulations.
Capturing intraoperative deformations: research experience at Brigham and Women's Hospital.
Warfield, Simon K; Haker, Steven J; Talos, Ion-Florin; Kemper, Corey A; Weisenfeld, Neil; Mewes, Andrea U J; Goldberg-Zimring, Daniel; Zou, Kelly H; Westin, Carl-Fredrik; Wells, William M; Tempany, Clare M C; Golby, Alexandra; Black, Peter M; Jolesz, Ferenc A; Kikinis, Ron
2005-04-01
During neurosurgical procedures the objective of the neurosurgeon is to achieve the resection of as much diseased tissue as possible while achieving the preservation of healthy brain tissue. The restricted capacity of the conventional operating room to enable the surgeon to visualize critical healthy brain structures and the tumor margin has led, over the past decade, to the development of sophisticated intraoperative imaging techniques to enhance visualization. However, both rigid motion due to patient placement and nonrigid deformations occurring as a consequence of the surgical intervention disrupt the correspondence between preoperative data used to plan surgery and the intraoperative configuration of the patient's brain. Similar challenges are faced in other interventional therapies, such as in cryoablation of the liver, or biopsy of the prostate. We have developed algorithms to model the motion of key anatomical structures and system implementations that enable us to estimate the deformation of the critical anatomy from sequences of volumetric images and to prepare updated fused visualizations of preoperative and intraoperative images at a rate compatible with surgical decision making. This paper reviews the experience at Brigham and Women's Hospital through the process of developing and applying novel algorithms for capturing intraoperative deformations in support of image guided therapy.
NASA Astrophysics Data System (ADS)
Lin, Hsien-I.; Nguyen, Xuan-Anh
2017-05-01
To operate a redundant manipulator that accomplishes end-effector trajectory planning while simultaneously controlling its gesture in online programming, incorporating human motion is a useful and flexible option. This paper focuses on a manipulative instrument that can simultaneously control its arm gesture and end-effector trajectory via human teleoperation. The instrument comprises two parts: first, for human motion capture and data processing, marker systems are proposed to capture the human gesture; second, the manipulator kinematics control is implemented by an augmented multi-tasking method together with forward and backward reaching inverse kinematics. In particular, the local-solution and divergence problems of a multi-tasking method are resolved by the proposed augmented multi-tasking method. Computer simulations and experiments with a 7-DOF (degree of freedom) redundant manipulator were used to validate the proposed method. Comparisons among the single-tasking, original multi-tasking, and augmented multi-tasking algorithms were performed, and the results showed that the proposed augmented method had good end-effector position accuracy and the gesture most similar to the human gesture. Additionally, the experimental results showed that the proposed instrument was realized online.
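The "forward and backward reaching inverse kinematics" (FABRIK) component mentioned above can be sketched compactly. The planar solver below is a generic textbook FABRIK, not the authors' augmented multi-tasking formulation; the joint count, segment lengths and target are illustrative.

```python
import numpy as np

def fabrik(joints, target, lengths, tol=1e-4, max_iter=100):
    """Minimal planar FABRIK (Forward And Backward Reaching Inverse Kinematics):
    iteratively re-positions joints so the end effector reaches `target` while
    segment lengths are preserved. `joints` is an (N, 2) array; joints[0] is fixed."""
    joints = joints.astype(float).copy()
    base = joints[0].copy()
    if np.linalg.norm(target - base) > lengths.sum():      # unreachable: stretch out
        direction = (target - base) / np.linalg.norm(target - base)
        for i in range(1, len(joints)):
            joints[i] = joints[i - 1] + lengths[i - 1] * direction
        return joints
    for _ in range(max_iter):
        # backward pass: pin the end effector to the target
        joints[-1] = target
        for i in range(len(joints) - 2, -1, -1):
            d = joints[i] - joints[i + 1]
            joints[i] = joints[i + 1] + lengths[i] * d / np.linalg.norm(d)
        # forward pass: pin the base back to its original position
        joints[0] = base
        for i in range(1, len(joints)):
            d = joints[i] - joints[i - 1]
            joints[i] = joints[i - 1] + lengths[i - 1] * d / np.linalg.norm(d)
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints

if __name__ == "__main__":
    joints = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)
    lengths = np.array([1.0, 1.0, 1.0])
    solved = fabrik(joints, np.array([1.5, 1.5]), lengths)
    print(np.round(solved[-1], 3))   # close to [1.5, 1.5]
```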
Male dance moves that catch a woman's eye
Neave, Nick; McCarty, Kristofor; Freynik, Jeanette; Caplan, Nicholas; Hönekopp, Johannes; Fink, Bernhard
2011-01-01
Male movements serve as courtship signals in many animal species, and may honestly reflect the genotypic and/or phenotypic quality of the individual. Attractive human dance moves, particularly those of males, have been reported to show associations with measures of physical strength, prenatal androgenization and symmetry. Here we use advanced three-dimensional motion-capture technology to identify possible biomechanical differences between women's perceptions of ‘good’ and ‘bad’ male dancers. Nineteen males were recorded using the ‘Vicon’ motion-capture system while dancing to a basic rhythm; controlled stimuli in the form of avatars were then created in the form of 15 s video clips, and rated by 39 females for dance quality. Initial analyses showed that 11 movement variables were significantly positively correlated with perceived dance quality. Linear regression subsequently revealed that three movement measures were key predictors of dance quality; these were variability and amplitude of movements of the neck and trunk, and speed of movements of the right knee. In summary, we have identified specific movements within men's dance that influence women's perceptions of dancing ability. We suggest that such movements may form honest signals of male quality in terms of health, vigour or strength, though this remains to be confirmed. PMID:20826469
Quantum hydrodynamics: capturing a reactive scattering resonance.
Derrickson, Sean W; Bittner, Eric R; Kendrick, Brian K
2005-08-01
The hydrodynamic equations of motion associated with the de Broglie-Bohm formulation of quantum mechanics are solved using a meshless method based upon a moving least-squares approach. An arbitrary Lagrangian-Eulerian frame of reference and a regridding algorithm which adds and deletes computational points are used to maintain a uniform and nearly constant interparticle spacing. The methodology also uses averaged fields to maintain unitary time evolution. The numerical instabilities associated with the formation of nodes in the reflected portion of the wave packet are avoided by adding artificial viscosity to the equations of motion. A new and more robust artificial viscosity algorithm is presented which gives accurate scattering results and is capable of capturing quantum resonances. The methodology is applied to a one-dimensional model chemical reaction that is known to exhibit a quantum resonance. The correlation function approach is used to compute the reactive scattering matrix, reaction probability, and time delay as a function of energy. Excellent agreement is obtained between the scattering results based upon the quantum hydrodynamic approach and those based upon standard quantum mechanics. This is the first clear demonstration of the ability of moving grid approaches to accurately and robustly reproduce resonance structures in a scattering system.
NASA Astrophysics Data System (ADS)
An, Lin; Shen, Tueng T.; Wang, Ruikang K.
2011-10-01
This paper presents comprehensive and depth-resolved retinal microvasculature images within the human retina achieved by a newly developed ultrahigh-sensitive optical microangiography (UHS-OMAG) system. Due to its high flow sensitivity, UHS-OMAG is much more sensitive to tissue motion caused by the involuntary movement of the human eye and head compared to the traditional OMAG system. To mitigate these motion artifacts in the final imaging results, we propose a new phase compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerability, critical for the UHS-OMAG system to achieve retinal microvasculature images with high quality. Furthermore, the new UHS-OMAG system employs a high-speed line scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first has low lateral resolution (16 μm) and a wide field of view (4 × 3 mm2 with a single scan and 7 × 8 mm2 for multiple scans), while the second has high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm2 with a single scan). The great imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.
Behavioral effect of knee joint motion on body's center of mass during human quiet standing.
Yamamoto, Akio; Sasagawa, Shun; Oba, Naoko; Nakazawa, Kimitaka
2015-01-01
The balance control mechanism during upright standing has often been investigated using single- or double-link inverted pendulum models, involving the ankle joint only or both the ankle and hip joints, respectively. Several studies, however, have reported that knee joint motion during quiet standing cannot be ignored. This study aimed to investigate the degree to which knee joint motion contributes to the center of mass (COM) kinematics during quiet standing. Eight healthy adults were asked to stand quietly for 30s on a force platform. Angular displacements and accelerations of the ankle, knee, and hip joints were calculated from kinematic data obtained by a motion capture system. We found that the amplitude of the angular acceleration was smallest in the ankle joint and largest in the hip joint (ankle < knee < hip). These angular accelerations were then substituted into three biomechanical models with or without the knee joint to estimate COM acceleration in the anterior-posterior direction. Although the "without-knee" models greatly overestimated the COM acceleration, the COM acceleration estimated by the "with-knee" model was similar to the actual acceleration obtained from force platform measurement. These results indicate substantial effects of knee joint motion on the COM kinematics during quiet standing. We suggest that investigations based on the multi-joint model, including the knee joint, are required to reveal the physiologically plausible balance control mechanism implemented by the central nervous system.
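To make the multi-link idea concrete, the sketch below computes the anterior-posterior COM position of a planar three-segment standing model (shank, thigh, head-arms-trunk) from joint angles; the segment lengths, mass fractions and COM ratios are generic anthropometric placeholders, not values from the study.

```python
import numpy as np

def com_from_joint_angles(theta_ankle, theta_knee, theta_hip,
                          seg_len=(0.43, 0.45, 0.80),
                          seg_mass=(0.10, 0.23, 0.67),
                          seg_com=(0.45, 0.45, 0.50)):
    """Anterior-posterior COM position (m) of a planar triple inverted pendulum
    standing on a fixed ankle. Angles are segment deviations from vertical (rad);
    lengths, mass fractions and proximal COM ratios are illustrative values."""
    angles = np.cumsum([theta_ankle, theta_knee, theta_hip])  # absolute segment angles
    x_prox = 0.0              # horizontal position of the segment's lower joint
    com_x = 0.0
    for ang, L, m, c in zip(angles, seg_len, seg_mass, seg_com):
        seg_x = x_prox + c * L * np.sin(ang)   # this segment's COM
        com_x += m * seg_x                     # mass-weighted sum
        x_prox += L * np.sin(ang)              # position of the next joint
    return com_x

if __name__ == "__main__":
    # a 1 degree forward ankle sway with straight knee and hip
    print(round(com_from_joint_angles(np.deg2rad(1.0), 0.0, 0.0), 4))  # ≈ 0.018 m
```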
Global velocity constrained cloud motion prediction for short-term solar forecasting
NASA Astrophysics Data System (ADS)
Chen, Yanjun; Li, Wei; Zhang, Chongyang; Hu, Chuanping
2016-09-01
Cloud motion is the primary cause of short-term solar power output fluctuation. In this work, a new cloud motion estimation algorithm using a global velocity constraint is proposed. Compared to the widely used Particle Image Velocimetry (PIV) algorithm, which assumes homogeneity of the motion vectors, the proposed method can capture an accurate motion vector for each cloud block, including both the motion tendency and morphological changes. Specifically, the global velocity derived from PIV is first calculated, and then fine-grained cloud motion estimation is achieved by global-velocity-based cloud block searching and multi-scale cloud block matching. Experimental results show that the proposed global velocity constrained cloud motion prediction achieves comparable performance to the existing PIV and filtered PIV algorithms, especially over a short prediction horizon.
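One way to picture "global velocity constrained" matching: restrict each block's search window to a small neighbourhood around the displacement predicted by the global velocity. The block-matching sketch below is generic; the block size, search radius and synthetic frames are illustrative.

```python
import numpy as np

def match_block(prev, curr, top_left, size, global_v, radius=3):
    """Block matching with a global-velocity prior: search for the best match of a
    block from `prev` in `curr`, but only within `radius` pixels of the position
    predicted by the global velocity (dy, dx). Returns the per-block motion vector."""
    y, x = top_left
    block = prev[y:y + size, x:x + size]
    best, best_err = None, np.inf
    gy, gx = int(round(global_v[0])), int(round(global_v[1]))
    for dy in range(gy - radius, gy + radius + 1):
        for dx in range(gx - radius, gx + radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + size > curr.shape[0] or xx + size > curr.shape[1]:
                continue
            err = np.mean((curr[yy:yy + size, xx:xx + size] - block) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.random((64, 64))
    curr = np.roll(prev, shift=(2, 5), axis=(0, 1))    # whole cloud field moves by (2, 5)
    print(match_block(prev, curr, (20, 20), 8, global_v=(2.0, 4.0)))  # -> (2, 5)
```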
Nakamura, S; Shimojo, S
2000-01-01
We investigated interactions between foreground and background stimuli during visually induced perception of self-motion (vection) by using a stimulus composed of orthogonally moving random-dot patterns. The results indicated that, when the foreground moves with a slower speed, a self-motion sensation with a component in the same direction as the foreground is induced. We named this novel component of self-motion perception 'inverted vection'. The robustness of inverted vection was confirmed using various measures of self-motion sensation and under different stimulus conditions. The mechanism underlying inverted vection is discussed with regard to potentially relevant factors, such as relative motion between the foreground and background, and the interaction between the mis-registration of eye-movement information and self-motion perception.
Moritsugu, Kei; Koike, Ryotaro; Yamada, Kouki; Kato, Hiroaki; Kidera, Akinori
2015-01-01
Molecular dynamics (MD) simulations of proteins provide important information to understand their functional mechanisms, which are, however, likely to be hidden behind their complicated motions with a wide range of spatial and temporal scales. A straightforward and intuitive analysis of protein dynamics observed in MD simulation trajectories is therefore of growing significance with the large increase in both the simulation time and system size. In this study, we propose a novel description of protein motions based on the hierarchical clustering of fluctuations in the inter-atomic distances calculated from an MD trajectory, which constructs a single tree diagram, named a "Motion Tree", to determine a set of rigid-domain pairs hierarchically along with associated inter-domain fluctuations. The method was first applied to the MD trajectory of substrate-free adenylate kinase to clarify the usefulness of the Motion Tree, which illustrated a clear-cut picture of the inter-domain motions involving the ATP/AMP lid and the core domain together with the associated amplitudes and correlations. The comparison of two Motion Trees calculated from MD simulations of ligand-free and ligand-bound glutamine binding proteins clarified changes in inherent dynamics upon ligand binding that appeared both in the large domains and in a small loop that stabilized the ligand molecule. Another application to a huge protein, a multidrug ATP binding cassette (ABC) transporter, captured significant increases in fluctuations upon binding of a drug molecule, observed both in large-scale inter-subunit motions and in a motion localized at a transmembrane helix, which may be a trigger for the subsequent structural change from inward-open to outward-open states to transport the drug molecule. These applications demonstrated the capability of Motion Trees to provide an at-a-glance view of various sizes of functional motions inherent in a complicated MD trajectory. PMID:26148295
Modelling Nonlinear Dynamic Textures using Hybrid DWT-DCT and Kernel PCA with GPU
NASA Astrophysics Data System (ADS)
Ghadekar, Premanand Pralhad; Chopade, Nilkanth Bhikaji
2016-12-01
Most real-world dynamic textures are nonlinear, non-stationary, and irregular. Nonlinear motion also has some repetition of motion, but it exhibits high variation, stochasticity, and randomness. A hybrid DWT-DCT and Kernel Principal Component Analysis (KPCA) approach with YCbCr/YIQ colour coding using the Dynamic Texture Unit (DTU) representation is proposed to model nonlinear dynamic textures, and it provides better results than state-of-the-art methods in terms of PSNR, compression ratio, model coefficients, and model size. The dynamic texture is decomposed into DTUs, as they help to extract temporal self-similarity. Hybrid DWT-DCT is used to extract spatial redundancy. YCbCr/YIQ colour encoding is performed to capture chromatic correlation. KPCA is applied to capture nonlinear motion. Further, the proposed algorithm is implemented on a Graphics Processing Unit (GPU), which comprises hundreds of small processors, to decrease time complexity and to achieve parallelism.
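The KPCA step can be illustrated independently of the DWT-DCT and colour-coding stages. The sketch below compresses frames of a synthetic dynamic texture with scikit-learn's KernelPCA and reconstructs them via the learned inverse transform; the component count, RBF kernel parameter and toy texture are assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Synthetic "dynamic texture": each frame is a flattened 16x16 patch of a
# travelling sinusoid plus noise, standing in for a dynamic texture unit (DTU).
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.arange(16), np.arange(16))
frames = np.stack([
    (np.sin(0.5 * xx + 0.3 * t) * np.cos(0.4 * yy - 0.2 * t)).ravel()
    + 0.05 * rng.standard_normal(256)
    for t in range(120)
])

# Kernel PCA captures the nonlinear temporal structure in a few components;
# the inverse transform reconstructs frames from the compressed representation.
kpca = KernelPCA(n_components=8, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True, alpha=1e-3)
codes = kpca.fit_transform(frames)          # (120, 8) compressed motion codes
recon = kpca.inverse_transform(codes)       # (120, 256) reconstructed frames

mse = float(np.mean((recon - frames) ** 2))
print(codes.shape, round(mse, 3))            # compressed shape and reconstruction MSE
```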
Development of esMOCA RULA, Motion Capture Instrumentation for RULA Assessment
NASA Astrophysics Data System (ADS)
Akhmad, S.; Arendra, A.
2018-01-01
The purpose of this research is to build motion capture instrumentation using sensor fusion of an accelerometer and a gyroscope to assist in RULA assessment. Processing of sensor orientation data is done in every sensor node by a digital motion processor. Nine sensors are placed on the upper limb of the operator subject. The kinematic model is developed with Simmechanics in Simulink. This kinematic model receives streaming data from the sensors via a wireless sensor network. The output of the kinematic model is the relative angle between upper-limb segments, visualized on the monitor. This angular information is compared with the look-up table of the RULA worksheet to give the RULA score. The assessment result of the instrument is compared with the result of the assessment by RULA assessors. To sum up, there is no significant difference between the assessment by the instrument and the assessment by an assessor.
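As an illustration of the two stages implied here (per-node orientation from inertial fusion, then a worksheet lookup), the sketch below fuses accelerometer and gyroscope signals with a complementary filter and maps the resulting upper-arm flexion angle to the basic RULA upper-arm score. The filter constant, noise levels and the simplified lookup (which ignores RULA's adjustment factors) are assumptions, not the instrument's actual processing.

```python
import numpy as np

def complementary_filter(acc_angle, gyro_rate, dt, alpha=0.98):
    """Fuse accelerometer-derived tilt (deg) with gyroscope rate (deg/s)
    using a complementary filter, one plausible way a per-node orientation
    estimate could be obtained."""
    angle = acc_angle[0]
    out = []
    for a, g in zip(acc_angle, gyro_rate):
        angle = alpha * (angle + g * dt) + (1 - alpha) * a
        out.append(angle)
    return np.array(out)

def rula_upper_arm_score(flexion_deg):
    """Simplified RULA upper-arm lookup (no adjustment factors):
    1 for -20..20 deg, 2 for extension beyond -20 or flexion 20..45,
    3 for 45..90, 4 beyond 90."""
    if -20 <= flexion_deg <= 20:
        return 1
    if flexion_deg < -20 or flexion_deg <= 45:
        return 2
    if flexion_deg <= 90:
        return 3
    return 4

if __name__ == "__main__":
    dt, t = 0.01, np.arange(0, 5, 0.01)
    true_angle = 60 * np.sin(0.5 * t)                       # slow arm raise
    gyro = np.gradient(true_angle, dt) + np.random.default_rng(0).normal(0, 1, t.size)
    acc = true_angle + np.random.default_rng(1).normal(0, 3, t.size)
    est = complementary_filter(acc, gyro, dt)
    print(rula_upper_arm_score(est[-1]))                     # score for the final posture
```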
NASA Astrophysics Data System (ADS)
Mosher, Stephen G.; Audet, Pascal; L'Heureux, Ivan
2014-07-01
Tectonic plate reorganization at a subduction zone edge is a fundamental process that controls oceanic plate fragmentation and capture. However, the various factors responsible for these processes remain elusive. We characterize seismic anisotropy of the upper mantle in the Explorer region at the northern limit of the Cascadia subduction zone from teleseismic shear wave splitting measurements. Our results show that the mantle flow field beneath the Explorer slab is rotating anticlockwise from the convergence-parallel motion between the Juan de Fuca and the North America plates, re-aligning itself with the transcurrent motion between the Pacific and North America plates. We propose that oceanic microplate fragmentation is driven by slab stretching, thus reorganizing the mantle flow around the slab edge and further contributing to slab weakening and increase in buoyancy, eventually leading to cessation of subduction and microplate capture.
Pu, Xianjie; Guo, Hengyu; Chen, Jie; Wang, Xue; Xi, Yi; Hu, Chenguo; Wang, Zhong Lin
2017-01-01
Mechnosensational human-machine interfaces (HMIs) can greatly extend communication channels between humans and external devices in a natural way. Mechnosensational HMIs based on biopotential signals have been developing slowly owing to the low signal-to-noise ratio and poor stability. In eye motions, the corneal-retinal potential caused by hyperpolarization and depolarization is very weak. However, the mechanical micromotion of the skin around the corners of the eyes has never been considered as a good trigger signal source. We report a novel triboelectric nanogenerator (TENG)-based micromotion sensor enabled by the coupling of triboelectricity and electrostatic induction. By using an indium tin oxide electrode and two opposite tribomaterials, the proposed flexible and transparent sensor is capable of effectively capturing eye blink motion with a super-high signal level (~750 mV) compared with the traditional electrooculogram approach (~1 mV). The sensor is fixed on a pair of glasses and applied in two real-time mechnosensational HMIs: a smart home control system and a wireless hands-free typing system, with the advantages of super-high sensitivity, stability, easy operation, and low cost. This TENG-based micromotion sensor is distinct and unique in its fundamental mechanism, which provides a novel design concept for intelligent sensor techniques and shows great potential for application in mechnosensational HMIs. PMID:28782029
Analytical formulation of selected activities of the remote manipulator system
NASA Technical Reports Server (NTRS)
Zimmerman, K. J.
1977-01-01
Existing analyses of Orbiter-RMS-Payload kinematics were surveyed, including equations dealing with two-body kinematics in the presence of a massless RMS, and analytical explicit solutions are compared with numerical solutions. Numerical demonstration problems are provided for the following operational phases of the RMS: (1) payload capture; (2) payload stowage and removal from the cargo bay; and (3) payload deployment. The equations of motion provided account for RMS control forces and torque moments and could be extended to RMS flexibility and control-loop simulation without increasing the degrees of freedom of the two-body system.
Hand Grasping Synergies As Biometrics.
Patel, Vrajeshri; Thukral, Poojita; Burns, Martin K; Florescu, Ionut; Chandramouli, Rajarathnam; Vinjamuri, Ramana
2017-01-01
Recently, the need for more secure identity verification systems has driven researchers to explore other sources of biometrics. These include iris patterns, palm print, hand geometry, facial recognition, and movement patterns (hand motion, gait, and eye movements). Identity verification systems may benefit from the complexity of human movement, which integrates multiple levels of control (neural, muscular, and kinematic). Using principal component analysis, we extracted spatiotemporal hand synergies (movement synergies) from an object-grasping dataset to explore their use as a potential biometric. These movement synergies are in the form of joint angular velocity profiles of 10 joints. We explored the effect of joint type, digit, number of objects, and grasp type. In its best configuration, movement synergies achieved an equal error rate of 8.19%. While movement synergies can be integrated into an identity verification system with motion capture ability, we also explored a camera-ready version of hand synergies: postural synergies. In this proof-of-concept system, postural synergies performed well, but only when specific postures were chosen. Based on these results, hand synergies show promise as a potential biometric that can be combined with other hand-based biometrics for improved security.
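A minimal version of the synergy-as-biometric idea: extract movement synergies with PCA over flattened joint-velocity profiles and verify identity by distance in synergy-weight space. The data, threshold and verification rule below are invented for illustration and are not the study's protocol.

```python
import numpy as np
from sklearn.decomposition import PCA

def extract_synergies(trials, n_synergies=3):
    """Each trial is a flattened matrix of joint angular-velocity profiles
    (n_joints x n_samples). PCA components are the spatiotemporal synergies;
    the scores are the per-trial synergy weights."""
    pca = PCA(n_components=n_synergies)
    weights = pca.fit_transform(trials)
    return pca, weights

def verify(pca, template, trial, threshold):
    """Toy verification rule: accept if the trial's synergy weights are close
    to the enrolled user's template (mean weights of the enrollment trials)."""
    w = pca.transform(trial[None])[0]
    return float(np.linalg.norm(w - template)) < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 50)
    def grasp(scale, noise):          # 10 joints x 50 samples, flattened
        base = np.outer(np.linspace(1, 2, 10) * scale, np.sin(np.pi * t))
        return (base + noise * rng.standard_normal((10, 50))).ravel()
    user_a = np.stack([grasp(1.0, 0.05) for _ in range(20)])
    user_b = np.stack([grasp(1.6, 0.05) for _ in range(20)])
    pca, _ = extract_synergies(np.vstack([user_a, user_b]))
    template_a = pca.transform(user_a).mean(axis=0)
    print(verify(pca, template_a, grasp(1.0, 0.05), threshold=1.0),   # genuine: True
          verify(pca, template_a, grasp(1.6, 0.05), threshold=1.0))   # impostor: False
```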
Diffusion of multiple species with excluded-volume effects.
Bruna, Maria; Chapman, S Jonathan
2012-11-28
Stochastic models of diffusion with excluded-volume effects are used to model many biological and physical systems at a discrete level. The average properties of the population may be described by a continuum model based on partial differential equations. In this paper we consider multiple interacting subpopulations/species and study how the inter-species competition emerges at the population level. Each individual is described as a finite-size hard-core interacting particle undergoing Brownian motion. The link between the discrete stochastic equations of motion and the continuum model is considered systematically using the method of matched asymptotic expansions. The system for two species leads to a nonlinear cross-diffusion system for each subpopulation, which captures the enhancement of the effective diffusion rate due to excluded-volume interactions between particles of the same species, and the diminishment due to particles of the other species. This model can explain two alternative notions of the diffusion coefficient that are often confounded, namely collective diffusion and self-diffusion. Simulations of the discrete system show good agreement with the analytic results.
Human visual system-based smoking event detection
NASA Astrophysics Data System (ADS)
Odetallah, Amjad D.; Agaian, Sos S.
2012-06-01
Human action (e.g. smoking, eating, and phoning) analysis is an important task in various application domains like video surveillance, video retrieval, human-computer interaction systems, and so on. Smoke detection is a crucial task in many video surveillance applications and could have a great impact on raising the level of safety of urban areas, public parks, airplanes, hospitals, schools and others. The detection task is challenging since there is no prior knowledge about the object's shape, texture and color. In addition, its visual features will change under different lighting and weather conditions. This paper presents a new scheme for a system for detecting human smoking events, or small smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. In addition, the developed method is capable of detecting small smoking events and uncertain actions with various cigarette sizes, colors, and shapes.
Hockey, iPads, and Projectile Motion in a Physics Classroom
ERIC Educational Resources Information Center
Hechter, Richard P.
2013-01-01
With the increased availability of modern technology and handheld probeware for classrooms, the iPad and the Video Physics application developed by Vernier are used to capture and analyze the motion of an ice hockey puck within secondary-level physics education. Students collect, analyze, and generate digital modes of representation of physics…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ketchum, Jacob A.; Adams, Fred C.; Bloch, Anthony M.
2011-01-01
Pairs of migrating extrasolar planets often lock into mean motion resonance as they drift inward. This paper studies the convergent migration of giant planets (driven by a circumstellar disk) and determines the probability that they are captured into mean motion resonance. The probability that such planets enter resonance depends on the type of resonance, the migration rate, the eccentricity damping rate, and the amplitude of the turbulent fluctuations. This problem is studied both through direct integrations of the full three-body problem and via semi-analytic model equations. In general, the probability of resonance decreases with increasing migration rate, and with increasing levels of turbulence, but increases with eccentricity damping. Previous work has shown that the distributions of orbital elements (eccentricity and semimajor axis) for observed extrasolar planets can be reproduced by migration models with multiple planets. However, these results depend on resonance locking, and this study shows that entry into, and maintenance of, mean motion resonance depends sensitively on the migration rate, eccentricity damping, and turbulence.
NASA Astrophysics Data System (ADS)
Mousas, Christos; Anagnostopoulos, Christos-Nikolaos
2017-06-01
This paper presents a hybrid character control interface that provides the ability to synthesize in real time a variety of actions based on the user's performance capture. The proposed methodology enables three different performance interaction modules: the performance animation control, which enables the direct mapping of the user's pose to the character; the motion controller, which synthesizes the desired motion of the character based on an activity recognition methodology; and the hybrid control, which lies between the performance animation and the motion controller. With the presented methodology, the user has the freedom to interact within the virtual environment, as well as the ability to manipulate the character and to synthesize a variety of actions that cannot be performed directly by him/her but which the system synthesizes. Therefore, the user is able to interact with the virtual environment in a more sophisticated fashion. This paper presents examples of different scenarios based on the three different full-body character control methodologies.
Three-dimensional hysteresis compensation enhances accuracy of robotic artificial muscles
NASA Astrophysics Data System (ADS)
Zhang, Jun; Simeonov, Anthony; Yip, Michael C.
2018-03-01
Robotic artificial muscles are compliant and can generate straight contractions. They are increasingly popular as driving mechanisms for robotic systems. However, their strain and tension force often vary simultaneously under varying loads and inputs, resulting in three-dimensional hysteretic relationships. The three-dimensional hysteresis in robotic artificial muscles poses difficulties in estimating how they work and how to make them perform designed motions. This study proposes an approach to driving robotic artificial muscles to generate designed motions and forces by modeling and compensating for their three-dimensional hysteresis. The proposed scheme captures the nonlinearity by embedding two hysteresis models. The effectiveness of the model is confirmed by testing three popular robotic artificial muscles. Inverting the proposed model allows us to compensate for the hysteresis among temperature surrogate, contraction length, and tension force of a shape memory alloy (SMA) actuator. Feedforward control of an SMA-actuated robotic bicep is demonstrated. This study can be generalized to other robotic artificial muscles, thus enabling muscle-powered machines to generate desired motions.
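Hysteresis modelling of this kind is often built from elementary operators. The sketch below implements a one-dimensional Prandtl-Ishlinskii model from play (backlash) operators, a common starting point for such compensation; it is not the paper's three-dimensional model, and the thresholds, weights and excitation are illustrative.

```python
import numpy as np

def play_operator(u, r, y0=0.0):
    """Backlash (play) operator with threshold r, the building block of a
    Prandtl-Ishlinskii hysteresis model."""
    y = np.empty_like(u)
    prev = y0
    for i, ui in enumerate(u):
        prev = min(ui + r, max(ui - r, prev))
        y[i] = prev
    return y

def pi_model(u, thresholds, weights):
    """Prandtl-Ishlinskii model: weighted superposition of play operators.
    This captures a rate-independent hysteresis loop between input and output;
    the paper's model additionally couples contraction length and tension force."""
    return sum(w * play_operator(u, r) for w, r in zip(weights, thresholds))

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 400)
    u = np.sin(t) * np.linspace(1, 0.3, 400)           # decaying excitation
    y = pi_model(u, thresholds=[0.0, 0.1, 0.3], weights=[0.5, 0.3, 0.2])
    # ascending and descending branches differ, i.e. the output traces a hysteresis loop
    print(round(float(y[100]), 3), round(float(y[300]), 3))
```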
NASA Astrophysics Data System (ADS)
Teo, Adrian J. T.; Li, Holden; Tan, Say Hwa; Yoon, Yong-Jin
2017-06-01
Optical MEMS devices provide fast detection, electromagnetic resilience and high sensitivity. Using this technology, an optical gratings based accelerometer design concept was developed for seismic motion detection purposes that provides miniaturization, high manufacturability, low costs and high sensitivity. Detailed in-house fabrication procedures of a double-sided deep reactive ion etching (DRIE) on a silicon-on-insulator (SOI) wafer for a micro opto electro mechanical system (MOEMS) device are presented and discussed. Experimental results obtained show that the conceptual device successfully captured motion similar to a commercial accelerometer with an average sensitivity of 13.6 mV G-1, and a highest recorded sensitivity of 44.1 mV G-1. A noise level of 13.5 mV was detected due to experimental setup limitations. This is the first MOEMS accelerometer developed using double-sided DRIE on SOI wafer for the application of seismic motion detection, and is a breakthrough technology platform to open up options for lower cost MOEMS devices.
NASA Astrophysics Data System (ADS)
Chakrabarty, Ayan; Wang, Feng; Joshi, Bhuwan; Wei, Qi-Huo
2011-03-01
Recent studies show that boomerang-shaped molecules can form various kinds of liquid crystalline phases. One debated topic related to boomerang molecules is the existence of a biaxial nematic liquid crystalline phase. Developing colloidal systems of boomerang particles and studying them with optical microscopy would allow us to gain a better understanding of orientational ordering and dynamics at the "single molecule" level. Here we report the fabrication and experimental study of the Brownian motion of individual boomerang colloidal particles confined between two glass plates. We used dark-field optical microscopy to directly visualize the Brownian motion of the single colloidal particles in a quasi-two-dimensional geometry. An EMCCD camera was used to capture the motion in real time. An in-house image-processing algorithm written in MATLAB was used to precisely track the position and orientation of the particles with sub-pixel accuracy. The experimental findings on the Brownian diffusion of a single boomerang colloidal particle will be discussed.
Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras
Kane, Suzanne Amador; Zamani, Marjon
2014-01-01
This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots. PMID:24431144
Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras.
Kane, Suzanne Amador; Zamani, Marjon
2014-01-15
This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots.
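A minimal simulation of a pursuit steering law of the kind referenced in these studies is sketched below in Python. It implements a simple proportional-navigation rule (turn rate proportional to line-of-sight rotation), which keeps the prey's bearing nearly constant and is closely related to motion camouflage; the gains, speeds and capture radius are illustrative assumptions, not values derived from the falcon data.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def simulate_pursuit(p0, q0, v_q, speed=12.0, nav_gain=3.0, dt=0.01, steps=3000):
    """Toy 2D proportional-navigation pursuit: the pursuer's turn rate is
    proportional to the line-of-sight rotation rate, which keeps the prey's
    bearing on the visual field nearly constant (motion-camouflage-like)."""
    p, q = np.array(p0, float), np.array(q0, float)
    los_prev = np.arctan2(*(q - p)[::-1])
    heading = los_prev                                   # start by pointing at the prey
    path = [p.copy()]
    for _ in range(steps):
        q = q + np.asarray(v_q, float) * dt              # prey flies a straight line
        los = np.arctan2(*(q - p)[::-1])
        heading += nav_gain * wrap(los - los_prev)       # steer to null LOS rotation
        los_prev = los
        p = p + speed * dt * np.array([np.cos(heading), np.sin(heading)])
        path.append(p.copy())
        if np.linalg.norm(q - p) < 0.5:                  # capture radius
            break
    return np.array(path)
```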
Independent motion detection with a rival penalized adaptive particle filter
NASA Astrophysics Data System (ADS)
Becker, Stefan; Hübner, Wolfgang; Arens, Michael
2014-10-01
Aggregation of pixel-based motion detection into regions of interest, each containing the view of a single moving object in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure that can effectively support high-level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are adversely blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image that are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution that remains homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
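A compressed sketch of the ego-motion-compensation front end described above, written with OpenCV, is given below. It estimates the camera motion from sparse optical flow, warps the previous frame with a homography, and turns the residual frame difference into a pseudo-probability map of independent motion; the particle-filter stage and the rival-penalization scheme of the paper are omitted, and all thresholds are assumptions.

```python
import cv2
import numpy as np

def independent_motion_probability(prev_gray, curr_gray):
    """Ego-motion compensation via sparse optical flow + homography, followed by
    frame differencing; the normalized difference acts as a per-pixel probability
    of independent object motion (a simplified stand-in for the paper's pipeline)."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.ravel() == 1]
    good_next = nxt[status.ravel() == 1]
    H, _ = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)   # camera ego-motion model
    h, w = curr_gray.shape
    stabilized = cv2.warpPerspective(prev_gray, H, (w, h))             # cancel ego-motion
    diff = cv2.absdiff(curr_gray, stabilized).astype(np.float32)
    return diff / max(float(diff.max()), 1e-6)                         # pseudo-probability map
```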
Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.
Frick, Eric; Rahmatalla, Salam
2018-04-04
The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames' joint center estimates via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimate. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial, as existing methods often do, the proposed method produced time-varying solutions that were highly correlated (r > 0.82) with the true, time-varying joint center solution.
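For context, a common functional (least-squares) joint-center estimate is sketched below in Python; it fits a sphere to a single marker trajectory expressed in the proximal segment frame. This is a generic baseline of the kind SFO is compared against, not the single-frame optimization method itself.

```python
import numpy as np

def fit_joint_center(marker_xyz):
    """Estimate a fixed center of rotation by fitting a sphere to one marker's
    trajectory (linear least squares). A generic functional method, shown only
    for illustration. marker_xyz: (n_frames, 3) positions in the proximal frame."""
    P = np.asarray(marker_xyz, float)
    A = np.hstack([2.0 * P, -np.ones((len(P), 1))])   # rows: [2x, 2y, 2z, -1]
    b = (P ** 2).sum(axis=1)                          # ||p||^2 for each frame
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(center @ center - x[3])
    return center, radius
```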
Virtual Exercise Training Software System
NASA Technical Reports Server (NTRS)
Vu, L.; Kim, H.; Benson, E.; Amonette, W. E.; Barrera, J.; Perera, J.; Rajulu, S.; Hanson, A.
2018-01-01
The purpose of this study was to develop and evaluate a virtual exercise training software system (VETSS) capable of providing real-time instruction and exercise feedback during exploration missions. A resistive exercise instructional system was developed using a Microsoft Kinect depth-camera device, which provides markerless 3-D whole-body motion capture in a small form factor with minimal setup effort. It was hypothesized that subjects using the newly developed instructional software tool would perform the deadlift exercise with more optimal kinematics and more consistent technique than those without the instructional software. Following a comprehensive evaluation in the laboratory, the system was deployed for testing and refinement in the NASA Extreme Environment Mission Operations (NEEMO) analog.
Voyager and the origin of the solar system
NASA Technical Reports Server (NTRS)
Prentice, A. J. R.
1981-01-01
A unified model for the formation of regular satellite systems and the planetary system is outlined. The basis for this modern Laplacian theory is that there existed a large supersonic turbulent stress arising from overshooting convective motions within the three primitive gaseous clouds which formed Jupiter, Saturn, and the Sun. Calculations show that if each cloud possessed the same fraction of supersonic turbulent energy, equal to about 5% of the cloud's gravitational potential energy, then the broad mass distribution and chemistry of all regular satellite and planetary systems can be simultaneously accounted for. Titan is probably a captured moon of Saturn. Several predictions about observations made by Voyager 2 at Saturn are presented.
Femtosecond crystallography with ultrabright electrons and x-rays: capturing chemistry in action.
Miller, R J Dwayne
2014-03-07
With the recent advances in ultrabright electron and x-ray sources, it is now possible to extend crystallography to the femtosecond time domain to literally light up atomic motions involved in the primary processes governing structural transitions. This review chronicles the development of brighter and brighter electron and x-ray sources that have enabled atomic resolution to structural dynamics for increasingly complex systems. The primary focus is on achieving sufficient brightness using pump-probe protocols to resolve the far-from-equilibrium motions directing chemical processes that in general lead to irreversible changes in samples. Given the central importance of structural transitions to conceptualizing chemistry, this emerging field has the potential to significantly improve our understanding of chemistry and its connection to driving biological processes.
[Study on an Exoskeleton Hand Function Training Device].
Hu, Xin; Zhang, Ying; Li, Jicai; Yi, Jinhua; Yu, Hongliu; He, Rongrong
2016-02-01
Based on the structure of normal adult fingers and bionic principles of their motion, the biological characteristics of human hands were analyzed, and a wearable exoskeleton hand function training device was designed for the rehabilitation of stroke patients or patients with hand trauma. The device includes the exoskeleton mechanical structure and an electromyography (EMG) control system. With an adjustable mechanism, the device can fit different finger lengths, and the motion state of the exoskeleton hand is controlled by capturing the EMG of the user's contralateral limb. Driven by the device, the user's fingers then carry out adduction/abduction rehabilitation training. Finally, the mechanical properties and training effect of the exoskeleton hand were verified through mechanism simulation and experiments on a prototype of the wearable exoskeleton hand function training device.
Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I
2009-08-01
Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings, and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance of all viewers on the same image capture and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position, which can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image through changes in the spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD allows hands-free, intuitive control of the laparoscopic field, independent of the captured image.
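A hedged sketch of how head pose can be mapped to a pan/zoom window over the captured frame is shown below in Python. The gains, reference distance, and frame and viewport sizes are illustrative assumptions, not values from the prototype.

```python
import numpy as np

def head_to_viewport(yaw_deg, pitch_deg, dist_mm,
                     pan_gain=8.0, ref_dist_mm=600.0, zoom_gain=0.002,
                     frame_size=(1920, 1080), view_size=(960, 540)):
    """Map head pose to a pan/zoom crop window over the captured laparoscopic frame.
    All gains and sizes are hypothetical parameters for illustration."""
    fw, fh = frame_size
    vw, vh = view_size
    zoom = np.clip(1.0 + zoom_gain * (ref_dist_mm - dist_mm), 0.5, 3.0)  # closer head -> zoom in
    w, h = int(vw / zoom), int(vh / zoom)                                # crop size shrinks as zoom grows
    cx = fw / 2 + pan_gain * yaw_deg                                     # left/right head motion pans
    cy = fh / 2 - pan_gain * pitch_deg                                   # up/down head motion tilts
    x0 = int(np.clip(cx - w / 2, 0, fw - w))
    y0 = int(np.clip(cy - h / 2, 0, fh - h))
    return x0, y0, w, h    # crop this region, then rescale it to the HMD resolution
```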
The capture of lunar materials in low lunar orbit
NASA Technical Reports Server (NTRS)
Floyd, M. A.
1981-01-01
A scenario is presented for the retrieval of lunar materials sent into lunar orbit to be used as raw materials in space manufacturing operations. The proposal is based on the launch of material from the lunar surface by an electromagnetic mass driver and the capture of this material in low lunar orbit by a fleet of mass catchers which ferry the material to processing facilities when full. Material trajectories are analyzed using the two-body equations of motion, and intercept requirements and the sensitivity of the system to launch errors are determined. The present scenario is shown to be superior to scenarios that place a single mass catcher at the L2 libration point due to increased operations flexibility, decreased mass driver performance requirements and centralized catcher servicing.
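For reference, the two-body relative equation of motion underlying such trajectory analyses can be written as below, with r the position of the payload relative to the Moon and mu the lunar gravitational parameter (the payload mass being negligible by comparison):

```latex
\ddot{\mathbf{r}} = -\frac{\mu}{r^{3}}\,\mathbf{r},
\qquad \mu = G\,(M_{\mathrm{Moon}} + m) \approx G\,M_{\mathrm{Moon}}
```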
Thermodynamics and computation during collective motion near criticality
NASA Astrophysics Data System (ADS)
Crosato, Emanuele; Spinney, Richard E.; Nigmatullin, Ramil; Lizier, Joseph T.; Prokopenko, Mikhail
2018-01-01
We study self-organization of collective motion as a thermodynamic phenomenon in the context of the first law of thermodynamics. It is expected that the coherent ordered motion typically self-organises in the presence of changes in the (generalized) internal energy and of (generalized) work done on, or extracted from, the system. We aim to explicitly quantify changes in these two quantities in a system of simulated self-propelled particles and contrast them with changes in the system's configuration entropy. In doing so, we adapt a thermodynamic formulation of the curvatures of the internal energy and the work, with respect to two parameters that control the particles' alignment. This allows us to systematically investigate the behavior of the system by varying the two control parameters to drive the system across a kinetic phase transition. Our results identify critical regimes and show that during the phase transition, where the configuration entropy of the system decreases, the rates of change of the work and of the internal energy also decrease, while their curvatures diverge. Importantly, the reduction of entropy achieved through expenditure of work is shown to peak at criticality. We relate this both to a thermodynamic efficiency and the significance of the increased order with respect to a computational path. Additionally, this study provides an information-geometric interpretation of the curvature of the internal energy as the difference between two curvatures: the curvature of the free entropy, captured by the Fisher information, and the curvature of the configuration entropy.
Feng, Yongqiang; Max, Ludo
2014-01-01
Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
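The accuracy (RMSE) and precision (SD of error) figures of the kind quoted above can be reproduced from raw data with a few lines of Python; the sketch below assumes measured and reference coordinates (or inter-marker distances) are already paired sample by sample.

```python
import numpy as np

def accuracy_and_precision(measured_mm, reference_mm):
    """Accuracy as RMSE and precision as the SD of the error, the two summary
    statistics reported in tracking-validation studies such as the one above."""
    err = np.asarray(measured_mm, float) - np.asarray(reference_mm, float)
    rmse = np.sqrt(np.mean(err ** 2))
    precision = np.std(err, ddof=1)
    return rmse, precision
```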
2014-06-01
• Motion capture data used to determine position and orientation of a Soldier's head, turret and the M2 machine gun
• Controlling and acquiring user/weapon...data from the M2 simulation machine gun
• Controlling paintball guns used to fire at the GPK during an experimental run
• Sending and receiving TCP...
• Mounted, Armor/Cavalry, Combat Engineers, Field Artillery Cannon Crewmember, or MP duty assignment – Currently M2 .50 Caliber Machine Gun qualified
Using doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W [Albuquerque, NM; Jordan, Jay D [Albuquerque, NM; Kim, Theodore J [Albuquerque, NM
2012-07-03
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Rezaeian, Sanaz; Zhong, Peng; Hartzell, Stephen; Zareian, Farzin
2015-01-01
Simulated earthquake ground motions can be used in many recent engineering applications that require time series as input excitations. However, applicability and validation of simulations are subjects of debate in the seismological and engineering communities. We propose a validation methodology at the waveform level and directly based on characteristics that are expected to influence most structural and geotechnical response parameters. In particular, three time-dependent validation metrics are used to evaluate the evolving intensity, frequency, and bandwidth of a waveform. These validation metrics capture nonstationarities in intensity and frequency content of waveforms, making them ideal to address nonlinear response of structural systems. A two-component error vector is proposed to quantify the average and shape differences between these validation metrics for a simulated and recorded ground-motion pair. Because these metrics are directly related to the waveform characteristics, they provide easily interpretable feedback to seismologists for modifying their ground-motion simulation models. To further simplify the use and interpretation of these metrics for engineers, it is shown how six scalar key parameters, including duration, intensity, and predominant frequency, can be extracted from the validation metrics. The proposed validation methodology is a step forward in paving the road for utilization of simulated ground motions in engineering practice and is demonstrated using examples of recorded and simulated ground motions from the 1994 Northridge, California, earthquake.
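To make the idea of time-dependent validation metrics concrete, the Python sketch below computes a cumulative (Arias-type) intensity build-up and splits the simulated-versus-recorded mismatch into an average-level component and a shape component. This is an illustrative formulation, not necessarily the exact metrics or error vector defined in the paper.

```python
import numpy as np

def evolving_intensity(acc, dt):
    """Cumulative (Arias-type) intensity build-up of an accelerogram."""
    return np.cumsum(np.asarray(acc, float) ** 2) * dt

def two_component_error(metric_sim, metric_rec):
    """Split the mismatch between a simulated and a recorded evolving metric into
    an average-level component and a shape component (illustrative definition)."""
    s = np.asarray(metric_sim, float)
    r = np.asarray(metric_rec, float)
    avg_err = (s.mean() - r.mean()) / r.mean()                            # overall level difference
    shape_err = np.linalg.norm(s / s[-1] - r / r[-1]) / np.sqrt(len(r))   # compare normalized shapes
    return avg_err, shape_err
```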
Ibata, Yuki; Kitamura, Seiji; Motoi, Kosuke; Sagawa, Koichi
2013-01-01
A method for measuring the three-dimensional posture and flying trajectory of the lower body during jumping motion using body-mounted wireless inertial measurement units (WIMUs) is introduced. Each WIMU is composed of two kinds of three-dimensional (3D) accelerometers and gyroscopes with different dynamic ranges, plus one 3D geomagnetic sensor, so that it can accommodate quick movement. Three WIMUs are mounted under the chest, right thigh and right shank. Thin-film pressure sensors are connected to the shank WIMU and installed under the right heel and tiptoe to distinguish whether the body is grounded or airborne. Initial and final postures of the trunk, thigh and shank while standing still are obtained using gravitational acceleration and geomagnetism. The posture of the body is determined from the 3D direction of each segment, updated by numerical integration of the angular velocity. Flying motion is detected from the pressure sensors, and the 3D flying trajectory is derived by double integration of the trunk acceleration, using the 3D trunk velocity at takeoff as the initial condition. Standing long jump experiments were performed, and the results show that the joint angles and flying trajectory agree with the actual motion measured by an optical motion capture system.
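The flight-trajectory step described above amounts to a strapdown double integration over the airborne phase. A simplified Python sketch is given below; the orientation matrices are assumed to come from the gyroscope integration, and the handling of the takeoff velocity is more elaborate in the paper than in this illustration.

```python
import numpy as np

def flight_trajectory(acc_body, R_wb, airborne, dt, v0=np.zeros(3)):
    """Double-integrate trunk acceleration over the airborne phase to obtain the
    3D flying trajectory. acc_body: (n,3) accelerometer samples, R_wb: (n,3,3)
    body-to-world rotation matrices from the gyroscope integration, airborne:
    boolean mask from the heel/tiptoe pressure sensors, v0: trunk velocity at takeoff."""
    g = np.array([0.0, 0.0, -9.81])
    acc_world = np.einsum('nij,nj->ni', R_wb, np.asarray(acc_body, float)) + g  # remove gravity
    vel = np.asarray(v0, float).copy()
    pos = np.zeros(3)
    traj = []
    for a, in_air in zip(acc_world, airborne):
        if in_air:                      # integrate only while the feet report no ground contact
            vel = vel + a * dt
            pos = pos + vel * dt
        traj.append(pos.copy())
    return np.array(traj)
```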
NASA Astrophysics Data System (ADS)
Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati
2018-03-01
Capturing and recording human motion is mostly done for sports, health, animation film, criminology, and robotics applications. This study combines background subtraction with a backpropagation neural network, with the aim of producing and finding similarities in movement. The acquisition process used an 8 MP camera recording in MP4 format for 48 seconds at 30 frames per second; extraction of the video produced 1444 frames, which were used for the hand motion identification process. The image processing phases performed are segmentation, feature extraction, and identification. Segmentation used background subtraction; the extracted features are used to distinguish one object from another. Feature extraction was performed using motion-based morphological analysis with the 7 invariant moments, producing four different motion classes: no object, hands down, hands to the side, and hands up. The identification process recognizes the hand movement using seven inputs. Testing and training with a variety of parameters showed that the architecture with one hundred hidden neurons provides the highest accuracy. This architecture is used to propagate the input values through the system implementation into the user interface. Identification of the type of human movement achieved a highest accuracy of 98.5447%. The training process was carried out to obtain the best results.
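A compact Python sketch of this kind of pipeline (background subtraction, the seven Hu invariant moments, and a network with one hundred hidden neurons) is given below. It is an illustrative reconstruction using OpenCV and scikit-learn, not the authors' implementation; the subtractor settings and training data are assumptions.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def hu_features(frame_bgr):
    """Segment the moving hand by background subtraction and describe the
    silhouette with the seven Hu invariant moments (log-scaled)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    mask = subtractor.apply(gray)
    mask = cv2.medianBlur(mask, 5)
    hu = cv2.HuMoments(cv2.moments(mask)).ravel()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress the dynamic range

# Hypothetical training: X is an (n_frames, 7) array of Hu features and y holds the
# four classes "no object", "hands down", "hands to side", "hands up".
# clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=2000).fit(X, y)
# prediction = clf.predict(hu_features(frame).reshape(1, -1))
```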
Tey, Chuang-Kit; An, Jinyoung; Chung, Wan-Young
2017-01-01
Chronic obstructive pulmonary disease is a type of lung disease caused by chronically poor airflow that makes breathing difficult. As a chronic illness, it typically worsens over time; therefore, pulmonary rehabilitation exercises and patient management over extensive periods of time are required. This paper presents a remote, multimodal sensor-based rehabilitation system for patients who have chronic breathing difficulties. The process involves the fusion of sensory data (motion data captured by a stereo camera and a photoplethysmogram signal from a wearable PPG sensor) that serve as the input variables of a detection and evaluation framework. In addition, we incorporated a set of rehabilitation exercises specific to pulmonary patients into the system by fusing sensory data. The system also features medical functions that accommodate the needs of medical professionals and ease the use of the application for patients, including progress tracking, patient performance, exercise assignments, and exercise guidance. Finally, the results indicate accurate determination of pulmonary exercises from the fusion of sensory data. This remote rehabilitation system provides a comfortable and cost-effective option for healthcare rehabilitation.
Foot segmental motion and coupling in stage II and III tibialis posterior tendon dysfunction.
Van de Velde, Maarten; Matricali, Giovanni Arnoldo; Wuite, Sander; Roels, Charlotte; Staes, Filip; Deschamps, Kevin
2017-06-01
Classification systems developed in the field of posterior tibialis tendon dysfunction fail to include dynamic measurements. Since this may negatively affect the selection of the most appropriate treatment modality, studies on foot kinematics are highly recommended. Previous research characterised the foot kinematics in patients with posterior tibialis tendon dysfunction. However, none of the studies analysed foot segmental motion synchrony during the stance phase, nor compared the kinematic behaviour of the foot in the presence of different posterior tibialis tendon dysfunction stages. Therefore, we aimed at comparing foot segmental motion and coupling in patients with posterior tibialis tendon dysfunction stage 2 and 3 to those of asymptomatic subjects. Foot segmental motion of 11 patients suffering from posterior tibialis tendon dysfunction stage 2, 4 patients with posterior tibialis tendon dysfunction stage 3 and 15 asymptomatic subjects was objectively quantified with the Rizzoli foot model using an instrumented walkway and a 3D passive motion capture system. Dependent variables were the range of motion occurring at the different inter-segment angles during subphases of the stance and swing phase, as well as the cross-correlation coefficient between a number of segments. Significant differences in range of motion were predominantly found during the forefoot push off phase and swing phase. In general, both patient cohorts demonstrated a reduced range of motion compared to the control group. This hypomobility occurred predominantly in the rearfoot and midfoot (p<0.01). Significant differences between the two posterior tibialis tendon dysfunction patient cohorts were not revealed. Cross-correlation coefficients highlighted a loss of joint coupling between rearfoot and tibia as well as between rearfoot and forefoot in both posterior tibialis tendon dysfunction groups. The current evidence reveals considerable mechanical alterations in the foot which should be considered in the decision-making process, since it may help explain the success or failure of certain conservative and surgical interventions. Copyright © 2017 Elsevier Ltd. All rights reserved.
In vivo validation of patellofemoral kinematics during overground gait and stair ascent.
Pitcairn, Samuel; Lesniak, Bryson; Anderst, William
2018-06-18
The patellofemoral (PF) joint is a common site for non-specific anterior knee pain. The pathophysiology of patellofemoral pain may be related to abnormal motion of the patella relative to the femur, leading to increased stress at the patellofemoral joint. Patellofemoral motion cannot be accurately measured using conventional motion capture. The aim of this study was to determine the accuracy of a biplane radiography system for measuring in vivo PF motion during walking and stair ascent. Four subjects had three 1.0 mm diameter tantalum beads implanted into the patella. Participants performed three trials each of over ground walking and stair ascent while biplane radiographs were collected at 100 Hz. Patella motion was tracked using radiostereophotogrammetric analysis (RSA) as a "gold standard", and compared to a volumetric CT model-based tracking algorithm that matched digitally reconstructed radiographs to the original biplane radiographs. The average RMS difference between the RSA and model-based tracking was 0.41 mm and 1.97° when there was no obstruction from the contralateral leg. These differences increased by 34% and 40%, respectively, when the patella was at least partially obstructed by the contralateral leg. The average RMS difference in patellofemoral joint space between tracking methods was 0.9 mm or less. Previous validations of biplane radiographic systems have estimated tracking accuracy by moving cadaveric knees through simulated motions. These validations were unable to replicate in vivo kinematics, including patella motion due to muscle activation, and failed to assess the imaging and tracking challenges related to contralateral limb obstruction. By replicating the muscle contraction, movement velocity, joint range of motion, and obstruction of the patella by the contralateral limb, the present study provides a realistic estimate of patellofemoral tracking accuracy for future in vivo studies. Copyright © 2018 Elsevier B.V. All rights reserved.
Kerzel, Dirk
2003-05-01
Observers' judgments of the final position of a moving target are typically shifted in the direction of implied motion ("representational momentum"). The role of attention is unclear: visual attention may be necessary to maintain or halt target displacement. When attention was captured by irrelevant distractors presented during the retention interval, forward displacement after implied target motion disappeared, suggesting that attention may be necessary to maintain mental extrapolation of target motion. In a further corroborative experiment, the deployment of attention was measured after a sequence of implied motion, and faster responses were observed to stimuli appearing in the direction of motion. Thus, attention may guide the mental extrapolation of target motion. Additionally, eye movements were measured during stimulus presentation and retention interval. The results showed that forward displacement with implied motion does not depend on eye movements. Differences between implied and smooth motion are discussed with respect to recent neurophysiological findings.
A device for testing the dynamic performance of in situ force plates.
East, Rebecca H; Noble, Jonathan J; Arscott, Richard A; Shortland, Adam P
2017-09-01
Force plates are often incorporated into motion capture systems for the calculation of joint kinetic variables and other data. This project aimed to create a system that could be used to check the dynamic performance of force plates in situ. The proposed solution involved the design and development of an eccentrically loaded wheel mounted on a weighted frame. The frame was designed to hold the wheel in two orthogonal positions. The wheel was placed on the force plate and spun. A VICON™ motion analysis system captured the positional data of markers placed around the rim of the wheel, which was used to create a simulated force profile; this profile depended on spin speed. The root mean square error between the simulated force profile and the force plate measurement was calculated for each of nine trials. The difference between the force profiles in the x- and y-directions was approximately 2%, and the difference in the z-direction was under 0.5%. The eccentrically loaded wheel produced a predictable centripetal force in the plane of the wheel, whose direction varied as the wheel spun and whose magnitude depended on the spin speed. There are three important advantages to the eccentrically loaded wheel: (1) it does not rely on force measurements made by other devices, (2) the tests require only 15 min per force plate to complete, and (3) the forces exerted on the plate are similar to those of paediatric gait.
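The simulated force profile described above follows directly from the centripetal force of the eccentric mass, F = m ω² r. A short Python sketch is given below; the variable names and the way the wheel angle is obtained from the rim markers are assumptions for illustration, not the authors' exact reconstruction.

```python
import numpy as np

def simulated_force_profile(mass_ecc_kg, marker_angle_rad, radius_m, dt):
    """Centripetal force generated by an eccentric mass spinning at radius r:
    F = m * omega^2 * r, directed from the hub towards the mass.
    marker_angle_rad is the wheel angle tracked from the rim markers."""
    omega = np.gradient(np.unwrap(marker_angle_rad), dt)   # angular velocity from tracked angle
    F_mag = mass_ecc_kg * omega ** 2 * radius_m            # magnitude grows with spin speed squared
    Fx = F_mag * np.cos(marker_angle_rad)                  # rotating direction in the wheel plane
    Fy = F_mag * np.sin(marker_angle_rad)
    return Fx, Fy, F_mag
```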