Survey of Motion Tracking Methods Based on Inertial Sensors: A Focus on Upper Limb Human Motion
Filippeschi, Alessandro; Schmitz, Norbert; Miezal, Markus; Bleser, Gabriele; Ruffaldi, Emanuele; Stricker, Didier
2017-01-01
Motion tracking based on commercial inertial measurement units (IMUs) has been widely studied in recent years as a cost-effective enabling technology for applications in which motion tracking based on optical technologies is unsuitable. This measurement method has a high impact on human performance assessment and human-robot interaction. IMU motion tracking systems are indeed self-contained and wearable, allowing for long-lasting tracking of the user's motion in situated environments. After a survey of IMU-based human tracking, five techniques for motion reconstruction were selected and compared on the task of reconstructing a human arm motion. IMU-based estimation was matched against the Vicon marker-based motion tracking system, considered as ground truth. Results show that all but one of the selected models perform similarly (about 35 mm average position estimation error). PMID:28587178
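The reported ~35 mm figure corresponds to a simple evaluation metric: the mean Euclidean distance between IMU-estimated and marker-based joint trajectories. A minimal sketch of that computation, assuming synchronized trajectories in millimeters (the function name and toy data are illustrative, not taken from the paper):

```python
import numpy as np

def mean_position_error(estimated, ground_truth):
    """Mean Euclidean distance between estimated and ground-truth 3D
    trajectories of shape (frames, 3), in the trajectories' units."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.linalg.norm(estimated - ground_truth, axis=1).mean())

# Toy check: a constant 30 mm offset along one axis yields a 30 mm error.
est = np.zeros((100, 3))
ref = np.zeros((100, 3))
ref[:, 0] = 30.0
err = mean_position_error(est, ref)  # 30.0
```

In practice the two trajectory streams would first need temporal alignment and a common reference frame before this distance is meaningful.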
Motion Pattern Encapsulation for Data-Driven Constraint-Based Motion Editing
NASA Astrophysics Data System (ADS)
Carvalho, Schubert R.; Boulic, Ronan; Thalmann, Daniel
The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment, and films to sports and medicine. However, captured motions normally serve specific needs. As an effort toward adapting and reusing captured human motions in new tasks and environments and improving the animator's work, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage of providing faster deformations and more natural-looking motion results compared to goal-directed constraint-based methods found in the literature.
Human motion analysis with detection of subpart deformations
NASA Astrophysics Data System (ADS)
Wang, Juhui; Lorette, Guy; Bouthemy, Patrick
1992-06-01
One essential constraint used in 3-D motion estimation from optical projections is the rigidity assumption. Because of muscle deformations in human motion, this rigidity requirement is often violated for some regions of the human body, and global methods usually fail to yield stable solutions. This paper presents a model-based approach to combating the effect of muscle deformations in human motion analysis. The approach is based on two main stages. In the first stage, the human body is partitioned into different areas, each consistent with a general motion model (not necessarily corresponding to a physically existing motion pattern). In the second stage, regions hypothesized not to be induced by a specific human motion pattern are eliminated; each hypothesis is generated by making use of specific knowledge about human motion. A global method is then used to estimate the 3-D motion parameters on the basis of the valid segments. Experiments based on a cycling motion sequence are presented.
Estimation of bio-signal based on human motion for integrated visualization of daily-life.
Umetani, Tomohiro; Matsukawa, Tsuyoshi; Yokoyama, Kiyoko
2007-01-01
This paper describes a method for the estimation of bio-signals based on human motion in daily life for an integrated visualization system. Recent advances in computing and measurement technology have facilitated the integrated visualization of bio-signals and human motion data. It is desirable to have a method to understand the activities of muscles based on human motion data and to evaluate the change in physiological parameters according to human motion for visualization applications. We assume that human motion is generated by the activity of muscles, which is reflected in bio-signals such as electromyograms. This paper introduces a method for the estimation of bio-signals based on neural networks; the same procedure can be used to estimate other physiological parameters. The experimental results show the feasibility of the proposed method.
Modal-Power-Based Haptic Motion Recognition
NASA Astrophysics Data System (ADS)
Kasahara, Yusuke; Shimono, Tomoyuki; Kuwahara, Hiroaki; Sato, Masataka; Ohnishi, Kouhei
Motion recognition based on sensory information is important for providing assistance to humans using robots. Several studies have been carried out on motion recognition based on image information. However, contact between a human and an object cannot be evaluated precisely by image-based recognition, because force information is very important for describing contact motion. In this paper, modal-power-based haptic motion recognition is proposed; modal power is considered to reveal information on both position and force, and is regarded as one of the defining features of human motion. A motion recognition algorithm based on linear discriminant analysis is proposed to distinguish between similar motions. Haptic information is extracted using a bilateral master-slave system. Then, the observed motion is decomposed in terms of primitive functions in a modal space. The experimental results show the effectiveness of the proposed method.
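The classification step above, linear discriminant analysis over motion features, can be sketched with a two-class Fisher discriminant. The synthetic 2-D feature clouds below are placeholders standing in for modal-power features of two similar motions, not the paper's haptic data:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher LDA: projection vector w maximizing between-class
    over within-class scatter, plus a midpoint decision threshold."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, mu1 - mu0)       # direction of discrimination
    thresh = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
    return w, thresh

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.5, size=(200, 2))     # features of motion class A
X1 = rng.normal(2.0, 0.5, size=(200, 2))     # features of motion class B
w, t = fisher_lda(X0, X1)
pred1 = (X1 @ w) > t                         # class-B samples above threshold
```

With well-separated clusters, almost all class-B samples project above the midpoint threshold; real modal-power features would be higher-dimensional but the projection step is the same.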
MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.
Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn
2013-12-01
We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in the presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users who are researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.
The 3D Human Motion Control Through Refined Video Gesture Annotation
NASA Astrophysics Data System (ADS)
Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.
In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as remote controllers with motion sensing technology on the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from cumbersome game controllers. Moreover, video-based HCI is crucial for communication between humans and computers since it is intuitive, accessible, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the level of accuracy is heavily dependent on each subject's characteristics and environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and analyzing motions for specific performances (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a human sub-body part and each row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, the 3D human motion-capture data matrix does not contain pixel values, but is closer to the human level of semantics.
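The column-selection idea described above is directly expressible as array slicing. A sketch with a hypothetical mocap matrix (the frame count, joint count, and channel layout below are illustrative assumptions, not the VICON format specification):

```python
import numpy as np

# Hypothetical mocap matrix: rows are time frames, columns are channels,
# assumed grouped as 3 channels (x, y, z) per joint.
n_frames, n_joints = 120, 20
motion = np.random.default_rng(1).normal(size=(n_frames, 3 * n_joints))

def joint_channels(data, joint_index):
    """Select the (x, y, z) columns belonging to one sub-body part."""
    start = 3 * joint_index
    return data[:, start:start + 3]

# Extract one sub-body part's trajectory by selecting its columns.
right_wrist = joint_channels(motion, 5)   # shape (120, 3)
```

The same slicing generalizes to any per-joint channel count (e.g. 6 for position plus rotation) by changing the stride.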
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
Dynamical simulation priors for human motion tracking.
Vondrak, Marek; Sigal, Leonid; Jenkins, Odest Chadwicke
2013-01-01
We propose a simulation-based dynamical motion prior for tracking human motion from video in the presence of physical ground-person interactions. Most tracking approaches to date have focused on efficient inference algorithms and/or learning of prior kinematic motion models; however, few can explicitly account for the physical plausibility of recovered motion. Here, we aim to recover physically plausible motion of a single articulated human subject. Toward this end, we propose a full-body 3D physical simulation-based prior that explicitly incorporates a model of human dynamics into the Bayesian filtering framework. We consider the motion of the subject to be generated by a feedback “control loop” in which Newtonian physics approximates the rigid-body motion dynamics of the human and the environment through the application and integration of interaction forces, motor forces, and gravity. Interaction forces prevent physically impossible hypotheses, enable more appropriate reactions to the environment (e.g., ground contacts), and are produced from detected human-environment collisions. Motor forces actuate the body, ensure that proposed pose transitions are physically feasible, and are generated using a motion controller. For efficient inference in the resulting high-dimensional state space, we utilize an exemplar-based control strategy that reduces the effective search space of motor forces. As a result, we are able to recover physically plausible motion of human subjects from monocular and multiview video. We show, both quantitatively and qualitatively, that our approach performs favorably with respect to Bayesian filtering methods with standard motion priors.
NASA Astrophysics Data System (ADS)
Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.
2012-02-01
Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton automatically. The algorithm is based on the analysis of the human shape: we decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a stereo camera pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
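The curvature analysis at the heart of this decomposition can be sketched with finite differences on a sampled contour (a simplification of the paper's B-spline parameterization, shown here only to illustrate the quantity being computed):

```python
import numpy as np

def contour_curvature(x, y):
    """Signed curvature of a sampled 2D contour:
    k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2),
    approximated with finite differences over the sample index."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

# Sanity check: a circle of radius 5 has constant curvature 1/5 = 0.2.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
k = contour_curvature(5.0 * np.cos(t), 5.0 * np.sin(t))
```

On a body silhouette, curvature extrema along the contour mark candidate boundaries between main body parts (neck, armpits, crotch), which is what the initialization step searches for.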
Example-based human motion denoising.
Lou, Hui; Chai, Jinxiang
2010-01-01
With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in the missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with state-of-the-art motion capture data processing software such as Vicon Blade.
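To illustrate the outlier-removal half of the problem, here is a much simpler robust filter than the learned filter bases described above: a sliding-median detector on a single joint channel (purely an illustrative baseline, not the paper's method):

```python
import numpy as np

def median_denoise(trajectory, win=5, thresh=3.0):
    """Replace samples deviating from a sliding median by more than
    thresh * MAD (median absolute deviation) with the median itself."""
    x = np.asarray(trajectory, dtype=float)
    half = win // 2
    padded = np.pad(x, half, mode="edge")
    med = np.array([np.median(padded[i:i + win]) for i in range(len(x))])
    mad = np.median(np.abs(x - med)) + 1e-9   # robust noise scale
    out = x.copy()
    mask = np.abs(x - med) > thresh * mad     # flag outlier samples
    out[mask] = med[mask]
    return out

noisy = np.sin(np.linspace(0, 3, 200))        # one smooth joint channel
noisy[50] += 5.0                              # a single mocap spike
clean = median_denoise(noisy)
```

The paper's formulation goes further by jointly optimizing over all channels with learned spatial-temporal bases, which also smooths correlated noise that a per-channel median filter cannot see.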
The relationship between human field motion and preferred visible wavelengths.
Benedict, S C; Burge, J M
1990-01-01
The purpose of this study was to investigate the relationship between human field motion and preferred visible wavelengths. The study was based on the principle of resonancy from Rogers' science of unitary human beings; 201 subjects were tested using a modified version of Ference's human field motion test (HFMT). Two matrices of color were projected to provide an environment for the measurement of preferred visible wavelengths. There was no statistically significant relationship (r = 0.0387, p = 0.293) between scores on the human field motion test and preferred visible wavelengths as measured in nanometers. The Rogerian concept of accelerated human field rhythms being coordinate with higher frequency environment patterns was not supported in this study. Questions concerning the validity of the HFMT were expressed and were based upon the ambiguity of the terminology of the instrument and the lack of understanding of the concepts used to describe human field motion. Recommendations include the use of other methods to study Rogers' framework, and the development of other instrumentation to measure human field motion.
Yu, Zhaoyuan; Yuan, Linwang; Luo, Wen; Feng, Linyao; Lv, Guonian
2015-01-01
Passive infrared (PIR) motion detectors, which can support long-term continuous observation, are widely used for human motion analysis. Extracting all possible trajectories from a PIR sensor network is important. Because PIR sensors do not log location or individual information, none of the existing methods can generate all possible human motion trajectories that satisfy various spatio-temporal constraints from the sensor activation log data. In this paper, a geometric algebra (GA)-based approach is developed to generate all possible human trajectories from PIR sensor network data. Firstly, the geographical network, the sensor activation response sequences, and the human motion are represented as algebraic elements using GA. The human motion status of each sensor activation is labeled using GA-based trajectory tracking. Then, a matrix multiplication approach is developed to dynamically generate the human trajectories according to the sensor activation log and the spatio-temporal constraints. The method is tested with the MERL motion database. Experiments show that our method can flexibly extract the major statistical patterns of the human motion. Compared with direct statistical analysis and the tracklet graph method, our method can effectively extract all possible trajectories of the human motion, which makes it more accurate. Our method is also likely to provide a new way to filter other passive sensor log data in sensor networks. PMID:26729123
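The matrix multiplication idea can be illustrated with a plain adjacency matrix rather than GA elements: powers of the adjacency matrix enumerate how many walks of a given length connect two sensor regions, which is the combinatorial core of generating all candidate trajectories (the 4-node network below is a made-up example):

```python
import numpy as np

# Hypothetical 4-node sensor network laid out in a line: a person can
# move between adjacent sensor regions in one time step.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

def n_step_paths(adj, steps):
    """Entry (i, j) of adj**steps counts distinct walks of the given
    length from sensor i to sensor j."""
    return np.linalg.matrix_power(adj, steps)

P3 = n_step_paths(A, 3)
# P3[0, 3] counts 3-step trajectories from sensor 0 to sensor 3 (= 1:
# the walk 0 -> 1 -> 2 -> 3), while P3[0, 1] = 2 (0-1-0-1 and 0-1-2-1).
```

The GA formulation in the paper enriches each matrix entry so that the actual trajectory labels, not just their counts, survive the multiplication.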
Dynamic motion planning of 3D human locomotion using gradient-based optimization.
Kim, Hyung Joo; Wang, Qian; Rahmatalla, Salam; Swan, Colby C; Arora, Jasbir S; Abdel-Malek, Karim; Assouline, Jose G
2008-06-01
Since humans can walk with an infinite variety of postures and limb movements, there is no unique solution to the modeling problem to predict human gait motions. Accordingly, we test herein the hypothesis that the redundancy of human walking mechanisms makes solving for human joint profiles and force time histories an indeterminate problem best solved by inverse dynamics and optimization methods. A new optimization-based human-modeling framework is thus described for predicting three-dimensional human gait motions on level and inclined planes. The basic unknowns in the framework are the joint motion time histories of a 25-degree-of-freedom human model and its six global degrees of freedom. The joint motion histories are calculated by minimizing an objective function such as deviation of the trunk from upright posture that relates to the human model's performance. A variety of important constraints are imposed on the optimization problem, including (1) satisfaction of dynamic equilibrium equations by requiring the model's zero moment point (ZMP) to lie within the instantaneous geometrical base of support, (2) foot collision avoidance, (3) limits on ground-foot friction, and (4) vanishing yawing moment. Analytical forms of objective and constraint functions are presented and discussed for the proposed human-modeling framework in which the resulting optimization problems are solved using gradient-based mathematical programming techniques. When the framework is applied to the modeling of bipedal locomotion on level and inclined planes, acyclic human walking motions that are smooth and realistic as opposed to less natural robotic motions are obtained. The aspects of the modeling framework requiring further investigation and refinement, as well as potential applications of the framework in biomechanics, are discussed.
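Constraint (1) above, keeping the zero moment point inside the base of support, reduces to a point-in-convex-polygon test at each time step. A minimal sketch (the foot geometry is a made-up example):

```python
def zmp_in_support(zmp, polygon):
    """Check that a 2D zero-moment point lies inside a convex support
    polygon given as counter-clockwise (x, y) vertices: the point must
    lie on the left of (or on) every directed edge."""
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # 2D cross product of edge vector with vertex-to-point vector
        cross = (x2 - x1) * (zmp[1] - y1) - (y2 - y1) * (zmp[0] - x1)
        if cross < 0:
            return False
    return True

# Hypothetical rectangular single-foot support region (meters).
foot = [(0.0, 0.0), (0.25, 0.0), (0.25, 0.10), (0.0, 0.10)]
ok = zmp_in_support((0.12, 0.05), foot)      # inside  -> True
bad = zmp_in_support((0.40, 0.05), foot)     # outside -> False
```

In the optimization framework this test becomes a set of inequality constraints (one per edge) that the gradient-based solver enforces at each frame of the gait.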
Human silhouette matching based on moment invariants
NASA Astrophysics Data System (ADS)
Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi
2005-07-01
This paper applies silhouette matching based on moment invariants to infer human motion parameters from video sequences captured by a single monocular uncalibrated camera. Currently, there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the input video contents. A standard 3D motion database is built up using marker techniques in advance. Given a video sequence, human silhouettes are extracted along with the viewpoint information of the camera, which is utilized to project the standard 3D motion database onto a 2D one. The video recovery problem is therefore formulated as a matching issue: finding the body pose in the standard 2D library most similar to the one in the video image. The framework is applied to the trampoline sport, where we obtain complicated human motion parameters from single-camera video sequences, and experiments demonstrate that this approach is feasible for monocular video-based 3D motion reconstruction.
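The appeal of moment invariants for silhouette matching is that they are insensitive to where and how large the silhouette appears in the frame. A sketch of the first Hu invariant (normalized second-order central moments) on binary masks, standing in for the full invariant feature vector the paper would use:

```python
import numpy as np

def hu_first_invariant(img):
    """First Hu moment invariant (eta20 + eta02) of a binary silhouette,
    invariant to translation and (approximately, on rasters) to scale."""
    ys, xs = np.nonzero(img)
    m00 = float(len(xs))                     # silhouette area
    cx, cy = xs.mean(), ys.mean()            # centroid
    mu20 = np.sum((xs - cx) ** 2)            # central moments
    mu02 = np.sum((ys - cy) ** 2)
    return (mu20 + mu02) / m00 ** 2          # scale-normalized

# Same shape at two scales yields nearly identical invariants.
a = np.zeros((40, 40)); a[10:20, 10:20] = 1
b = np.zeros((80, 80)); b[20:40, 20:40] = 1
ha, hb = hu_first_invariant(a), hu_first_invariant(b)
```

Matching a video silhouette against the projected 2D pose library then reduces to nearest-neighbor search in this invariant feature space.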
WiFi-Based Real-Time Calibration-Free Passive Human Motion Detection.
Gong, Liangyi; Yang, Wu; Man, Dapeng; Dong, Guozhong; Yu, Miao; Lv, Jiguang
2015-12-21
With the rapid development of WLAN technology, wireless device-free passive human detection has become an emerging technique with great potential for worldwide, ubiquitous smart applications. Recently, indoor fine-grained device-free passive human motion detection based on PHY-layer information has developed rapidly. Previous wireless device-free passive human detection systems either rely on deploying specialized systems with dense transmitter-receiver links or on an elaborate off-line training process, which blocks rapid deployment and weakens system robustness. In this paper, we investigate a novel fine-grained, real-time, calibration-free device-free passive human motion detection scheme based on physical layer information, which is independent of indoor scenarios and needs no prior calibration or normal profile. We investigate the sensitivities of amplitude and phase to human motion, and discover that the phase feature is more sensitive to human motion, especially slow human motion. Aiming at lightweight and robust device-free passive human motion detection, we develop two novel and practical schemes: the short-term averaged variance ratio (SVR) and the long-term averaged variance ratio (LVR). We realize the system design with commercial WiFi devices and evaluate it in typical multipath-rich indoor scenarios. As demonstrated in the experiments, our approach can achieve a high detection rate and a low false positive rate. PMID:26703612
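The variance-ratio idea behind the SVR/LVR schemes can be sketched on a synthetic signal stream: a short-window variance divided by a long-window variance spikes when a disturbance such as human motion begins (the window lengths and noise levels below are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def variance_ratio(signal, short_win, long_win):
    """Short-term over long-term variance at the newest sample of a
    stream; a ratio well above 1 suggests a sudden disturbance."""
    short = np.var(signal[-short_win:])
    long_ = np.var(signal[-long_win:])
    return short / long_ if long_ > 0 else 0.0

rng = np.random.default_rng(2)
quiet = rng.normal(0.0, 0.01, 300)                          # static scene
burst = np.concatenate([quiet, rng.normal(0.0, 0.5, 30)])   # motion onset
r_quiet = variance_ratio(quiet, 30, 300)    # ~1: nothing happening
r_motion = variance_ratio(burst, 30, 330)   # >> 1: motion detected
```

Because both windows see the same hardware noise floor, the ratio is self-normalizing, which is what lets the scheme run without per-room calibration or a pre-recorded "normal" profile.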
MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.
Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik
2016-01-01
Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
Vestibular models for design and evaluation of flight simulator motion
NASA Technical Reports Server (NTRS)
Bussolari, S. R.; Sullivan, R. B.; Young, L. R.
1986-01-01
The use of spatial orientation models in the design and evaluation of control systems for motion-base flight simulators is investigated experimentally. The development of a high-fidelity motion drive controller using an optimal control approach based on human vestibular models is described. The formulation and implementation of the optimal washout system are discussed. The effectiveness of the motion washout system was evaluated by studying the response of six motion washout systems to the NASA/AMES Vertical Motion Simulator for a single dash-quick-stop maneuver. The effects of the motion washout system on pilot performance and simulator acceptability are examined. The data reveal that human spatial orientation models are useful for the design and evaluation of flight simulator motion fidelity.
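A core element of any motion washout controller like the one evaluated above is a high-pass filter that transmits acceleration onsets to the platform while bleeding off sustained cues so the simulator stays within its travel limits. A first-order discrete sketch (a generic washout stage, not the optimal vestibular-model controller of the paper):

```python
def highpass_washout(accel, dt, tau):
    """First-order high-pass washout filter: passes transient
    acceleration cues, washes out sustained ones with time constant tau."""
    a = tau / (tau + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in accel:
        y = a * (prev_y + x - prev_x)   # standard discrete high-pass
        out.append(y)
        prev_x, prev_y = x, y
    return out

# A sustained 1 m/s^2 cue: the onset is transmitted almost fully,
# then the commanded platform acceleration decays toward zero.
step = [0.0] * 5 + [1.0] * 200
resp = highpass_washout(step, dt=0.02, tau=0.5)
```

Vestibular-model-based designs such as the one described above choose the filter structure and parameters so that the washed-out motion minimizes the perceived difference from the aircraft motion, rather than fixing tau by hand.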
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi
2016-01-01
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003
Gesture Recognition Based on the Probability Distribution of Arm Trajectories
NASA Astrophysics Data System (ADS)
Wan, Khairunizam; Sawada, Hideyuki
The use of human motions for interaction between humans and computers is becoming an attractive alternative to verbal media, especially through the visual interpretation of human body motion. In particular, hand gestures serve as a non-verbal medium for humans to communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories given by a signer, motion features are selected and classified using a fuzzy technique. Experimental results show that features extracted from arm trajectories work effectively for the recognition of dynamic human gestures and give good performance in classifying various gesture patterns.
Vakanski, A; Ferguson, JM; Lee, S
2016-01-01
Objective The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient’s exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient’s physician with recommendations for improvement. Methods The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to a prescribed exercise by a physiotherapist to a patient, and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject’s performance relative to the reference dataset of motions. A publically available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. 
The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons. PMID:28111643
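The evaluation metric described above — the mean log-likelihood of an observed sequence under a mixture of Gaussian densities — can be sketched in a few lines. The two-component mixture below is a hypothetical stand-in for the learned reference model, and real motion data would be multi-dimensional:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mean_log_likelihood(sequence, mixture):
    """Average log-likelihood of a 1-D motion sequence under a Gaussian mixture.
    mixture: list of (weight, mean, std) tuples."""
    total = 0.0
    for x in sequence:
        p = sum(w * gaussian_pdf(x, mu, s) for w, mu, s in mixture)
        total += math.log(p)
    return total / len(sequence)

# Hypothetical reference model with two components; a subject's trial scores
# higher (closer to zero) the more consistent it is with the reference.
model = [(0.6, 0.0, 1.0), (0.4, 3.0, 0.5)]
score = mean_log_likelihood([0.1, -0.2, 2.9, 3.1], model)
```

A higher mean log-likelihood indicates a performance closer to the reference motion set, which is how the metric is used for consistency evaluation in the abstract.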
Quantitative assessment of human motion using video motion analysis
NASA Technical Reports Server (NTRS)
Probe, John D.
1993-01-01
In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.
Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals
Joo, Sung Jun; Czuba, Thaddeus B.; Cormack, Lawrence K.; Huk, Alexander C.
2016-01-01
Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no “cross-cue” adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. SIGNIFICANCE STATEMENT Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how—or indeed if—these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms. PMID:27798134
Human Perception of Ambiguous Inertial Motion Cues
NASA Technical Reports Server (NTRS)
Zhang, Guan-Lu
2010-01-01
Human daily activities on Earth involve motions that elicit both tilt and translation components of the head (i.e. gazing and locomotion). With otolith cues alone, tilt and translation can be ambiguous since both motions can potentially displace the otolithic membrane by the same magnitude and direction. Transitions between gravity environments (i.e. Earth, microgravity and lunar) have been shown to alter the functions of the vestibular system and exacerbate the ambiguity between tilt and translational motion cues. Symptoms of motion sickness and spatial disorientation can impair human performance during critical mission phases. Specifically, Space Shuttle landing records show that particular cases of tilt-translation illusions have impaired the performance of seasoned commanders. This sensorimotor condition is one of many operational risks that may have dire implications on future human space exploration missions. The neural strategy with which the human central nervous system distinguishes ambiguous inertial motion cues remains the subject of intense research. A prevailing theory in the neuroscience field proposes that the human brain is able to formulate a neural internal model of ambiguous motion cues such that tilt and translation components can be perceptually decomposed in order to elicit the appropriate bodily response. The present work uses this theory, known as the GIF resolution hypothesis, as the framework for the experimental hypothesis. Specifically, two novel motion paradigms are employed to validate the neural capacity of ambiguous inertial motion decomposition in ground-based human subjects. The experimental setup involves the Tilt-Translation Sled at the Neuroscience Laboratory of NASA JSC. This two degree-of-freedom motion system is able to tilt subjects in the pitch plane and translate the subject along the fore-aft axis. Perception data will be gathered through subject verbal reports. 
Preliminary analysis of perceptual data does not indicate that the GIF resolution hypothesis is completely valid for non-rotational periodic motions. Additionally, human perception of translation is impaired without visual or spatial reference. The performance of ground-based subjects in estimating tilt after brief training is comparable with that of crewmembers without training.
Triboelectrification based motion sensor for human-machine interfacing.
Yang, Weiqing; Chen, Jun; Wen, Xiaonan; Jing, Qingshen; Yang, Jin; Su, Yuanjie; Zhu, Guang; Wu, Wenzuo; Wang, Zhong Lin
2014-05-28
We present triboelectrification-based, flexible, reusable, and skin-friendly dry biopotential electrode arrays as motion sensors for tracking muscle motion and human-machine interfacing (HMI). The independently addressable, self-powered sensor arrays have been utilized to record the electric output signals as a mapping figure to accurately identify the degrees of freedom as well as the directions and magnitudes of muscle motions. A fast Fourier transform (FFT) technique was employed to analyse the frequency spectra of the obtained electric signals and thus to determine the motion angular velocities. Moreover, the motion sensor arrays produced a short-circuit current density up to 10.71 mA/m², and an open-circuit voltage as high as 42.6 V with a remarkable signal-to-noise ratio up to 1000, which enables the devices to serve as sensors that accurately record and transform the motions of human joints, such as the elbow, knee, heel, and even fingers, and thus renders them a superior and unique invention in the field of HMI.
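The frequency-analysis step this abstract mentions — using an FFT to determine motion angular velocities from the sensor signal — amounts to locating the dominant spectral peak. A minimal sketch using a naive DFT (the test signal and sampling rate are made up):

```python
import math

def dominant_frequency(signal, sample_rate):
    """Return the dominant frequency (Hz) of a real signal via a naive DFT."""
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):          # skip the DC component
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

# A joint oscillating at 2 Hz, sampled at 64 Hz for one second
sig = [math.sin(2 * math.pi * 2 * t / 64) for t in range(64)]
freq = dominant_frequency(sig, 64)      # dominant peak at 2 Hz
```

In practice one would use a real FFT for speed; the peak frequency then maps to the joint's angular velocity.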
Human Activity Modeling and Simulation with High Biofidelity
2013-01-01
Human activity Modeling and Simulation (M&S) plays an important role in simulation-based training and Virtual Reality (VR). However, human activity M...kinematics and motion mapping/creation; and (e) creation and replication of human activity in 3-D space with true shape and motion. A brief review is
NASA Technical Reports Server (NTRS)
Lee, A. T.; Bussolari, S. R.
1986-01-01
The effect of motion platform systems on pilot behavior is considered with emphasis placed on civil aviation applications. A dynamic model for human spatial orientation based on the physiological structure and function of the human vestibular system is presented. Motion platform alternatives were evaluated on the basis of the following motion platform conditions: motion with six degrees-of-freedom required for Phase II simulators and two limited motion conditions. Consideration was given to engine flameout, airwork, and approach and landing scenarios.
Motion based parsing for video from observational psychology
NASA Astrophysics Data System (ADS)
Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray
2006-01-01
In Psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content based video analysis that allow automated parsing of video from one such study involving Dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.
Real-time stylistic prediction for whole-body human motions.
Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun
2012-01-01
The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors.
Discomfort Evaluation of Truck Ingress/Egress Motions Based on Biomechanical Analysis
Choi, Nam-Chul; Lee, Sang Hun
2015-01-01
This paper presents a quantitative discomfort evaluation method based on biomechanical analysis results for human body movement, as well as its application to an assessment of the discomfort for truck ingress and egress. In this study, the motions of a human subject entering and exiting truck cabins with different types, numbers, and heights of footsteps were first measured using an optical motion capture system and load sensors. Next, the maximum voluntary contraction (MVC) ratios of the muscles were calculated through a biomechanical analysis of the musculoskeletal human model for the captured motion. Finally, the objective discomfort was evaluated using the proposed discomfort model based on the MVC ratios. To validate this new discomfort assessment method, human subject experiments were performed to investigate the subjective discomfort levels through a questionnaire for comparison with the objective discomfort levels. The validation results showed that the correlation between the objective and subjective discomforts was significant and could be described by a linear regression model. PMID:26067194
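The validation described here — a linear regression relating the objective (MVC-ratio-based) discomfort scores to subjective questionnaire ratings — reduces to an ordinary least-squares fit plus a correlation coefficient. A minimal sketch with made-up scores:

```python
def linear_fit(x, y):
    """Least-squares line y ≈ a*x + b and the Pearson correlation r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx
    b = my - a * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

# Hypothetical objective discomfort scores vs. subjective ratings
slope, intercept, r = linear_fit([0.2, 0.4, 0.7], [1.1, 2.0, 3.2])
```

A significant r, as the abstract reports, is what justifies describing the objective-subjective relationship with a linear regression model.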
Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator
NASA Astrophysics Data System (ADS)
Rehmatullah, Faizan
In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at the Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to formulate a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform and, by using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of the simulator workspace for every maneuver while reproducing the perceived vehicle motion. Because model predictive control can handle nonlinear constraints, it also allows us to incorporate workspace limitations directly.
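A toy illustration of the receding-horizon idea behind such motion drive algorithms: at each step, pick the platform acceleration closest to the desired specific force whose predicted trajectory stays inside the workspace. This brute-force grid search over a 1-D double integrator is only a stand-in for the thesis's constrained optimization; the limits, horizon, and candidate grid are all assumptions:

```python
def mpc_washout(x, v, a_des, horizon=5, dt=0.1, x_lim=0.5, grid=21):
    """One receding-horizon step for a 1-D platform (position x, velocity v).
    Choose the acceleration closest to the desired one (a_des) whose
    constant-input prediction keeps the platform inside |x| <= x_lim."""
    best_a, best_cost = 0.0, float("inf")
    for i in range(grid):
        a = -2.0 + 4.0 * i / (grid - 1)      # candidate accelerations in [-2, 2]
        xp, vp, feasible = x, v, True
        for _ in range(horizon):             # forward-simulate the double integrator
            vp += a * dt
            xp += vp * dt
            if abs(xp) > x_lim:
                feasible = False
                break
        if feasible:
            cost = (a - a_des) ** 2          # tracking error on the cue
            if cost < best_cost:
                best_a, best_cost = a, cost
    return best_a

a_cmd = mpc_washout(0.0, 0.0, 0.2)           # centered platform: cue is achievable
```

Near the workspace boundary the same call returns an attenuated acceleration, which is exactly the trade-off the motion drive algorithm manages.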
Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Fan, Rukun; Geng, Weidong
We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music to motion mapping relationship embodied in example dance motions along with those motions' accompanying background music. A key step in our method is to train a music to motion matching quality rating function through learning the music to motion mapping relationship exhibited in synchronized music and dance motion data, which were captured from professional human dance performance. To generate an optimal sequence of dance motion segments to match with a piece of music, we introduce a constraint-based dynamic programming procedure. This procedure considers both music to motion matching quality and visual smoothness of a resultant dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, through which we can execute the dynamic programming process in parallel, resulting in significant speedup. To evaluate the effectiveness of our method, we quantitatively compare the dance motions synthesized by our method with motion synthesis results by several peer methods using the motions captured from professional human dancers' performance as the gold standard. We also conducted several medium-scale user studies to explore how our dance motion synthesis method can perceptually outperform existing methods in synthesizing dance motions to match with a piece of music. These user studies produced very positive results on our music-driven dance motion synthesis experiments for several Asian dance genres, confirming the advantages of our method.
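The constraint-based dynamic programming step described above — choosing one motion segment per music segment while trading off matching quality against transition smoothness — follows the familiar Viterbi pattern. A sketch with hypothetical score tables:

```python
def best_motion_sequence(match, smooth):
    """Pick one motion segment per music segment, maximizing the sum of
    match scores minus transition (smoothness) penalties.
    match[t][m]: how well motion segment m fits music segment t.
    smooth[p][m]: penalty for cutting from motion p to motion m."""
    T, M = len(match), len(match[0])
    score = [match[0][:]]                    # best cumulative score per motion
    back = []                                # backpointers for reconstruction
    for t in range(1, T):
        row, brow = [], []
        for m in range(M):
            best_prev = max(range(M), key=lambda p: score[-1][p] - smooth[p][m])
            row.append(score[-1][best_prev] - smooth[best_prev][m] + match[t][m])
            brow.append(best_prev)
        score.append(row)
        back.append(brow)
    seq = [max(range(M), key=lambda m: score[-1][m])]
    for brow in reversed(back):              # backtrack the optimal path
        seq.append(brow[seq[-1]])
    return list(reversed(seq))
```

With a heavy switching penalty the procedure stays on one motion even when per-segment match scores favor switching, which is how visual smoothness enters the optimization.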
NASA Astrophysics Data System (ADS)
Takasugi, Shoji; Yamamoto, Tomohito; Muto, Yumiko; Abe, Hiroyuki; Miyake, Yoshihiro
The purpose of this study is to clarify the effects of timing control of utterance and body motion in human-robot interaction. Our previous study has already revealed the correlation of the timing of utterance and body motion in human-human communication. Here we proposed a timing control model based on our previous research and estimated its influence on realizing human-like communication using a questionnaire method. The results showed a difference in effectiveness between communication with the timing control model and communication without it. In addition, elderly people evaluated the communication with timing control much more highly than younger people did. These results show not only the importance of timing control of utterance and body motion in human communication but also its effectiveness for realizing human-like human-robot interaction.
NASA Astrophysics Data System (ADS)
Kiso, Atsushi; Seki, Hirokazu
This paper describes a method for discriminating human forearm motions based on myoelectric signals using an adaptive fuzzy inference system. In conventional studies, a neural network is often used to estimate motion intention from the myoelectric signals and achieves high discrimination precision. In contrast, this study uses fuzzy inference for human forearm motion discrimination based on the myoelectric signals. This study designs the membership functions and the fuzzy rules using the average value and the standard deviation of the root mean square of the myoelectric potential for every channel of each motion. In addition, the characteristics of the myoelectric potential gradually change as a result of muscle fatigue; therefore, motion discrimination should be performed with muscle fatigue taken into consideration. This study proposes a method to redesign the fuzzy inference system such that dynamic changes of the myoelectric potential caused by muscle fatigue are taken into account. Experiments carried out using a myoelectric hand simulator show the effectiveness of the proposed motion discrimination method.
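The inference scheme described here — membership functions built from the per-channel mean and standard deviation of the RMS myoelectric signal, with the highest-firing rule selecting the motion — can be sketched as follows. All numbers are hypothetical, and Gaussian memberships with product inference are one common choice, not necessarily the paper's:

```python
import math

def membership(x, mean, std):
    """Gaussian membership grade built from a motion's per-channel
    RMS statistics (mean, std)."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2)

def classify(rms, rules):
    """rules: {motion: [(mean, std) per channel]}.
    Fire each rule as the product of its channel memberships and
    return the motion with the highest firing strength."""
    strengths = {
        motion: math.prod(membership(x, m, s)
                          for x, (m, s) in zip(rms, params))
        for motion, params in rules.items()
    }
    return max(strengths, key=strengths.get)

# Hypothetical two-channel statistics for two forearm motions
rules = {"grasp": [(0.8, 0.1), (0.2, 0.1)],
         "open":  [(0.2, 0.1), (0.8, 0.1)]}
motion = classify([0.75, 0.25], rules)
```

The fatigue adaptation the paper proposes would amount to re-estimating the (mean, std) pairs online as the signal statistics drift.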
Data Fusion Based on Optical Technology for Observation of Human Manipulation
NASA Astrophysics Data System (ADS)
Falco, Pietro; De Maria, Giuseppe; Natale, Ciro; Pirozzi, Salvatore
2012-01-01
The adoption of human observation is becoming more and more frequent within imitation learning and programming by demonstration approaches (PbD) to robot programming. For robotic systems equipped with anthropomorphic hands, the observation phase is very challenging and no ultimate solution exists. This work proposes a novel mechatronic approach to the observation of human hand motion during manipulation tasks. The strategy is based on the combined use of an optical motion capture system and a low-cost data glove equipped with novel joint angle sensors, based on optoelectronic technology. The combination of the two information sources is obtained through a sensor fusion algorithm based on the extended Kalman filter (EKF) suitably modified to tackle the problem of marker occlusions, typical of optical motion capture systems. This approach requires a kinematic model of the human hand. Another key contribution of this work is a new method to calibrate this model.
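The occlusion handling described above — running only the prediction step of the filter when a marker measurement is missing — can be illustrated with a scalar Kalman filter, a deliberately simplified stand-in for the paper's EKF over the full hand kinematics (noise values are made up):

```python
def kalman_1d(measurements, q=1e-3, r=1e-2):
    """Scalar Kalman filter with identity dynamics. A None entry marks an
    occluded sample: only the prediction step runs, so the estimate coasts
    and its uncertainty grows until the marker reappears."""
    x, p = 0.0, 1.0                 # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                      # predict (process noise inflates variance)
        if z is not None:           # update only when the marker is visible
            k = p / (p + r)         # Kalman gain
            x += k * (z - x)
            p *= (1 - k)
        estimates.append(x)
    return estimates

est = kalman_1d([1.0, 1.1, None, 0.9, 1.0])
```

During the `None` sample the estimate is simply carried forward; the glove's joint-angle sensors play the analogous gap-filling role in the paper's fusion scheme.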
A method of depth image based human action recognition
NASA Astrophysics Data System (ADS)
Li, Pei; Cheng, Wanli
2017-05-01
In this paper, we propose an action recognition algorithm framework based on human skeleton joint information. To extract the features of human motion, we use information about body posture and the speed and acceleration of movement to construct spatial motion features that describe and reflect the joints. In addition, we use the classical temporal pyramid matching algorithm to construct temporal features and describe the motion sequence variation at different time scales. Then, we use a bag of words to represent these actions, presenting every action as a histogram by clustering the extracted features. Finally, we employ a Hidden Markov Model to train and test the extracted motion features. In the experimental part, the correctness and effectiveness of the proposed model are comprehensively verified on two well-known datasets.
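The bag-of-words step mentioned above — presenting an action as a histogram of nearest-codeword assignments of its extracted features — can be sketched as follows. The codebook would normally come from clustering the training features; here it is simply given:

```python
def bag_of_words(features, codebook):
    """Normalized histogram of nearest-codeword assignments.
    features: list of feature vectors (tuples); codebook: list of codewords."""
    hist = [0] * len(codebook)
    for f in features:
        # assign each feature to its nearest codeword (squared Euclidean)
        idx = min(range(len(codebook)),
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(f, codebook[i])))
        hist[idx] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Three motion features quantized against a two-word codebook
h = bag_of_words([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)],
                 [(0.0, 0.0), (5.0, 5.0)])
```

The resulting histograms are what a downstream classifier (an HMM in the paper) is trained on.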
Wearable Stretch Sensors for Motion Measurement of the Wrist Joint Based on Dielectric Elastomers.
Huang, Bo; Li, Mingyu; Mei, Tao; McCoul, David; Qin, Shihao; Zhao, Zhanfeng; Zhao, Jianwen
2017-11-23
Motion capture of the human body potentially holds great significance for exoskeleton robots, human-computer interaction, sports analysis, rehabilitation research, and many other areas. Dielectric elastomer sensors (DESs) are excellent candidates for wearable human motion capture systems because of their intrinsic characteristics of softness, light weight, and compliance. In this paper, DESs were applied to measure all component motions of the wrist joint. Five sensors were mounted at different positions on the wrist, one for each component motion. To find the best positions to mount the sensors, the distribution of the muscles was analyzed. Even so, the component motions and the deformations of the sensors are coupled; therefore, a decoupling method was developed. With the decoupling algorithm, all component motions can be measured with a precision of 5°, which meets the requirements of general motion capture systems.
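If the sensor couplings are approximately linear, the decoupling step reduces to inverting a calibrated coupling matrix that maps joint motions to sensor readings. A two-sensor sketch (the actual system uses five sensors, and the coupling values here are made up for illustration):

```python
def decouple(readings, coupling):
    """Recover two joint-motion components from coupled sensor readings
    by solving readings = coupling @ motions (hand-rolled 2x2 inverse)."""
    (a, b), (c, d) = coupling
    det = a * d - b * c
    r0, r1 = readings
    return [(d * r0 - b * r1) / det,
            (-c * r0 + a * r1) / det]

# Hypothetical calibration: flexion also slightly stretches the
# deviation sensor, and vice versa
coupling = [[1.0, 0.2],
            [0.1, 1.0]]
motions = decouple([1.1, 0.3], coupling)
```

For five coupled sensors one would solve the corresponding 5x5 (or least-squares) system instead of inverting a 2x2 matrix by hand.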
Optimal Configuration of Human Motion Tracking Systems: A Systems Engineering Approach
NASA Technical Reports Server (NTRS)
Henderson, Steve
2005-01-01
Human motion tracking systems represent a crucial technology in the area of modeling and simulation. These systems, which allow engineers to capture human motion for study or replication in virtual environments, have broad applications in several research disciplines including human engineering, robotics, and psychology. These systems are based on several sensing paradigms, including electro-magnetic, infrared, and visual recognition. Each of these paradigms requires specialized environments and hardware configurations to optimize performance of the human motion tracking system. Ideally, these systems are used in a laboratory or other facility that was designed to accommodate the particular sensing technology. For example, electromagnetic systems are highly vulnerable to interference from metallic objects, and should be used in a specialized lab free of metal components.
Motion data classification on the basis of dynamic time warping with a cloud point distance measure
NASA Astrophysics Data System (ADS)
Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad
2016-06-01
The paper deals with the problem of classification of model-free motion data. A nearest-neighbors classifier based on comparisons performed by a dynamic time warping transform with a cloud point distance measure is proposed. The classification utilizes both specific gait features, reflected by the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification challenge problem is taken into consideration. The motion capture database containing data of 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. What is more, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
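A sketch of the classifier's core: dynamic time warping over sequences of point-cloud frames, using a symmetric nearest-point distance as one plausible choice of cloud point measure (the paper's exact measure may differ):

```python
def dtw(seq_a, seq_b, dist):
    """Dynamic time warping cost between two sequences of frames."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(seq_a[i - 1], seq_b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def cloud_distance(frame_a, frame_b):
    """Symmetric average nearest-point distance between two point clouds;
    it needs no point correspondences, hence no calibration dependence."""
    def one_way(p, q):
        return sum(min(sum((a - b) ** 2 for a, b in zip(pt, qt)) ** 0.5
                       for qt in q) for pt in p) / len(p)
    return max(one_way(frame_a, frame_b), one_way(frame_b, frame_a))
```

A nearest-neighbors classifier then labels a query gait with the class of the training sequence minimizing `dtw(query, train, cloud_distance)`.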
Efficiencies for parts and wholes in biological-motion perception.
Bromfield, W Drew; Gold, Jason M
2017-10-01
People can reliably infer the actions, intentions, and mental states of fellow humans from body movements (Blake & Shiffrar, 2007). Previous research on such biological-motion perception has suggested that the movements of the feet may play a particularly important role in making certain judgments about locomotion (Chang & Troje, 2009; Troje & Westhoff, 2006). One account of this effect is that the human visual system may have evolved specialized processes that are efficient for extracting information carried by the feet (Troje & Westhoff, 2006). Alternatively, the motion of the feet may simply be more discriminable than that of other parts of the body. To dissociate these two possibilities, we measured people's ability to discriminate the walking direction of stimuli in which individual body parts (feet, hands) were removed or shown in isolation. We then compared human performance to that of a statistically optimal observer (Gold, Tadin, Cook, & Blake, 2008), giving us a measure of humans' discriminative ability independent of the information available (a quantity known as efficiency). We found that efficiency was highest when the hands and the feet were shown in isolation. A series of follow-up experiments suggested that observers were relying on a form-based cue with the isolated hands (specifically, the orientation of their path through space) and a motion-based cue with the isolated feet to achieve such high efficiencies. We relate our findings to previous proposals of a distinction between form-based and motion-based mechanisms in biological-motion perception.
Indexing and retrieving motions of characters in close contact.
Ho, Edmond S L; Komura, Taku
2009-01-01
Human motion indexing and retrieval are important for animators due to the need to search for motions in a database that can be blended and concatenated. Most previous research on human motion indexing and retrieval computes the Euclidean distance of joint angles or joint positions. Such approaches are difficult to apply to cases in which multiple characters are closely interacting with each other, as the relationships of the characters are not encoded in the representation. In this research, we propose a topology-based approach to index the motions of two human characters in close contact. We compute and encode how the two bodies are tangled based on the concept of rational tangles. The encoded relationships, which we define as a TangleList, are used to determine the similarity of pairs of postures. Using our method, we can index and retrieve motions such as one person piggy-backing another, one person assisting another in walking, and two persons dancing or wrestling. Our method is useful for managing a motion database of multiple characters. We can also produce motion graph structures of two characters closely interacting with each other by interpolating and concatenating topologically similar postures and motion clips, which are applicable to 3D computer games and computer animation.
Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
NASA Astrophysics Data System (ADS)
Lee, JaeBeom; Eleftheriadis, Alexandros
1997-01-01
We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images, with consideration of object motion. The previous STMAC approach was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.
Head Motion Modeling for Human Behavior Analysis in Dyadic Interaction
Xiao, Bo; Georgiou, Panayiotis; Baucom, Brian; Narayanan, Shrikanth S.
2015-01-01
This paper presents a computational study of head motion in human interaction, notably of its role in conveying interlocutors’ behavioral characteristics. Head motion is physically complex and carries rich information; current modeling approaches based on visual signals, however, are still limited in their ability to adequately capture these important properties. Guided by the methodology of kinesics, we propose a data driven approach to identify typical head motion patterns. The approach follows the steps of first segmenting motion events, then parametrically representing the motion by linear predictive features, and finally generalizing the motion types using Gaussian mixture models. The proposed approach is experimentally validated using video recordings of communication sessions from real couples involved in a couples therapy study. In particular we use the head motion model to classify binarized expert judgments of the interactants’ specific behavioral characteristics where entrainment in head motion is hypothesized to play a role: Acceptance, Blame, Positive, and Negative behavior. We achieve accuracies in the range of 60% to 70% for the various experimental settings and conditions. In addition, we describe a measure of motion similarity between the interaction partners based on the proposed model. We show that the relative change of head motion similarity during the interaction significantly correlates with the expert judgments of the interactants’ behavioral characteristics. These findings demonstrate the effectiveness of the proposed head motion model, and underscore the promise of analyzing human behavioral characteristics through signal processing methods. PMID:26557047
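The pipeline described above (segment motion events, represent them with linear predictive features, then generalize with Gaussian mixtures) can be illustrated for its middle step. The sketch below fits linear predictive coefficients via the Levinson-Durbin recursion; the AR(1) test signal and its coefficient are illustrative assumptions, not the paper's data.

```python
import random

def autocorr(x, lag):
    """Biased autocorrelation estimate at a given lag."""
    n = len(x)
    return sum(x[i] * x[i + lag] for i in range(n - lag)) / n

def lpc(x, order):
    """Linear predictive coefficients via the Levinson-Durbin recursion.
    Returns a with x[n] ~ sum_j a[j] * x[n-1-j]."""
    r = [autocorr(x, k) for k in range(order + 1)]
    a, err = [0.0] * order, r[0]
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        err *= (1.0 - k * k)
    return a

# Toy "head motion" signal: an AR(1) process with coefficient 0.9, so the
# order-1 LPC coefficient should recover roughly 0.9.
random.seed(0)
x, prev = [], 0.0
for _ in range(5000):
    prev = 0.9 * prev + random.gauss(0.0, 1.0)
    x.append(prev)
coeffs = lpc(x, 1)
```

In the paper's setting, such coefficient vectors (one per segmented motion event) would then be clustered with a Gaussian mixture model to obtain the typical head motion patterns.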
NASA Technical Reports Server (NTRS)
Kirkpatrick, M.; Brye, R. G.
1974-01-01
A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
Brain-machine interfacing control of whole-body humanoid motion
Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun
2014-01-01
We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies to a human-like mechanical avatar. Our solution is based on an adequate reduction of the controllable dimensionality of high-DOF humanoid motion, in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complementary subspace of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in a full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with a motor imagery task. PMID:25140134
Model-Based Reinforcement of Kinect Depth Data for Human Motion Capture Applications
Calderita, Luis Vicente; Bandera, Juan Pedro; Bustos, Pablo; Skiadopoulos, Andreas
2013-01-01
Motion capture systems have recently experienced a strong evolution. New cheap depth sensors and open source frameworks, such as OpenNI, allow for perceiving human motion on-line without using invasive systems. However, these proposals do not evaluate the validity of the obtained poses. This paper addresses this issue using a model-based pose generator to complement the OpenNI human tracker. The proposed system enforces kinematic constraints, eliminates odd poses and filters sensor noise, while learning the real dimensions of the performer's body. The system is composed of a PrimeSense sensor, an OpenNI tracker and a kinematics-based filter, and has been extensively tested. Experiments show that the proposed system improves pure OpenNI results at a very low computational cost. PMID:23845933
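The pose-validation idea above (reject kinematically impossible poses, then filter sensor noise) can be sketched with a toy filter. The joint limits and smoothing factor here are illustrative assumptions, not OpenNI's or the paper's values.

```python
def clamp(value, lo, hi):
    """Constrain a value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

class KinematicFilter:
    """Enforces (hypothetical) joint limits and exponentially smooths a
    noisy joint-angle stream from a depth-sensor tracker."""
    def __init__(self, limits, alpha=0.3):
        self.limits = limits    # {joint: (lo_deg, hi_deg)}
        self.alpha = alpha      # smoothing factor in (0, 1]
        self.state = {}

    def update(self, raw_pose):
        filtered = {}
        for joint, angle in raw_pose.items():
            lo, hi = self.limits[joint]
            angle = clamp(angle, lo, hi)                 # reject odd poses
            prev = self.state.get(joint, angle)
            angle = prev + self.alpha * (angle - prev)   # low-pass noise
            filtered[joint] = self.state[joint] = angle
        return filtered

f = KinematicFilter({"elbow": (0.0, 150.0)})
out1 = f.update({"elbow": 170.0})   # impossible pose, clamped to 150
out2 = f.update({"elbow": 140.0})   # smoothed toward the new reading
```

A real implementation would of course use a full skeletal model with learned segment lengths, as the paper describes, rather than independent per-joint limits.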
How long did it last? You would better ask a human
Lacquaniti, Francesco; Carrozzo, Mauro; d’Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka
2014-01-01
In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions. PMID:24478694
Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun
2017-02-06
In this paper, we propose a three-dimensional design and evaluation framework and process, based on a probabilistic motion synthesis algorithm and a biomechanical analysis system, for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human-machine-environment model as well as a squat motion synthesis system and a biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between the human body and the machine or the ground are modeled as joints with constraints at the contact points. Next, we generated Smith squat motion using the motion synthesis program, which is based on a Gaussian process regression algorithm, with a set of given values for the independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and the squat motion. We validated the model and algorithm through physical experiments measuring electromyography (EMG) signals, ground forces, and squat motions, as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines.
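The core regression step (predict a motion quantity from design variables via Gaussian process regression) can be sketched in a few lines. The kernel, the noise level, and the knee-angle training points below are illustrative assumptions, not the paper's data or model.

```python
import numpy as np

def rbf(a, b, length=0.2, var=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean of zero-mean Gaussian process regression."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

# Hypothetical training set: peak knee flexion (deg) observed at four
# squat-depth settings (fraction of full depth) -- illustrative numbers.
depth = np.array([0.2, 0.4, 0.6, 0.8])
knee_angle = np.array([40.0, 70.0, 95.0, 115.0])

# Predict at a seen setting (0.4) and an unseen one (0.5).
pred = gp_predict(depth, knee_angle, np.array([0.4, 0.5]))
```

The GP posterior interpolates the training observations and smoothly fills in unseen design-variable settings, which is what lets the framework synthesize squat motions for new parameter combinations.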
Norman, Joseph; Hock, Howard; Schöner, Gregor
2014-07-01
It has long been thought (e.g., Cavanagh & Mather, 1989) that first-order motion-energy extraction via space-time comparator-type models (e.g., the elaborated Reichardt detector) is sufficient to account for human performance in the short-range motion paradigm (Braddick, 1974), including the perception of reverse-phi motion when the luminance polarity of the visual elements is inverted during successive frames. Human observers' ability to discriminate motion direction and use coherent motion information to segregate a region of a random cinematogram and determine its shape was tested; they performed better in the same-, as compared with the inverted-, polarity condition. Computational analyses of short-range motion perception based on the elaborated Reichardt motion energy detector (van Santen & Sperling, 1985) predict, incorrectly, that symmetrical results will be obtained for the same- and inverted-polarity conditions. In contrast, the counterchange detector (Hock, Schöner, & Gilroy, 2009) predicts an asymmetry quite similar to that of human observers in both motion direction and shape discrimination. The further advantage of counterchange, as compared with motion energy, detection for the perception of spatial shape- and depth-from-motion is discussed.
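The symmetry prediction at issue can be made concrete with a minimal opponent correlator (a bare-bones stand-in for the elaborated Reichardt detector, omitting its spatial and temporal filters). Inverting the luminance polarity of the second frame flips the detector's sign with equal magnitude, which is exactly the symmetric same-/inverted-polarity behavior the abstract reports human observers do not show.

```python
def reichardt(frame1, frame2):
    """Minimal two-point opponent correlator over 1-D frames:
    positive output signals rightward motion."""
    out = 0.0
    for x in range(len(frame1) - 1):
        out += frame1[x] * frame2[x + 1] - frame1[x + 1] * frame2[x]
    return out

bar = [0, 0, 1, 0, 0]
shifted = [0, 0, 0, 1, 0]          # bar moved one pixel to the right
inverted = [-v for v in shifted]   # same shift, luminance polarity inverted

same = reichardt(bar, shifted)     # > 0: rightward motion
phi = reichardt(bar, inverted)     # < 0: reverse-phi, leftward
```

The output magnitudes are identical (`phi == -same`), illustrating why motion-energy models predict symmetrical performance across polarity conditions.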
Robotics-based synthesis of human motion.
Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S
2009-01-01
The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.
Motion cue effects on human pilot dynamics in manual control
NASA Technical Reports Server (NTRS)
Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.
1977-01-01
Two experiments were conducted to study the motion cue effects on human pilots during tracking tasks. The moving-base simulator of the National Aerospace Laboratory was employed as the motion cue device, and the attitude director indicator or the projected visual field was employed as the visual cue device. The chosen controlled elements were second-order unstable systems. It was confirmed that with the aid of motion cues the pilot workload was lessened and consequently the human controllability limits were enlarged. In order to clarify the mechanism of these effects, the describing functions of the human pilots were identified by making use of spectral and time-domain analyses. The results of these analyses suggest that the sensory system for motion cues can effectively yield differential information about the signal, which coincides with existing knowledge in the physiological area.
Two-character motion analysis and synthesis.
Kwon, Taesoo; Cho, Young-Sang; Park, Sang Il; Shin, Sung Yong
2008-01-01
In this paper, we deal with the problem of synthesizing novel motions of standing-up martial arts such as Kickboxing, Karate, and Taekwondo performed by a pair of human-like characters while reflecting their interactions. Adopting an example-based paradigm, we address three non-trivial issues embedded in this problem: motion modeling, interaction modeling, and motion synthesis. For the first issue, we present a semi-automatic motion labeling scheme based on force-based motion segmentation and learning-based action classification. We also construct a pair of motion transition graphs each of which represents an individual motion stream. For the second issue, we propose a scheme for capturing the interactions between two players. A dynamic Bayesian network is adopted to build a motion transition model on top of the coupled motion transition graph that is constructed from an example motion stream. For the last issue, we provide a scheme for synthesizing a novel sequence of coupled motions, guided by the motion transition model. Although the focus of the present work is on martial arts, we believe that the framework of the proposed approach can be conveyed to other two-player motions as well.
Postural control during quiet bipedal standing in rats
Sato, Yota; Fujiki, Soichiro; Sato, Yamato; Aoi, Shinya; Tsuchiya, Kazuo; Yanagihara, Dai
2017-01-01
The control of bipedal posture in humans is subject to non-ideal conditions such as delayed sensation and heartbeat noise. However, the controller achieves a high level of functionality by utilizing body dynamics dexterously. In order to elucidate the neural mechanism responsible for postural control, the present study made use of an experimental setup involving rats because they have more accessible neural structures. The experimental design requires rats to stand bipedally in order to obtain a water reward placed in a water supplier above them. Their motions can be measured in detail using a motion capture system and a force plate. Rats have the ability to stand bipedally for long durations (over 200 s), allowing for the construction of an experimental environment in which the steady standing motion of rats could be measured. The characteristics of the measured motion were evaluated based on aspects of the rats’ intersegmental coordination and power spectrum density (PSD). These characteristics were compared with those of the human bipedal posture. The intersegmental coordination of the standing rats included two components that were similar to that of standing humans: center of mass and trunk motion. The rats’ PSD showed a peak at approximately 1.8 Hz and the pattern of the PSD under the peak frequency was similar to that of the human PSD. However, the frequencies were five times higher in rats than in humans. Based on the analysis of the rats’ bipedal standing motion, there were some common characteristics between rat and human standing motions. Thus, using standing rats is expected to be a powerful tool to reveal the neural basis of postural control. PMID:29244818
A motion sensing-based framework for robotic manipulation.
Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing
2016-01-01
To date, outside of controlled environments, robots normally perform manipulation tasks by operating alongside humans. This pattern requires robot operators to undergo extensive technical training on varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through a novel and natural gesture-based interface, inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the actions of robots. For compatibility, a general hardware interface layer was also developed within the framework. Simulation and physical experiments have been conducted for preliminary validation. The results have shown that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.
Generating Concise Rules for Human Motion Retrieval
NASA Astrophysics Data System (ADS)
Mukai, Tomohiko; Wakisaka, Ken-Ichi; Kuriyama, Shigeru
This paper proposes a method for retrieving human motion data with concise retrieval rules based on the spatio-temporal features of motion appearance. Our method first converts a motion clip into a clausal language that represents geometrical relations between body parts and their temporal relationships. A retrieval rule is then learned from a set of manually classified examples using inductive logic programming (ILP). ILP automatically discovers the essential rule in the same clausal form with a user-defined hypothesis-testing procedure. All motions are indexed using this clausal language, and the desired clips are retrieved by subsequence matching using the rule. Such rule-based retrieval offers reasonable performance, and the rule can be intuitively edited in the same language form. Consequently, our method enables efficient and flexible search over a large dataset with a simple query language.
Tanaka, Yuji; Hase, Eiji; Fukushima, Shuichiro; Ogura, Yuki; Yamashita, Toyonobu; Hirao, Tetsuji; Araki, Tsutomu; Yasui, Takeshi
2014-01-01
Polarization-resolved second-harmonic-generation (PR-SHG) microscopy is a powerful tool for investigating collagen fiber orientation quantitatively with low invasiveness. However, the waiting time for the mechanical polarization rotation makes it too sensitive to motion artifacts and hence has hampered its use in various applications in vivo. In the work described in this article, we constructed a motion-artifact-robust PR-SHG microscope based on rapid polarization switching at every pixel with an electro-optic Pockels cell (PC), in synchronization with step-wise raster scanning of the focus spot and alternating acquisition of a vertical-polarization-resolved SHG signal and a horizontal-polarization-resolved one. The constructed PC-based PR-SHG microscope enabled us to visualize orientation mapping of dermal collagen fiber in human facial skin in vivo without the influence of motion artifacts. Furthermore, it suggested a location and/or age dependence of the collagen fiber orientation in human facial skin. The robustness to motion artifacts in the collagen orientation measurement will expand the application scope of SHG microscopy in dermatology and collagen-related fields. PMID:24761292
High-resolution motion-compensated imaging photoplethysmography for remote heart rate monitoring
NASA Astrophysics Data System (ADS)
Chung, Audrey; Wang, Xiao Yu; Amelard, Robert; Scharfenberger, Christian; Leong, Joanne; Kulinski, Jan; Wong, Alexander; Clausi, David A.
2015-03-01
We present a novel non-contact photoplethysmographic (PPG) imaging system based on high-resolution video recordings of ambient reflectance of human bodies that compensates for body motion and takes advantage of skin erythema fluctuations to improve measurement reliability for the purpose of remote heart rate monitoring. A single measurement location for recording the ambient reflectance is automatically identified on an individual, and the motion of the location is determined over time via measurement location tracking. Based on the determined motion information, motion-compensated reflectance measurements at different wavelengths can be acquired for the measurement location, thus providing more reliable measurements for the same location on the human over time. The reflectance measurement is used to determine skin erythema fluctuations over time, resulting in the capture of a PPG signal with a high signal-to-noise ratio. To test the efficacy of the proposed system, a set of experiments involving human motion in a front-facing position was performed under natural ambient light. The experimental results demonstrated that using skin erythema fluctuations can achieve noticeably improved average accuracy in heart rate measurement when compared to previously proposed non-contact PPG imaging systems.
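Once a clean PPG trace has been recovered, the final step (converting the periodic erythema fluctuation into a heart rate) is straightforward. The sketch below uses a deliberately simple zero-crossing rate estimator on a synthetic pulse signal; the pulse frequency, frame rate, and estimator are illustrative assumptions, not the paper's method.

```python
import math

def estimate_bpm(trace, fps):
    """Pulse rate from a detrended PPG trace by counting
    positive-going zero crossings (one per cardiac cycle)."""
    mean = sum(trace) / len(trace)
    x = [v - mean for v in trace]
    beats = sum(1 for i in range(1, len(x)) if x[i - 1] < 0 <= x[i])
    return 60.0 * beats * fps / len(x)

# Synthetic erythema fluctuation: a 1.2 Hz pulse (72 bpm) sampled at
# 30 fps for 10 s; the phase offset keeps samples off exact zeros.
fps = 30.0
trace = [math.sin(2 * math.pi * 1.2 * n / fps + 0.3) for n in range(300)]
bpm = estimate_bpm(trace, fps)
```

Real reflectance traces are far noisier than a pure sine, so a spectral peak estimate over a physiological band is usually preferred in practice; the zero-crossing version just makes the cycle-counting idea explicit.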
EMG and EPP-integrated human-machine interface between the paralyzed and rehabilitation exoskeleton.
Yin, Yue H; Fan, Yuan J; Xu, Li D
2012-07-01
Although a lower extremity exoskeleton shows great prospect in the rehabilitation of the lower limb, it has not yet been widely applied to the clinical rehabilitation of the paralyzed. This is partly caused by insufficient information interactions between the paralyzed and existing exoskeleton that cannot meet the requirements of harmonious control. In this research, a bidirectional human-machine interface including a neurofuzzy controller and an extended physiological proprioception (EPP) feedback system is developed by imitating the biological closed-loop control system of human body. The neurofuzzy controller is built to decode human motion in advance by the fusion of the fuzzy electromyographic signals reflecting human motion intention and the precise proprioception providing joint angular feedback information. It transmits control information from human to exoskeleton, while the EPP feedback system based on haptic stimuli transmits motion information of the exoskeleton back to the human. Joint angle and torque information are transmitted in the form of air pressure to the human body. The real-time bidirectional human-machine interface can help a patient with lower limb paralysis to control the exoskeleton with his/her healthy side and simultaneously perceive motion on the paralyzed side by EPP. The interface rebuilds a closed-loop motion control system for paralyzed patients and realizes harmonious control of the human-machine system.
Human Classification Based on Gestural Motions by Using Components of PCA
NASA Astrophysics Data System (ADS)
Aziz, Azri A.; Wan, Khairunizam; Za'aba, S. K.; B, Shahriman A.; Adnan, Nazrul H.; H, Asyekin; R, Zuradzman M.
2013-12-01
Lately, the study of human capabilities with the aim of integrating them into machines has become a popular research topic. Humans are blessed with special abilities: they can hear, see, sense, speak, think, and understand each other. Giving such abilities to machines to improve human life is the researchers' aim for a better quality of life in the future. This research concentrates on human gesture, specifically arm motions, for distinguishing individuals, which leads to the development of a hand gesture database. We try to differentiate human physical characteristics based on hand gestures represented by arm trajectories. Subjects with different body sizes are selected, and the acquired data then undergo a resampling process. The results discuss the classification of humans based on arm trajectories using Principal Component Analysis (PCA).
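The PCA step on resampled arm trajectories can be sketched as follows. The two synthetic subject groups (differing only in reach amplitude) and all numbers are illustrative assumptions standing in for the paper's motion-capture data.

```python
import numpy as np

# Hypothetical dataset: each row is one resampled arm trajectory
# (50 samples of one coordinate); two body-size groups differ in amplitude.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
short_arm = np.sin(np.pi * t) * 0.6 + rng.normal(0.0, 0.01, (5, 50))
long_arm = np.sin(np.pi * t) * 1.0 + rng.normal(0.0, 0.01, (5, 50))
data = np.vstack([short_arm, long_arm])        # 10 trajectories

# PCA: center the data, take right singular vectors as principal components.
centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T                   # project onto first 2 PCs

m1 = scores[:5, 0].mean()   # group mean on PC1, short-arm subjects
m2 = scores[5:, 0].mean()   # group mean on PC1, long-arm subjects
```

The first principal component captures the amplitude difference between the two groups, so their PC1 scores separate cleanly, which is the property a downstream classifier would exploit.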
Motion capture based identification of the human body inertial parameters.
Venture, Gentiane; Ayusawa, Ko; Nakamura, Yoshihiko
2008-01-01
The inertial parameters of the body (segment inertias, masses, and centers of mass) are important data for simulating, monitoring, and understanding the dynamics of motion, and for personalizing rehabilitation programs. This paper proposes an original method to identify the inertial parameters of the human body, making use of motion capture data and contact force measurements. It allows painless in-vivo estimation and monitoring of the inertial parameters. The method is described, and the obtained experimental results are then presented and discussed.
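Identification methods of this kind exploit the fact that the equations of motion are linear in the inertial parameters, so they reduce to least squares over motion and force data. A minimal one-parameter sketch (total mass from vertical CoM acceleration and ground reaction force; all values simulated, not the paper's data):

```python
import numpy as np

g = 9.81
true_mass = 68.0   # hypothetical subject mass in kg (simulation only)

# Simulated measurements: vertical CoM acceleration (motion capture) and
# the matching vertical ground reaction force (force plate), with noise.
rng = np.random.default_rng(1)
a_z = rng.uniform(-2.0, 2.0, 200)                        # m/s^2
f_z = true_mass * (a_z + g) + rng.normal(0.0, 5.0, 200)  # N

# Newton's second law f_z = m * (a_z + g) is linear in the unknown m,
# so the inertial parameter follows from ordinary least squares.
Y = (a_z + g)[:, None]          # regressor matrix (one column here)
phi, *_ = np.linalg.lstsq(Y, f_z, rcond=None)
est_mass = phi[0]
```

The full method stacks an analogous regressor over all body segments and samples, recovering per-segment masses, centers of mass, and inertias in one linear solve.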
Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis
Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan
2015-01-01
Remote monitoring service for elderly persons is important as the aged populations in most developed countries continue growing. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we have made the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for the elderly persons and (2) we design a novel motion history or energy images based algorithm for motion object tracking. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real-time. Experimental results show that our technique can improve the data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate can reach 98.6% on average. PMID:25978761
NASA Astrophysics Data System (ADS)
Shi, Zhong; Huang, Xuexiang; Hu, Tianjian; Tan, Qian; Hou, Yuzhuo
2016-10-01
Space teleoperation is an important space technology, and human-robot motion similarity can improve the flexibility and intuition of space teleoperation. This paper aims to obtain an appropriate kinematics mapping method of coupled Cartesian-joint space for space teleoperation. First, the coupled Cartesian-joint similarity principles concerning kinematics differences are defined. Then, a novel weighted augmented Jacobian matrix with a variable coefficient (WAJM-VC) method for kinematics mapping is proposed. The Jacobian matrix is augmented to achieve a global similarity of human-robot motion. A clamping weighted least norm scheme is introduced to achieve local optimizations, and the operating ratio coefficient is variable to pursue similarity in the elbow joint. Similarity in Cartesian space and the property of joint constraint satisfaction is analysed to determine the damping factor and clamping velocity. Finally, a teleoperation system based on human motion capture is established, and the experimental results indicate that the proposed WAJM-VC method can improve the flexibility and intuition of space teleoperation to complete complex space tasks.
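The kinematics-mapping idea can be illustrated with a weighted damped-least-squares velocity mapping, which is a simplified stand-in for the paper's weighted augmented Jacobian with a variable coefficient (WAJM-VC); the two-link arm, weights, and damping value below are assumptions for the sketch.

```python
import numpy as np

def damped_mapping(J, xdot, W=None, damping=0.01):
    """Map a Cartesian velocity to joint velocities with a weighted,
    damped least-squares (pseudo)inverse; the damping factor keeps the
    solve well-posed near singular configurations."""
    m, n = J.shape
    W = np.eye(n) if W is None else W             # joint weighting matrix
    Winv = np.linalg.inv(W)
    A = J @ Winv @ J.T + damping**2 * np.eye(m)   # damped Gram matrix
    return Winv @ J.T @ np.linalg.solve(A, xdot)

# Two-link planar arm (unit link lengths) at q1 = 30 deg, q2 = 45 deg.
q1, q2 = np.deg2rad(30.0), np.deg2rad(45.0)
s1, c1 = np.sin(q1), np.cos(q1)
s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
J = np.array([[-s1 - s12, -s12],
              [ c1 + c12,  c12]])

xdot = np.array([0.1, 0.0])        # desired Cartesian hand velocity
qdot = damped_mapping(J, xdot, damping=1e-4)
```

Far from singularities and with small damping, the mapped joint velocities reproduce the commanded hand velocity almost exactly; the paper's clamping and variable coefficient additionally shape the null-space motion toward human-like elbow behavior.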
Human Factors Vehicle Displacement Analysis: Engineering In Motion
NASA Technical Reports Server (NTRS)
Atencio, Laura Ashley; Reynolds, David; Robertson, Clay
2010-01-01
While positioned on the launch pad at the Kennedy Space Center, tall stacked launch vehicles are exposed to the natural environment. Varying directional winds and vortex shedding cause the vehicle to sway in an oscillating motion. The Human Factors team recognizes that vehicle sway may hinder ground crew operations, impact ground system designs, and ultimately affect launch availability. The objective of this study is to physically simulate the predicted oscillation envelopes identified by analysis and to conduct a human factors analysis to assess the ability to carry out essential Upper Stage (US) ground operator tasks based on predicted vehicle motion.
Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions
NASA Astrophysics Data System (ADS)
Khoury, Mehdi; Liu, Honghai
This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.
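The core of the membership-building step (fit a Gaussian to examples of each motion class, then classify by maximum membership) can be sketched as follows. The one-dimensional feature, the two stances, and all values are illustrative assumptions, not the paper's FGI formulation or its motion capture data.

```python
import math

def fit_membership(samples):
    """Fit a Gaussian fuzzy membership function (mean, std) from
    labelled examples of one motion class."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / n
    return mu, math.sqrt(var)

def membership(x, mu, sigma):
    """Degree of membership of x in a Gaussian fuzzy set."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical 1-D feature (e.g. wrist height in m) for two boxing stances.
jab = [1.40, 1.42, 1.45, 1.43]
hook = [1.10, 1.12, 1.08, 1.11]
models = {"jab": fit_membership(jab), "hook": fit_membership(hook)}

def classify(x):
    """Assign the class whose fuzzy set gives the highest membership."""
    return max(models, key=lambda k: membership(x, *models[k]))

label = classify(1.41)
```

The paper's system operates on full motion capture feature vectors and layers an evolved fuzzy rule base on top of these memberships; the sketch only shows the Gaussian-membership classification idea.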
Kinematics and Dynamics of Motion Control Based on Acceleration Control
NASA Astrophysics Data System (ADS)
Ohishi, Kiyoshi; Ohba, Yuzuru; Katsura, Seiichiro
The first IEEE International Workshop on Advanced Motion Control, held in 1990, pointed out the importance of physical interpretation of motion control. Software servoing technology is now common in machine tools, robotics, and mechatronics. It has been intensively developed for numerical control (NC) machines. Motion control in unknown environments will become more and more important. Conventional motion control is not always suitable due to its lack of adaptive capability to the environment. A more sophisticated ability in motion control is necessary for compliant contact with the environment. Acceleration control is the key technology of motion control in unknown environments. Acceleration control can make a motion system a zero-control-stiffness system without losing robustness. Furthermore, the realization of multi-degree-of-freedom motion is necessary for future human assistance. A human-assistant motion will require various control stiffnesses corresponding to the task. This review paper focuses on the modal coordinate system to integrate the various control stiffnesses in the virtual axes. Bilateral teleoperation is a good candidate for considering future human-assistant motion and the integration of decentralized systems. Thus the paper reviews and discusses bilateral teleoperation from the control stiffness and modal control design points of view.
Miyajima, Saori; Tanaka, Takayuki; Imamura, Yumeko; Kusaka, Takashi
2015-01-01
We estimate lumbar torque based on motion measurement using only three inertial sensors. First, human motion is measured by 6-axis motion tracking devices, each combining a 3-axis accelerometer and a 3-axis gyroscope, placed on the shank, thigh, and back. Next, the lumbar joint torque during the motion is estimated by kinematic musculoskeletal simulation. The conventional method for estimating joint torque uses full-body motion data measured by an optical motion capture system; in this research, however, joint torque is estimated using only the three link angles of the body, thigh, and shank. The utility of our method was verified by experiments in which we measured motion involving simultaneous bending of the knee and waist. As a result, we were able to estimate the lumbar joint torque from the measured motion.
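As a rough illustration of the inverse-dynamics step, here is a static single-joint approximation of the lumbar extension torque from one trunk angle. All parameters are hypothetical placeholders, not the paper's musculoskeletal model.

```python
import math

def lumbar_torque_static(theta_rad, trunk_mass_kg=35.0, com_dist_m=0.25, g=9.81):
    """Static lumbar extension torque (N*m) needed to hold the trunk at
    angle theta (radians from vertical), modeling the trunk as a point
    mass at distance com_dist_m from the L5/S1 joint. Parameter values
    are illustrative assumptions."""
    return trunk_mass_kg * g * com_dist_m * math.sin(theta_rad)
```

The full simulation integrates dynamic terms and muscle geometry; this shows only the gravitational component recoverable from a measured link angle.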
NASA Astrophysics Data System (ADS)
Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz
2010-09-01
This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: the Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras, while the Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb, using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.
Metrics for Performance Evaluation of Patient Exercises during Physical Therapy.
Vakanski, Aleksandar; Ferguson, Jake M; Lee, Stephen
2017-06-01
The article proposes a set of metrics for the evaluation of patient performance in physical therapy exercises. A taxonomy is employed that classifies the metrics into quantitative and qualitative categories, based on the level of abstraction of the captured motion sequences. The quantitative metrics are further classified into model-less and model-based metrics, according to whether the evaluation employs the raw measurements of patient-performed motions or is based on a mathematical model of the motions. The reviewed metrics include root-mean-square distance, Kullback-Leibler divergence, log-likelihood, heuristic consistency, the Fugl-Meyer Assessment, and similar measures. The metrics are evaluated on a set of five human motions captured with a Kinect sensor. The metrics can potentially be integrated into a system that employs machine learning for modelling and assessing the consistency of patient performance in a home-based therapy setting. Automated performance evaluation can overcome the inherent subjectivity of human-performed therapy assessment, increase adherence to prescribed therapy plans, and reduce healthcare costs.
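Two of the listed quantitative metrics can be sketched directly: the model-less root-mean-square distance between trajectories, and, for the model-based side, the closed-form Kullback-Leibler divergence between two 1-D Gaussians fitted to reference and patient motions. The 1-D Gaussian form is a simplification of the multivariate models the article considers.

```python
import math

def rms_distance(seq_a, seq_b):
    """Model-less metric: root-mean-square distance between two
    equal-length 1-D motion trajectories (e.g., joint angle sequences)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(seq_a, seq_b)) / len(seq_a))

def kl_gaussian(mu0, var0, mu1, var1):
    """Model-based metric: KL divergence D(N0 || N1) between two 1-D
    Gaussians fitted to reference and patient motion features."""
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1 - 1.0 + math.log(var1 / var0))
```

Both metrics are zero for identical inputs and grow with the discrepancy, which is what a consistency-scoring system thresholds on.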
A New Approach for Human Forearm Motion Assist by Actuated Artificial Joint-An Inner Skeleton Robot
NASA Astrophysics Data System (ADS)
Kundu, Subrata Kumar; Kiguchi, Kazuo; Teramoto, Kenbu
In order to assist the physical activities of elderly or physically disabled persons, we propose a new concept of a power-assist inner skeleton robot (i.e., an actuated artificial joint) that assists human daily-life motion from inside the body. This paper presents an implantable 2-degree-of-freedom (DOF) inner skeleton robot designed to assist human elbow flexion-extension and forearm supination-pronation motion for daily-life activities. We have developed a prototype of the inner skeleton robot, which assists motion from inside the body and acts as an actuated artificial joint. The proposed system is controlled based on the activation patterns of the electromyogram (EMG) signals of the user's muscles by applying a fuzzy-neuro control method. A joint actuator with an angular position sensor is designed for the inner skeleton robot, and a T-Mechanism is proposed to keep the bone arrangement similar to normal human articulation after elbow arthroplasty. The effectiveness of the proposed system has been evaluated by experiment.
Physiological and subjective evaluation of a human-robot object hand-over task.
Dehais, Frédéric; Sisbot, Emrah Akin; Alami, Rachid; Causse, Mickaël
2011-11-01
In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough; it is also necessary to guarantee ergonomic robot motions. Therefore, we have developed the Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human-robot object transfer that explicitly takes into account the legibility, safety, and physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration and "relative ease" of the object hand-over task and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable for comparing the gestures. We therefore selected three measurements based on galvanic skin conductance response, deltoid muscle activity, and ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was proposed. For the purpose of the experiment, we defined three motions that combine different levels of legibility, safety, and physical comfort. After each robot gesture, the participants were asked to rate it on a three-dimensional subjective scale. The subjective data were in favor of our reference motion. Finally, the three motions elicited different physiological and ocular responses that could be used to partially discriminate between them.
MRI-assisted PET motion correction for neurologic studies in an integrated MR-PET scanner.
Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B; Michel, Christian J; El Fakhri, Georges; Schmand, Matthias; Sorensen, A Gregory
2011-01-01
Head motion is difficult to avoid in long PET studies, degrading the image quality and offsetting the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MRI data can be used for motion tracking. In this work, a novel algorithm for data processing and rigid-body motion correction (MC) for the MRI-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. To account for motion, the PET prompt and random coincidences and sensitivity data for postnormalization were processed in the line-of-response (LOR) space according to the MRI-derived motion estimates. The processing time on the standard BrainPET workstation is approximately 16 s for each motion estimate. After rebinning in the sinogram space, the motion corrected data were summed, and the PET volume was reconstructed using the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed, and motion estimates were obtained using 2 high-temporal-resolution MRI-based motion-tracking techniques. After accounting for the misalignment between the 2 scanners, perfectly coregistered MRI and PET volumes were reproducibly obtained. The MRI output gates inserted into the PET list-mode allow the temporal correlation of the 2 datasets within 0.2 ms. The Hoffman phantom volume reconstructed by processing the PET data in the LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in the image space, demonstrating the quantitative accuracy of the procedure. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. 
Motion-deblurred PET images, with excellent delineation of specific brain structures, were obtained using these 2 MRI-based estimates. An MRI-based MC algorithm was implemented for an integrated MR-PET scanner. High-temporal-resolution MRI-derived motion estimates (obtained while simultaneously acquiring anatomic or functional MRI data) can be used for PET MC. An MRI-based MC method has the potential to improve PET image quality, increasing its reliability, reproducibility, and quantitative accuracy, and to benefit many neurologic applications.
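The LOR-space processing hinges on mapping coincidence-line endpoints through the MRI-derived rigid-body transform into the reference head position. A toy version for a single endpoint, restricted to an in-plane rotation plus translation, is shown below; the full method applies a general 6-DOF transform to both endpoints of each line of response.

```python
import math

def apply_rigid_motion(point, yaw_rad, translation):
    """Map one LOR endpoint back to the reference position by applying
    the inverse of an estimated rigid motion (rotation about z by
    yaw_rad, then translation). Coordinates in millimetres."""
    x, y, z = point
    tx, ty, tz = translation
    # Invert: undo the translation, then undo the rotation.
    x, y, z = x - tx, y - ty, z - tz
    c, s = math.cos(-yaw_rad), math.sin(-yaw_rad)
    return (c * x - s * y, s * x + c * y, z)
```

Repeating this for every prompt, random, and sensitivity LOR at each motion estimate is what the abstract's ~16 s per-estimate processing time refers to.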
A triboelectric motion sensor in wearable body sensor network for human activity recognition.
Hui Huang; Xian Li; Ye Sun
2016-08-01
The goal of this study is to design a novel triboelectric motion sensor in a wearable body sensor network for human activity recognition. Physical activity recognition is widely used in well-being management, medical diagnosis, and rehabilitation. Rather than using traditional accelerometers, we design a novel wearable sensor system based on triboelectrification. The triboelectric motion sensor can be easily attached to the human body and collects motion signals caused by physical activities. Experiments are conducted to collect data for five common activities: sitting and standing, walking, climbing upstairs, going downstairs, and running. The k-nearest neighbor (kNN) algorithm is adopted to recognize these activities and validate the feasibility of the new approach. The results show that our system can perform physical activity recognition with a success rate of over 80% for walking, sitting, and standing. The triboelectric structure can also be used as an energy harvester for motion harvesting due to its high output voltage under random low-frequency motion.
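The kNN step can be sketched in a few lines. The features here (per-window mean and variance of the triboelectric voltage) and the toy training data are assumptions for illustration, not the paper's exact feature set.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: feature vector.
    Returns the majority label among the k nearest training points."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy training windows: (mean |V|, variance of V) per activity window.
TRAIN = [((0.10, 0.10), "sit"), ((0.12, 0.09), "sit"), ((0.11, 0.11), "sit"),
         ((0.90, 0.80), "run"), ((0.95, 0.85), "run"), ((0.92, 0.83), "run")]
```

A new signal window is reduced to the same features and voted on by its neighbors; this is the entire recognition pipeline apart from feature extraction.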
A low cost wearable optical-based goniometer for human joint monitoring
NASA Astrophysics Data System (ADS)
Lim, Chee Kian; Luo, Zhiqiang; Chen, I.-Ming; Yeo, Song Huat
2011-03-01
Widely used in the fields of physical and occupational therapy, goniometers are indispensable when it comes to angular measurement of human joints. In both fields, there is a need to measure the range of motion associated with various joints and muscle groups. For example, a goniometer may be used to help determine the current range of motion when bending the arm at the elbow, bending the knee, or bending at the waist. The device can help establish the range of motion at the beginning of a treatment series and also allows the therapist to monitor progress during subsequent sessions. Most commonly found are mechanical goniometers, which are inexpensive but bulky; since their parts are mechanically linked, accuracy and resolution are largely limited. Electronic and optical fiber-based goniometers, on the other hand, promise better performance than their mechanical counterparts, but their higher cost and setup requirements make them unattractive as well. In this paper, we present a reliable and non-intrusive design of an optical-based goniometer for human joint measurement. This device allows continuous and long-term monitoring of human joint motion in everyday settings. The proposed device was benchmarked against a mechanical goniometer and an optical motion capture system to validate its performance. The empirical results show that this design can be used as a robust and effective wearable joint-monitoring device.
A marker-free system for the analysis of movement disabilities.
Legrand, L; Marzani, F; Dusserre, L
1998-01-01
A major step toward improving the treatment of disabled persons may be achieved by using motion analysis equipment, and we are developing such a system. It allows the analysis of planar human motion (e.g., gait) without tracking markers. The system is composed of one fixed camera, which acquires an image sequence of a human in motion. The processing is then divided into two steps: first, a large number of pixels belonging to the boundaries of the human body are extracted at each acquisition time; second, a two-dimensional model of the human body, based on tapered superquadrics, is successively matched with the sets of pixels previously extracted, using a specific fuzzy clustering process. Moreover, an optical flow procedure predicts the model location at each acquisition time from its location at the previous time. Finally, we present some results of this process applied to a leg in motion.
Cell motion predicts human epidermal stemness
Toki, Fujio; Tate, Sota; Imai, Matome; Matsushita, Natsuki; Shiraishi, Ken; Sayama, Koji; Toki, Hiroshi; Higashiyama, Shigeki
2015-01-01
Image-based identification of cultured stem cells and noninvasive evaluation of their proliferative capacity advance cell therapy and stem cell research. Here we demonstrate that human keratinocyte stem cells can be identified in situ by analyzing cell motion during their cultivation. Modeling experiments suggested that the clonal type of cultured human clonogenic keratinocytes can be efficiently determined by analysis of early cell movement. Image analysis experiments demonstrated that keratinocyte stem cells indeed display a unique rotational movement that can be identified as early as the two-cell stage colony. We also demonstrate that α6 integrin is required for both rotational and collective cell motion. Our experiments provide, for the first time, strong evidence that cell motion and epidermal stemness are linked. We conclude that early identification of human keratinocyte stem cells by image analysis of cell movement is a valid parameter for quality control of cultured keratinocytes for transplantation. PMID:25897083
Exploitation of Ubiquitous Wi-Fi Devices as Building Blocks for Improvised Motion Detection Systems.
Soldovieri, Francesco; Gennarelli, Gianluca
2016-02-27
This article presents a feasibility study on the detection of human movement in indoor scenarios based on radio signal strength variations. The sensing principle exploits the fact that the human body interacts with wireless signals, introducing variations in the radio-wave field due to shadowing and multipath phenomena. As a result, human motion can be inferred from fluctuations in the radio-wave power collected by a receiving terminal. In this paper, we investigate the potential of widely available wireless communication devices for developing an improvised motion detection system (IMDS). Experimental tests are performed in an indoor environment using a smartphone as a Wi-Fi access point and a laptop with dedicated software as a receiver. Simple detection strategies tailored for real-time operation are implemented to process the received signal strength measurements. The results confirm the potential of the simple system proposed here to reliably detect human motion under operational conditions.
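A minimal detection strategy of the kind described — flagging motion when the short-window variance of the received signal strength exceeds a threshold — might look like this. The window length and threshold are illustrative values, not the article's tuned parameters.

```python
def detect_motion(rss_dbm, window=5, threshold=2.0):
    """Slide a window over RSS samples (dBm) and flag motion wherever the
    in-window variance exceeds the threshold. Returns one boolean per
    window position."""
    flags = []
    for i in range(len(rss_dbm) - window + 1):
        w = rss_dbm[i:i + window]
        mu = sum(w) / window
        var = sum((x - mu) ** 2 for x in w) / window
        flags.append(var > threshold)
    return flags
```

A static channel yields a flat RSS trace and no flags; a person crossing the link perturbs the multipath and raises the windowed variance.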
Large-eddy simulation of human-induced contaminant transport in room compartments.
Choi, J-I; Edwards, J R
2012-02-01
A large-eddy simulation is used to investigate contaminant transport owing to complex human and door motions and vent-system activity in room compartments where a contaminated and clean room are connected by a vestibule. Human and door motions are simulated with an immersed boundary procedure. We demonstrate the details of contaminant transport owing to human- and door-motion-induced wake development during a short-duration event involving the movement of a person (or persons) from a contaminated room, through a vestibule, into a clean room. Parametric studies that capture the effects of human walking pattern, door operation, over-pressure level, and vestibule size are systematically conducted. A faster walking speed results in less mass transport from the contaminated room into the clean room. The net effect of increasing the volume of the vestibule is to reduce the contaminant transport. The results show that swinging-door motion is the dominant transport mechanism and that human-induced wake motion enhances compartment-to-compartment transport. The effect of human activity on contaminant transport may be important in design and operation of clean or isolation rooms in chemical or pharmaceutical industries and intensive care units for airborne infectious disease control in a hospital. The present simulations demonstrate details of contaminant transport in such indoor environments during human motion events and show that simulation-based sensitivity analysis can be utilized for the diagnosis of contaminant infiltration and for better environmental protection.
NASA Technical Reports Server (NTRS)
Badler, N. I.
1985-01-01
Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.
Motion Direction Biases and Decoding in Human Visual Cortex
Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy
2014-01-01
Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, and V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297
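The multivariate read-out underlying such decoding can be illustrated with a minimal nearest-centroid classifier over voxel activation patterns. The two-voxel patterns below are purely illustrative stand-ins for real fMRI data.

```python
def nearest_centroid_decode(train, query):
    """Decode a stimulus label from a voxel pattern with a minimal linear
    read-out: assign the label of the closest per-class mean pattern.
    train: list of (pattern, label); query: pattern (tuple of floats)."""
    by_label = {}
    for pattern, label in train:
        by_label.setdefault(label, []).append(pattern)
    best, best_d = None, float("inf")
    for label, pats in by_label.items():
        centroid = [sum(v) / len(pats) for v in zip(*pats)]
        d = sum((a - b) ** 2 for a, b in zip(query, centroid))
        if d < best_d:
            best, best_d = label, d
    return best

# Toy two-voxel "response patterns" for leftward vs. rightward motion.
TRAIN = [((1.0, 0.0), "left"), ((0.9, 0.1), "left"),
         ((0.0, 1.0), "right"), ((0.1, 0.9), "right")]
```

Above-chance decoding with such a read-out only requires reliable voxel-level biases — the paper's point is that those biases need not reflect a columnar direction map.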
Perception of animacy in dogs and humans.
Abdai, Judit; Ferdinandy, Bence; Terencio, Cristina Baño; Pogány, Ákos; Miklósi, Ádám
2017-06-01
Humans have a tendency to perceive inanimate objects as animate based on simple motion cues. Although animacy is considered a complex cognitive property, this recognition seems to be spontaneous. Researchers have found that young human infants discriminate between dependent and independent movement patterns. However, quick visual perception of animate entities may be crucial to non-human species as well. Based on general mammalian homology, dogs may possess skills similar to humans'. Here, we investigated whether dogs and humans discriminate similarly between dependent and independent motion patterns performed by geometric shapes. We projected a side-by-side video display of the two patterns and measured looking times towards each side in two trials. We found that in Trial 1 both dogs and humans were equally interested in the two patterns, but in Trial 2, in both species, looking times towards the dependent pattern decreased whereas looking times towards the independent pattern increased. We argue that dogs and humans spontaneously recognized the specific pattern and habituated to it rapidly, but continued to show interest in the 'puzzling' pattern. This suggests that both species tend to recognize inanimate agents as animate relying solely on their motion.
Ocular tracking responses to background motion gated by feature-based attention.
Souto, David; Kerzel, Dirk
2014-09-01
Involuntary ocular tracking responses to background motion offer a window on the dynamics of motion computations. In contrast to spatial attention, we know little about the role of feature-based attention in determining this ocular response. To probe feature-based effects of background motion on involuntary eye movements, we presented human observers with a balanced background perturbation. Two clouds of dots moved in opposite vertical directions while observers tracked a target moving in horizontal direction. Additionally, they had to discriminate a change in the direction of motion (±10° from vertical) of one of the clouds. A vertical ocular following response occurred in response to the motion of the attended cloud. When motion selection was based on motion direction and color of the dots, the peak velocity of the tracking response was 30% of the tracking response elicited in a single task with only one direction of background motion. In two other experiments, we tested the effect of the perturbation when motion selection was based on color, by having motion direction vary unpredictably, or on motion direction alone. Although the gain of pursuit in the horizontal direction was significantly reduced in all experiments, indicating a trade-off between perceptual and oculomotor tasks, ocular responses to perturbations were only observed when selection was based on both motion direction and color. It appears that selection by motion direction can only be effective for driving ocular tracking when the relevant elements can be segregated before motion onset.
Kang, Yue; Wang, Bo; Dai, Shuge; Liu, Guanlin; Pu, Yanping; Hu, Chenguo
2015-09-16
A folded elastic strip-based triboelectric nanogenerator (FS-TENG) made from two folded double-layer elastic strips of Al/PET and PTFE/PET can achieve multiple functions by low frequency mechanical motion. A single FS-TENG with strip width of 3 cm and length of 27 cm can generate a maximum output current, open-circuit voltage, and peak power of 55 μA, 840 V, and 7.33 mW at deformation frequency of 4 Hz with amplitude of 2.5 cm, respectively. This FS-TENG can work as a weight sensor due to its good elasticity. An integrated generator assembled by four FS-TENGs (IFS-TENG) can harvest the energy of human motion like flapping hands and walking steps. In addition, the IFS-TENG combined with electromagnetically induced electricity can achieve a completely self-driven doorbell with flashing lights. Moreover, a box-like generator integrated by four IFS-TENGs inside can work in horizontal or random motion modes and can be improved to harvest energy in all directions. This work promotes the research of completely self-driven systems and energy harvesting of human motion for applications in our daily life.
Analysis of Human's Motions Based on Local Mean Decomposition in Through-wall Radar Detection
NASA Astrophysics Data System (ADS)
Lu, Qi; Liu, Cai; Zeng, Zhaofa; Li, Jing; Zhang, Xuebing
2016-04-01
Observation of human motion through a wall is an important issue in security applications and search-and-rescue. Radar has advantages in looking through walls where other sensors perform poorly or cannot be used at all. Ultrawideband (UWB) radar has high spatial resolution as a result of employing ultranarrow pulses; it can distinguish closely positioned targets and provide time-lapse information about them. Moreover, UWB radar shows good wall penetration because the inherently short pulses spread their energy over a broad frequency range. Human motion exhibits periodic features, including respiration, the swinging of arms and legs, and fluctuations of the torso. Detection of human targets is based on the fact that there is always periodic motion due to breathing or other body movements such as walking. The radar gains reflections from each part of the human body and adds the reflections at each time sample. The periodic movements cause micro-Doppler modulation in the reflected radar signals. Time-frequency analysis methods are considered effective tools for analyzing and extracting the micro-Doppler effects caused by periodic movements in the reflected radar signal; examples include the short-time Fourier transform (STFT), the wavelet transform (WT), and the Hilbert-Huang transform (HHT). The local mean decomposition (LMD), initially developed by Smith (2005), decomposes amplitude- and frequency-modulated signals into a small set of product functions (PFs), each of which is the product of an envelope signal and a frequency-modulated signal from which a time-varying instantaneous phase and instantaneous frequency can be derived. Because it bypasses the Hilbert transform, the LMD has no demodulation error arising from window effects and involves no physically meaningless negative frequencies. Also, the instantaneous attributes obtained by LMD are more stable and precise than those obtained by empirical mode decomposition (EMD), because LMD uses smoothed local means and local magnitudes that facilitate a more natural decomposition than the cubic-spline approach of EMD. In this paper, we apply a UWB radar system to through-wall human detection and present a method to characterize human motion. We start with a model of a walker's motion, and periodic motion features are extracted from the analysis of the experimental data based on a combination of the LMD and the fast Fourier transform (FFT). The characteristics of human motion, including respiration, the swinging of arms and legs, and fluctuations of the torso, are extracted. Finally, we calculate the actual distance between the human and the wall. This work was supported in part by the National Natural Science Foundation of China under Grants 41574109 and 41430322.
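The micro-Doppler modulation exploited here follows the standard two-way Doppler relation f_d = 2 v f_c / c. A quick sketch; the 1.5 GHz carrier is an illustrative value, not the system's actual centre frequency.

```python
def micro_doppler_shift(radial_velocity_mps, carrier_hz=1.5e9, c=3.0e8):
    """Two-way Doppler shift (Hz) produced by a body part moving at
    radial_velocity_mps toward the radar: f_d = 2 * v * f_c / c."""
    return 2.0 * radial_velocity_mps * carrier_hz / c
```

At this carrier, a torso swaying at 0.1 m/s modulates the echo by about 1 Hz, while an arm swinging near 2 m/s gives roughly 20 Hz excursions — the distinct periodic signatures that LMD-based time-frequency analysis separates.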
Accurate estimation of human body orientation from RGB-D sensors.
Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao
2013-10-01
Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the wide variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes, and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of the depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of both static and motion cues. To verify the proposed method, we built an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.
Wei, Xiang; Camino, Acner; Pi, Shaohua; Cepurna, William; Huang, David; Morrison, John C; Jia, Yali
2018-05-01
Phase-based optical coherence tomography (OCT), such as OCT angiography (OCTA) and Doppler OCT, is sensitive to the confounding phase shift introduced by subject bulk motion. Traditional bulk motion compensation methods are limited in accuracy and computational efficiency. In this Letter, to the best of our knowledge, we present a novel bulk motion compensation method for phase-based functional OCT. The bulk-motion-associated phase shift can be derived directly by solving its equation using the standard deviation of phase-based OCTA and Doppler OCT flow signals. The method was evaluated on rodent retinal images acquired by a prototype visible-light OCT system and human retinal images acquired by a commercial system. Image quality and computational speed were significantly improved compared with two conventional phase compensation methods.
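A simplified stand-in for such bulk phase compensation — subtracting the circular mean of the A-scan phase differences, rather than the standard-deviation-based solution the Letter actually derives — can be sketched as follows.

```python
import cmath
import math

def compensate_bulk_phase(phases_rad):
    """Remove a constant bulk-motion phase offset from a list of phase
    differences (radians) by subtracting their circular mean, then wrap
    the result back into (-pi, pi]."""
    mean_vec = sum(cmath.exp(1j * p) for p in phases_rad)
    bulk = cmath.phase(mean_vec)  # circular mean = bulk offset estimate
    return [((p - bulk + math.pi) % (2 * math.pi)) - math.pi
            for p in phases_rad]
```

After compensation, the residual phase differences reflect flow rather than head or eye motion; the circular mean (rather than an arithmetic mean) avoids wrap-around bias near ±π.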
Linearized motion estimation for articulated planes.
Datta, Ankur; Sheikh, Yaser; Kanade, Takeo
2011-04-01
In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
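The equality-constrained linear least-squares step can be made concrete: stack the normal equations and the articulation constraints into the KKT system [[AᵀA, Cᵀ], [C, 0]][x; λ] = [Aᵀb; d] and solve. This is a dense toy solver for illustration, not the paper's implementation.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def constrained_lsq(A, b, C, d):
    """Minimise ||Ax - b||^2 subject to Cx = d via the KKT system
    [[A^T A, C^T], [C, 0]] [x; lam] = [A^T b; d]."""
    m, k, rows = len(A[0]), len(C), len(A)
    AtA = [[sum(A[r][i] * A[r][j] for r in range(rows)) for j in range(m)]
           for i in range(m)]
    Atb = [sum(A[r][i] * b[r] for r in range(rows)) for i in range(m)]
    K = [AtA[i] + [C[r][i] for r in range(k)] for i in range(m)]
    K += [C[r] + [0.0] * k for r in range(k)]
    return solve(K, Atb + d)[:m]  # drop the Lagrange multipliers
```

In the paper's setting, A gathers the per-plane homography terms and C the linearized articulation constraints; jointly solving couples motion estimates across planes, which is what stabilizes low-texture regions.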
Akdogan, Erhan; Shima, Keisuke; Kataoka, Hitoshi; Hasegawa, Masaki; Otsuka, Akira; Tsuji, Toshio
2012-09-01
This paper proposes the cybernetic rehabilitation aid (CRA) based on the concept of direct teaching using tactile feedback with electromyography (EMG)-based motor skill evaluation. Evaluation and teaching of motor skills are two important aspects of rehabilitation training, and the CRA provides novel and effective solutions to potentially solve the difficulties inherent in these two processes within a single system. In order to evaluate motor skills, EMG signals measured from a patient are analyzed using a log-linearized Gaussian mixture network that can classify motion patterns and compute the degree of similarity between the patient's measured EMG patterns and the desired pattern provided by the therapist. Tactile stimulators are used to convey motion instructions from the therapist or the system to the patient, and a rehabilitation robot can also be integrated into the developed prototype to increase its rehabilitation capacity. A series of experiments performed using the developed prototype demonstrated that the CRA can work as a human-human, human-computer and human-machine system. The experimental results indicated that the healthy (able-bodied) subjects were able to follow the desired muscular contraction levels instructed by the therapist or the system and perform proper joint motion without relying on visual feedback.
Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.
Seymour, Kiley J; Clifford, Colin W G
2012-05-01
Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, thus a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
Yu, Xiao-Guang; Li, Yuan-Qing; Zhu, Wei-Bin; Huang, Pei; Wang, Tong-Tong; Hu, Ning; Fu, Shao-Yun
2017-05-25
Melamine sponge, also known as nano-sponge, is widely used as an abrasive cleaner in our daily life. In this work, the fabrication of a wearable strain sensor for human motion detection is first demonstrated with a commercially available nano-sponge as a starting material. The key resistance sensitive material in the wearable strain sensor is obtained by the encapsulation of a carbonized nano-sponge (CNS) with silicone resin. The as-fabricated CNS/silicone sensor is highly sensitive to strain with a maximum gauge factor of 18.42. In addition, the CNS/silicone sensor exhibits a fast and reliable response to various cyclic loading within a strain range of 0-15% and a loading frequency range of 0.01-1 Hz. Finally, the CNS/silicone sensor as a wearable device for human motion detection including joint motion, eye blinking, blood pulse and breathing is demonstrated by attaching the sensor to the corresponding parts of the human body. In consideration of the simple fabrication technique, low material cost and excellent strain sensing performance, the CNS/silicone sensor is believed to have great potential in the next-generation of wearable devices for human motion detection.
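The gauge factor quoted above relates relative resistance change to applied strain, GF = (ΔR/R0)/ε. The helper below is a generic sketch of that relation; the function name and the illustrative numbers (chosen to echo the paper's maximum of 18.42) are ours, not taken from the authors' code.

```python
def gauge_factor(delta_r_over_r0, strain):
    """Gauge factor GF = (dR/R0) / strain: relative resistance
    change per unit strain of a resistive strain sensor."""
    if strain == 0:
        raise ValueError("strain must be nonzero")
    return delta_r_over_r0 / strain

def predicted_resistance_change(gf, strain):
    """Invert the relation: expected dR/R0 at a given strain."""
    return gf * strain
```

A sensor showing a 184.2% resistance change at 10% strain would thus have a gauge factor of about 18.4.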
Use of cues in virtual reality depends on visual feedback.
Fulvio, Jacqueline M; Rokers, Bas
2017-11-22
3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.
Low-cost human motion capture system for postural analysis onboard ships
NASA Astrophysics Data System (ADS)
Nocerino, Erica; Ackermann, Sebastiano; Del Pizzo, Silvio; Menna, Fabio; Troisi, Salvatore
2011-07-01
The study of human equilibrium, also known as postural stability, concerns different research sectors (medicine, kinesiology, biomechanics, robotics, sport) and is usually performed using motion analysis techniques to record human movement and posture. A wide range of techniques and methodologies has been developed, but the choice of instrumentation and sensors depends on the requirements of the specific application. Postural stability is a topic of great interest to the maritime community, since ship motions can make maintaining an upright stance demanding and difficult, with hazardous consequences for the safety of people onboard. The need to capture the motion of an individual standing on a ship during its daily service rules out the optical systems commonly used for human motion analysis: these sensors are not designed to operate in adverse environmental conditions (water, wetness, saltiness) or under suboptimal lighting. The solution proposed in this study consists of a motion acquisition system that can be easily used onboard ships. It makes use of two different methodologies: (I) motion capture with videogrammetry and (II) motion measurement with an Inertial Measurement Unit (IMU). The developed image-based motion capture system, made up of three low-cost, light, and compact video cameras, was validated against a commercial optical system and then used to test the reliability of the inertial sensors. In this paper, the whole process of planning, designing, calibrating, and assessing the accuracy of the motion capture system is reported and discussed. Results from laboratory tests and preliminary field campaigns are presented.
Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction
Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta
2018-01-01
The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739
Design and implementation of self-balancing coaxial two wheel robot based on HSIC
NASA Astrophysics Data System (ADS)
Hu, Tianlian; Zhang, Hua; Dai, Xin; Xia, Xianfeng; Liu, Ran; Qiu, Bo
2007-12-01
This paper studies the position and orientation control of a self-balancing coaxial two-wheel robot based on human simulated intelligent control (HSIC) theory. Using the Lagrange equation, a dynamic model of the self-balancing coaxial two-wheel robot is built, and the Sensory-motor Intelligent Schemas (SMIS) of the HSIC controller for the robot are designed by analyzing its movement and simulating a human controller. During motion, by perceiving the robot's position and orientation and using a multi-mode control strategy based on characteristic identification, the HSIC controller enables the robot to control its posture. A simulation platform was established using Matlab/Simulink, and a motion controller was designed and realized on the RT-Linux real-time operating system, employing a high-speed ARM9 processor (S3C2440) as the kernel of the motion controller. The effectiveness of the new design is verified by experiment.
MR-assisted PET Motion Correction for Neurological Studies in an Integrated MR-PET Scanner
Catana, Ciprian; Benner, Thomas; van der Kouwe, Andre; Byars, Larry; Hamm, Michael; Chonde, Daniel B.; Michel, Christian J.; El Fakhri, Georges; Schmand, Matthias; Sorensen, A. Gregory
2011-01-01
Head motion is difficult to avoid in long PET studies; it degrades image quality and offsets the benefit of using a high-resolution scanner. As a potential solution in an integrated MR-PET scanner, the simultaneously acquired MR data can be used for motion tracking. In this work, a novel data processing and rigid-body motion correction (MC) algorithm for the MR-compatible BrainPET prototype scanner is described, and proof-of-principle phantom and human studies are presented. Methods: To account for motion, the PET prompt and random coincidences as well as the sensitivity data are processed in line-of-response (LOR) space according to the MR-derived motion estimates. After sinogram-space rebinning, the corrected data are summed, and the motion-corrected PET volume is reconstructed from these sinograms and the attenuation and scatter sinograms in the reference position. The accuracy of the MC algorithm was first tested using a Hoffman phantom. Next, human volunteer studies were performed, and motion estimates were obtained using two high-temporal-resolution MR-based motion tracking techniques. Results: After accounting for the physical mismatch between the two scanners, perfectly co-registered MR and PET volumes are reproducibly obtained. The MR output gates inserted into the PET list-mode data allow the temporal correlation of the two data sets within 0.2 s. The Hoffman phantom volume reconstructed by processing the PET data in LOR space was similar to the one obtained by processing the data using the standard methods and applying the MC in image space, demonstrating the quantitative accuracy of the novel MC algorithm. In human volunteer studies, motion estimates were obtained from echo planar imaging and cloverleaf navigator sequences every 3 s and 20 ms, respectively. Substantially improved PET images with excellent delineation of specific brain structures were obtained after applying the MC using these MR-based estimates.
Conclusion: A novel MR-based MC algorithm was developed for the integrated MR-PET scanner. High-temporal-resolution MR-derived motion estimates (obtained while simultaneously acquiring anatomical or functional MR data) can be used for PET MC. An MR-based MC has the potential to improve PET as a quantitative method, increasing its reliability and reproducibility, which could benefit a large number of neurological applications. PMID:21189415
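The core of any rigid-body motion correction is inverting an MR-derived pose (rotation R, translation t) to map measured coordinates back to the reference position. The sketch below is a simplified image-space toy, not the LOR-space algorithm of the paper; all names are ours.

```python
import math

def rot_z(theta):
    """3x3 rotation matrix about the z axis (a stand-in for a full
    MR-derived head rotation)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def move(p, R, t):
    """Forward rigid-body motion during the scan: p' = R p + t."""
    q = mat_vec(R, p)
    return [q[i] + t[i] for i in range(3)]

def correct(p_moved, R, t):
    """Undo the estimated motion: p_ref = R^T (p' - t),
    valid because R is orthogonal (R^-1 = R^T)."""
    d = [p_moved[i] - t[i] for i in range(3)]
    return mat_vec(transpose(R), d)
```

Applying `correct` to a point displaced by `move` with the same pose estimate recovers the original reference coordinates, which is the invariant the MC algorithm relies on.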
Park, Jung Jin; Hyun, Woo Jin; Mun, Sung Cik; Park, Yong Tae; Park, O Ok
2015-03-25
Because of their outstanding electrical and mechanical properties, graphene strain sensors have attracted extensive attention for electronic applications in virtual reality, robotics, medical diagnostics, and healthcare. Although several strain sensors based on graphene have been reported, the stretchability and sensitivity of these sensors remain limited, and also there is a pressing need to develop a practical fabrication process. This paper reports the fabrication and characterization of new types of graphene strain sensors based on stretchable yarns. Highly stretchable, sensitive, and wearable sensors are realized by a layer-by-layer assembly method that is simple, low-cost, scalable, and solution-processable. Because of the yarn structures, these sensors exhibit high stretchability (up to 150%) and versatility, and can detect both large- and small-scale human motions. For this study, wearable electronics are fabricated with implanted sensors that can monitor diverse human motions, including joint movement, phonation, swallowing, and breathing.
Fiber-based generator for wearable electronics and mobile medication.
Zhong, Junwen; Zhang, Yan; Zhong, Qize; Hu, Qiyi; Hu, Bin; Wang, Zhong Lin; Zhou, Jun
2014-06-24
Smart garments for monitoring physiological and biomechanical signals of the human body are key sensors for personalized healthcare. However, they typically require bulky battery packs or have to be plugged into an electric plug in order to operate. Thus, a smart shirt that can extract energy from human body motions to run body-worn healthcare sensors is particularly desirable. Here, we demonstrated a metal-free fiber-based generator (FBG) via a simple, cost-effective method by using commodity cotton threads, a polytetrafluoroethylene aqueous suspension, and carbon nanotubes as source materials. The FBGs can convert biomechanical motions/vibration energy into electricity utilizing the electrostatic effect with an average output power density of ∼0.1 μW/cm² and have been identified as an effective building element for a power shirt to trigger a wireless body temperature sensor system. Furthermore, the FBG was demonstrated as a self-powered active sensor to quantitatively detect human motion.
Laban movement analysis to classify emotions from motion
NASA Astrophysics Data System (ADS)
Dewan, Swati; Agarwal, Shubham; Singh, Navjyoti
2018-04-01
In this paper, we present a study of Laban Movement Analysis (LMA) for understanding basic human emotions from nonverbal human behavior. While there are many studies on understanding behavioral patterns in natural language processing and speech processing applications, understanding emotions or behavior from non-verbal human motion is still a very challenging and largely unexplored field. LMA provides a rich overview of the scope of movement possibilities. These basic elements can be used for generating movement or for describing movement. They provide an inroad to understanding movement and to developing movement efficiency and expressiveness. Each human being combines these movement factors in his or her own unique way and organizes them to create phrases and relationships which reveal personal, artistic, or cultural style. In this work, we build a motion descriptor based on a deep understanding of Laban theory. The proposed descriptor builds on previous works and encodes experiential features using temporal windows. We present a more conceptually elaborate formulation of Laban theory and test it in a relatively new domain of behavioral research with applications in human-machine interaction. The recognition of affective human communication may provide developers with a rich source of information for creating systems that are capable of interacting well with humans. We test our algorithm on the UCLIC dataset, which consists of body motions of 13 non-professional actors portraying anger, fear, happiness, and sadness. We achieve an accuracy of 87.30% on this dataset.
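Temporal-window motion descriptors of the kind used for Laban-style effort features can be sketched as below. The per-window mean speed (a rough Time-effort proxy) and mean acceleration magnitude (a rough Weight/Flow proxy) are generic stand-ins for the paper's descriptor; the function name and windowing scheme are our assumptions.

```python
import math

def window_features(traj, dt, win):
    """Laban-inspired descriptors over non-overlapping temporal windows.
    traj: list of (x, y) positions of one joint sampled every dt seconds.
    Returns one (mean_speed, mean_accel_magnitude) pair per window."""
    # Finite-difference velocities and accelerations.
    vel = [((traj[i + 1][0] - traj[i][0]) / dt,
            (traj[i + 1][1] - traj[i][1]) / dt) for i in range(len(traj) - 1)]
    acc = [((vel[i + 1][0] - vel[i][0]) / dt,
            (vel[i + 1][1] - vel[i][1]) / dt) for i in range(len(vel) - 1)]
    feats = []
    for s in range(0, len(acc) - win + 1, win):
        mean_speed = sum(math.hypot(vx, vy) for vx, vy in vel[s:s + win]) / win
        mean_acc = sum(math.hypot(ax, ay) for ax, ay in acc[s:s + win]) / win
        feats.append((mean_speed, mean_acc))
    return feats
```

A constant-velocity trajectory, for instance, produces windows with the nominal speed and zero acceleration, as expected for a "sustained, light" movement in effort terms.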
Hybrid Orientation Based Human Limbs Motion Tracking Method
Glonek, Grzegorz; Wojciechowski, Adam
2017-01-01
One of the key technologies behind human-machine interaction and human motion diagnosis is limb motion tracking. To make limb tracking efficient, it must be able to estimate a precise and unambiguous position of each tracked human joint and the resulting body-part pose. In recent years, body pose estimation has become very popular and broadly available to home users because of easy access to cheap tracking devices. Their robustness can be improved by fusing data from different tracking modalities. The paper defines a novel orientation-based data-fusion approach, in contrast to the position-based approach that dominates the literature, for two classes of tracking devices: depth sensors (i.e., Microsoft Kinect) and inertial measurement units (IMUs). A detailed analysis of their operating characteristics allowed a new method to be elaborated that fuses limb orientation data from both devices more precisely and compensates for their inaccuracies. The paper presents the series of experiments performed to verify the method's accuracy. This novel approach outperformed the precision of position-based joint tracking by up to 18%. PMID:29232832
Leduc, Nicolas; Atallah, Vincent; Escarmant, Patrick; Vinh-Hung, Vincent
2016-01-01
Monitoring and controlling respiratory motion is a challenge for the accuracy and safety of therapeutic irradiation of thoracic tumors. Various commercial systems based on the monitoring of internal or external surrogates have been developed but remain costly. In this article we describe and validate Madibreast, an in-house-made respiratory monitoring and processing device based on optical tracking of external markers. We designed an optical apparatus to ensure real-time submillimetric image resolution at 4 m. Using OpenCv libraries, we optically tracked high-contrast markers set on patients' breasts. Validation of spatial and temporal accuracy was performed on a mechanical phantom and on human breast. Madibreast was able to track marker motion at speeds up to 5 cm/s, at a frame rate of 30 fps, with submillimetric accuracy on the mechanical phantom and human breasts. Latency was below 100 ms. Concomitant monitoring of three different locations on the breast showed discrepancies in axial motion of up to 4 mm for deep-breathing patterns. This low-cost computer-vision system for real-time motion monitoring of the irradiation of breast cancer patients showed submillimetric accuracy and acceptable latency. It allowed the authors to highlight differences in surface motion that may be correlated to tumor motion. PACS number(s): 87.55.km PMID:27685116
A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine
Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun
2017-01-01
In this paper, we propose a three-dimensional design and evaluation framework and process, based on a probabilistic motion synthesis algorithm and a biomechanical analysis system, for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human–machine–environment model as well as a squat motion synthesis system and a biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between the human body and the machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program, based on a Gaussian process regression algorithm, with a set of given values for the independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and the squat motion. We validated the model and algorithm through physical experiments measuring electromyography (EMG) signals, ground forces, and squat motions, as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines. PMID:28178184
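The Gaussian process regression step can be sketched minimally as follows. This 1-D toy (RBF kernel, naive Gaussian-elimination solve) only illustrates the regression machinery that maps an independent variable to a motion parameter; it is not the authors' motion synthesis model, and all names and values are invented.

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel with length scale ell."""
    return math.exp(-(a - b) ** 2 / (2 * ell ** 2))

def gp_predict(xs, ys, x_star, ell=1.0, noise=1e-6):
    """Minimal 1-D GP regression: predictive mean at x_star given
    training pairs (xs, ys); tiny diagonal noise for stability."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], ell) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    # Solve K alpha = ys by Gaussian elimination with partial pivoting.
    a = [K[i][:] + [ys[i]] for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(a[r][c]))
        a[c], a[piv] = a[piv], a[c]
        for r in range(c + 1, n):
            f = a[r][c] / a[c][c]
            for k in range(c, n + 1):
                a[r][k] -= f * a[c][k]
    alpha = [0.0] * n
    for r in range(n - 1, -1, -1):
        alpha[r] = (a[r][n] - sum(a[r][k] * alpha[k]
                                  for k in range(r + 1, n))) / a[r][r]
    # Predictive mean: k(x_star, X) @ alpha.
    return sum(rbf(x_star, xs[i], ell) * alpha[i] for i in range(n))
```

With near-zero noise the predictor interpolates the training data and decays toward the prior mean (zero here) far from it; in the paper's setting the inputs would instead be squat-condition variables and the outputs joint trajectories.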
Muscle Motion Solenoid Actuator
NASA Astrophysics Data System (ADS)
Obata, Shuji
It is one of our dreams to mechanically restore lost body function for injured humans. Realistic humanoid robots composed of such machines require muscle-motion actuators controlled entirely by pulling actions. In particular, antagonistic pairs of bi-articular muscles are very important in animal motion. A system of actuators is proposed using the electromagnetic force of solenoids, with a stroke length of over 10 cm and a force of about 20 N, as needed to move a real human arm. The devised actuators build on recent developments in electromagnetic materials; older materials could not provide such performance. The composite actuators are controlled by a high-performance computer and software to produce lifelike motions.
Full-motion video analysis for improved gender classification
NASA Astrophysics Data System (ADS)
Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.
2014-06-01
The ability of computer systems to perform gender classification using the dynamic motion of a human subject has important applications in medicine, human factors, and human-computer interface systems. Previous work in motion analysis has used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video, motion capture, and range data provide datasets of higher temporal and spatial resolution for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets collected in controlled environments. In this paper, we apply machine learning techniques to a new dataset with a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on this larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation improve from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
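The leave-one-out protocol behind the classification rates above can be sketched as follows. A nearest-neighbour rule stands in for the nonlinear support vector machine to keep the example dependency-free, and the toy data are invented; for per-subject data like the paper's (two trials per subject), leave-one-subject-out would be the safer variant, since it prevents a subject's second trial from leaking into training.

```python
import math

def one_nn(train, labels, x):
    """Nearest-neighbour prediction (a simple nonlinear stand-in
    for the SVM classifier used in the paper)."""
    best = min(range(len(train)), key=lambda i: math.dist(train[i], x))
    return labels[best]

def loo_accuracy(X, y):
    """Leave-one-out cross-validation: each trial is held out once,
    the classifier is fit on the rest, and accuracy is averaged."""
    hits = 0
    for i in range(len(X)):
        X_train = X[:i] + X[i + 1:]
        y_train = y[:i] + y[i + 1:]
        hits += one_nn(X_train, y_train, X[i]) == y[i]
    return hits / len(X)
```

On a cleanly separable toy set the protocol reports perfect accuracy; on real gait features the same loop yields the 73%/88% style figures the paper compares.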
Human Age Estimation Method Robust to Camera Sensor and/or Face Movement
Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung
2015-01-01
Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems are limited by various factors, such as camera motion and optical blurring, facial expressions, and gender. Motion blur typically appears in face images because of movement of the camera sensor and/or movement of the face during image acquisition. The facial features in captured images are therefore distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient at enhancing age estimation performance compared with systems that do not employ it. PMID:26334282
A four-dimensional motion field atlas of the tongue from tagged and cine magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Xing, Fangxu; Prince, Jerry L.; Stone, Maureen; Wedeen, Van J.; El Fakhri, Georges; Woo, Jonghye
2017-02-01
Representation of human tongue motion using three-dimensional vector fields over time can be used to better understand tongue function during speech, swallowing, and other lingual behaviors. To characterize the inter-subject variability of the tongue's shape and motion of a population carrying out one of these functions it is desirable to build a statistical model of the four-dimensional (4D) tongue. In this paper, we propose a method to construct a spatio-temporal atlas of tongue motion using magnetic resonance (MR) images acquired from fourteen healthy human subjects. First, cine MR images revealing the anatomical features of the tongue are used to construct a 4D intensity image atlas. Second, tagged MR images acquired to capture internal motion are used to compute a dense motion field at each time frame using a phase-based motion tracking method. Third, motion fields from each subject are pulled back to the cine atlas space using the deformation fields computed during the cine atlas construction. Finally, a spatio-temporal motion field atlas is created to show a sequence of mean motion fields and their inter-subject variation. The quality of the atlas was evaluated by deforming cine images in the atlas space. Comparison between deformed and original cine images showed high correspondence. The proposed method provides a quantitative representation to observe the commonality and variability of the tongue motion field for the first time, and shows potential in evaluation of common properties such as strains and other tensors based on motion fields.
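The final atlas step, a sequence of mean motion fields with their inter-subject variation, amounts to voxel-wise statistics over subjects once all fields are in the common atlas space. Below is a minimal sketch under an assumed data layout (one displacement vector per voxel per subject, per time frame); all names are ours, not the authors'.

```python
import math

def field_statistics(fields):
    """Voxel-wise mean motion vector and a scalar spread measure
    across subjects.
    fields: one list per subject of (dx, dy, dz) vectors, all
    defined on the same atlas voxel grid and time frame."""
    n_sub = len(fields)
    n_vox = len(fields[0])
    means, stds = [], []
    for v in range(n_vox):
        mean = tuple(sum(f[v][c] for f in fields) / n_sub for c in range(3))
        # Pooled variance of the three components around the mean vector.
        var = sum((f[v][c] - mean[c]) ** 2
                  for f in fields for c in range(3)) / n_sub
        means.append(mean)
        stds.append(math.sqrt(var))
    return means, stds
```

Repeating this per time frame yields the spatio-temporal atlas: a mean field sequence plus a variability map that highlights where subjects' tongue motion diverges.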
Whole-Body Human Inverse Dynamics with Distributed Micro-Accelerometers, Gyros and Force Sensing †
Latella, Claudia; Kuppuswamy, Naveen; Romano, Francesco; Traversaro, Silvio; Nori, Francesco
2016-01-01
Human motion tracking is a powerful tool used in a large range of applications that require human movement analysis. Although it is a well-established technique, its main limitation is the lack of real-time estimation of kinetic information, such as forces and torques, during motion capture. In this paper, we present a novel approach to soft wearable force tracking for humans, for the simultaneous estimation of whole-body forces along with the motion. The early stage of our framework encompasses traditional passive-marker-based methods and inertial and contact force sensor modalities, and harnesses a probabilistic computational technique for estimating dynamic quantities that was originally proposed in the domain of humanoid robot control. We present an experimental analysis of subjects performing a two-degrees-of-freedom bowing task, and we estimate the motion and kinetic quantities. The results demonstrate the validity of the proposed method. We discuss the possible use of this technique in the design of a novel soft wearable force tracking device and its potential applications. PMID:27213394
Finite element analysis of moment-rotation relationships for human cervical spine.
Zhang, Qing Hang; Teo, Ee Chon; Ng, Hong Wan; Lee, Vee Sin
2006-01-01
A comprehensive, geometrically accurate, nonlinear C0-C7 FE model of the head and cervical spine based on the actual geometry of a human cadaver specimen was developed. The motions of each cervical vertebral level under a pure moment loading of 1.0 Nm, applied incrementally on the skull to simulate the movements of the head and cervical spine under flexion, extension, axial rotation and lateral bending with the inferior surface of the C7 vertebral body fully constrained, were analysed. The predicted range of motion (ROM) for each motion segment was computed and compared with published experimental data. The model predicted the nonlinear moment-rotation relationship of the human cervical spine. Under the same loading magnitude, the model predicted the largest rotation in extension, followed by flexion and axial rotation, and the least ROM in lateral bending. The upper cervical spine is more flexible than the lower cervical levels. The motions of the two uppermost motion segments account for half (or more) of the whole cervical spine motion under rotational loadings. The differences in the ROMs among the lower cervical levels (C3-C7) were relatively small. The FE-predicted segmental motions effectively reflect the behavior of the human cervical spine and were in agreement with the experimental data. The C0-C7 FE model offers potential for biomedical and injury studies.
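As a toy illustration of how a segmental ROM is read off such a moment-rotation curve, the sketch below interpolates a synthetic stiffening curve at the 1.0 Nm target load (the curve is invented; the model's actual curves come from the FE solver):

```python
import numpy as np

# Incremental moment steps and a toy nonlinear (stiffening) response in degrees.
moments = np.linspace(0.0, 1.0, 11)               # applied moment (Nm)
rotations = 8.0 * (1.0 - np.exp(-3.0 * moments))  # synthetic moment-rotation curve

# ROM at the target load is the rotation interpolated at 1.0 Nm.
rom_at_1nm = np.interp(1.0, moments, rotations)
print(round(float(rom_at_1nm), 2))  # ~7.6 degrees for this synthetic curve
```

Repeating this per motion segment and per loading direction yields the ROM tables that are compared against published cadaver data.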
In-vehicle group activity modeling and simulation in sensor-based virtual environment
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Telagamsetti, Durga; Poshtyar, Azin; Chan, Alex; Hu, Shuowen
2016-05-01
Human group activity recognition is a very complex and challenging task, especially for Partially Observable Group Activities (POGA) that occur in confined spaces with limited visual observability and often under severe occlusion. In this paper, we present the IRIS Virtual Environment Simulation Model (VESM) for the modeling and simulation of dynamic POGA. More specifically, we address sensor-based modeling and simulation of a specific category of POGA, called In-Vehicle Group Activities (IVGA). In VESM, human-like animated characters, called humanoids, are employed to simulate complex in-vehicle group activities within the confined space of a modeled vehicle. Each articulated humanoid is kinematically modeled with comparable physical attributes and appearances that are linkable to its human counterpart. Each humanoid exhibits harmonious full-body motion, simulating human-like gestures and postures, facial impressions, and hand motions for coordinated dexterity. VESM facilitates the creation of interactive scenarios consisting of multiple humanoids with different personalities and intentions, which are capable of performing complicated human activities within the confined space inside a typical vehicle. In this paper, we demonstrate the efficiency and effectiveness of VESM in terms of its capabilities to seamlessly generate time-synchronized, multi-source, and correlated imagery datasets of IVGA, which are useful for the training and testing of multi-source full-motion video processing and annotation. Furthermore, we demonstrate full-motion video processing of such simulated scenarios under different operational contextual constraints.
NASA Astrophysics Data System (ADS)
Radzicki, Vincent R.; Boutte, David; Taylor, Paul; Lee, Hua
2017-05-01
Radar-based detection of human targets behind walls or in dense urban environments is an important technical challenge with many practical applications in security, defense, and disaster recovery. Radar reflections from a human can be orders of magnitude weaker than those from objects encountered in urban settings such as walls, cars, or possibly rubble after a disaster. Furthermore, these objects can act as secondary reflectors and produce multipath returns from a person. To mitigate these issues, processing of radar return data needs to be optimized for recognizing human motion features such as walking, running, or breathing. This paper presents a theoretical analysis of the modulation effects human motion has on the radar waveform and of how high levels of multipath can distort these motion effects. From this analysis, an algorithm is designed and optimized for tracking human motion in heavily cluttered environments. The tracking results are used as the fundamental detection/classification tool to discriminate human targets from others by identifying human motion traits such as predictable walking patterns and periodicity in breathing rates. The theoretical formulations are tested against simulation and measured data collected using a low-power, portable see-through-the-wall radar system that could be practically deployed in real-world scenarios. Lastly, the performance of the algorithm is evaluated in a series of experiments where both a single person and multiple people are moving in an indoor, cluttered environment.
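The breathing modulation analyzed above can be illustrated with a back-of-the-envelope sketch: a sinusoidal chest displacement x(t) = A sin(2πf_b t) produces a small sinusoidal Doppler shift f_d(t) = 2v(t)/λ. All parameter values below (carrier frequency, chest excursion, breathing rate) are assumed purely for illustration:

```python
import numpy as np

c = 3e8
f_carrier = 3e9            # assumed carrier frequency (Hz)
lam = c / f_carrier        # wavelength: 0.1 m
A, f_b = 0.005, 0.25       # 5 mm chest excursion, 15 breaths per minute

t = np.linspace(0.0, 8.0, 1000)
v = 2 * np.pi * f_b * A * np.cos(2 * np.pi * f_b * t)  # chest velocity (m/s)
f_doppler = 2 * v / lam                                # instantaneous Doppler shift

print(round(float(f_doppler.max()), 3))  # 0.157 Hz peak Doppler shift
```

The sub-hertz scale of this shift is why slow-time processing over several seconds, rather than single-pulse Doppler, is needed to pick out breathing against clutter.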
The Vestibular System and Human Dynamic Space Orientation
NASA Technical Reports Server (NTRS)
Meiry, J. L.
1966-01-01
The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed-loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in a simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion and combined. Motion cues, sensed by the vestibular system and through tactile sensation, enable the operator to generate more lead compensation than in fixed-base simulation with only visual input.
The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.
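The lag-lead description used above can be sketched as a first-order transfer function H(s) = K(1 + T_lead s)/(1 + T_lag s), evaluated on the imaginary axis. The time constants below are placeholders for illustration, not the values identified in the study:

```python
import numpy as np

# Illustrative lag-lead model: unity low-frequency gain, attenuated
# high-frequency gain of T_lead/T_lag.
K, T_lead, T_lag = 1.0, 0.5, 2.0
w = np.logspace(-2, 2, 5)          # angular frequency grid (rad/s)
s = 1j * w
H = K * (1 + T_lead * s) / (1 + T_lag * s)

for wi, gain in zip(w, np.abs(H)):
    print(f"w={wi:8.2f} rad/s  |H|={gain:.3f}")
```

The phase lead contributed between the two corner frequencies 1/T_lag and 1/T_lead is what such models use to capture anticipatory compensation.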
Traffic and Driving Simulator Based on Architecture of Interactive Motion
Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza
2015-01-01
This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid meso-microscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination. PMID:26491711
NASA Astrophysics Data System (ADS)
Chen, Xi; He, Jian; Song, Linlin; Zhang, Zengxing; Tian, Zhumei; Wen, Tao; Zhai, Cong; Chen, Yi; Cho, Jundong; Chou, Xiujian; Xue, Chenyang
2018-04-01
Triboelectric nanogenerators are widely used because of their low cost, simple manufacturing process and high output performance. In this work, a flexible one-structure arched triboelectric nanogenerator (FOAT), based on a common electrode to combine the single-electrode and contact-separation modes, was designed using silicone rubber, epoxy resin and a flexible electrode. A peak-to-peak short-circuit current of 18 μA and a peak-to-peak open-circuit voltage of 570 V can be obtained from the FOAT with a size of 5 × 7 cm² under a frequency of 3 Hz and a pressure of 300 N. The peak-to-peak short-circuit current of FOAT is increased by 29% and 80%, and the peak-to-peak open-circuit voltage is increased by 33% and 54%, compared with the single-electrode mode and contact-separation mode, respectively. FOAT realizes the combination of the two generation modes, which improves the output performance of the triboelectric nanogenerator (TENG). 62 light-emitting diodes (LEDs) can be completely lit up, and a 2.2 μF capacitor can be easily charged to 1.2 V in 9 s. When the FOAT is placed at different parts of the human body, the human motion energy can be harvested and serve as the sensing signal for a motion-monitoring sensor. Based on the above characteristics, FOAT exhibits great potential in illumination, power supplies for wearable electronic devices and self-powered motion monitoring via harvesting the energy of human motion.
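The capacitor-charging figure quoted above implies a modest average current, which is easy to check: since Q = CV, the mean rectified current is CV/t. A quick check with the reported numbers:

```python
# Average rectified current implied by charging 2.2 uF to 1.2 V in 9 s.
C = 2.2e-6   # capacitance in farads (2.2 uF)
V = 1.2      # final voltage in volts
t = 9.0      # charging time in seconds

i_avg = C * V / t
print(f"{i_avg * 1e9:.0f} nA")  # 293 nA average charging current
```

This is far below the quoted peak-to-peak short-circuit current, which is expected: TENG output is pulsed and poorly matched to a capacitive load, so only a small fraction of the peak current accumulates as stored charge.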
Observation and analysis of high-speed human motion with frequent occlusion in a large area
NASA Astrophysics Data System (ADS)
Wang, Yuru; Liu, Jiafeng; Liu, Guojun; Tang, Xianglong; Liu, Peng
2009-12-01
The use of computer vision technology in collecting and analyzing statistics during sports matches or training sessions is expected to provide valuable information for tactics improvement. However, the measurements published in the literature so far are either too unreliably documented to be used in training planning, owing to their limitations, or unsuitable for studying high-speed motion in a large area with frequent occlusions. A sports annotation system is introduced in this paper for tracking high-speed non-rigid human motion over a large playing area with the aid of a motion camera, taking short track speed skating competitions as an example. The proposed system is composed of two sub-systems: precise camera motion compensation and accurate motion acquisition. In the video registration step, a distinctive invariant point feature detector (probability density grads detector) and a global-parallax-based matching points filter are used to provide reliable and robust matching across a large range of affine distortion and illumination change. In the motion acquisition step, a joint color model constrained by the relationship between two regions and a Markov chain Monte Carlo based joint particle filter are emphasized, dividing the human body into two relative key regions. Several field tests were performed to assess measurement errors, including comparisons with popular algorithms. The system obtains position data on a 30 m × 60 m rink with root-mean-square error better than 0.3975 m, and velocity and acceleration data with absolute error better than 1.2579 m s⁻¹ and 0.1494 m s⁻², respectively.
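The Monte Carlo filtering idea behind the joint particle filter can be sketched with a minimal 1D bootstrap filter: propagate particles through a motion model, reweight them by a measurement likelihood, and resample. This is an illustrative reduction; the actual system tracks two coupled body regions with a joint color observation model, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_steps = 500, 30
true_x, vel = 0.0, 1.0                     # constant-velocity target (assumed)
particles = rng.normal(0.0, 1.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

for _ in range(n_steps):
    true_x += vel
    z = true_x + rng.normal(0.0, 0.5)      # noisy position measurement
    # Predict: propagate particles through the motion model plus process noise.
    particles += vel + rng.normal(0.0, 0.3, n_particles)
    # Update: reweight by the Gaussian measurement likelihood.
    weights *= np.exp(-0.5 * ((z - particles) / 0.5) ** 2)
    weights /= weights.sum()
    # Resample: multinomial resampling to counter weight degeneracy.
    idx = rng.choice(n_particles, n_particles, p=weights)
    particles = particles[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

estimate = particles.mean()
print(round(float(estimate), 1))  # close to the true position of 30.0
```

The paper's variant additionally constrains the two tracked regions relative to each other, which prunes particles that violate body-plausible configurations.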
Takei, Yuichiro; Katsuta, Hiroki; Takizawa, Kenichi; Ikegami, Tetsushi; Hamaguchi, Kiyoshi
2012-01-01
This paper presents an experimental evaluation of communication during human walking motion, using the medium access control (MAC) evaluation system for a prototype ultra-wideband (UWB) based wireless body area network, in order to determine suitable MAC parameter settings for data transmission. Its physical layer and MAC specifications are based on the draft standard IEEE 802.15.6. This paper studies the effects of the number of retransmissions and the number of GTS (guaranteed time slot) request commands in the CAP (contention access period) during human walking motion by varying the number of sensor nodes or the number of CFP (contention free period) slots in the superframe. The experiments were performed in an anechoic chamber. The number of packets received is decreased by packet loss caused by human walking motion in the case where 2 slots are set for the CFP, regardless of the number of nodes, and this materially decreases the total number of packets received. The number of retransmissions and GTS request commands increases with the number of nodes, largely reflecting the effects of the number of CFP slots in the case where 4 nodes are attached. In the cases where 2 or 3 nodes are attached and 4 slots are set for the CFP, the packet transmission rate is more than 95%. In the case where 4 nodes are attached and 6 slots are set for the CFP, the packet transmission rate is reduced to 88% at best.
Wearable strain sensors based on thin graphite films for human activity monitoring
NASA Astrophysics Data System (ADS)
Saito, Takanari; Kihara, Yusuke; Shirakashi, Jun-ichi
2017-12-01
Wearable health-monitoring devices have attracted increasing attention in disease diagnosis and health assessment. In many cases, such devices have been prepared by complicated multistep procedures, which waste materials and require expensive facilities. In this study, we focused on the pyrolytic graphite sheet (PGS), a low-cost, simple, and flexible material, for use in wearable devices for monitoring human activity. We investigated wearable devices based on PGSs for the observation of elbow and finger motions. The thin graphite films were fabricated by cutting small films from PGSs. The wearable devices were then made from the thin graphite films assembled on a commercially available rubber glove. Human motions could be observed using the wearable devices. These results suggest that wearable devices based on thin graphite films may broaden their application in cost-effective wearable electronics for the observation of human activity.
Gao, Yang; Fang, Xiaoliang; Tan, Jianping; Lu, Ting; Pan, Likun; Xuan, Fuzhen
2018-06-08
Wearable strain sensors based on nanomaterial/elastomer composites have potential applications in flexible electronic skin, human motion detection, human-machine interfaces, etc. In this research, a type of high-performance strain sensor has been developed using fragmentized carbon nanotube/polydimethylsiloxane (CNT/PDMS) composites. The CNT/PDMS composites were ground into fragments, and a liquid-induced densification method was used to fabricate the strain sensors. The strain sensors showed high sensitivity, with gauge factors (GFs) larger than 200, and a broad strain detection range up to 80%, much higher than those of strain sensors based on unfragmentized CNT/PDMS composites (GF < 1). The enhanced sensitivity of the strain sensors is ascribed to the sliding of individual fragmentized-CNT/PDMS-composite particles during mechanical deformation, which causes significant resistance change in the strain sensors. The strain sensors can differentiate mechanical stimuli and monitor various human body motions, such as bending of the fingers, human breathing, and blood pulsing.
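The gauge factor quoted above is defined as the relative resistance change per unit strain, GF = (ΔR/R₀)/ε. A minimal sketch with invented numbers chosen to land in the reported GF > 200 regime:

```python
# Gauge factor of a resistive strain sensor: GF = (dR/R0) / strain.
# The resistance values and strain below are illustrative, not measured data.
def gauge_factor(r0, r, strain):
    """Relative resistance change per unit applied strain."""
    return ((r - r0) / r0) / strain

gf = gauge_factor(r0=1000.0, r=3400.0, strain=0.01)  # 1 kOhm -> 3.4 kOhm at 1% strain
print(round(gf))  # 240
```

By contrast, a conventional metal-foil gauge (GF ≈ 2) would change resistance by only about 0.02% at the same 1% strain, which illustrates why composite-sliding mechanisms are attractive for body-motion sensing.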
3D Human Motion Editing and Synthesis: A Survey
Wang, Xin; Chen, Qiudi; Wang, Wanliang
2014-01-01
Methods for computing the kinematic and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395
An octahedral shear strain-based measure of SNR for 3D MR elastography
NASA Astrophysics Data System (ADS)
McGarry, M. D. J.; Van Houten, E. E. W.; Perriñez, P. R.; Pattison, A. J.; Weaver, J. B.; Paulsen, K. D.
2011-07-01
A signal-to-noise ratio (SNR) measure based on the octahedral shear strain (the maximum shear strain in any plane for a 3D state of strain) is presented for magnetic resonance elastography (MRE), where motion-based SNR measures are commonly used. The shear strain, γ, is directly related to the shear modulus, μ, through the definition of shear stress, τ = μγ. Therefore, noise in the strain is the important factor in determining the quality of motion data, rather than the noise in the motion. Motion and strain SNR measures were found to be correlated for MRE of gelatin phantoms and the human breast. Analysis of the stiffness distributions of phantoms reconstructed from the measured motion data revealed a threshold for both strain and motion SNR where MRE stiffness estimates match independent mechanical testing. MRE of the feline brain showed significantly less correlation between the two SNR measures. The strain SNR measure had a threshold above which the reconstructed stiffness values were consistent between cases, whereas the motion SNR measure did not provide a useful threshold, primarily due to rigid body motion effects.
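For reference, the octahedral shear strain for a 3D strain state follows from the principal strains e1, e2, e3 (the eigenvalues of the strain tensor): γ_oct = (2/3)·√((e1−e2)² + (e2−e3)² + (e3−e1)²). A small sketch (the example tensor is illustrative):

```python
import numpy as np

def octahedral_shear_strain(strain_tensor):
    """Octahedral shear strain from a symmetric 3x3 strain tensor."""
    e1, e2, e3 = np.linalg.eigvalsh(strain_tensor)
    return (2.0 / 3.0) * np.sqrt((e1 - e2) ** 2 + (e2 - e3) ** 2 + (e3 - e1) ** 2)

# Pure shear in the x-y plane: tensor shear component 0.01 (engineering
# shear strain 0.02); the principal strains are +0.01, 0, and -0.01.
eps = np.array([[0.0, 0.01, 0.0],
                [0.01, 0.0, 0.0],
                [0.0, 0.0, 0.0]])
print(round(float(octahedral_shear_strain(eps)), 6))
```

An SNR built on this scalar weights the motion components by how much shear, and hence how much stiffness information, they actually carry, rather than by displacement amplitude alone.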
Barnett-Cowan, Michael; Meilinger, Tobias; Vidal, Manuel; Teufel, Harald; Bülthoff, Heinrich H
2012-05-10
Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. 16 observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes.
In the frontal plane observers were more likely to overestimate angle size while there was no such bias in the sagittal plane. Finally, observers responded slower when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes respectively, suggests that the neural representation of self-motion through space is non-symmetrical which may relate to the fact that humans experience movement mostly within the horizontal plane.
Multilayer Joint Gait-Pose Manifolds for Human Gait Motion Modeling.
Ding, Meng; Fan, Guolian
2015-11-01
We present new multilayer joint gait-pose manifolds (multilayer JGPMs) for complex human gait motion modeling, where three latent variables are defined jointly in a low-dimensional manifold to represent a variety of body configurations. Specifically, the pose variable (along the pose manifold) denotes a specific stage in a walking cycle; the gait variable (along the gait manifold) represents different walking styles; and the linear scale variable characterizes the maximum stride in a walking cycle. We discuss two kinds of topological priors for coupling the pose and gait manifolds, i.e., cylindrical and toroidal, to examine their effectiveness and suitability for motion modeling. We resort to a topologically-constrained Gaussian process (GP) latent variable model to learn the multilayer JGPMs where two new techniques are introduced to facilitate model learning under limited training data. First is training data diversification that creates a set of simulated motion data with different strides. Second is the topology-aware local learning to speed up model learning by taking advantage of the local topological structure. The experimental results on the Carnegie Mellon University motion capture data demonstrate the advantages of our proposed multilayer models over several existing GP-based motion models in terms of the overall performance of human gait motion modeling.
Takano, Wataru; Kusajima, Ikuo; Nakamura, Yoshihiko
2016-08-01
It is desirable for robots to be able to linguistically understand human actions during human-robot interactions. Previous research has developed frameworks for encoding human full-body motion into model parameters and for classifying motion into specific categories. For full understanding, the motion categories need to be connected to natural language such that the robots can interpret human motions as linguistic expressions. This paper proposes a novel framework for integrating observation of human motion with that of natural language. The framework consists of two models: the first statistically learns the relations between motions and their relevant words, and the second statistically learns sentence structures as word n-grams. Integration of these two models allows robots to generate sentences from human motions by searching for words relevant to the motion using the first model and then arranging these words in an appropriate order using the second model. This produces the sentences most likely to be generated from the motion. The proposed framework was tested on human full-body motion measured by an optical motion capture system, in which descriptive sentences were manually attached to the motions, and the validity of the system was demonstrated.
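The two-model pipeline can be caricatured in a few lines: model 1 supplies the words relevant to a recognized motion, and model 2, a bigram language model, greedily orders them into a sentence. The vocabulary and probabilities below are invented purely for illustration:

```python
# Model 1 (hypothetical): words statistically associated with each motion.
motion_words = {"walking": {"person", "walks", "a", "forward"}}

# Model 2 (hypothetical): bigram probabilities P(next | prev) from a corpus;
# "<s>" marks the sentence start.
bigram = {
    "<s>": {"a": 0.9, "person": 0.1},
    "a": {"person": 1.0},
    "person": {"walks": 1.0},
    "walks": {"forward": 1.0},
}

def generate(motion):
    """Greedily arrange the motion-relevant words by bigram likelihood."""
    words, out, prev = set(motion_words[motion]), [], "<s>"
    while words:
        cands = {w: p for w, p in bigram.get(prev, {}).items() if w in words}
        if not cands:
            break
        prev = max(cands, key=cands.get)  # most likely next word
        out.append(prev)
        words.remove(prev)
    return " ".join(out)

print(generate("walking"))  # a person walks forward
```

The actual framework scores full word sequences rather than committing greedily, but the division of labor between a motion-to-word model and a word-ordering model is the same.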
Shape-based human detection for threat assessment
NASA Astrophysics Data System (ADS)
Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.
2004-07-01
Detection of intrusions for early threat assessment requires the capability of distinguishing whether the intrusion is a human, an animal, or another object. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost effective, these systems suffer from high rates of false alarm, especially when monitoring open environments: any moving object, including an animal, can falsely trigger the security system. Other security systems that utilize video equipment require human interpretation of the scene in order to make real-time threat assessments. A shape-based human detection technique has been developed for accurate early threat assessment in open and remote environments. Potential threats are isolated from the static background scene using differential motion analysis, and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points connecting short and straight line segments and preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in a database. A power cepstrum technique has been developed to search for the best-matched contour in the database and to distinguish a human from other objects across different viewing angles and distances.
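The tangent-space representation mentioned above can be sketched as a turning function: each contour edge contributes its absolute direction over its share of normalized arc length, after which two shapes can be compared with a simple L2 distance. This is an illustrative reduction; the cepstrum-based search is omitted:

```python
import numpy as np

def turning_function(points, n_samples=64):
    """Sample a closed polygon's edge direction against normalized arc length."""
    pts = np.asarray(points, dtype=float)
    edges = np.roll(pts, -1, axis=0) - pts
    angles = np.unwrap(np.arctan2(edges[:, 1], edges[:, 0]))
    lengths = np.linalg.norm(edges, axis=1)
    arc = np.concatenate([[0.0], np.cumsum(lengths)]) / lengths.sum()
    s = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    idx = np.searchsorted(arc, s, side="right") - 1  # piecewise-constant angle
    return angles[idx]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rect = [(0, 0), (2, 0), (2, 1), (0, 1)]
d = np.linalg.norm(turning_function(square) - turning_function(rect)) / 8.0
print(round(float(d), 3))  # nonzero: the shapes differ in edge proportions
```

The representation is translation- and scale-invariant by construction, which is what makes it robust to viewing distance; rotation shifts the function by a constant, which matching can normalize away.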
Self-evaluation on Motion Adaptation for Service Robots
NASA Astrophysics Data System (ADS)
Funabora, Yuki; Yano, Yoshikazu; Doki, Shinji; Okuma, Shigeru
We propose a self-evaluation method for motion adaptation to environmental changes in service robots. Several motions, such as walking, dancing, and demonstration, are described as time-series patterns. These motions are optimized for the architecture of the robot and for a certain surrounding environment; in an unknown operating environment, the robots cannot accomplish their tasks. We propose autonomous motion generation techniques based on heuristic search over histories of internal sensor values. New motion patterns are explored in the unknown operating environment based on self-evaluation. The robot has some prepared motions that realize the tasks under the designed environment. Internal sensor values observed under the designed environment with the prepared motions capture the results of interaction with the environment. The self-evaluation is composed of the difference in internal sensor values between the designed environment and the unknown operating environment. The proposed method modifies the motions to synchronize the interaction results in both environments. New motion patterns are generated to maximize the self-evaluation function without external information such as run length, global position of the robot, or human observation. Experimental results show the possibility of autonomously adapting patterned motions to environmental changes.
Force reflecting hand controller
NASA Technical Reports Server (NTRS)
Mcaffee, Douglas A. (Inventor); Snow, Edward R. (Inventor); Townsend, William T. (Inventor)
1993-01-01
A universal input device for interfacing a human operator with a slave machine such as a robot or the like includes a plurality of serially connected mechanical links extending from a base. A handgrip is connected to the mechanical links distal from the base such that a human operator may grasp the handgrip and control the position thereof relative to the base through the mechanical links. A plurality of rotary joints is arranged to connect the mechanical links together to provide at least three translational degrees of freedom and at least three rotational degrees of freedom of motion of the handgrip relative to the base. A cable and pulley assembly for each joint is connected to a corresponding motor for transmitting forces from the slave machine to the handgrip to provide kinesthetic feedback to the operator and for producing control signals that may be transmitted from the handgrip to the slave machine. The device gives excellent kinesthetic feedback, high-fidelity force/torque feedback, a kinematically simple structure, mechanically decoupled motion in all six degrees of freedom, and zero backlash. The device also has a much larger work envelope, greater stiffness and responsiveness, smaller stowage volume, and better overlap of the human operator's range of motion than previous designs.
NASA Astrophysics Data System (ADS)
Smyczynski, Mark S.; Gifford, Howard C.; Dey, Joyoni; Lehovich, Andre; McNamara, Joseph E.; Segars, W. Paul; King, Michael A.
2016-02-01
The objective of this investigation was to determine the effectiveness of three motion-reducing strategies in diminishing the degrading impact of respiratory motion on the detection of small solitary pulmonary nodules (SPNs) in single-photon emission computed tomographic (SPECT) imaging, in comparison to a standard clinical acquisition and the ideal case of imaging in the absence of respiratory motion. To do this, nonuniform rational B-spline cardiac-torso (NCAT) phantoms based on human-volunteer CT studies were generated spanning the respiratory cycle for a normal background distribution of Tc-99m NeoTect. Similarly, spherical phantoms of 1.0-cm diameter were generated to model small SPNs for each of the 150 uniquely located sites within the lungs, whose respiratory motion was based on the motion of normal structures in the volunteer CT studies. The SIMIND Monte Carlo program was used to produce SPECT projection data from these phantoms. Normal and single-lesion-containing SPECT projection sets with a clinically realistic Poisson noise level were created for the cases of 1) the end-expiration (EE) frame with all counts, 2) respiration-averaged motion with all counts, 3) one fourth of the 32 frames centered around EE (Quarter Binning), 4) one half of the 32 frames centered around EE (Half Binning), and 5) eight temporally binned frames spanning the respiratory cycle. Each of the sets of combined projection data was reconstructed with RBI-EM with system spatial-resolution compensation (RC). Based on the known motion for each of the 150 different lesions, the reconstructed volumes of the respiratory bins were shifted so as to superimpose the locations of the SPN onto that in the first bin (Reconstruct and Shift). Five human observers performed localization receiver operating characteristic (LROC) studies of SPN detection.
The observer results were analyzed for statistically significant differences in SPN detection accuracy among the three correction strategies, the standard acquisition, and the ideal case of the absence of respiratory motion. Our human-observer LROC study determined that the Quarter Binning and Half Binning strategies resulted in SPN detection accuracy statistically significantly below that of the standard clinical acquisition, whereas the Reconstruct and Shift strategy resulted in a detection accuracy not statistically significantly different from that of the ideal case. This investigation demonstrates that tumor detection based on acquisitions that use fewer than all of the potentially available counts may result in poorer detection despite limiting the motion of the lesion. The Reconstruct and Shift method results in tumor detection that is equivalent to ideal motion correction.
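The Reconstruct and Shift idea, aligning each respiratory bin's reconstruction by the known lesion displacement before combining, can be sketched in a few lines. This is an illustrative NumPy sketch, not the authors' code; the toy volumes, the integer-voxel displacements, and the `reconstruct_and_shift` helper are all hypothetical.

```python
import numpy as np

def reconstruct_and_shift(bins, displacements):
    """Superimpose the lesion location of every respiratory bin onto bin 0.

    bins          : list of 3-D reconstructed volumes (one per respiratory bin)
    displacements : integer-voxel lesion offsets of each bin relative to bin 0
    """
    aligned = []
    for vol, d in zip(bins, displacements):
        # np.roll shifts the volume so the lesion lands where it sits in bin 0
        aligned.append(np.roll(vol, shift=tuple(-np.asarray(d)), axis=(0, 1, 2)))
    return np.mean(aligned, axis=0)

# toy example: a bright voxel that moves 2 voxels along the first axis between bins
vol0 = np.zeros((8, 8, 8)); vol0[4, 4, 4] = 1.0
vol1 = np.zeros((8, 8, 8)); vol1[6, 4, 4] = 1.0
combined = reconstruct_and_shift([vol0, vol1], [(0, 0, 0), (2, 0, 0)])
```

After alignment, the lesion counts from both bins add up in one place instead of being smeared along the motion path.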
Quantifying Astronaut Tasks: Robotic Technology and Future Space Suit Design
NASA Technical Reports Server (NTRS)
Newman, Dava
2003-01-01
The primary aim of this research effort was to advance the current understanding of astronauts' capabilities and limitations in space-suited EVA by developing models of the constitutive and compatibility relations of a space suit, based on experimental data gained from human test subjects as well as a 12-degree-of-freedom human-sized robot, and utilizing these fundamental relations to estimate a human factors performance metric for space-suited EVA work. The three specific objectives are to: 1) Compile a detailed database of torques required to bend the joints of a space suit, using realistic, multi-joint human motions. 2) Develop a mathematical model of the constitutive relations between space suit joint torques and joint angular positions, based on experimental data, and compare other investigators' physics-based models to experimental data. 3) Estimate the work envelope of a space-suited astronaut, using the constitutive and compatibility relations of the space suit. The body of work that makes up this report includes experimentation, empirical and physics-based modeling, and model applications. A detailed space suit joint torque-angle database was compiled with a novel experimental approach that used space-suited human test subjects to generate realistic, multi-joint motions and an instrumented robot to measure the torques required to accomplish these motions in a space suit. Based on the experimental data, a mathematical model is developed to predict joint torque from the joint angle history. Two physics-based models of pressurized fabric cylinder bending are compared to experimental data, yielding design insights. The mathematical model is applied to EVA operations in an inverse kinematic analysis coupled to the space suit model to calculate the volume in which space-suited astronauts can work with their hands, demonstrating that operational human factors metrics can be predicted from fundamental space suit information.
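The work-envelope idea, keeping only the postures whose suit torques a subject can sustain and mapping them through forward kinematics, can be illustrated with a toy planar two-link chain. Everything here is invented for illustration: the link lengths, the linear torque-angle stiffness `K`, and the torque limit `TAU_MAX` stand in for the report's empirically fitted constitutive relations.

```python
import numpy as np

# Planar 2-link arm as a stand-in for the suited shoulder-elbow chain
L1, L2 = 0.3, 0.3          # link lengths (m), illustrative
K = 2.0                    # N*m/rad, assumed linear suit joint stiffness
TAU_MAX = 1.5              # N*m, torque the subject can comfortably sustain

def hand_position(q1, q2):
    """Forward kinematics of the 2-link chain."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return x, y

# keep only postures whose (modelled) suit torque stays under the limit
qs = np.linspace(-np.pi, np.pi, 181)
reachable = [hand_position(q1, q2)
             for q1 in qs for q2 in qs
             if abs(K * q1) <= TAU_MAX and abs(K * q2) <= TAU_MAX]
xs, ys = zip(*reachable)
```

The set of `(x, y)` points approximates the torque-limited work envelope; with the real torque-angle model the same sampling scheme would apply joint by joint.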
Simultaneous estimation of human and exoskeleton motion: A simplified protocol.
Alvarez, M T; Torricelli, D; Del-Ama, A J; Pinto, D; Gonzalez-Vargas, J; Moreno, J C; Gil-Agudo, A; Pons, J L
2017-07-01
Adequate benchmarking procedures in the area of wearable robots are gaining importance as a way to compare different devices on a quantitative basis, improve them, and support standardization and regulation procedures. Performance assessment usually focuses on the execution of locomotion tasks and is mostly based on kinematic-related measures. Typical drawbacks of marker-based motion capture systems, the gold standard for measuring human limb motion, become especially challenging when measuring limb kinematics, due to the concomitant presence of the robot. This work answers the question of how to reliably assess the subject's body motion by placing markers over the exoskeleton. Focusing on the ankle joint, the proposed methodology showed that it is possible to reconstruct the trajectory of the subject's joint by placing markers on the exoskeleton, although foot flexibility during walking can impact the reconstruction accuracy. More experiments are needed to confirm this hypothesis, and more subjects and walking conditions are needed to better characterize the errors of the proposed methodology, although our results are promising, indicating small errors.
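One common way to recover an anatomical joint centre from markers placed on the exoskeleton is to calibrate, in a static trial, the joint's constant position in a marker-cluster local frame, and then re-express it as the cluster moves. The following is a minimal rigid-body sketch of that idea; the three-marker frame construction and the function names are assumptions, not the paper's protocol.

```python
import numpy as np

def cluster_frame(m1, m2, m3):
    """Orthonormal frame (origin, rotation) from three exoskeleton markers."""
    x = m2 - m1; x /= np.linalg.norm(x)
    z = np.cross(x, m3 - m1); z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return m1, np.column_stack([x, y, z])

def calibrate(markers, joint_world):
    """Store the joint centre in the marker cluster's local frame (static trial)."""
    o, R = cluster_frame(*markers)
    return R.T @ (joint_world - o)

def reconstruct(markers, joint_local):
    """Recover the joint centre from the cluster pose in any later frame."""
    o, R = cluster_frame(*markers)
    return o + R @ joint_local

# static calibration pose
static = [np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])]
ankle = np.array([0.2, -0.1, 0.05])
local = calibrate(static, ankle)

# the same rigid body translated during walking: the estimate follows it
moved = [m + np.array([0.5, 0.2, 0.0]) for m in static]
est = reconstruct(moved, local)
```

The method is exact only while the exoskeleton-to-limb offset stays rigid, which is precisely why foot flexibility degrades the reconstruction, as the abstract notes.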
Huo, Xueliang; Park, Hangue; Kim, Jeonghee; Ghovanloo, Maysam
2015-01-01
We present a new wireless and wearable human-computer interface called the dual-mode Tongue Drive System (dTDS), which is designed to allow people with severe disabilities to use computers more effectively, with increased speed, flexibility, usability, and independence, through their tongue motion and speech. The dTDS detects users' tongue motion using a magnetic tracer and an array of magnetic sensors embedded in a compact and ergonomic wireless headset. It also captures the users' voice wirelessly using a small microphone embedded in the same headset. Preliminary evaluation results based on 14 able-bodied subjects and three individuals with high-level spinal cord injuries at levels C3-C5 indicated that the dTDS headset, combined with commercially available speech recognition (SR) software, can provide end users with significantly higher performance than either unimodal form based on tongue motion or speech alone, particularly in completing tasks that require both pointing and text entry. PMID:23475380
Freestanding Triboelectric Nanogenerator Enables Noncontact Motion-Tracking and Positioning.
Guo, Huijuan; Jia, Xueting; Liu, Lue; Cao, Xia; Wang, Ning; Wang, Zhong Lin
2018-04-24
Recent development of interactive motion-tracking and positioning technologies is attracting increasing interest in many areas, such as wearable electronics, intelligent electronics, and the Internet of Things. For example, so-called somatosensory technology can give users a strong sense of immersion and realism through consistent interaction with a game. Here, we report a noncontact self-powered positioning and motion-tracking system based on a freestanding triboelectric nanogenerator (TENG). The TENG was fabricated with a nanoengineered surface in the contact-separation mode, using a freely moving human body part (hands or feet) as the trigger. The poly(tetrafluoroethylene) (PTFE) array-based interactive interface can give an output of 222 V from casual human motions. Unlike previous works, this device also responds to small actions at heights of 0.01-0.11 m above the device, with a sensitivity of about 315 V/m, so that noncontact mechanical sensing is possible. Such a distinctive noncontact sensing feature promotes a wide range of potential applications in smart interaction systems.
Autonomous Motion Learning for Intra-Vehicular Activity Space Robot
NASA Astrophysics Data System (ADS)
Watanabe, Yutaka; Yairi, Takehisa; Machida, Kazuo
Space robots will be needed in future space missions. So far, many types of space robots have been developed; in particular, Intra-Vehicular Activity (IVA) space robots that support human activities should be developed to reduce human risks in space. In this paper, we study a motion learning method for an IVA space robot with a multi-link mechanism. The advantage is that this space robot moves using the reaction forces of the multi-link mechanism and contact forces from the wall, like an astronaut space walking, without using propulsion. The control approach is based on reinforcement learning with the actor-critic algorithm. We demonstrate the effectiveness of this approach on a 5-link space robot model by simulation. First, we simulate the space robot learning motion control, including a contact phase, in the two-dimensional case. Next, we simulate the space robot learning motion control while changing base attitude in the three-dimensional case.
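The actor-critic scheme named above can be illustrated on a deliberately tiny problem. This is a generic tabular sketch, not the paper's controller: the 2-state chain, the softmax actor, and all learning rates are invented, whereas the paper applies the same TD-error-driven structure to a simulated 5-link robot.

```python
import numpy as np

# Tabular actor-critic on a toy 2-state, 2-action chain: action 1 in either
# state moves toward the goal state (state 1), which pays reward 1.
rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
V = np.zeros(n_states)                     # critic: state values
prefs = np.zeros((n_states, n_actions))    # actor: action preferences
alpha, beta, gamma = 0.1, 0.1, 0.9

def policy(s):
    e = np.exp(prefs[s] - prefs[s].max())  # softmax over preferences
    return e / e.sum()

s = 0
for _ in range(3000):
    p = policy(s)
    a = rng.choice(n_actions, p=p)
    s2 = 1 if a == 1 else 0                # action 1 reaches/keeps the goal
    r = 1.0 if s2 == 1 else 0.0
    td = r + gamma * V[s2] - V[s]          # TD error drives both updates
    V[s] += alpha * td                     # critic update
    prefs[s, a] += beta * td * (1 - p[a])  # actor update (policy gradient)
    prefs[s, 1 - a] -= beta * td * p[1 - a]
    s = s2
```

After training, the policy in both states should strongly prefer the goal-seeking action.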
NASA Astrophysics Data System (ADS)
Mi, Qing; Wang, Qi; Zang, Siyao; Chai, Zhaoer; Zhang, Jinnan; Ren, Xiaomin
2018-05-01
In this study, we developed a multifunctional device based on SnO2@rGO-coated fibers utilizing plasma treatment, dip coating, and microwave irradiation in sequence, and finally realized highly sensitive human motion monitoring, relatively good ethanol detection, and a clear photoresponse. Moreover, the high level of comfort and compactness derived from highly elastic and comfortable fabrics contributes to long-term availability and test accuracy. As an attempt at the multifunctional integration of smart clothing, this work provides an attractive and relatively practical research direction.
Holowka, Nicholas B; O'Neill, Matthew C; Thompson, Nathan E; Demes, Brigitte
2017-03-01
The longitudinal arch of the human foot is commonly thought to reduce midfoot joint motion to convert the foot into a rigid lever during push off in bipedal walking. In contrast, African apes have been observed to exhibit midfoot dorsiflexion following heel lift during terrestrial locomotion, presumably due to their possession of highly mobile midfoot joints. This assumed dichotomy between human and African ape midfoot mobility has recently been questioned based on indirect assessments of in vivo midfoot motion, such as plantar pressure and cadaver studies; however, direct quantitative analyses of African ape midfoot kinematics during locomotion remain scarce. Here, we used high-speed motion capture to measure three-dimensional foot kinematics in two male chimpanzees and five male humans walking bipedally at similar dimensionless speeds. We analyzed 10 steps per chimpanzee subject and five steps per human subject, and compared ranges of midfoot motion between species over stance phase, as well as within double- and single-limb support periods. Contrary to expectations, humans used a greater average range of midfoot motion than chimpanzees over the full duration of stance. This difference was driven by humans' dramatic plantarflexion and adduction of the midfoot joints during the second double-limb support period, which likely helps the foot generate power during push off. However, chimpanzees did use slightly but significantly more midfoot dorsiflexion than humans in the single-limb support period, during which heel lift begins. These results indicate that both stiffness and mobility are important to longitudinal arch function, and that the human foot evolved to utilize both during push off in bipedal walking. Thus, the presence of human-like midfoot joint morphology in fossil hominins should not be taken as indicating foot rigidity, but may signify the evolution of pedal anatomy conferring enhanced push off mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
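"Similar dimensionless speeds" in comparative gait studies usually means matching a Froude-style number, v / sqrt(g * L), so that animals of different size walk in dynamically similar regimes. A minimal sketch, with illustrative leg lengths and speeds rather than the study's actual values:

```python
import math

def dimensionless_speed(v, leg_length, g=9.81):
    """Froude-style dimensionless speed v / sqrt(g * L), used to match
    gaits across subjects or species of different size."""
    return v / math.sqrt(g * leg_length)

# a 0.9 m-legged human at 1.2 m/s; the speed a 0.5 m-legged chimpanzee
# would need to match the human's dimensionless speed:
v_human = 1.2
v_chimp = dimensionless_speed(v_human, 0.9) * math.sqrt(9.81 * 0.5)
```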
NASA Astrophysics Data System (ADS)
Virtanen, Jaakko; Noponen, Tommi; Kotilahti, Kalle; Virtanen, Juha; Ilmoniemi, Risto J.
2011-08-01
In medical near-infrared spectroscopy (NIRS), movements of the subject often cause large step changes in the baselines of the measured light attenuation signals. This prevents comparison of hemoglobin concentration levels before and after movement. We present an accelerometer-based motion artifact removal (ABAMAR) algorithm for correcting such baseline motion artifacts (BMAs). ABAMAR can be easily adapted to various long-term monitoring applications of NIRS. We applied ABAMAR to NIRS data collected in 23 all-night sleep measurements and containing BMAs from involuntary movements during sleep. For reference, three NIRS researchers independently identified BMAs from the data. To determine whether the use of an accelerometer improves BMA detection accuracy, we compared ABAMAR to motion detection based on peaks in the moving standard deviation (SD) of NIRS data. The number of BMAs identified by ABAMAR was similar to the number detected by the humans, and 79% of the artifacts identified by ABAMAR were confirmed by at least two humans. While the moving SD of NIRS data could also be used for motion detection, on average 2 out of the 10 largest SD peaks in NIRS data each night occurred without the presence of movement. Thus, using an accelerometer improves BMA detection accuracy in NIRS.
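The core of a baseline motion artifact (BMA) correction like ABAMAR can be sketched as: flag movement epochs from the accelerometer, then remove the baseline step that the movement introduced. This is a much-simplified illustrative sketch, not the published algorithm; the threshold, window length, and `correct_baseline_steps` helper are assumptions.

```python
import numpy as np

def correct_baseline_steps(nirs, accel_mag, thresh=1.0, win=5):
    """Remove baseline step changes that coincide with accelerometer-detected
    movement epochs. Assumes the recording starts and ends motion-free."""
    out = nirs.astype(float)
    moving = accel_mag > thresh
    starts = np.where(~moving[:-1] & moving[1:])[0] + 1   # movement onsets
    ends = np.where(moving[:-1] & ~moving[1:])[0] + 1     # movement offsets
    for s, e in zip(starts, ends):
        pre = out[max(0, s - win):s].mean()
        post = out[e:e + win].mean()
        out[e:] -= post - pre        # re-anchor the post-movement baseline
        out[s:e] = pre               # samples during movement are unreliable
    return out

# toy trace: a step artifact at sample 50 caused by a movement at samples 48-51
sig = np.concatenate([np.zeros(50), np.ones(50)])
acc = np.zeros(100); acc[48:52] = 2.0
corrected = correct_baseline_steps(sig, acc)
```

After correction, hemoglobin concentration levels before and after the movement become comparable again, which is exactly what the step change otherwise prevents.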
Development of a parametric kinematic model of the human hand and a novel robotic exoskeleton.
Burton, T M W; Vaidyanathan, R; Burgess, S C; Turton, A J; Melhuish, C
2011-01-01
This paper reports a kinematic model of the human hand during cylindrical grasping, with specific focus on the accurate mapping of thumb movement during grasping motions, and a novel, multi-degree-of-freedom assistive exoskeleton mechanism based on this model. The model includes thumb maximum hyper-extension for grasping large objects (greater than approximately 50 mm). The exoskeleton includes a novel four-bar mechanism designed to reproduce natural thumb opposition and a novel synchro-motion pulley mechanism for coordinated finger motion. A computer-aided design environment is used to allow the exoskeleton to be rapidly customized to the hand dimensions of a specific patient. Trials comparing the kinematic model to observed data of hand movement show the model to be capable of mapping thumb and finger joint flexion angles during grasping motions. Simulations show the exoskeleton to be capable of reproducing the complex motion of the thumb to oppose the fingers during cylindrical and pinch grip motions. © 2011 IEEE
Human heart rate variability relation is unchanged during motion sickness
NASA Technical Reports Server (NTRS)
Mullen, T. J.; Berger, R. D.; Oman, C. M.; Cohen, R. J.
1998-01-01
In a study of 18 human subjects, we applied a new technique, estimation of the transfer function between instantaneous lung volume (ILV) and instantaneous heart rate (HR), to assess autonomic activity during motion sickness. Two control recordings of ILV and electrocardiogram (ECG) were made prior to the development of motion sickness. During the first, subjects were seated motionless, and during the second they were seated rotating sinusoidally about an earth vertical axis. Subjects then wore prism goggles that reverse the left-right visual field and performed manual tasks until they developed moderate motion sickness. Finally, ILV and ECG were recorded while subjects maintained a relatively constant level of sickness by intermittent eye closure during rotation with the goggles. Based on analyses of ILV to HR transfer functions from the three conditions, we were unable to demonstrate a change in autonomic control of heart rate due to rotation alone or due to motion sickness. These findings do not support the notion that moderate motion sickness is manifested as a generalized autonomic response.
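A transfer function between two physiological signals such as ILV and HR is typically estimated from cross- and auto-spectra, H(f) = S_xy(f) / S_xx(f), averaged over segments. The sketch below is a generic Welch-style estimator, not the authors' implementation; the segment count and the toy gain-of-2 system are illustrative.

```python
import numpy as np

def transfer_function(x, y, nseg=8):
    """Estimate H(f) = S_xy / S_xx by averaging FFT cross-spectra over segments."""
    n = len(x) // nseg
    Sxx = np.zeros(n, complex)
    Sxy = np.zeros(n, complex)
    for k in range(nseg):
        X = np.fft.fft(x[k * n:(k + 1) * n])
        Y = np.fft.fft(y[k * n:(k + 1) * n])
        Sxx += X.conj() * X       # input auto-spectrum
        Sxy += X.conj() * Y       # input-output cross-spectrum
    return Sxy / Sxx

# toy check: a pure gain-of-2 system recovers H = 2 at every frequency
rng = np.random.default_rng(0)
ilv = rng.standard_normal(1024)
hr = 2.0 * ilv
H = transfer_function(ilv, hr)
```

Changes in the magnitude and phase of H across conditions are then the quantities compared statistically.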
Time-frequency analysis of human motion during rhythmic exercises.
Omkar, S N; Vyas, Khushi; Vikranth, H N
2011-01-01
Biomechanical signals due to human movements during exercise are represented in the time-frequency domain using the Wigner Distribution Function (WDF). Analysis based on the WDF reveals instantaneous spectral and power changes during a rhythmic exercise. Investigations were carried out on 11 healthy subjects who performed 5 cycles of sun salutation, with a body-mounted Inertial Measurement Unit (IMU) as a motion sensor. The variances of instantaneous frequency (IF) and instantaneous power (IP) for performance analysis of the subjects are estimated using a one-way ANOVA model. Results reveal that joint time-frequency analysis of biomechanical signals during motion facilitates a better understanding of grace and consistency during rhythmic exercise.
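A discrete Wigner-Ville distribution can be computed directly from the instantaneous autocorrelation, W(n, k) = DFT over lag of x[n+tau] * conj(x[n-tau]). The sketch below is a textbook pseudo-WVD for an analytic signal, not the paper's processing chain; note that on this grid a pure tone at bin f concentrates at bin 2f.

```python
import numpy as np

def wigner(x):
    """Discrete pseudo Wigner-Ville distribution of an analytic signal x."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m = min(n, N - 1 - n)                 # largest usable lag at time n
        r = np.zeros(N, dtype=complex)
        for tau in range(m + 1):
            r[tau] = x[n + tau] * np.conj(x[n - tau])
            if tau:
                r[N - tau] = np.conj(r[tau])  # Hermitian lags -> real spectrum
        W[n] = np.fft.fft(r).real             # FFT over the lag variable
    return W

# toy check: a complex exponential at bin 8 concentrates its energy at bin 16
N = 64
t = np.arange(N)
x = np.exp(2j * np.pi * 8 * t / N)
W = wigner(x)
```

For a real IMU signal one would first form the analytic signal (e.g. via the Hilbert transform) and usually smooth the distribution to suppress cross-terms.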
Du, Jiaying; Gerdtman, Christer; Lindén, Maria
2018-04-06
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
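The Kalman-filter family reviewed above is most easily seen in the classic 1-D tilt problem: integrate the gyroscope rate, estimate its bias, and correct with a noisy accelerometer angle. This is a generic textbook sketch, not any reviewed paper's algorithm; the state model, noise levels, and toy data are assumptions.

```python
import numpy as np

def kalman_tilt(gyro, accel_angle, dt=0.01, q=1e-4, r=0.05):
    """1-D Kalman filter fusing an integrated gyroscope rate with a noisy
    accelerometer tilt angle; state = [angle, gyro bias]."""
    x = np.zeros(2)                          # [angle, bias]
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])   # angle += dt*(rate - bias)
    H = np.array([[1.0, 0.0]])               # accelerometer observes the angle
    Q = q * np.eye(2)
    R = np.array([[r]])
    est = []
    for w, z in zip(gyro, accel_angle):
        # predict: integrate the bias-corrected rate
        x = F @ x + np.array([dt * w, 0.0])
        P = F @ P @ F.T + Q
        # update with the accelerometer angle measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)

# toy run: stationary sensor (true angle 1.0 rad), gyro output is pure bias
n = 1000
angles = kalman_tilt(np.full(n, 0.2), np.ones(n))
```

Raw integration of this gyro would drift by 0.2 rad/s; the filter instead learns the bias and converges to the true angle.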
Numerical simulation of artificial hip joint motion based on human age factor
NASA Astrophysics Data System (ADS)
Ramdhani, Safarudin; Saputra, Eko; Jamari, J.
2018-05-01
An artificial hip joint is a prosthesis (synthetic body part) that usually consists of two or more components. Hip joint replacement is usually performed because of arthritis, typically in older patients. Numerical simulation models are used to observe the range of motion of the artificial hip joint, with the joint range of motion based on human age. Finite-element analysis (FEA) is used to calculate the von Mises stress during motion and to observe the probability of prosthetic impingement. The FEA uses a three-dimensional nonlinear model and considers position variations of the acetabular liner cup. The results of the numerical simulation show that the FEA method can be used to analyze the performance of the artificial hip joint more accurately than conventional methods.
Beil, Jonas; Marquardt, Charlotte; Asfour, Tamim
2017-07-01
Kinematic compatibility is of paramount importance in wearable robotic and exoskeleton design. Misalignments between exoskeletons and anatomical joints of the human body result in interaction forces which make wearing the exoskeleton uncomfortable and even dangerous for the human. In this paper we present a kinematically compatible design of an exoskeleton hip to reduce kinematic incompatibilities, so-called macro- and micro-misalignments, between the human's and the exoskeleton's joint axes, which are caused by inter-subject variability and articulation. The resulting design consists of five revolute joints, three prismatic joints, and one ball joint. Design parameters such as range of motion and joint velocities are calculated based on the analysis of human motion data acquired by motion capture systems. We show that the resulting design is capable of self-aligning to the human hip joint in all three anatomical planes during operation and can be adapted along the dorsoventral and mediolateral axes prior to operation. Calculation of the forward kinematics and FEM simulation considering kinematic and musculoskeletal constraints proved sufficient mobility and stiffness of the system regarding the range of motion, angular velocity, and torque admissibility needed to provide 50% assistance for an 80 kg person.
Individualistic weight perception from motion on a slope
Zintus-art, K.; Shin, D.; Kambara, H.; Yoshimura, N.; Koike, Y.
2016-01-01
Perception of an object's weight is linked to its form and motion. Studies have shown the relationship between weight perception and motion in horizontal and vertical environments to be universally identical across subjects during passive observation. Here we report a contradictory finding: not all humans share the same motion-weight pairing. A virtual environment in which participants control the steepness of a slope was used to investigate the relationship between sliding motion and weight perception. Our findings showed that distinct, albeit subjective, motion-weight relationships in perception could be identified for slope environments. These individualistic perceptions were found when changes in environmental parameters governing motion were introduced, specifically inclination and surface texture. Differences in environmental parameters, combined with individual factors such as experience, affected participants' weight perception. This phenomenon may offer evidence of the central nervous system's ability to choose and combine internal models based on information from the sensory system. The results also point toward the possibility of controlling human perception by presenting strong sensory cues to manipulate the mechanisms managing internal models. PMID:27174036
Phase-based motion magnification video for monitoring of vital signals using the Hermite transform
NASA Astrophysics Data System (ADS)
Brieva, Jorge; Moya-Albor, Ernesto
2017-11-01
In this paper we present a new Eulerian phase-based motion magnification technique using the Hermite Transform (HT) decomposition, which is inspired by the human visual system (HVS). We test our method on a sequence of a newborn baby breathing and on a video sequence that shows the heartbeat at the wrist. We detect and magnify the heart pulse by applying our technique. Our motion magnification approach is compared to the Laplacian phase-based approach by means of quantitative metrics (based on the RMS error and the Fourier transform) to measure the quality of both reconstruction and magnification. In addition, a noise-robustness analysis is performed for the two methods.
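The principle of Eulerian phase-based magnification, amplify the temporal phase changes of a subband decomposition rather than pixel intensities, can be shown in one dimension with a plain Fourier basis. This sketch uses a global FFT instead of the paper's Hermite (or the comparison Laplacian) decomposition, so it only handles small global motions; the frame stack and `alpha` are illustrative.

```python
import numpy as np

def magnify_motion(frames, alpha):
    """Fourier-phase motion magnification for a stack of 1-D frames:
    the phase change of each frequency w.r.t. frame 0 is amplified by alpha."""
    F = np.fft.fft(frames, axis=1)
    ref_phase = np.angle(F[0])
    dphi = np.angle(F) - ref_phase
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi      # wrap to [-pi, pi)
    F_mag = np.abs(F) * np.exp(1j * (ref_phase + (1 + alpha) * dphi))
    return np.real(np.fft.ifft(F_mag, axis=1))

# a Gaussian bump shifting by 1 sample per frame; alpha = 1 doubles the shift
n = 128
x = np.arange(n)
frames = np.stack([np.exp(-0.5 * ((x - 40 - t) / 4.0) ** 2) for t in range(4)])
out = magnify_motion(frames, 1.0)
```

Frame 3 originally peaks at sample 43 (a shift of 3); after magnification with alpha = 1 the peak lands near sample 46 (a shift of 6).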
The KIT Motion-Language Dataset.
Plappert, Matthias; Mandery, Christian; Asfour, Tamim
2016-12-01
Linking human motion and natural language is of great interest for the generation of semantic representations of human activities as well as for the generation of robot activities based on natural language input. However, although there have been years of research in this area, no standardized and openly available data set exists to support the development and evaluation of such systems. We, therefore, propose the Karlsruhe Institute of Technology (KIT) Motion-Language Dataset, which is large, open, and extensible. We aggregate data from multiple motion capture databases and include them in our data set using a unified representation that is independent of the capture system or marker set, making it easy to work with the data regardless of its origin. To obtain motion annotations in natural language, we apply a crowd-sourcing approach and a web-based tool that was specifically built for this purpose, the Motion Annotation Tool. We thoroughly document the annotation process itself and discuss gamification methods that we used to keep annotators motivated. We further propose a novel method, perplexity-based selection, which systematically selects motions for further annotation that are either under-represented in our data set or that have erroneous annotations. We show that our method mitigates the two aforementioned problems and ensures a systematic annotation process. We provide an in-depth analysis of the structure and contents of our resulting data set, which, as of October 10, 2016, contains 3911 motions with a total duration of 11.23 hours and 6278 annotations in natural language that contain 52,903 words. We believe this makes our data set an excellent choice that enables more transparent and comparable research in this important area.
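Perplexity-based selection ranks motions by how surprising their annotations are under a language model, so oddly worded (possibly erroneous or under-represented) entries surface first. The sketch below uses a deliberately tiny add-one-smoothed unigram model; the toy annotations and the `unigram_perplexity` helper are assumptions, not the dataset's actual model.

```python
import math
from collections import Counter

def unigram_perplexity(sentence, counts, total, vocab):
    """Perplexity of a sentence under an add-one-smoothed unigram model."""
    words = sentence.lower().split()
    logp = 0.0
    for w in words:
        logp += math.log((counts[w] + 1) / (total + vocab))
    return math.exp(-logp / len(words))

annotations = [
    "a person walks forward",
    "a person walks forward slowly",
    "somersault twist dismount landing",   # unusual wording -> high perplexity
]
counts = Counter(w for s in annotations for w in s.lower().split())
total = sum(counts.values())
vocab = len(counts)

# rank motions so the most surprising annotations are re-checked first
ranked = sorted(annotations,
                key=lambda s: -unigram_perplexity(s, counts, total, vocab))
```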
Ito, Norie; Barnes, Graham R; Fukushima, Junko; Fukushima, Kikuro; Warabi, Tateo
2013-08-01
Using a cue-dependent memory-based smooth-pursuit task previously applied to monkeys, we examined the effects of visual motion-memory on smooth-pursuit eye movements in normal human subjects and compared the results with those of the trained monkeys. These results were also compared with those during simple ramp-pursuit, which did not require visual motion-memory. During memory-based pursuit, all subjects exhibited virtually no errors in either pursuit-direction or go/no-go selection. Tracking eye movements were similar between humans and monkeys, but differed between the two tasks: during memory-based pursuit, latencies of the pursuit and corrective saccades were prolonged, initial pursuit eye velocity and acceleration were lower, peak velocities were lower, and the time to reach peak velocity lengthened. These characteristics were similar to anticipatory pursuit initiated by extra-retinal components during the initial extinction task of Barnes and Collins (J Neurophysiol 100:1135-1146, 2008b). We suggest that the differences between the two tasks reflect differences between the contributions of extra-retinal and retinal components. This interpretation is supported by two further studies: (1) during popping out of the correct spot to enhance retinal image-motion inputs during memory-based pursuit, pursuit eye velocities approached those during simple ramp-pursuit, and (2) during initial blanking of spot motion during memory-based pursuit, pursuit components appeared in the correct direction. Our results show the importance of extra-retinal mechanisms for initial pursuit during memory-based pursuit, which include priming effects and extra-retinal drive components. Comparison with monkey studies on neuronal responses and model analysis suggested possible pathways for the extra-retinal mechanisms.
Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko
2010-03-10
The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging (fMRI) study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion because of instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyo-e artist. We found that 'Hokusai Manga' with implied motion, depicting human bodies engaged in challenging tonic postures, significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, whereas an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is a critical region for the perception of implied motion in instability.
Wearable carbon nanotube-based fabric sensors for monitoring human physiological performance
NASA Astrophysics Data System (ADS)
Wang, Long; Loh, Kenneth J.
2017-05-01
A target application of wearable sensors is to detect human motion and to monitor physical activity for improving athletic performance and for delivering better physical therapy. In addition, measuring human vital signals (e.g., respiration rate and body temperature) provides rich information that can be used to assess a subject's physiological or psychological condition. This study aims to design a multifunctional, wearable, fabric-based sensing system. First, carbon nanotube (CNT)-based thin films were fabricated by spraying. Second, the thin films were integrated with stretchable fabrics to form the fabric sensors. Third, the strain and temperature sensing properties of sensors fabricated using different CNT concentrations were characterized. Furthermore, the sensors were demonstrated to detect human finger bending motions, so as to validate their practical strain sensing performance. Finally, to monitor human respiration, the fabric sensors were integrated with a chest band, which was directly worn by a human subject. Quantification of respiration rate was successfully achieved. Overall, the fabric sensors were characterized by advantages such as flexibility, ease of fabrication, light weight, low cost, noninvasiveness, and user comfort.
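Extracting a respiration rate from a chest-band strain sensor usually reduces to finding the dominant frequency of the resistance trace in the respiratory band. The sketch below is a generic spectral estimate on synthetic data; the band limits, sampling rate, and signal model are assumptions, not the study's pipeline.

```python
import numpy as np

def respiration_rate(resistance, fs):
    """Dominant breathing frequency (breaths/min) from a fabric strain-sensor
    trace, via the largest FFT peak in the 0.1-1 Hz respiratory band."""
    x = resistance - np.mean(resistance)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 1.0)
    f_breath = freqs[band][np.argmax(spec[band])]
    return 60.0 * f_breath

# synthetic 0.25 Hz (15 breaths/min) chest-expansion signal on a DC offset
fs = 50.0
t = np.arange(0, 60, 1 / fs)
sig = 100 + 2 * np.sin(2 * np.pi * 0.25 * t)
rate = respiration_rate(sig, fs)
```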
An Exoskeleton Robot for Human Forearm and Wrist Motion Assist
NASA Astrophysics Data System (ADS)
Ranathunga Arachchilage Ruwan Chandra Gopura; Kiguchi, Kazuo
The exoskeleton robot is worn by the human operator as an orthotic device. Its joints and links correspond to those of the human body. The same system operated in different modes can be used for different fundamental applications: a human amplifier, a haptic interface, a rehabilitation device, and an assistive device sharing a portion of the external load with the operator. We have been developing exoskeleton robots for assisting the motion of physically weak individuals, such as elderly or slightly disabled persons, in daily life. In this paper, we propose a three-degree-of-freedom (3DOF) exoskeleton robot (W-EXOS) for forearm pronation/supination motion, wrist flexion/extension motion, and ulnar/radial deviation. The paper describes the wrist anatomy underlying the development of the exoskeleton robot, the hardware design of the exoskeleton robot, and the EMG-based control method. The skin-surface electromyographic (EMG) signals of muscles in the forearm of the exoskeleton's user and the hand force/forearm torque are used as input information for the controller. By applying the skin-surface EMG signals as the main input signals to the controller, automatic control of the robot can be realized without manipulating any other equipment. A fuzzy control method has been applied to realize natural and flexible motion assist. Experiments have been performed to evaluate the proposed exoskeleton robot and its control method.
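The flavor of fuzzy EMG-based assist control can be shown with a single-input sketch: fuzzify a normalised EMG level with triangular membership functions, fire three rules, and defuzzify by weighted average. The rule breakpoints and `tau_max` are invented for illustration and are not the W-EXOS controller, which uses multiple EMG channels and force/torque inputs.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def assist_torque(emg, tau_max=4.0):
    """Map a normalised EMG level (0-1) to an assist torque with three fuzzy
    rules (low -> no assist, medium -> half, high -> full), defuzzified by
    the weighted average of the rule outputs."""
    mu = {
        0.0 * tau_max: tri(emg, -0.4, 0.0, 0.4),   # low activation
        0.5 * tau_max: tri(emg, 0.1, 0.5, 0.9),    # medium activation
        1.0 * tau_max: tri(emg, 0.6, 1.0, 1.4),    # high activation
    }
    num = sum(torque * m for torque, m in mu.items())
    den = sum(mu.values())
    return num / den if den else 0.0
```

The overlapping memberships give a smooth torque ramp between the rule centers, which is what makes the assist feel natural rather than switching in steps.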
A computational model for reference-frame synthesis with applications to motion perception.
Clarke, Aaron M; Öğmen, Haluk; Herzog, Michael H
2016-09-01
As discovered by the Gestaltists, in particular by Duncker, we often perceive motion within a non-retinotopic reference frame. For example, the motion of a reflector on a bicycle appears to be circular, whereas it traces out a cycloidal path with respect to external world coordinates. The reflector motion appears to be circular because the human brain subtracts the horizontal motion of the bicycle from the reflector motion; the bicycle serves as a reference frame for the reflector motion. Here, we present a general mathematical framework, based on vector fields, to explain non-retinotopic motion processing. Using four types of non-retinotopic motion paradigms, we show how the theory works in detail. For example, we show how non-retinotopic motion in the Ternus-Pikler display can be computed. Copyright © 2015 Elsevier Ltd. All rights reserved.
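The bicycle-reflector example can be verified numerically: subtracting the moving hub's trajectory from the world-frame cycloid leaves a pure circle. This is a minimal sketch of the reference-frame subtraction only, not the paper's vector-field framework; the wheel radius is arbitrary.

```python
import numpy as np

# reflector on a rolling wheel: a cycloid in world coordinates
R = 0.3                                  # wheel radius (m), illustrative
theta = np.linspace(0, 4 * np.pi, 200)
world = np.stack([R * (theta - np.sin(theta)),   # x: hub translation + rotation
                  R * (1 - np.cos(theta))], axis=1)

# reference frame = the moving hub of the wheel (the "bicycle")
hub = np.stack([R * theta, np.full_like(theta, R)], axis=1)

# subtracting the reference-frame motion turns the cycloid into a circle
relative = world - hub
radii = np.linalg.norm(relative, axis=1)
```

In the hub's frame the reflector sits at constant distance R, i.e. it moves on a circle, exactly the percept the Gestalt demonstrations describe.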
Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments
Sun, Tongyang; Duan, Lihong; Wang, Yulong
2017-01-01
Diagnosis of the hemiplegic rehabilitation state performed by therapists can be biased by their subjective experience, which may deteriorate the rehabilitation effect. In order to improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analyzing the patient's motion ability. This method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared with therapists' qualitative estimations. Using a simplified mathematical model of the human body, the rotation angles of each lower limb joint are calculated from the input signals acquired by the inertial sensors. Finally, the rotation angle versus joint displacement curves are constructed, and the estimated values of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, which proved that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575
Robotic situational awareness of actions in human teaming
NASA Astrophysics Data System (ADS)
Tahmoush, Dave
2015-06-01
When robots can sense and interpret the activities of the people they are working with, they become more of a team member and less of just a piece of equipment. This has motivated work on recognizing human actions using existing robotic sensors such as short-range ladar imagers. These produce three-dimensional point-cloud movies that can be analyzed for structure and motion information. We skeletonize the human point cloud and apply a physics-based velocity correlation scheme to the resulting joint motions. Twenty actions are then recognized using a nearest-neighbors classifier, which achieves good accuracy.
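The final classification step can be illustrated with a minimal nearest-neighbors sketch. The feature vectors and action labels below are toy placeholders; the paper's skeletonization and velocity-correlation features are assumed to have already produced such vectors.

```python
import numpy as np

def nearest_neighbor_classify(query, train_feats, train_labels):
    """Return the label of the training feature closest to the query."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    return train_labels[int(np.argmin(dists))]

# Toy gallery: one feature vector per known action (hypothetical names).
train_feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
train_labels = ["wave", "walk", "jump"]

print(nearest_neighbor_classify(np.array([0.9, 0.1]), train_feats, train_labels))
# prints "wave"
```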
Real-time animation software for customized training to use motor prosthetic systems.
Davoodi, Rahman; Loeb, Gerald E
2012-03-01
Research on control of human movement and development of tools for restoration and rehabilitation of movement after spinal cord injury and amputation can benefit greatly from software tools for creating precisely timed animation sequences of human movement. Despite their ability to create sophisticated animation and high-quality rendering, existing animation software is not adapted for application to neural prostheses and rehabilitation of human movement. We have developed a software tool known as MSMS (MusculoSkeletal Modeling Software) that can be used to develop models of human or prosthetic limbs and the objects with which they interact, and to animate their movement using motion data from a variety of offline and online sources. The motion data can be read from a motion file containing synthesized motion data or recordings from a motion capture system. Alternatively, motion data can be streamed online from a real-time motion capture system, a physics-based simulation program, or any program that can produce real-time motion data. Further, animation sequences of daily life activities can be constructed using the intuitive user interface of Microsoft's PowerPoint software. The latter allows expert and nonexpert users alike to assemble primitive movements into a complex motion sequence with precise timing by simply arranging the order of the slides and editing their properties in PowerPoint. The resulting motion sequence can be played back in an open-loop manner for demonstration and training, or in closed-loop virtual reality environments where the timing and speed of animation depend on user inputs. These versatile animation utilities can be used in any application that requires precisely timed animations, but they are particularly suited for research and rehabilitation of movement disorders.
MSMS's modeling and animation tools are routinely used in a number of research laboratories around the country to study the control of movement and to develop and test neural prostheses for patients with paralysis or amputations.
On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.
Shao, Zhanpeng; Li, Youfu
2016-02-01
Motion trajectories tracked from the motions of humans, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on blurred segments of the noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of the kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function that measures trajectory similarity in order to retrieve similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they can capture the motion cues in trajectory matching and sign recognition satisfactorily.
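A minimal sketch of a distance integral invariant with a Gaussian kernel, under the generic formulation that each trajectory point accumulates kernel-weighted distances to the rest of the curve; the paper's exact kernels and blurred-segment estimation are not reproduced. Varying `sigma` gives the multiscale, coarse-to-fine behavior mentioned above.

```python
import numpy as np

def distance_integral_invariant(traj, sigma):
    """Kernel-weighted integral of distances from each point to the rest
    of the trajectory. The Gaussian width sigma sets the analysis scale."""
    diffs = traj[None, :, :] - traj[:, None, :]   # pairwise vectors
    d = np.linalg.norm(diffs, axis=-1)            # pairwise distances
    w = np.exp(-d**2 / (2.0 * sigma**2))          # Gaussian kernel
    return (w * d).sum(axis=1)

# Toy 3-D trajectory: a helix sampled at 100 points.
t = np.linspace(0.0, 4.0 * np.pi, 100)
traj = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)

fine = distance_integral_invariant(traj, sigma=0.2)    # local scale
coarse = distance_integral_invariant(traj, sigma=2.0)  # global scale
```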
Applications of artificial intelligence in safe human-robot interactions.
Najmaei, Nima; Kermani, Mehrdad R
2011-04-01
The integration of industrial robots into the human workspace presents a set of unique challenges. This paper introduces a new sensory system for modeling, tracking, and predicting human motions within a robot workspace. A reactive control scheme to modify a robot's operations for accommodating the presence of the human within the robot workspace is also presented. To this end, a special class of artificial neural networks, namely, self-organizing maps (SOMs), is employed for obtaining a superquadric-based model of the human. The SOM network receives information of the human's footprints from the sensory system and infers necessary data for rendering the human model. The model is then used in order to assess the danger of the robot operations based on the measured as well as predicted human motions. This is followed by the introduction of a new reactive control scheme that results in the least interferences between the human and robot operations. The approach enables the robot to foresee an upcoming danger and take preventive actions before the danger becomes imminent. Simulation and experimental results are presented in order to validate the effectiveness of the proposed method.
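The core self-organizing-map learning rule underlying such a model can be sketched as follows. This is a generic 1-D-lattice SOM update on toy 2-D points, not the paper's superquadric human model or its footprint sensory input.

```python
import numpy as np

def som_step(weights, x, lr, radius):
    """Move the best-matching unit and its lattice neighbours toward x."""
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    idx = np.arange(len(weights))
    h = np.exp(-((idx - bmu) ** 2) / (2.0 * radius ** 2))  # neighbourhood
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(0)
weights = rng.random((10, 2))          # a 1-D chain of 10 units
data = rng.random((200, 2))            # toy 2-D observations in [0, 1)
for i, x in enumerate(data):
    # Decaying learning rate; fixed neighbourhood radius for brevity.
    weights = som_step(weights, x, lr=0.5 * (1 - i / 200), radius=2.0)
```

Each update is a convex step toward the observation, so the chain of units gradually spreads over the data distribution while preserving lattice topology.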
Szczęsna, Agnieszka; Pruszowski, Przemysław
2016-01-01
Inertial orientation tracking is still an area of active research, especially in the context of outdoor, real-time human motion capture. Existing systems either propose loosely coupled tracking approaches, where each segment is considered independently and the resulting drawbacks are accepted, or tightly coupled solutions that are limited to a fixed chain with few segments. Such solutions have no flexibility to change the skeleton structure, are dedicated to a specific set of joints, and have high computational complexity. This paper proposes a new model-based extended quaternion Kalman filter that allows for the estimation of orientation based on outputs from inertial measurement unit sensors. The filter considers the interdependencies resulting from the construction of the kinematic chain, so that the orientation estimation is more accurate. The proposed solution is a universal filter that does not predetermine the degrees of freedom at the connections between segments of the model. For validation, the motion of a three-segment pendulum captured by an optical motion capture system is used. The next step in this research will be to use the method for inertial motion capture with a human skeleton model.
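As background to such filters, the basic strapdown quaternion propagation from gyroscope readings, i.e. the prediction step that any inertial orientation filter builds on, can be sketched as follows. This is first-order integration with renormalization, not the paper's model-based extended Kalman filter.

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, omega, dt):
    """Propagate orientation by body-frame angular velocity omega over dt,
    using q_dot = 0.5 * q (x) (0, omega), then renormalize."""
    dq = quat_mult(q, np.concatenate(([0.0], omega)))
    q = q + 0.5 * dt * dq
    return q / np.linalg.norm(q)

# Rotating about z at 90 deg/s for 1 s yields a 90-degree rotation:
q = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, 0.0, np.pi / 2])
for _ in range(1000):
    q = integrate_gyro(q, omega, dt=1e-3)
# q is approximately [cos(45 deg), 0, 0, sin(45 deg)]
```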
Peng, Zhen; Braun, Daniel A.
2015-01-01
In a previous study we have shown that human motion trajectories can be characterized by translating continuous trajectories into symbol sequences with well-defined complexity measures. Here we test the hypothesis that the motion complexity individuals generate in their movements might be correlated to the degree of creativity assigned by a human observer to the visualized motion trajectories. We asked participants to generate 55 novel hand movement patterns in virtual reality, where each pattern had to be repeated 10 times in a row to ensure reproducibility. This allowed us to estimate a probability distribution over trajectories for each pattern. We assessed motion complexity not only by the previously proposed complexity measures on symbolic sequences, but we also propose two novel complexity measures that can be directly applied to the distributions over trajectories based on the frameworks of Gaussian Processes and Probabilistic Movement Primitives. In contrast to previous studies, these new methods allow computing complexities of individual motion patterns from very few sample trajectories. We compared the different complexity measures to how a group of independent jurors rank ordered the recorded motion trajectories according to their personal creativity judgment. We found three entropic complexity measures that correlate significantly with human creativity judgment and discuss differences between the measures. We also test whether these complexity measures correlate with individual creativity in divergent thinking tasks, but do not find any consistent correlation. Our results suggest that entropic complexity measures of hand motion may reveal domain-specific individual differences in kinesthetic creativity. PMID:26733896
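One simple entropic complexity measure in this spirit can be sketched by symbolizing a trajectory into heading-direction bins and computing the Shannon entropy of the symbol distribution. This is an illustrative measure, not one of the paper's specific Gaussian-process or movement-primitive formulations.

```python
import numpy as np

def direction_entropy(traj, n_bins=8):
    """Shannon entropy (bits) of heading directions along a 2-D trajectory."""
    v = np.diff(traj, axis=0)                       # step vectors
    angles = np.arctan2(v[:, 1], v[:, 0])           # headings in (-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    counts = np.bincount(bins, minlength=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(np.sum(-p * np.log2(p)))

line = np.stack([np.arange(50.0), np.zeros(50)], axis=1)   # straight stroke
rng = np.random.default_rng(1)
scribble = rng.random((50, 2))                              # erratic stroke

e_line = direction_entropy(line)        # a straight line has zero entropy
e_scribble = direction_entropy(scribble)  # an erratic path scores high
```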
Anthropomorphic Robot Hand And Teaching Glove
NASA Technical Reports Server (NTRS)
Engler, Charles D., Jr.
1991-01-01
Robotic forearm-and-hand assembly manipulates objects by performing wrist and hand motions with nearly human grasping ability and dexterity. Imitates hand motions of human operator who controls robot in real time by programming via exoskeletal "teaching glove". Telemanipulator systems based on this robotic-hand concept useful where humanlike dexterity required. Underwater, high-radiation, vacuum, hot, cold, toxic, or inhospitable environments potential application sites. Particularly suited to assisting astronauts on space station in safely executing unexpected tasks requiring greater dexterity than standard gripper.
Stegman, Kelly J; Park, Edward J; Dechev, Nikolai
2012-07-01
The motivation of this research is to non-invasively monitor the wrist tendon's displacement and velocity for the purpose of controlling a prosthetic device. This feasibility study aims to determine whether the proposed Doppler ultrasound technique can accurately estimate the tendon's instantaneous velocity and displacement. The study is conducted with a tendon-mimicking experiment using a commercial ultrasound scanner and a reference linear motion stage set-up. Audio-based output signals are acquired from the ultrasound scanner and processed with our proposed Fourier technique to obtain the tendon's velocity and displacement estimates. We then compare our estimates to an external reference system, and also to the ultrasound scanner's own estimates based on its proprietary software. The proposed tendon motion estimation method has been shown to be repeatable, effective, and accurate in comparison to the external reference system, and is generally more accurate than the scanner's own estimates. Following this feasibility study, future testing will include cadaver-based studies to test the technique on the human arm tendon anatomy, and later on live human test subjects, in order to further refine the proposed method for the novel purpose of detecting user-intended tendon motion for controlling wearable prosthetic devices.
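The underlying pulsed-Doppler relation can be sketched as follows: a measured Doppler shift f_d maps to axial velocity via v = f_d·c / (2·f0·cos θ), and displacement follows by integrating velocity over time. The carrier frequency, beam angle, and sound speed below are assumed example values, not the study's settings.

```python
import numpy as np

def doppler_velocity(f_d, f0=5e6, c=1540.0, theta_deg=60.0):
    """Axial target velocity (m/s) from Doppler shift f_d (Hz),
    carrier f0 (Hz), sound speed c (m/s), and beam angle theta."""
    return f_d * c / (2 * f0 * np.cos(np.radians(theta_deg)))

def displacement(velocities, dt):
    """Cumulative displacement (m) from uniformly sampled velocities."""
    return np.cumsum(velocities) * dt

f_d = np.full(100, 65.0)          # a steady 65 Hz Doppler shift
v = doppler_velocity(f_d)         # about 0.02 m/s along the beam axis
d = displacement(v, dt=0.01)      # about 20 mm after 1 s of motion
```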
NASA Astrophysics Data System (ADS)
Ren, Silin; Jin, Xiao; Chan, Chung; Jian, Yiqiang; Mulnix, Tim; Liu, Chi; Carson, Richard E
2017-06-01
Data-driven respiratory gating techniques were developed to correct for respiratory motion in PET studies, without the help of external motion tracking systems. Due to the greatly increased image noise in gated reconstructions, it is desirable to develop a data-driven event-by-event respiratory motion correction method. In this study, using the Centroid-of-distribution (COD) algorithm, we established a data-driven event-by-event respiratory motion correction technique using TOF PET list-mode data, and investigated its performance by comparing with an external system-based correction method. Ten human scans with the pancreatic β-cell tracer 18F-FP-(+)-DTBZ were employed. Data-driven respiratory motions in superior-inferior (SI) and anterior-posterior (AP) directions were first determined by computing the centroid of all radioactive events during each short time frame with further processing. The Anzai belt system was employed to record respiratory motion in all studies. COD traces in both SI and AP directions were first compared with Anzai traces by computing the Pearson correlation coefficients. Then, respiratory gated reconstructions based on either COD or Anzai traces were performed to evaluate their relative performance in capturing respiratory motion. Finally, based on correlations of displacements of organ locations in all directions and COD information, continuous 3D internal organ motion in SI and AP directions was calculated based on COD traces to guide event-by-event respiratory motion correction in the MOLAR reconstruction framework. Continuous respiratory correction results based on COD were compared with that based on Anzai, and without motion correction. Data-driven COD traces showed a good correlation with Anzai in both SI and AP directions for the majority of studies, with correlation coefficients ranging from 63% to 89%. 
Based on the determined respiratory displacements of pancreas between end-expiration and end-inspiration from gated reconstructions, there was no significant difference between COD-based and Anzai-based methods. Finally, data-driven COD-based event-by-event respiratory motion correction yielded comparable results to that based on Anzai respiratory traces, in terms of contrast recovery and reduced motion-induced blur. Data-driven event-by-event respiratory motion correction using COD showed significant image quality improvement compared with reconstructions with no motion correction, and gave comparable results to the Anzai-based method.
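The centroid-of-distribution idea can be sketched on synthetic data: within each short time frame, the centroid of all event coordinates is computed, and its drift across frames traces the respiratory motion. Event counts, noise level, and breathing amplitude below are assumptions; list-mode handling and the further processing described above are omitted.

```python
import numpy as np

def cod_trace(event_z, frame_ids, n_frames):
    """Per-frame centroid of event z-coordinates (superior-inferior axis)."""
    return np.array([event_z[frame_ids == f].mean() for f in range(n_frames)])

rng = np.random.default_rng(0)
n_frames, events_per_frame = 50, 2000
# Synthetic breathing: 5 mm SI excursion, one cycle every 10 frames.
breathing = 5.0 * np.sin(2 * np.pi * np.arange(n_frames) / 10)
frame_ids = np.repeat(np.arange(n_frames), events_per_frame)
event_z = breathing[frame_ids] + rng.normal(0.0, 30.0, size=frame_ids.size)

trace = cod_trace(event_z, frame_ids, n_frames)
corr = np.corrcoef(trace, breathing)[0, 1]   # centroid tracks the motion
```

Averaging thousands of noisy events per frame suppresses the per-event noise, which is why the centroid recovers a sub-millimetre-scale respiratory signal.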
Al-Nawashi, Malek; Al-Hazaimeh, Obaida M; Saraee, Mohamad
2017-01-01
Abnormal activity detection plays a crucial role in surveillance applications, and a surveillance system that can perform robustly in an academic environment has become an urgent need. In this paper, we propose a novel framework for an automatic real-time video-based surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment. To develop our system, we have divided the work into three phases: a preprocessing phase, an abnormal human activity detection phase, and a content-based image retrieval phase. For moving object detection, we used the temporal-differencing algorithm and then located the motion regions using a Gaussian function. Furthermore, a shape model based on the OMEGA equation was used as a filter for the detected objects (i.e., human and non-human). For object activity analysis, we evaluated and analyzed the human activities of the detected objects. We classified the human activities into two groups, normal activities and abnormal activities, based on a support vector machine. The system then provides an automatic warning in case of abnormal human activities. It also embeds a method to retrieve the detected object from the database for object recognition and identification using content-based image retrieval. Finally, a software-based simulation using MATLAB was performed, and the results of the conducted experiments showed an excellent surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment with no human intervention.
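The temporal-differencing step can be sketched as a thresholded absolute difference between consecutive grayscale frames (the threshold value is assumed); the Gaussian localization, OMEGA shape filter, and SVM stages are not reproduced.

```python
import numpy as np

def temporal_difference_mask(prev, curr, thresh=25):
    """Binary motion mask: pixels whose intensity changed by more than thresh."""
    return np.abs(curr.astype(int) - prev.astype(int)) > thresh

prev = np.zeros((64, 64), dtype=np.uint8)    # empty scene
curr = prev.copy()
curr[20:30, 20:30] = 200                     # a bright object appears

mask = temporal_difference_mask(prev, curr)
print(mask.sum())                            # 100 motion pixels
```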
Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G
2016-05-01
With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on the overall performance. It has been demonstrated that the optimal values of these parameters differ considerably for different motion intensities. Therefore, in this work, we show that, by applying frequency analysis to determine motion intensity and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, thereby providing physicians with reliable objective data they can use in their daily practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
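The adaptation idea can be sketched as follows: estimate motion intensity from the fraction of high-frequency spectral energy in a window of inertial data, then select the process-noise variance accordingly. The cutoff, threshold, and variance values below are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def motion_intensity(window, fs, cutoff_hz=3.0):
    """Fraction of spectral energy above cutoff_hz (motion-intensity proxy)."""
    spec = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    total = spec.sum()
    return float(spec[freqs > cutoff_hz].sum() / total) if total > 0 else 0.0

def adapted_process_noise(intensity, q_low=1e-4, q_high=1e-2, thresh=0.3):
    """Pick the Kalman process-noise variance for the detected intensity."""
    return q_high if intensity > thresh else q_low

fs = 100.0
t = np.arange(200) / fs                      # a 2 s window at 100 Hz
slow = np.sin(2 * np.pi * 0.5 * t)           # gentle 0.5 Hz motion
fast = np.sin(2 * np.pi * 10.0 * t)          # vigorous 10 Hz motion
print(adapted_process_noise(motion_intensity(slow, fs)),
      adapted_process_noise(motion_intensity(fast, fs)))
```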
Human joint motion estimation for electromyography (EMG)-based dynamic motion control.
Zhang, Qin; Hosoda, Ryo; Venture, Gentiane
2013-01-01
This study investigates a joint motion estimation method based on electromyography (EMG) signals during dynamic movement. In most EMG-based humanoid or prosthetics control systems, EMG features are directly or indirectly used to trigger intended motions. However, both physiological and non-physiological factors can influence EMG characteristics during dynamic movements, resulting in subject-specificity, non-stationarity, and crosstalk problems. In particular, when motion velocity and/or joint torque are not constrained, joint motion estimation from EMG signals is more challenging. In this paper, we propose a joint motion estimation method based on muscle activation recorded from a pair of agonist and antagonist muscles of the joint. A linear state-space model with multiple inputs and a single output is proposed to map the muscle activity to joint motion, and an adaptive estimation method is proposed to train the model. The estimation performance is evaluated on a single elbow flexion-extension movement in two subjects. The results for both subjects at two load levels indicate the feasibility and suitability of the proposed method for joint motion estimation. The estimation root-mean-square error is within 8.3% ∼ 10.6%, which is lower than that reported in several previous studies. Moreover, this method is able to overcome the subject-specificity problem and compensate for non-stationary EMG properties.
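A minimal sketch of such a multi-input single-output linear model: a joint angle driven by a pair of agonist/antagonist activations, with coefficients recovered by ordinary least squares on synthetic data. The dynamics coefficients are assumed for illustration, and the paper's adaptive training scheme is not reproduced.

```python
import numpy as np

# Assumed discrete-time dynamics:
#   theta[k+1] = a*theta[k] + b1*u_ag[k] + b2*u_ant[k]
rng = np.random.default_rng(0)
n = 500
u_ag, u_ant = rng.random(n), rng.random(n)       # muscle activations
a_true, b1_true, b2_true = 0.9, 0.5, -0.4        # assumed coefficients
theta = np.zeros(n)
for k in range(n - 1):
    theta[k + 1] = a_true * theta[k] + b1_true * u_ag[k] + b2_true * u_ant[k]

# Least-squares fit of [a, b1, b2] from the regression matrix.
X = np.stack([theta[:-1], u_ag[:-1], u_ant[:-1]], axis=1)
coef, *_ = np.linalg.lstsq(X, theta[1:], rcond=None)
print(np.round(coef, 3))                         # close to [0.9, 0.5, -0.4]
```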
Wearable Wide-Range Strain Sensors Based on Ionic Liquids and Monitoring of Human Activities
Zhang, Shao-Hui; Wang, Feng-Xia; Li, Jia-Jia; Peng, Hong-Dan; Yan, Jing-Hui; Pan, Ge-Bo
2017-01-01
Wearable sensors for detection of human activities have encouraged the development of highly elastic sensors. In particular, to capture subtle and large-scale body motion, stretchable and wide-range strain sensors are highly desired, but still a challenge. Herein, a highly stretchable and transparent strain sensor based on ionic liquids and an elastic polymer has been developed. The as-obtained sensor exhibits impressive stretchability with a wide strain range (from 0.1% to 400%), good bending properties, and high sensitivity, with a gauge factor reaching 7.9. Importantly, the sensors show excellent biological compatibility and succeed in monitoring diverse human activities ranging from complex large-scale multidimensional motions to subtle signals, including wrist, finger, and elbow joint bending, finger touch, breath, speech, swallowing behavior, and pulse wave. PMID:29135928
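The reported sensitivity can be made concrete with the standard gauge-factor arithmetic, GF = (ΔR/R0)/ε, the relative resistance change per unit strain; the resistance values below are assumed for illustration.

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (delta R / R0) / strain, from baseline resistance
    r0, strained resistance r, and applied strain (dimensionless)."""
    return ((r - r0) / r0) / strain

# A GF of 7.9 at 10% strain corresponds to a 79% resistance increase.
print(gauge_factor(r0=100.0, r=179.0, strain=0.10))   # close to 7.9
```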
Exploiting Motion Capture to Enhance Avoidance Behaviour in Games
NASA Astrophysics Data System (ADS)
van Basten, Ben J. H.; Jansen, Sander E. M.; Karamouzas, Ioannis
Realistic simulation of interacting virtual characters is essential in computer games, training and simulation applications. The problem is very challenging since people are accustomed to real-world situations and thus can easily detect inconsistencies and artifacts in the simulations. Over the past twenty years several models have been proposed for simulating individuals, groups and crowds of characters. However, little effort has been made to actually understand how humans solve interactions and avoid inter-collisions in real life. In this paper, we exploit motion capture data to gain more insights into human-human interactions. We propose four measures to describe the collision-avoidance behavior. Based on these measures, we extract simple rules that can be applied on top of existing agent- and force-based approaches, increasing the realism of the resulting simulations.
NASA Astrophysics Data System (ADS)
Gao, Yang; Fang, Xiaoliang; Tan, Jianping; Lu, Ting; Pan, Likun; Xuan, Fuzhen
2018-06-01
Wearable strain sensors based on nanomaterial/elastomer composites have potential applications in flexible electronic skin, human motion detection, human–machine interfaces, etc. In this research, a type of high performance strain sensors has been developed using fragmentized carbon nanotube/polydimethylsiloxane (CNT/PDMS) composites. The CNT/PDMS composites were ground into fragments, and a liquid-induced densification method was used to fabricate the strain sensors. The strain sensors showed high sensitivity with gauge factors (GFs) larger than 200 and a broad strain detection range up to 80%, much higher than those strain sensors based on unfragmentized CNT/PDMS composites (GF < 1). The enhanced sensitivity of the strain sensors is ascribed to the sliding of individual fragmentized-CNT/PDMS-composite particles during mechanical deformation, which causes significant resistance change in the strain sensors. The strain sensors can differentiate mechanical stimuli and monitor various human body motions, such as bending of the fingers, human breathing, and blood pulsing.
Motion-Based Immunological Detection of Zika Virus Using Pt-Nanomotors and a Cellphone.
Draz, Mohamed Shehata; Lakshminaraasimulu, Nivethitha Kota; Krishnakumar, Sanchana; Battalapalli, Dheerendranath; Vasan, Anish; Kanakasabapathy, Manoj Kumar; Sreeram, Aparna; Kallakuri, Shantanu; Thirumalaraju, Prudhvi; Li, Yudong; Hua, Stephane; Yu, Xu G; Kuritzkes, Daniel R; Shafiee, Hadi
2018-05-16
Zika virus (ZIKV) infection is an emerging pandemic threat to humans that can be fatal in newborns. Advances in digital health systems and nanoparticles can facilitate the development of sensitive and portable detection technologies for timely management of emerging viral infections. Here we report a nanomotor-based bead-motion cellphone (NBC) system for the immunological detection of ZIKV. The presence of virus in a testing sample results in the accumulation of platinum (Pt)-nanomotors on the surface of beads, causing their motion in H2O2 solution. The virus concentration is then detected in correlation with the change in bead motion. The developed NBC system was capable of detecting ZIKV in samples with virus concentrations as low as 1 particle/μL. The NBC system allowed a highly specific detection of ZIKV in the presence of the closely related dengue virus and other neurotropic viruses, such as herpes simplex virus type 1 and human cytomegalovirus. The NBC platform technology has the potential to be used in the development of point-of-care diagnostics for pathogen detection and disease management in developed and developing countries.
User-Independent Motion State Recognition Using Smartphone Sensors.
Gu, Fuqiang; Kealy, Allison; Khoshelham, Kourosh; Shang, Jianga
2015-12-04
The recognition of locomotion activities (e.g., walking, running, standing still) is important for a wide range of applications like indoor positioning, navigation, location-based services, and health monitoring. Recently, there has been a growing interest in activity recognition using accelerometer data. However, when utilizing only acceleration-based features, it is difficult to differentiate varying vertical motion states from horizontal motion states, especially when conducting user-independent classification. In this paper, we also make use of the barometer newly built into modern smartphones, and propose a novel feature called the pressure derivative, computed from the barometer readings, for user motion state recognition; it is proven to be effective for distinguishing vertical motion states and does not depend on specific users' data. Seven types of motion states are defined and six commonly used classifiers are compared. In addition, we utilize the motion state history and the characteristics of people's motion to improve the classification accuracies of those classifiers. Experimental results show that by using the historical information and human motion characteristics, we can achieve user-independent motion state classification with an accuracy of up to 90.7%. In addition, we analyze the influence of the window size and smartphone pose on the accuracy.
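The pressure-derivative feature can be sketched as the time derivative of the barometer signal, which stays near zero for horizontal motion but is clearly nonzero during ascent or descent (pressure falls by roughly 0.12 hPa per metre of ascent near sea level). The sampling rate, noise level, and climb speed below are assumed example values.

```python
import numpy as np

def pressure_derivative(pressure_hpa, fs):
    """Time derivative of the barometer signal (hPa/s)."""
    return np.gradient(pressure_hpa) * fs

fs = 10.0                                     # 10 Hz barometer sampling
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(0)
walking = 1013.0 + 0.02 * rng.normal(size=t.size)   # flat, sensor noise only
ascending = 1013.0 - 0.12 * 0.5 * t                  # climbing at 0.5 m/s

d_walk = pressure_derivative(walking, fs)
d_climb = pressure_derivative(ascending, fs)
print(abs(d_walk.mean()) < abs(d_climb.mean()))      # True: ascent stands out
```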
Page, Alvaro; de Rosario, Helios; Gálvez, José A; Mata, Vicente
2011-02-24
We propose to model planar movements between two human segments by means of rolling-without-slipping kinematic pairs. We compute the path traced by the instantaneous center of rotation (ICR) as seen from the proximal and distal segments, thus obtaining the fixed and moving centrodes, respectively. The joint motion is then represented by the rolling-without-slipping of one centrode on the other. The resulting joint kinematic model is based on the real movement and accounts for nonfixed axes of rotation; therefore it could improve current models based on revolute pairs in those cases where joint movement implies displacement of the ICR. Previous authors have used the ICR to characterize human joint motion, but they only considered the fixed centrode. Such an approach is not adequate for reproducing motion because the fixed centrode by itself does not convey information about body position. The combination of the fixed and moving centrodes gathers the kinematic information needed to reproduce the position and velocities of moving bodies. To illustrate our method, we applied it to the flexion-extension movement of the head relative to the thorax. The model provides a good estimation of motion both for position variables (mean R(pos)=0.995) and for velocities (mean R(vel)=0.958). This approach is more realistic than other models of neck motion based on revolute pairs, such as the dual-pivot model. The geometry of the centrodes can provide some information about the nature of the movement. For instance, the ascending and descending curves of the fixed centrode suggest a sequential movement of the cervical vertebrae. Copyright © 2010 Elsevier Ltd. All rights reserved.
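The ICR computation for a planar rigid body can be sketched from standard kinematics: given scalar angular velocity ω and the measured velocity v of a point p, the ICR lies at p + (−v_y, v_x)/ω. This is the textbook relation, not the paper's estimation procedure from measured human motion.

```python
import numpy as np

def icr(p, v, omega):
    """Instantaneous centre of rotation of a planar rigid body, from a
    point p, its velocity v, and the body's angular velocity omega."""
    return p + np.array([-v[1], v[0]]) / omega

# Pure rotation about the origin at 2 rad/s: a point at (1, 0) moves
# with velocity (0, 2), and the ICR is recovered at the origin.
print(icr(np.array([1.0, 0.0]), np.array([0.0, 2.0]), omega=2.0))
# prints [0. 0.]
```

Tracking this point over time in the proximal and distal body frames yields the fixed and moving centrodes described above.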
Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A
2018-01-01
Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918
Assessing Motion Induced Interruptions Using a Motion Platform
2013-09-01
In the same way that cars have shock absorbers to decrease jolt from potholes and bumps in the road, ships may have the potential to be designed to better... Human Systems Integration (HSI) seeks to assure human performance to reduce operating costs. This thesis seeks to develop a model for ship design in relation to Motion Induced Interruptions (MII). The model is based on the premise that MIIs affect specific domains of HSI in an adverse way. Future ship design...
Human motion retrieval from hand-drawn sketch.
Chao, Min-Wen; Lin, Chao-Hung; Assa, Jackie; Lee, Tong-Yee
2012-05-01
The rapid growth of motion capture data increases the importance of motion retrieval. The majority of existing motion retrieval approaches rely on a labor-intensive step in which the user browses and selects a desired query motion clip from a large motion clip database. In this work, a novel sketching interface for defining the query is presented. This simple approach allows users to define the required motion by sketching several motion strokes over a drawn character, which requires less effort and extends the users’ expressiveness. To support the real-time interface, a specialized encoding of the motions and the hand-drawn query is required. Here, we introduce a novel hierarchical encoding scheme based on a set of orthonormal spherical harmonic (SH) basis functions, which provides a compact representation and avoids the computationally intensive stage of temporal alignment used by previous solutions. Experimental results show that the proposed approach retrieves motions well and is capable of retrieving logically and numerically similar motions, outperforming previous approaches. A user study shows that the proposed system can be a useful tool for specifying motion queries once users are familiar with it. Finally, an application generating a 3D animation from a hand-drawn comic strip is demonstrated.
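The hierarchical encoding above relies on the orthonormality of the real spherical harmonic basis functions, which is what makes the projection coefficients compact and independent. Below is a minimal numerical check of that property for the first four real SH functions — an illustration of the mathematical ingredient, not the paper's encoding pipeline.

```python
import numpy as np

def sh_basis(theta, phi):
    """First four real spherical harmonics (l = 0 and l = 1)."""
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    c = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([np.full_like(theta, 0.5 * np.sqrt(1.0 / np.pi)),  # Y_0^0
                     c * y, c * z, c * x])                             # Y_1^m

# Midpoint quadrature over the unit sphere.
n_t, n_p = 200, 400
theta = (np.arange(n_t) + 0.5) * np.pi / n_t
phi = (np.arange(n_p) + 0.5) * 2.0 * np.pi / n_p
T, P = np.meshgrid(theta, phi, indexing="ij")
w = np.sin(T) * (np.pi / n_t) * (2.0 * np.pi / n_p)   # area element weights

B = sh_basis(T, P)                        # shape (4, n_t, n_p)
G = np.einsum("aij,bij,ij->ab", B, B, w)  # Gram matrix, approx. identity
```

Because the Gram matrix is (numerically) the identity, projecting a motion signal onto these functions yields decorrelated coefficients, suitable for compact hierarchical encoding.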
Beck, Cornelia; Ognibeni, Thilo; Neumann, Heiko
2008-01-01
Background: Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. Methodology/Principal Findings: From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task. This is due to the problem that flow detected along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for the detection of motion discontinuities and of occlusion regions, based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information from different model components of the visual processing via feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement of the kinetic boundary detection. Conclusions/Significance: A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested both with artificial and real sequences including self and object motion. PMID:19043613
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2017-05-01
Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety- and security-critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once with automatic support and once without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and the automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections by up to 62% while reducing workload at the same time.
Cignetti, Fabien; Chabeauti, Pierre-Yves; Menant, Jasmine; Anton, Jean-Luc J. J.; Schmitz, Christina; Vaugoyeau, Marianne; Assaiante, Christine
2017-01-01
The present study investigated the cortical areas engaged in the perception of graviceptive information embedded in biological motion (BM). To this end, functional magnetic resonance imaging was used to assess the cortical areas active during the observation of human movements performed under normogravity and microgravity (parabolic flight). Movements were defined by motion cues alone using point-light displays. We found that gravity modulated the activation of a restricted set of regions of the network subtending BM perception, including form-from-motion areas of the visual system (kinetic occipital region, lingual gyrus, cuneus) and motor-related areas (primary motor and somatosensory cortices). These findings suggest that assessing the compliance of observed movements with normal gravity was carried out by mapping them onto the observer’s motor system and by extracting their overall form from the local motion of the moving light points. We propose that judgments on graviceptive information embedded in BM can be established based on motor resonance and visual familiarity mechanisms, and not necessarily by accessing the internal model of gravitational motion stored in the vestibular cortex. PMID:28861024
Perception of Social Interactions for Spatially Scrambled Biological Motion
Thurman, Steven M.; Lu, Hongjing
2014-01-01
It is vitally important for humans to detect living creatures in the environment and to analyze their behavior to facilitate action understanding and high-level social inference. The current study employed naturalistic point-light animations to examine the ability of human observers to spontaneously identify and discriminate socially interactive behaviors between two human agents. Specifically, we investigated the importance of global body form, intrinsic joint movements, extrinsic whole-body movements, and critically, the congruency between intrinsic and extrinsic motions. Motion congruency is hypothesized to be particularly important because of the constraint it imposes on naturalistic action due to the inherent causal relationship between limb movements and whole body motion. Using a free response paradigm in Experiment 1, we discovered that many naïve observers (55%) spontaneously attributed animate and/or social traits to spatially-scrambled displays of interpersonal interaction. Total stimulus motion energy was strongly correlated with the likelihood that an observer would attribute animate/social traits, as opposed to physical/mechanical traits, to the scrambled dot stimuli. In Experiment 2, we found that participants could identify interactions between spatially-scrambled displays of human dance as long as congruency was maintained between intrinsic/extrinsic movements. Violating the motion congruency constraint resulted in chance discrimination performance for the spatially-scrambled displays. Finally, Experiment 3 showed that scrambled point-light dancing animations violating this constraint were also rated as significantly less interactive than animations with congruent intrinsic/extrinsic motion. 
These results demonstrate the importance of intrinsic/extrinsic motion congruency for biological motion analysis, and support a theoretical framework in which early visual filters help to detect animate agents in the environment based on several fundamental constraints. Only after satisfying these basic constraints could stimuli be evaluated for high-level social content. In this way, we posit that perceptual animacy may serve as a gateway to higher-level processes that support action understanding and social inference. PMID:25406075
Reconstructing 3-D skin surface motion for the DIET breast cancer screening system.
Botterill, Tom; Lotz, Thomas; Kashif, Amer; Chase, J Geoffrey
2014-05-01
Digital image-based elasto-tomography (DIET) is a prototype system for breast cancer screening. A breast is imaged while being vibrated, and the observed surface motion is used to infer the internal stiffness of the breast, hence identifying tumors. This paper describes a computer vision system for accurately measuring 3-D surface motion. A model-based segmentation is used to identify the profile of the breast in each image, and the 3-D surface is reconstructed by fitting a model to the profiles. The surface motion is measured using a modern optical flow implementation customized to the application, then trajectories of points on the 3-D surface are given by fusing the optical flow with the reconstructed surfaces. On data from human trials, the system is shown to exceed the performance of an earlier marker-based system at tracking skin surface motion. We demonstrate that the system can detect a 10 mm tumor in a silicone phantom breast.
Oguntosin, Victoria W.; Mori, Yoshiki; Kim, Hyejong; Nasuto, Slawomir J.; Kawamura, Sadao; Hayashi, Yoshikatsu
2017-01-01
We demonstrated the design, production, and functional properties of the Exoskeleton Actuated by the Soft Modules (EAsoftM). Integrating the 3D-printed exoskeleton with passive joints to compensate for gravity and active joints to rotate the shoulder and elbow joints resulted in an ultra-light system that could assist planar reaching motion using a vision-based control law. The EAsoftM can support reaching motion with compliance realized by soft materials and pneumatic actuation. In addition, the vision-based control law enables precise control of the target reaching motion at the millimeter scale. Soft actuators aimed at rehabilitation exercise have typically been developed for relatively small motions, such as grasping, and one of the challenges has been to extend their use to wider-range reaching motion. The proposed EAsoftM offers one possible solution to this challenge by transmitting torque effectively along an exoskeleton anatomically aligned with the human body. Such integrated systems will be an ideal solution for neurorehabilitation, where affordable, wearable, and portable systems customized for individuals with specific motor impairments are required. PMID:28736514
Dynamics of Human Motion: The Case Study of an Examination Hall
NASA Astrophysics Data System (ADS)
Ogunjo, Samuel; Ajayi, Oluwaseyi; Fuwape, Ibiyinka; Dansu, Emmanuel
Human behaviour is difficult to characterize and generalize due to its complex nature. Advances in mathematical modelling have enabled human systems such as love dynamics, alcohol abuse, and admission problems to be described using models. This study investigates one such problem: the dynamics of human motion in an examination hall with limited computer systems, such that students write their examination in batches. The examination is characterized by the time (t) allocated to each student and the difficulty level (dl) associated with the examination. A stochastic model based on the difficulty level of the examination was developed to predict students' motion around the examination hall. Good agreement was obtained between theoretical predictions and numerical simulation. The results will help in better planning of examination sessions to maximize available resources. Furthermore, the results can be extended to other settings, such as banking halls and customer service points, where available resources are shared among many users.
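Since the abstract gives no equations, the following is a purely hypothetical sketch of the kind of stochastic batch model it describes: each student finishes in a random time that grows with the difficulty level dl, and a freed computer is immediately taken by a waiting student. All parameters and the finishing-time distribution are invented for illustration.

```python
import random

def simulate_exam(n_students=120, n_computers=40, t_alloc=60, dl=0.5, seed=1):
    """Toy stochastic model (hypothetical, not the authors' exact model):
    finishing times are uniform on [dl * t_alloc, t_alloc], so harder exams
    (dl near 1) push students toward the full allotted time. Returns the
    total time needed to clear the hall."""
    rng = random.Random(seed)
    n_seated = min(n_students, n_computers)
    waiting = n_students - n_seated
    # finishing times of the students currently seated
    seated = sorted(rng.uniform(dl * t_alloc, t_alloc) for _ in range(n_seated))
    clock = 0.0
    while seated:
        clock = seated.pop(0)        # earliest finisher leaves the hall
        if waiting > 0:              # a waiting student takes the free seat
            waiting -= 1
            seated.append(clock + rng.uniform(dl * t_alloc, t_alloc))
            seated.sort()
    return clock
```

Running the model for low and high difficulty shows the expected effect on total session length, the quantity an examination planner would care about.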
Orthogonal-blendshape-based editing system for facial motion capture data.
Li, Qing; Deng, Zhigang
2008-01-01
The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.
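The core identity of the system above — blendshape weights are PCA coefficients, so editing a weight edits the reconstructed motion — can be sketched with synthetic data standing in for a captured facial region. The data, dimensionality, and number of retained eigenvectors are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for motion capture data of one facial region:
# 200 frames x 30 marker coordinates, with correlated structure.
X = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 30)) * 0.1

mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 5                            # truncated PCA: greatest retained eigenvectors
basis = Vt[:k]                   # rows act as "blendshape" directions
weights = (X - mean) @ basis.T   # per-frame blendshape weights (PCA coefficients)

# Editing a blendshape weight is equivalent to editing the motion frame.
w = weights[0].copy()
w[2] += 1.0                      # the animator tweaks one weight
edited_frame = mean + w @ basis
```

Because the basis rows are orthonormal, adding 1.0 to one weight displaces the reconstructed frame by exactly that basis direction, which is what makes per-weight editing predictable for the animator.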
Manifolds for pose tracking from monocular video
NASA Astrophysics Data System (ADS)
Basu, Saurav; Poulin, Joshua; Acton, Scott T.
2015-03-01
We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).
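The fundamental relationship above — image motion vectors decompose into basis motion vector fields of the manifold's tangent space, so the pose change can be read off by least squares — reduces to a small linear-algebra exercise. A toy version with made-up dimensions and noise level:

```python
import numpy as np

rng = np.random.default_rng(42)
d, k = 1000, 6       # flattened image-motion dimension; local pose dimension
B = rng.standard_normal((d, k))            # basis motion vector fields (columns)
dq_true = rng.standard_normal(k)           # true change in body pose parameters
v = B @ dq_true + 0.01 * rng.standard_normal(d)  # observed image motion + noise

# Locally linear model: decompose v in the tangent-space basis.
dq, *_ = np.linalg.lstsq(B, v, rcond=None)
```

With many more motion-vector components than pose dimensions, the least-squares decomposition recovers the pose change accurately even under observation noise, which is the mechanism the tracking theory exploits frame to frame.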
NASA Astrophysics Data System (ADS)
Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati
2018-03-01
Capturing and recording human motion is mostly done for sports, health, animated films, criminal investigation, and robotics applications. This study combined background subtraction with a back-propagation neural network in order to find similar movements. Video was acquired with an 8 MP camera in MP4 format (48 s duration, 30 frames/s); extraction produced 1444 frames used for the hand motion identification process. The image processing phases performed were segmentation, feature extraction, and identification. Segmentation used background subtraction; the extracted features are used to distinguish one object from another. Feature extraction was performed using motion-based morphology analysis based on the 7 invariant moments, producing four motion classes: no object, hands down, hands to the side, and hands up. The identification process recognizes the hand movement from seven inputs. Testing and training with a variety of parameters showed that an architecture with one hundred hidden neurons provides the highest accuracy. This architecture is used to propagate the input values through the system implementation into the user interface. Identification of the type of human movement achieved a highest accuracy of 98.5447%. The training process was carried out to obtain the best results.
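The "7 invariant moments" are presumably Hu's moment invariants; a self-contained computation is sketched below, with two blobs standing in for segmented hand silhouettes. Translation leaves the invariants unchanged, which is what makes them usable as shape features for the classifier.

```python
import numpy as np

def hu_moments(img):
    """Hu's seven invariant moments of a 2-D binary/intensity image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                       # central moments
        return ((x - xc) ** p * (y - yc) ** q * img).sum()
    def eta(p, q):                      # scale-normalized moments
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])

# An arbitrary blob and a translated copy share the same invariants.
a = np.zeros((64, 64)); a[10:30, 10:20] = 1; a[28:34, 8:26] = 1
b = np.zeros((64, 64)); b[20:40, 25:35] = 1; b[38:44, 23:41] = 1
```

In a pipeline like the one described, these seven numbers per segmented frame would be the inputs to the back-propagation network (the seventh input in the abstract plausibly corresponds to h7).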
Development of biomechanical models for human factors evaluations
NASA Technical Reports Server (NTRS)
Woolford, Barbara; Pandya, Abhilash; Maida, James
1991-01-01
Previewing human capabilities in a computer-aided engineering mode has assisted greatly in planning well-designed systems without the cost and time involved in mockups and engineering models. To date, the computer models have focused on such variables as field of view, accessibility and fit, and reach envelopes. Program outputs have matured from simple static pictures to animations viewable from any eyepoint. However, while kinematics models are available, there are few biomechanical models available for estimating strength and motion patterns. Those that are available, such as Crew Chief, are based on strength measurements taken in specific positions. Johnson Space Center is pursuing a biomechanical model which will use strength data collected on single joints at two or three velocities to attempt to predict compound motions of several joints simultaneously and the resulting force at the end effector. Two lines of research are coming together to produce this result. One is an attempt to use optimal control theory to predict joint motion in complex motions, and another is the development of graphical representations of human capabilities. The progress to date in this research is described.
Technology evaluation of man-rated acceleration test equipment for vestibular research
NASA Technical Reports Server (NTRS)
Taback, I.; Kenimer, R. L.; Butterfield, A. J.
1983-01-01
The considerations for eliminating acceleration noise cues in horizontal, linear, cyclic-motion sleds intended for both ground and shuttle-flight applications are addressed. The principal concerns are the acceleration transients associated with changes in the direction of motion of the carriage. The study presents a design limit for acceleration cues or transients based upon published measurements of human perception thresholds for linear cyclic motion. The sources and levels of motion transients are presented based upon measurements obtained from existing sled systems. The recommended approach to a noise-free system uses air bearings for the carriage support and moving-coil linear induction motors operating at low frequency as the drive system. Metal belts running on air-bearing pulleys provide an alternate approach to the drive system. The appendix presents a discussion of alternate testing techniques intended to provide preliminary data by means of pendulums, linear motion devices, and commercial air-bearing tables.
Autogenic-Feedback Training for the Control of Space Motion Sickness
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.; Toscano, W. B.
1994-01-01
This paper presents case studies of 9 shuttle crewmembers (prime and alternates) and one U.S. Navy F-18 pilot as they participated in all preflight training and testing activities in support of a life sciences flight experiment aboard Spacelab-J and Spacelab-3. The primary objective of the flight experiment was to determine if Autogenic-Feedback Training (AFT), a physiological self-regulation training technique, would be an effective treatment for motion sickness and space motion sickness in these crewmembers. Additional objectives of this study involved examining human physiological responses to motion sickness on Earth and in space, as well as developing predictive criteria for susceptibility to space motion sickness based on ground-based data. Comparisons of these crewmembers are made to a larger set of subjects from previous experiments (treatment and "test-only" control subjects). This paper describes all preflight methods, results, and proposed changes for future tests.
Effects of Autonomic Conditioning on Motion Sickness Tolerance
NASA Technical Reports Server (NTRS)
Cowings, P. S.; Toscano, W. B.
1994-01-01
This paper presents case studies of 9 shuttle crewmembers (prime and alternates) and one U.S. Navy F-18 pilot as they participated in all preflight training and testing activities in support of a life sciences flight experiment aboard Spacelab-J and Spacelab-3. The primary objective of the flight experiment was to determine if Autogenic-Feedback Training (AFT), a physiological self-regulation training technique, would be an effective treatment for motion sickness and space motion sickness in these crewmembers. Additional objectives of this study involved examining human physiological responses to motion sickness on Earth and in space, as well as developing predictive criteria for susceptibility to space motion sickness based on ground-based data. Comparisons of these crewmembers are made to a larger set of subjects from previous experiments (treatment and test-only control subjects). This paper describes all preflight methods, results, and proposed changes for future tests.
Basins of attraction in human balance
NASA Astrophysics Data System (ADS)
Smith, Victoria A.; Lockhart, Thurmon E.; Spano, Mark L.
2017-12-01
Falls are a recognized risk factor for unintentional injuries among older adults, accounting for a large proportion of fractures, emergency department visits, and urgent hospitalizations. Human balance and gait research traditionally uses linear or qualitative tests to assess and describe human motion; however, human motion is neither a simple nor a linear process. The objective of this research is to identify and learn more about what factors affect balance using nonlinear dynamical techniques, such as basin boundaries. Human balance data were collected using dual force plates for leans using only ankle movements as well as for unrestricted leans. Algorithms to describe the basin boundary were created and compared based on how well each method encloses the experimental data points and captures the differences between the two leaning conditions.
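As an illustration of mapping a basin of attraction for balance, the sketch below uses a toy single-link "ankle strategy" model with saturated PD torque — all parameters are hypothetical and unrelated to the authors' force-plate data. Each grid point of initial lean angle and angular velocity is classified as recovered or fallen.

```python
import numpy as np

# Toy inverted pendulum with saturated PD ankle torque (hypothetical values).
g_over_l, kp, kd, tau_max = 9.81, 15.0, 4.0, 3.0

def falls(theta0, omega0, dt=0.005, t_end=4.0):
    """Semi-implicit Euler rollout; True if the lean is not recovered."""
    th, om = theta0, omega0
    for _ in range(int(t_end / dt)):
        tau = np.clip(-kp * th - kd * om, -tau_max, tau_max)
        om += (g_over_l * np.sin(th) + tau) * dt
        th += om * dt
        if abs(th) > np.pi / 2:
            return True               # fell over
    return abs(th) > 0.2              # failed to settle near upright

# Basin of attraction over a grid of initial lean angles / velocities.
thetas = np.linspace(-0.5, 0.5, 21)
omegas = np.linspace(-1.5, 1.5, 21)
basin = np.array([[not falls(t, w) for t in thetas] for w in omegas])
```

The boundary of the True region in `basin` plays the role of the basin boundary: because the ankle torque saturates, lean angles beyond roughly 0.3 rad are unrecoverable even from rest, and the recoverable angle shrinks further with outward velocity.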
Highly Sensitive Flexible Human Motion Sensor Based on ZnSnO3/PVDF Composite
NASA Astrophysics Data System (ADS)
Yang, Young Jin; Aziz, Shahid; Mehdi, Syed Murtuza; Sajid, Memoon; Jagadeesan, Srikanth; Choi, Kyung Hyun
2017-07-01
A highly sensitive body motion sensor has been fabricated based on a composite active layer of zinc stannate (ZnSnO3) nanocubes and poly(vinylidene fluoride) (PVDF) polymer. The thin-film active layer was deposited on a flexible polyethylene terephthalate substrate through the D-bar coating technique. Electrical and morphological characterizations of the films and sensors were carried out to determine the physical characteristics and the output response of the devices. The synergistic effect between piezoelectric ZnSnO3 nanocubes and β-phase PVDF provides the composite with desirable electrical conductivity, remarkable bend sensitivity, and excellent stability, ideal for the fabrication of a motion sensor. The recorded resistance of the sensor changed from 20 MΩ to 55 MΩ to 100 MΩ as the bending angle varied from -150° to 0° to 150°, respectively, showing the composite to be a very good candidate for motion sensing applications.
Towards Wearable A-Mode Ultrasound Sensing for Real-Time Finger Motion Recognition.
Yang, Xingchen; Sun, Xueli; Zhou, Dalin; Li, Yuefeng; Liu, Honghai
2018-06-01
It is evident that surface electromyography (sEMG) based human-machine interfaces (HMI) have inherent difficulty in predicting dexterous musculoskeletal movements such as finger motions. This paper investigates a plausible alternative to sEMG, ultrasound-driven HMI, for dexterous motion recognition, owing to its ability to detect morphological changes of deep muscles and tendons. A lightweight multi-channel A-mode ultrasound device is adopted to evaluate the performance of finger motion recognition; an experiment with eight able-bodied subjects is designed to evaluate both widely accepted offline and online algorithms. The results show that the offline recognition accuracy is up to 98.83% ± 0.79%. The real-time motion completion rate is 95.4% ± 8.7%, and the online motion selection time is 0.243 ± 0.127 s. The outcomes confirm the feasibility of A-mode ultrasound based wearable HMI and its promising applications in prosthetic devices, virtual reality, and remote manipulation.
Samba: a real-time motion capture system using wireless camera sensor networks.
Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai
2014-03-20
There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments. PMID:24658618
Design and simulation of a cable-pulley-based transmission for artificial ankle joints
NASA Astrophysics Data System (ADS)
Liu, Huaxin; Ceccarelli, Marco; Huang, Qiang
2016-06-01
In this paper, a mechanical transmission based on cable pulleys is proposed for human-like actuation of human-scale artificial ankle joints. The articular anatomy of the human ankle is discussed to provide proper biomimetic inspiration for designing accurate, efficient, and robust motion control of artificial ankle joint devices. The design procedure is presented, including conceptual considerations and design details for an interactive solution of the transmission system. A mechanical design is elaborated for the pitch angular motion of the ankle joint. A multi-body dynamic simulation model is developed accordingly and evaluated numerically in the ADAMS environment. Results of the numerical simulations are discussed to evaluate the dynamic performance of the proposed design solution and to investigate its feasibility for future humanoid robot applications.
Kandala, Sridhar; Nolan, Dan; Laumann, Timothy O.; Power, Jonathan D.; Adeyemo, Babatunde; Harms, Michael P.; Petersen, Steven E.; Barch, Deanna M.
2016-01-01
Like all resting-state functional connectivity data, the data from the Human Connectome Project (HCP) are adversely affected by structured noise artifacts arising from head motion and physiological processes. Functional connectivity estimates (Pearson's correlation coefficients) were inflated for high-motion time points and for high-motion participants. This inflation occurred across the brain, suggesting the presence of globally distributed artifacts. The degree of inflation was further increased for connections between nearby regions compared with distant regions, suggesting the presence of distance-dependent spatially specific artifacts. We evaluated several denoising methods: censoring high-motion time points, motion regression, the FMRIB independent component analysis-based X-noiseifier (FIX), and mean grayordinate time series regression (MGTR; as a proxy for global signal regression). The results suggest that FIX denoising reduced both types of artifacts, but left substantial global artifacts behind. MGTR significantly reduced global artifacts, but left substantial spatially specific artifacts behind. Censoring high-motion time points resulted in a small reduction of distance-dependent and global artifacts, eliminating neither type. All denoising strategies left differences between high- and low-motion participants, but only MGTR substantially reduced those differences. Ultimately, functional connectivity estimates from HCP data showed spatially specific and globally distributed artifacts, and the most effective approach to address both types of motion-correlated artifacts was a combination of FIX and MGTR. PMID:27571276
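Two of the evaluated denoising strategies — motion regression and censoring of high-motion time points — come down to a few lines of linear algebra. A schematic sketch on synthetic data (the series, motion parameters, weights, and framewise-displacement threshold are invented; this is not the HCP pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_t = 400
motion = rng.standard_normal((n_t, 6)) * 0.1        # 6 realignment parameters
neural = np.sin(np.linspace(0.0, 20.0, n_t))        # underlying clean signal
ts = neural + motion @ np.array([2.0, -1.0, 0.5, 1.0, -2.0, 0.7])

# Motion regression: project the motion parameters out of the time series.
X = np.column_stack([np.ones(n_t), motion])
beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
cleaned = ts - X @ beta + beta[0]                   # keep the series mean

# Censoring: drop high-motion time points via framewise displacement (FD).
fd = np.abs(np.diff(motion, axis=0)).sum(axis=1)
keep = np.r_[True, fd < np.percentile(fd, 90)]      # censor the top 10%
```

In a real pipeline, correlations would then be computed only over the `keep` time points of the residualized series; the abstract's finding is that such motion regression and censoring each remove only part of the artifact structure.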
Integration of time as a factor in ergonomic simulation.
Walther, Mario; Muñoz, Begoña Toledo
2012-01-01
The paper describes the application of a simulation-based ergonomic evaluation. Within a pilot project, the algorithms of the screening method of the European Assembly Worksheet were transferred into an existing digital human model. Movement data were recorded with a specially developed hybrid motion capture system. A prototype of the system was built and is currently being tested at the Volkswagen Group. First results showed the feasibility of simulation-based ergonomic evaluation with motion capture.
Analyzing the effects of human-aware motion planning on close-proximity human-robot collaboration.
Lasota, Przemyslaw A; Shah, Julie A
2015-02-01
The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human-robot interaction. We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human-robot team fluency and human worker satisfaction. Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human-robot collaboration.
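The team-fluency metrics reported here (concurrent motion, idle times) can be derived from synchronized activity traces of the two agents. A minimal sketch, with the boolean-sampling representation as an assumption:

```python
import numpy as np

def fluency_metrics(human_active, robot_active, dt=0.1):
    """Team-fluency metrics from synchronized boolean activity traces.

    human_active, robot_active : (T,) bool arrays sampled every dt seconds
    (True = the agent is moving). Returns task time, percentage of
    concurrent motion, and each agent's idle time.
    """
    human_active = np.asarray(human_active, bool)
    robot_active = np.asarray(robot_active, bool)
    return {
        "task_time_s": len(human_active) * dt,
        "concurrent_motion_pct": np.mean(human_active & robot_active) * 100.0,
        "human_idle_s": np.sum(~human_active) * dt,
        "robot_idle_s": np.sum(~robot_active) * dt,
    }

# Tiny illustration: both agents move on 2 of 4 samples -> 50% concurrency.
m = fluency_metrics([1, 1, 0, 1], [0, 1, 1, 1], dt=1.0)
print(m)
```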
Evaluation of the leap motion controller as a new contact-free pointing device.
Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard
2014-12-24
This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC.
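A Fitts' law analysis of this kind reduces to computing an index of difficulty per condition and dividing by movement time to get throughput. The sketch below uses the common Shannon formulation; the target distance and width are hypothetical, but it shows how the reported doubling of movement time halves throughput at a fixed index of difficulty.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty over movement time (s)."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical pointing task: 512 px to a 16 px target -> ~5 bits.
ID = index_of_difficulty(512, 16)
print(ID)
print(throughput(512, 16, 0.8))   # mouse-like movement time
print(throughput(512, 16, 1.6))   # LMC-like movement time: half the throughput
```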
A review of flight simulation techniques
NASA Astrophysics Data System (ADS)
Baarspul, Max
After a brief historical review of the evolution of flight simulation techniques, this paper first deals with the main areas of flight simulator application. Next, it describes the main components of a piloted flight simulator. Because of the presence of the pilot-in-the-loop, the digital computer driving the simulator must solve the aircraft equations of motion in ‘real-time’. Solutions that meet the high computing power required by today's flight simulators are elaborated. The physical similarity between aircraft and simulator in cockpit layout, flight instruments, flying controls, etc. is discussed, based on the equipment and environmental cue fidelity required for training and research simulators. Visual systems play an increasingly important role in piloted flight simulation. The visual systems now available and most widely used are described, distinguishing between image generators and display devices. The characteristics of out-of-the-window visual simulation systems pertaining to the perceptual capabilities of human vision are discussed. Faithful reproduction of aircraft motion requires large travel, velocity and acceleration capabilities of the motion system. Different types and applications of motion systems in, e.g., airline training and research are described. The principles of motion cue generation, based on the characteristics of the non-visual human motion sensors, are described, and the complete motion system, consisting of the hardware and the motion drive software, is discussed. The principles of mathematical modelling of the aerodynamic, flight control, propulsion, landing gear and environmental characteristics of the aircraft are reviewed. An example of the identification of an aircraft mathematical model, based on flight and taxi tests, is presented. Finally, the paper deals with the hardware and software integration of the flight simulator components and the testing and acceptance of the complete flight simulator.
Examples of the so-called ‘Computer Generated Checkout’ and ‘Proof of Match’ are presented. The concluding remarks briefly summarize the status of flight simulator technology and consider possibilities for future research.
Le Ruyet, Anicet; Berthet, Fabien; Rongiéras, Frédéric; Beillas, Philippe
2016-11-01
A protocol based on ultrafast ultrasound imaging was applied to study the in situ motion of the liver while the abdomen was subjected to compressive loading at 3 m/s by a hemispherical impactor or a seatbelt. The loading was applied to various locations between the lower abdomen and the mid thorax while feature points inside the liver were followed on the ultrasound movie (2000 frames per second). Based on tests performed on five post mortem human surrogates (including four tested in the current study), trends were found between the loading location and feature point trajectory parameters such as the initial angle of motion or the peak displacement in the direction of impact. The impactor tests were then simulated using the GHBMC M50 human body model that was globally scaled to the dimensions of each surrogate. Some of the experimental trends observed could be reproduced in the simulations (e.g. initial angle) while others differed more widely (e.g. final caudal motion). The causes for the discrepancies need to be further investigated. The liver strain energy density predicted by the model was also widely affected by the impact location. Experimental and simulation results both highlight the importance of the liver position with respect to the impactor when studying its response in situ.
Ubiquitous human upper-limb motion estimation using wearable sensors.
Zhang, Zhi-Qiang; Wong, Wai-Choong; Wu, Jian-Kang
2011-07-01
Human motion capture technologies have been used in a wide spectrum of applications, including interactive games and learning, animation, film special effects, health care, navigation, and so on. The existing human motion capture techniques, which use multiple structured high-resolution cameras in a dedicated studio, are complicated and expensive. With the rapid development of microsensors-on-chip, human motion capture using wearable microsensors has become an active research topic. Because of its agility in movement, the upper limb has been regarded as the most difficult part of the body to track in human motion capture. In this paper, we take the upper limb as our research subject and propose a novel ubiquitous upper-limb motion estimation algorithm, which concentrates on modeling the relationship between upper-arm movement and forearm movement. A link structure with 5 degrees of freedom (DOF) is proposed to model the human upper-limb skeleton. Parameters are defined according to the Denavit-Hartenberg convention, forward kinematics equations are derived, and an unscented Kalman filter is deployed to estimate the defined parameters. The experimental results show that the proposed upper-limb motion capture and analysis algorithm outperforms other fusion methods and provides accurate results in comparison to the BTS optical motion tracker.
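The Denavit-Hartenberg forward kinematics step described above can be sketched generically. The 5-DOF parameter table below is a hypothetical arm (three revolute shoulder DOF, elbow flexion, forearm rotation) with illustrative link lengths, not the paper's calibration.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ ct, -st * ca,  st * sa, a * ct],
        [ st,  ct * ca, -ct * sa, a * st],
        [0.0,       sa,       ca,      d],
        [0.0,      0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain the DH transforms; returns the end-effector (wrist) pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 5-DOF upper limb: upper arm 0.30 m, forearm 0.25 m
# (illustrative only). Each tuple is (d, a, alpha).
dh = [(0, 0, np.pi / 2), (0, 0, -np.pi / 2), (0, 0.30, 0),
      (0, 0.25, 0), (0, 0, 0)]
pose = forward_kinematics([0, 0, 0, 0, 0], dh)
print(pose[:3, 3])  # wrist position with all joints at zero: arm outstretched
```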
Human movement analysis with image processing in real time
NASA Astrophysics Data System (ADS)
Fauvet, Eric; Paindavoine, Michel; Cannard, F.
1991-04-01
In the field of the human sciences, many applications need to know the kinematic characteristics of human movements. Psychology associates these characteristics with control mechanisms; sport and biomechanics associate them with the performance of the athlete or of the patient. Trainers or doctors can therefore correct the subject's gesture to obtain a better performance if they know the motion properties. Roberton's studies show the evolution of children's motion. Several investigation methods are able to measure human movement, but most current studies are based on image processing. Often the systems work at the TV standard (50 frames per second), which permits the study of only very slow gestures. A human operator analyzing the digitized film sequence manually makes for a very expensive, especially long, and imprecise operation. On these grounds, many human movement analysis systems were implemented. They consist of: markers, which are fixed to the anatomically interesting points on the subject in motion, and image compression, which is the art of coding picture data. Generally the compression is limited to calculating the centroid coordinates of each marker. These systems differ from one another in image acquisition and marker detection.
Modeling repetitive motions using structured light.
Xu, Yi; Aliaga, Daniel G
2010-01-01
Obtaining models of dynamic 3D objects is an important part of content generation for computer graphics. Numerous methods have been extended from static scenarios to model dynamic scenes. If the states or poses of the dynamic object repeat often during a sequence (but not necessarily periodically), we call this a repetitive motion. There are many objects, such as toys, machines, and humans, that undergo repetitive motions. Our key observation is that when a motion state repeats, we can sample the scene under the same motion state again but using a different set of parameters, thus providing more information about each motion state. This enables robust acquisition of dense 3D information, which is otherwise difficult for moving objects, using only simple hardware. After the motion sequence, we group temporally disjoint observations of the same motion state together and produce a smooth space-time reconstruction of the scene. Effectively, the dynamic scene modeling problem is converted into a series of static scene reconstructions, which are easier to tackle. The varying sampling parameters can be, for example, structured-light patterns, illumination directions, and viewpoints, resulting in different modeling techniques. Based on this observation, we present an image-based motion-state framework and demonstrate our paradigm using either a synchronized or an unsynchronized structured-light acquisition method.
Human Guidance Behavior Decomposition and Modeling
NASA Astrophysics Data System (ADS)
Feit, Andrew James
Trained humans are capable of high performance, adaptable, and robust first-person dynamic motion guidance behavior. This behavior is exhibited in a wide variety of activities such as driving, piloting aircraft, skiing, biking, and many others. Human performance in such activities far exceeds the current capability of autonomous systems in terms of adaptability to new tasks, real-time motion planning, robustness, and trading safety for performance. The present work investigates the structure of human dynamic motion guidance that enables these performance qualities. This work uses a first-person experimental framework that presents a driving task to the subject, measuring control inputs, vehicle motion, and operator visual gaze movement. The resulting data is decomposed into subspace segment clusters that form primitive elements of action-perception interactive behavior. Subspace clusters are defined by both agent-environment system dynamic constraints and operator control strategies. A key contribution of this work is to define transitions between subspace cluster segments, or subgoals, as points where the set of active constraints, either system or operator defined, changes. This definition provides necessary conditions to determine transition points for a given task-environment scenario that allow a solution trajectory to be planned from known behavior elements. In addition, human gaze behavior during this task contains predictive behavior elements, indicating that the identified control modes are internally modeled. Based on these ideas, a generative, autonomous guidance framework is introduced that efficiently generates optimal dynamic motion behavior in new tasks. The new subgoal planning algorithm is shown to generate solutions to certain tasks more quickly than existing approaches currently used in robotics.
Saving and Reproduction of Human Motion Data by Using Haptic Devices with Different Configurations
NASA Astrophysics Data System (ADS)
Tsunashima, Noboru; Yokokura, Yuki; Katsura, Seiichiro
Recently, there has been increased focus on “haptic recording”; the development of a motion-copying system is an efficient method for realizing it. Haptic recording involves saving and reproducing human motion data on the basis of haptic information. To increase the number of applications of the motion-copying system in various fields, it is necessary to reproduce human motion data using haptic devices with different configurations. In this study, a method for such haptic recording is developed, in which human motion data are saved and reproduced on the basis of work-space information obtained by coordinate transformation of motor-space information. The validity of the proposed method is demonstrated by experiments. With the proposed method, saving and reproduction of human motion data by using various devices is achieved. Furthermore, it becomes possible to use haptic recording in various fields.
Bidet-Ildei, Christel; Kitromilides, Elenitsa; Orliaguet, Jean-Pierre; Pavlova, Marina; Gentaz, Edouard
2014-01-01
In human newborns, spontaneous visual preference for biological motion is reported to occur at birth, but the factors underpinning this preference are still in debate. Using a standard visual preferential looking paradigm, 4 experiments were carried out in 3-day-old human newborns to assess the influence of translational displacement on perception of human locomotion. Experiment 1 shows that human newborns prefer a point-light walker display representing human locomotion as if on a treadmill over random motion. However, no preference for biological movement is observed in Experiment 2 when both biological and random motion displays are presented with translational displacement. Experiments 3 and 4 show that newborns exhibit preference for translated biological motion (Experiment 3) and random motion (Experiment 4) displays over the same configurations moving without translation. These findings reveal that human newborns have a preference for the translational component of movement independently of the presence of biological kinematics. The outcome suggests that translation constitutes the first step in development of visual preference for biological motion. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Human visual system-based smoking event detection
NASA Astrophysics Data System (ADS)
Odetallah, Amjad D.; Agaian, Sos S.
2012-06-01
Human action (e.g. smoking, eating, and phoning) analysis is an important task in various application domains like video surveillance, video retrieval, human-computer interaction systems, and so on. Smoke detection is a crucial task in many video surveillance applications and could have a great impact in raising the level of safety of urban areas, public parks, airplanes, hospitals, schools, and others. The detection task is challenging since there is no prior knowledge about the object's shape, texture, and color. In addition, its visual features will change under different lighting and weather conditions. This paper presents a new scheme for a system that detects human smoking events, or small smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. The developed method is also capable of detecting small smoking events involving uncertain actions and various cigarette sizes, colors, and shapes.
Gold Standard Testing of Motion Based Tracking Systems
2017-03-15
Air Force Research Laboratory, 711th Human Performance Wing, Airman Systems Directorate, Wright-Patterson Air Force Base, OH 45433, Air Force Materiel Command, United States Air Force.
NASA Astrophysics Data System (ADS)
Könik, Arda; Connolly, Caitlin M.; Johnson, Karen L.; Dasari, Paul; Segars, Paul W.; Pretorius, P. H.; Lindsay, Clifford; Dey, Joyoni; King, Michael A.
2014-07-01
The development of methods for correcting patient motion in emission tomography has been receiving increased attention. Often the performance of these methods is evaluated through simulations using digital anthropomorphic phantoms, such as the commonly used extended cardiac torso (XCAT) phantom, which models both respiratory and cardiac motion based on human studies. However, non-rigid body motion, which is frequently seen in clinical studies, is not present in the standard XCAT phantom. In addition, respiratory motion in the standard phantom is limited to a single generic trend. In this work, to obtain a more realistic representation of motion, we developed a series of individual-specific XCAT phantoms, modeling non-rigid respiratory and non-rigid body motions derived from the magnetic resonance imaging (MRI) acquisitions of volunteers. Acquisitions were performed in the sagittal orientation using the Navigator methodology. Baseline (no motion) acquisitions at end-expiration were obtained at the beginning of each imaging session for each volunteer. For the body motion studies, MRI was again acquired only at end-expiration for five body motion poses (shoulder stretch, shoulder twist, lateral bend, side roll, and axial slide). For the respiratory motion studies, an MRI was acquired during free/regular breathing. The magnetic resonance slices were then retrospectively sorted into 14 amplitude-binned respiratory states, end-expiration, end-inspiration, six intermediary states during inspiration, and six during expiration using the recorded Navigator signal. XCAT phantoms were then generated based on these MRI data by interactive alignment of the organ contours of the XCAT with the MRI slices using a graphical user interface. Thus far we have created five body motion and five respiratory motion XCAT phantoms from the MRI acquisitions of six healthy volunteers (three males and three females). 
Non-rigid motion exhibited by the volunteers was reflected in both respiratory and body motion phantoms with a varying extent and character for each individual. In addition to these phantoms, we recorded the position of markers placed on the chest of the volunteers for the body motion studies, which could be used as external motion measurement. Using these phantoms and external motion data, investigators will be able to test their motion correction approaches for realistic motion obtained from different individuals. The non-uniform rational B-spline data and the parameter files for these phantoms are freely available for downloading and can be used with the XCAT license.
A GPU-accelerated cortical neural network model for visually guided robot navigation.
Beyeler, Michael; Oros, Nicolas; Dutt, Nikil; Krichmar, Jeffrey L
2015-12-01
Humans and other terrestrial animals use vision to traverse novel cluttered environments with apparent ease. On one hand, although much is known about the behavioral dynamics of steering in humans, it remains unclear how relevant perceptual variables might be represented in the brain. On the other hand, although a wealth of data exists about the neural circuitry that is concerned with the perception of self-motion variables such as the current direction of travel, little research has been devoted to investigating how this neural circuitry may relate to active steering control. Here we present a cortical neural network model for visually guided navigation that has been embodied on a physical robot exploring a real-world environment. The model includes a rate based motion energy model for area V1, and a spiking neural network model for cortical area MT. The model generates a cortical representation of optic flow, determines the position of objects based on motion discontinuities, and combines these signals with the representation of a goal location to produce motor commands that successfully steer the robot around obstacles toward the goal. The model produces robot trajectories that closely match human behavioral data. This study demonstrates how neural signals in a model of cortical area MT might provide sufficient motion information to steer a physical robot on human-like paths around obstacles in a real-world environment, and exemplifies the importance of embodiment, as behavior is deeply coupled not only with the underlying model of brain function, but also with the anatomical constraints of the physical body it controls. Copyright © 2015 Elsevier Ltd. All rights reserved.
Technical skills measurement based on a cyber-physical system for endovascular surgery simulation.
Tercero, Carlos; Kodama, Hirokatsu; Shi, Chaoyang; Ooe, Katsutoshi; Ikeda, Seiichi; Fukuda, Toshio; Arai, Fumihito; Negoro, Makoto; Kwon, Guiryong; Najdovski, Zoran
2013-09-01
Quantification of medical skills is a challenge, particularly simulator-based training. In the case of endovascular intervention, it is desirable that a simulator accurately recreates the morphology and mechanical characteristics of the vasculature while enabling scoring. For this purpose, we propose a cyber-physical system composed of optical sensors for a catheter's body motion encoding, a magnetic tracker for motion capture of an operator's hands, and opto-mechatronic sensors for measuring the interaction of the catheter tip with the vasculature model wall. Two pilot studies were conducted for measuring technical skills, one for distinguishing novices from experts and the other for measuring unnecessary motion. The proficiency levels were measurable between expert and novice and also between individual novice users. The results enabled scoring of the user's proficiency level, using sensitivity, reaction time, time to complete a task and respect for tissue integrity as evaluation criteria. Additionally, unnecessary motion was also measurable. The development of cyber-physical simulators for other domains of medicine depend on the study of photoelastic materials for human tissue modelling, and enables quantitative evaluation of skills using surgical instruments and a realistic representation of human tissue. Copyright © 2012 John Wiley & Sons, Ltd.
Speed Biases With Real-Life Video Clips
Rossi, Federica; Montanaro, Elisa; de’Sperati, Claudio
2018-01-01
We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing. PMID:29615875
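The adjustment and double-staircase procedure used above can be sketched with a simulated observer. A 1-up/1-down rule with a deterministic response is an illustrative simplification of the psychophysics; the point of subjective equality (PSE) value and step sizes are made up, with playback speed expressed as a percentage of the original.

```python
def run_staircase(true_pse, start, step, n_reversals=8):
    """One adaptive 1-up/1-down staircase on playback speed (percent).

    A simulated observer reports whether the clip looks "too fast";
    speed steps down after "too fast" and up after "too slow", and the
    mean of the reversal points estimates the PSE.
    """
    speed, reversals, last_dir = start, [], 0
    while len(reversals) < n_reversals:
        too_fast = speed > true_pse            # deterministic toy response
        direction = -1 if too_fast else +1
        if last_dir and direction != last_dir:
            reversals.append(speed)            # staircase changed direction
        speed += direction * step
        last_dir = direction
    return sum(reversals) / len(reversals)

# Double-staircase design: one track descends from above the PSE, one
# ascends from below. The hypothetical observer underestimates speed
# (PSE = 110%, i.e. clips must play ~10% faster to look natural).
hi = run_staircase(true_pse=110, start=140, step=5)
lo = run_staircase(true_pse=110, start=80, step=5)
print(hi, lo)  # both estimates bracket the 110% PSE to within one step
```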
Correction for human head motion in helical x-ray CT
NASA Astrophysics Data System (ADS)
Kim, J.-H.; Sun, T.; Alcheikh, A. R.; Kuncic, Z.; Nuyts, J.; Fulton, R.
2016-02-01
Correction for rigid object motion in helical CT can be achieved by reconstructing from a modified source-detector orbit, determined by the object motion during the scan. This ensures that all projections are consistent, but it does not guarantee that the projections are complete in the sense of being sufficient for exact reconstruction. We have previously shown with phantom measurements that motion-corrected helical CT scans can suffer from data-insufficiency, in particular for severe motions and at high pitch. To study whether such data-insufficiency artefacts could also affect the motion-corrected CT images of patients undergoing head CT scans, we used an optical motion tracking system to record the head movements of 10 healthy volunteers while they executed each of the 4 different types of motion (‘no’, slight, moderate and severe) for 60 s. From these data we simulated 354 motion-affected CT scans of a voxelized human head phantom and reconstructed them with and without motion correction. For each simulation, motion-corrected (MC) images were compared with the motion-free reference, by visual inspection and with quantitative similarity metrics. Motion correction improved similarity metrics in all simulations. Of the 270 simulations performed with moderate or less motion, only 2 resulted in visible residual artefacts in the MC images. The maximum range of motion in these simulations would encompass that encountered in the vast majority of clinical scans. With severe motion, residual artefacts were observed in about 60% of the simulations. We also evaluated a new method of mapping local data sufficiency based on the degree to which Tuy’s condition is locally satisfied, and observed that areas with high Tuy values corresponded to the locations of residual artefacts in the MC images. 
We conclude that our method can provide accurate and artefact-free MC images with most types of head motion likely to be encountered in CT imaging, provided that the motion can be accurately determined.
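The abstract does not specify which quantitative similarity metrics were used; a common choice is root-mean-square error plus a Pearson correlation between the motion-corrected (MC) image and the motion-free reference. A minimal sketch with synthetic images (all values illustrative):

```python
import numpy as np

def similarity_metrics(mc, ref):
    """RMSE and Pearson correlation between an MC image and its
    motion-free reference (both compared as flat intensity vectors)."""
    mc = np.asarray(mc, float).ravel()
    ref = np.asarray(ref, float).ravel()
    rmse = np.sqrt(np.mean((mc - ref) ** 2))
    r = np.corrcoef(mc, ref)[0, 1]
    return rmse, r

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
good_mc = ref + 0.01 * rng.standard_normal((64, 64))  # well corrected
bad_mc = ref + 0.30 * rng.standard_normal((64, 64))   # residual artefacts
print(similarity_metrics(good_mc, ref))
print(similarity_metrics(bad_mc, ref))
```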
NASA Astrophysics Data System (ADS)
Xie, Longhan; Li, Xiaodong; Cai, Siqi; Huang, Ledeng; Li, Jiehong
2017-11-01
In recent years, there has been increasing demand for portable power sources because of the rapid development of portable and wearable electronic devices. This paper describes the development of a backpack-based energy harvester to harness the biomechanical energy of the human body during walking. The energy harvester was embedded into a backpack and used a spring-mass-damping system to transfer the energetic motion of the human body into rotary generators to produce electricity. In the oscillation system, the weight of the harvester itself and the load contained in the backpack serve together as the seismic mass; when excited by human trunk motion, the seismic mass drives a gear train to accelerate the harvested energetic motion, which is then delivered to a generator. A prototype device was built to investigate its performance, which has a maximum diameter of 50 mm, a minimum diameter of 28 mm, a length of 250 mm, and a weight of 380 g. Experiments showed that the proposed backpack-based harvester, when operating with a 5 kg load, could produce approximately 7 W of electrical power at a walking velocity of 5.5 km/h. The normalized power density of the harvester is 0.145 kg/cm3, which is 7.6 times as much as that of Rome's backpack harvester [26]. Based on the results of metabolic cost experiments, the average conversion efficiency from human metabolic power to electrical power is approximately 36%.
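The spring-mass-damping principle described above can be illustrated with a small simulation: the suspended load oscillates relative to the trunk-driven frame, and the harvested power is read off the electrical damping term. All parameter values below (spring rate, damping coefficients, trunk motion amplitude, step frequency) are illustrative assumptions, not the paper's, so the output should not be compared against the reported 7 W.

```python
import numpy as np

def harvested_power(m=5.38, k=2000.0, c_e=30.0, c_m=5.0,
                    amp=0.025, f_step=2.0, t_end=10.0, dt=1e-3):
    """Average electrical power of a spring-mass-damper harvester.

    The seismic mass m (harvester plus load, kg) rides on a spring k
    against the backpack frame, which the trunk shakes sinusoidally
    (amplitude amp in m, step frequency f_step in Hz). c_e models the
    generator's electrical damping, c_m mechanical losses. Integrated
    with semi-implicit Euler on the relative coordinate z = x - y.
    """
    w = 2.0 * np.pi * f_step
    z, zdot, p_acc = 0.0, 0.0, 0.0
    n = int(t_end / dt)
    for i in range(n):
        ydd = -amp * w * w * np.sin(w * i * dt)          # base acceleration
        zddot = (-k * z - (c_e + c_m) * zdot) / m - ydd  # relative dynamics
        zdot += zddot * dt
        z += zdot * dt
        p_acc += c_e * zdot * zdot * dt                  # electrical power
    return p_acc / t_end

print(f"{harvested_power():.2f} W")  # sub-watt for these toy parameters
```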
Spering, Miriam; Carrasco, Marisa
2012-01-01
Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids–stimuli composed of two orthogonally-drifting gratings, presented separately to each eye–in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating’s motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating’s motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238
Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheriyadat, Anil M.
Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that involves identifying multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, resulting in partial trajectories that span coherent 3D regions in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach of the algorithm is to identify all possible coherent motion regions and then extract a subset of motion regions, based on an innovative measure, to automatically locate moving objects in crowded environments. The software reports a snapshot of each object, a count, and derived statistics (e.g., count over time) from input video streams. The software can directly process video streamed over the internet or directly from a hardware device (camera).
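As a rough illustration of grouping partial trajectories by motion coherence (a toy stand-in, not ORNL's actual Coherent Motion Region algorithm), one can greedily cluster tracked feature-point trajectories by their mean velocities:

```python
import numpy as np

def coherent_groups(trajectories, vel_tol=0.5):
    """Greedily group trajectories whose mean velocities are similar.

    trajectories: list of (T, 2) arrays of tracked point positions.
    Returns a list of index groups; a crude stand-in for coherent
    motion region detection on real feature tracks.
    """
    vels = [np.diff(tr, axis=0).mean(axis=0) for tr in trajectories]
    groups = []
    for i, v in enumerate(vels):
        for g in groups:
            if np.linalg.norm(v - vels[g[0]]) < vel_tol:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

Feature points riding on one object share a velocity and fall into one group; a real system would cluster full space-time regions rather than mean velocities.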
Development of a CPM Machine for Injured Fingers.
Fu, Yili; Zhang, Fuxiang; Ma, Xin; Meng, Qinggang
2005-01-01
Human fingers are easily injured. A CPM machine is a mechanism based on the rehabilitation theory of continuous passive motion (CPM). Developing a CPM machine for clinical application in the rehabilitation of injured fingers is a significant task. Therefore, based on the theories of evidence-based medicine (EBM) and CPM, we have developed a biomimetic mechanism after modeling the motions of the fingers and analyzing their kinematics and dynamics. We also designed an embedded operating system based on ARM (a 32-bit RISC microprocessor). The equipment achieves precise control of the fingers' range of motion, force, and speed. It can serve as a rational checking method and a means of assessment for the functional rehabilitation of human hands. The first prototype has been completed and will shortly begin clinical testing at Harbin Medical University.
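A minimal sketch of the kind of controlled motion profile a CPM machine must generate, assuming illustrative range-of-motion limits and cycle period (not the device's actual control law):

```python
import numpy as np

def cpm_profile(theta_min=10.0, theta_max=80.0, period_s=8.0,
                dt=0.05, cycles=2):
    """Generate a smooth flexion-extension angle profile (degrees).

    A raised-cosine sweep between theta_min and theta_max bounds both
    the range of motion and the peak angular speed, mimicking the
    precise control of range and speed a CPM machine requires.
    """
    t = np.arange(0.0, cycles * period_s, dt)
    mid = 0.5 * (theta_min + theta_max)
    half = 0.5 * (theta_max - theta_min)
    theta = mid - half * np.cos(2.0 * np.pi * t / period_s)
    return t, theta
```

The clinician would set `theta_min`, `theta_max`, and `period_s` per patient; force limiting would sit in a separate safety loop.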
Ultrasonic Methods for Human Motion Detection
2006-10-01
The active method utilizes continuous-wave (CW) ultrasonic Doppler sonar. Human motions have unique Doppler signatures, and the article reports results of human motion investigations carried out with a CW ultrasonic Doppler sonar. Low-cost, low-power ultrasonic motion sensors have been developed for operation in air [10]. Benefits of using ultrasonic CW Doppler sonar include low cost, low electric noise, and small size.
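The Doppler signature rests on a simple relation: for a monostatic CW sonar (co-located transmitter and receiver), the echo from a target moving at radial speed v is shifted by approximately 2·v·f0/c. A small sketch, assuming a typical 40 kHz carrier and the speed of sound in air:

```python
def doppler_shift(v_radial_mps, f0_hz=40_000.0, c_mps=343.0):
    """Doppler frequency shift (Hz) for a CW ultrasonic sonar in air.

    The 40 kHz carrier and 343 m/s speed of sound are assumed typical
    values, not parameters from the report.
    """
    return 2.0 * v_radial_mps * f0_hz / c_mps
```

A limb swinging at 1 m/s thus shifts the echo by roughly 233 Hz, well within an audio-rate band that is cheap to digitize and analyze.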
Web-based tools for modelling and analysis of multivariate data: California ozone pollution activity
Dinov, Ivo D.; Christou, Nicolas
2014-01-01
This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation, exploratory data analysis, model fitting and statistical inference on these data are presented. All components of this case study (data, tools, activity) are freely available online at: http://wiki.stat.ucla.edu/socr/index.php/SOCR_MotionCharts_CAOzoneData. Several types of exploratory (motion charts, box-and-whisker plots, spider charts) and quantitative (inference, regression, analysis of variance (ANOVA)) data analysis tools are demonstrated. Two specific human health related questions (temporal and geographic effects of ozone pollution) are discussed as motivational challenges. PMID:24465054
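The ANOVA the activity demonstrates can be illustrated with a minimal hand-rolled F statistic on hypothetical readings from two locations (synthetic numbers, not the SOCR California data):

```python
import numpy as np

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: ratio of between-group to
    within-group mean squares, computed directly from the data."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k = len(groups)
    n = all_x.size
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum()
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical monthly ozone levels at two locations:
F = one_way_anova_F([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

A large F (here 13.5) suggests the location means differ more than the within-location scatter would explain; a real analysis would also report the p-value from the F distribution.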
Neural Representation of Motion-In-Depth in Area MT
Sanada, Takahisa M.
2014-01-01
Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481
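The two cues can be illustrated numerically: the CD cue differentiates binocular disparity over time, while the IOVD cue subtracts the two monocular velocities; for the same rigid stimulus both recover the same rate of disparity change. A small sketch with made-up eye-image trajectories:

```python
import numpy as np

# Horizontal image positions (deg) of a target in each eye over time,
# sampled at dt, for an object approaching the observer (illustrative
# values, not stimulus parameters from the study).
dt = 0.01
t = np.arange(0.0, 1.0, dt)
x_left = 0.5 * t           # drift in the left eye's image
x_right = -0.5 * t         # opposite drift in the right eye's image

disparity = x_left - x_right
cd_cue = np.gradient(disparity, dt)                       # d(disparity)/dt
iovd_cue = np.gradient(x_left, dt) - np.gradient(x_right, dt)
```

Mathematically the two quantities are identical here (~1 deg/s); physiologically they differ because CD requires binocular disparity detectors first, whereas IOVD compares monocular velocity signals, which is what the neuronal data distinguish.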
Neuroanatomical correlates of biological motion detection.
Gilaie-Dotan, Sharon; Kanai, Ryota; Bahrami, Bahador; Rees, Geraint; Saygin, Ayse P
2013-02-01
Biological motion detection is both commonplace and important, but there is great inter-individual variability in this ability, the neural basis of which is currently unknown. Here we examined whether the behavioral variability in biological motion detection is reflected in brain anatomy. Perceptual thresholds for detection of biological motion and control conditions (non-biological object motion detection and motion coherence) were determined in a group of healthy human adults (n=31), together with structural magnetic resonance images of the brain. Voxel-based morphometry analyses revealed that gray matter volumes of the left posterior superior temporal sulcus (pSTS) and left ventral premotor cortex (vPMC) significantly predicted individual differences in biological motion detection, but showed no significant relationship with performance on the control tasks. Our study reveals a neural basis associated with the inter-individual variability in biological motion detection, reliably linking the neuroanatomical structure of left pSTS and vPMC with biological motion detection performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
Assessment method of digital Chinese dance movements based on virtual reality technology
NASA Astrophysics Data System (ADS)
Feng, Wei; Shao, Shuyuan; Wang, Shumin
2008-03-01
Virtual reality has played an increasing role in areas such as medicine, architecture, aviation, engineering science and advertising. However, in the arts, virtual reality is still in its infancy in the representation of human movements. Based on the techniques of motion capture and the reuse of motion capture data in a virtual reality environment, this paper presents an assessment method to evaluate and quantify dancers' basic Arm Position movements in Chinese traditional dance. The data for quantifying traits of dance motions are defined and measured on dances performed by an expert and two beginners, with results indicating that they are beneficial for evaluating dance skill and distinctiveness, and that the assessment method of digital Chinese dance movements based on virtual reality technology is valid and feasible.
All-fabric-based wearable self-charging power cloth
NASA Astrophysics Data System (ADS)
Song, Yu; Zhang, Jinxin; Guo, Hang; Chen, Xuexian; Su, Zongming; Chen, Haotian; Cheng, Xiaoliang; Zhang, Haixia
2017-08-01
We present an all-fabric-based self-charging power cloth (SCPC), which integrates a fabric-based single-electrode triboelectric generator (STEG) and a flexible supercapacitor. To effectively scavenge mechanical energy from human motion, the STEG can be directly woven into the cloth, exhibiting excellent output capability. Meanwhile, taking advantage of fabric structures with a large surface area and carbon nanotubes with high conductivity, the wearable supercapacitor exhibits high areal capacitance (16.76 mF/cm2) and stable cycling performance. With the fabric configuration and the aim of simultaneously collecting body motion energy with the STEG and storing it in the supercapacitor, the SCPC can be easily integrated with textiles and was charged to nearly 100 mV within 6 min of running motion, showing great potential for self-powered wearable electronics and smart clothes.
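A back-of-envelope charging estimate follows from the ideal-capacitor relation t = C·V/I. The areal capacitance below is the paper's figure; the device area and the average rectified triboelectric current are assumptions chosen to be roughly consistent with the reported ~100 mV charge in about 6 min:

```python
def charge_time_s(area_cm2=10.0, c_areal_f_per_cm2=16.76e-3,
                  v_target=0.1, i_avg_amp=50e-6):
    """Ideal-capacitor charging time t = C*V/I for the SCPC.

    area_cm2 and i_avg_amp are illustrative assumptions; only the
    areal capacitance (16.76 mF/cm^2) comes from the abstract.
    Leakage and the triboelectric source impedance are ignored.
    """
    c = area_cm2 * c_areal_f_per_cm2          # total capacitance (F)
    return c * v_target / i_avg_amp           # seconds
```

With a 10 cm² patch and ~50 µA average current this gives roughly 335 s, i.e. the right order of magnitude for the reported 6 min charge.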
Microgravity Investigation of Crew Reactions in 0-G (MICRO-G)
NASA Technical Reports Server (NTRS)
Newman, Dava; Coleman, Charles; Metaxas, Dimitri
2004-01-01
There is a need for a human factors, technology-based bioastronautics research effort to develop an integrated system that reduces risk and provides scientific knowledge of astronaut-induced loads and motions during long-duration missions on the International Space Station (ISS), which will lead to appropriate countermeasures. The primary objectives of the Microgravity Investigation of Crew Reactions in 0-G (MICRO-G) research effort are to quantify astronaut adaptation and movement as well as to model motor strategies for differing gravity environments. The overall goal of this research program is to improve astronaut performance and efficiency through the use of rigorous quantitative dynamic analysis, simulation and experimentation. The MICRO-G research effort provides a modular, kinetic and kinematic capability for the ISS. The collection and evaluation of kinematics (whole-body motion) and dynamics (reacting forces and torques) of astronauts within the ISS will allow for quantification of human motion and performance in weightlessness, gathering fundamental human factors information for design, scientific investigation in the field of dynamics and motor control, technological assessment of microgravity disturbances, and the design of miniaturized, real-time space systems. The proposed research effort builds on a strong foundation of successful microgravity experiments, namely, the EDLS (Enhanced Dynamic Load Sensors) flown aboard the Russian Mir space station (1996-1998) and the DLS (Dynamic Load Sensors) flown on Space Shuttle Mission STS-62. In addition, previously funded NASA ground-based research into sensor technology development and into algorithms for producing three-dimensional (3-D) kinematics from video images has come to fruition, and these efforts culminate in the proposed collaborative MICRO-G flight experiment.
The required technology and hardware capitalize on previous sensor design, fabrication, and testing and can be flight qualified for a fraction of the cost of an initial spaceflight experiment. Four dynamic load sensors/restraints are envisioned for measurement of astronaut forces and torques. Two standard ISS video cameras record typical astronaut operations and prescribed IVA motions for 3-D kinematics. Forces and kinematics are combined for dynamic analysis of astronaut motion, exploiting the results of the detailed dynamic modeling effort for the quantitative verification of astronaut IVA performance, induced loads, and adaptive control strategies for crewmember whole-body motion in microgravity. This comprehensive effort provides an enhanced human factors approach based on physics-based modeling to identify adaptive performance during long-duration spaceflight, which is critically important for astronaut training as well as for providing a spaceflight database to drive countermeasure design.
The Unrealised Value of Human Motion--"Moving Back to Movement!"
ERIC Educational Resources Information Center
Dodd, Graham D.
2015-01-01
The unrealised and under-estimated value of human motion in human development, functioning and learning is the central cause for its devaluation in Australian society. This paper provides a greater insight into why human motion has high value and should be utilised more in advocacy and implementation in health and education, particularly school…
IMU-Based Gait Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion.
Dehzangi, Omid; Taherisadr, Mojtaba; ChangalVala, Raghvendar
2017-11-27
The widespread use of wearable sensors such as smart watches has provided continuous access to valuable user-generated data, such as human motion, that could be used to identify an individual based on his or her motion patterns, such as gait. Several methods have been suggested to extract various heuristic and high-level features from gait motion data to identify discriminative gait signatures and distinguish the target individual from others. However, manual and hand-crafted feature extraction is error prone and subjective. Furthermore, the motion data collected from inertial sensors have a complex structure, and the detachment between the manual feature extraction module and the predictive learning models might limit generalization capabilities. In this paper, we propose a novel approach for human gait identification using a time-frequency (TF) expansion of human gait cycles in order to capture joint two-dimensional (2D) spectral and temporal patterns of gait cycles. We then design a deep convolutional neural network (DCNN) to extract discriminative features from the 2D expanded gait cycles and jointly optimize the identification model and the spectro-temporal features in a discriminative fashion. We collect raw motion data from five inertial sensors placed at the chest, lower back, right wrist, right knee, and right ankle of each human subject synchronously in order to investigate the impact of sensor location on gait identification performance. We then present two methods for early (input-level) and late (decision-score-level) multi-sensor fusion to improve the gait identification generalization performance. We specifically propose the minimum error score fusion (MESF) method, which discriminatively learns the linear fusion weights of individual DCNN scores at the decision level by iteratively minimizing the error rate on the training data. Ten subjects participated in this study; hence, the problem is a 10-class identification task.
Based on our experimental results, 91% subject identification accuracy was achieved using the best individual IMU and 2DTF-DCNN. We then investigated our proposed early and late sensor fusion approaches, which improved the gait identification accuracy of the system to 93.36% and 97.06%, respectively.
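The 2D time-frequency expansion can be sketched with a plain windowed-FFT spectrogram (a simplified stand-in for the paper's TF representation, applied here to a synthetic accelerometer-like signal rather than real IMU gait cycles):

```python
import numpy as np

def tf_expansion(signal, win=64, hop=16):
    """Magnitude spectrogram of a 1-D motion signal via windowed FFT.

    Returns a (freq, time) array; a minimal stand-in for the 2D
    time-frequency expansion fed to the DCNN in the paper.
    """
    w = np.hanning(win)
    frames = [signal[i:i + win] * w
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# Synthetic "gait" signal: a 2 Hz stride rhythm plus a weaker
# 15 Hz heel-strike band, sampled at 100 Hz for 4 s.
fs = 100
t = np.arange(0.0, 4.0, 1.0 / fs)
acc = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)
spec = tf_expansion(acc)
```

Each column of `spec` is one time window; the DCNN in the paper consumes such 2D patches per gait cycle and learns the discriminative spectro-temporal structure directly.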
NASA Astrophysics Data System (ADS)
Wang, Yin Jie; Chen, Chao Ting; Chen, Jiun Jung; Yeh, Sou Peng; Wu, Wen Jong
2015-03-01
To harvest energy from human motion and generate power for the emerging wearable devices, energy harvesters are required to work at very low frequency. There are several studies based on energy harvesting through human gait, which can generate significant power. However, when wearing these kinds of devices, additional effort may be required and the user may feel uncomfortable when moving. The energy harvester developed here is composed of a 10 μm PZT thin film deposited on 50 μm thick stainless steel foil by the aerosol deposition method. The PZT layer and the stainless steel foil are both very thin, so the patch is highly flexible. The patch can be attached to the skin to harvest power from human motions such as the expansion of the chest while breathing. The energy harvester will first be tested with a moving stage for power output measurements. The energy density can be determined for different deformation ranges and frequencies. The fabrication processes and testing results will all be detailed in this paper.
Keller, Sune H; Sibomana, Merence; Olesen, Oline V; Svarer, Claus; Holm, Søren; Andersen, Flemming L; Højgaard, Liselotte
2012-03-01
Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using misaligned transmission for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning using an optical motion tracking system. Two scans with minor motion and 5 with major motion (as reported by the optical motion tracking system) were selected from (18)F-FDG scans acquired on a PET scanner. The motion was measured as the maximum displacement of the markers attached to the subject's head and was considered to be major if larger than 4 mm and minor if less than 2 mm. After allowing a 40- to 60-min uptake time after tracer injection, we acquired a 6-min transmission scan, followed by a 40-min emission list-mode scan. Each emission list-mode dataset was divided into 8 frames of 5 min. The reconstructed time-framed images were aligned to a selected reference frame using either EMT or the AIR (automated image registration) software. The following 3 QC methods were used to evaluate the EMT and AIR MC: a method using the ratio between 2 regions of interest with gray matter voxels (GM) and white matter voxels (WM), called GM/WM; mutual information; and cross correlation. The results of the 3 QC methods were in agreement with one another and with a visual subjective inspection of the image data. Before MC, the QC method measures varied significantly in scans with major motion and displayed limited variations on scans with minor motion. The variation was significantly reduced and measures improved after MC with AIR, whereas EMT MC performed less well. 
The 3 presented QC methods produced similar results and are useful for evaluating tracer-independent external-tracking motion-correction methods for human brain scans.
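Of the three QC measures, mutual information is easy to sketch from a joint intensity histogram: well-aligned frames share more information than misaligned ones. A minimal version (not the authors' implementation):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (nats) between two images, estimated from
    their joint intensity histogram. Higher values indicate better
    alignment between the two frames."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_xy = hist / hist.sum()                    # joint distribution
    p_x = p_xy.sum(axis=1, keepdims=True)       # marginals
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0                               # avoid log(0)
    return (p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum()
```

In a QC loop one would compute this between each motion-corrected frame and the reference frame: a drop in MI after correction flags a failed realignment, matching the role the measure plays in the study.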
The Complex Action Recognition via the Correlated Topic Model
Tu, Hong-bin; Xia, Li-min; Wang, Zheng-wu
2014-01-01
Human complex action recognition is an important research area within action recognition. Among the various obstacles to human complex action recognition, one of the most challenging is dealing with self-occlusion, where one body part occludes another. This paper presents a new method of human complex action recognition based on optical flow and the correlated topic model (CTM). First, a Markov random field is used to represent the occlusion relationship between human body parts in terms of an occlusion state variable. Second, structure from motion (SFM) is used to reconstruct the missing data of point trajectories. Then, key frames are extracted based on motion features from optical flow, and width-to-height ratios are extracted from the human silhouette. Finally, the CTM is used to classify actions. Experiments were performed on the KTH, Weizmann, and UIUC action datasets to test and evaluate the proposed method. The experimental results showed that the proposed method was more effective than the compared methods. PMID:24574920
Gait recognition based on Gabor wavelets and modified gait energy image for human identification
NASA Astrophysics Data System (ADS)
Huang, Deng-Yuan; Lin, Ta-Wei; Hu, Wu-Chih; Cheng, Chih-Hsiang
2013-10-01
This paper proposes a method for recognizing human identity using gait features based on Gabor wavelets and modified gait energy images (GEIs). Identity recognition by gait generally involves gait representation, extraction, and classification. In this work, a modified GEI convolved with an ensemble of Gabor wavelets is proposed as a gait feature. Principal component analysis is then used to project the Gabor-wavelet-based gait features into a lower-dimension feature space for subsequent classification. Finally, support vector machine classifiers based on a radial basis function kernel are trained and utilized to recognize human identity. The major contributions of this paper are as follows: (1) the consideration of the shadow effect to yield a more complete segmentation of gait silhouettes; (2) the utilization of motion estimation to track people when walkers overlap; and (3) the derivation of modified GEIs to extract more useful gait information. Extensive performance evaluation shows a great improvement of recognition accuracy due to the use of shadow removal, motion estimation, and gait representation using the modified GEIs and Gabor wavelets.
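The basic (unmodified) GEI step can be sketched directly: it is the per-pixel mean of the aligned binary silhouettes over a gait cycle. The paper's pipeline then convolves a modified GEI with Gabor wavelets before PCA projection and SVM classification; the sketch below covers only the first step:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait energy image: per-pixel mean of aligned binary silhouettes.

    silhouettes: (N, H, W) array of 0/1 masks from one gait cycle.
    Pixel values in [0, 1] encode how often each pixel is foreground,
    capturing both static body shape and motion over the cycle.
    """
    s = np.asarray(silhouettes, dtype=float)
    return s.mean(axis=0)
```

Bright regions of the GEI mark the stable torso; intermediate gray values trace the swinging limbs, which is where most identity-discriminative information lives.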
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neely, Jason C.; Sturgis, Beverly Rainwater; Byrne, Raymond Harry
This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from the delivery of special payloads to unique locations that require human locomotion, to exoskeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle arises in emulating the stability and balance control naturally present in humans but needed for bipedal locomotion in a robot. A tracked robot is bulky and limited, but a wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, but bipedal motion requires active balance and stability control, for which the analysis is non-trivial. This report contains an extensive literature study on the state of the art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.
NASA Astrophysics Data System (ADS)
Barki, Anum; Kendricks, Kimberly; Tuttle, Ronald F.; Bunker, David J.; Borel, Christoph C.
2013-05-01
This research highlights the results obtained from applying the method of inverse kinematics, using Groebner basis theory, to the human gait cycle to extract and identify lower extremity gait signatures. The increased threat from suicide bombers and the force protection issues of today have motivated a team at Air Force Institute of Technology (AFIT) to research pattern recognition in the human gait cycle. The purpose of this research is to identify gait signatures of human subjects and distinguish between subjects carrying a load to those subjects without a load. These signatures were investigated via a model of the lower extremities based on motion capture observations, in particular, foot placement and the joint angles for subjects affected by carrying extra load on the body. The human gait cycle was captured and analyzed using a developed toolkit consisting of an inverse kinematic motion model of the lower extremity and a graphical user interface. Hip, knee, and ankle angles were analyzed to identify gait angle variance and range of motion. Female subjects exhibited the most knee angle variance and produced a proportional correlation between knee flexion and load carriage.
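The paper's joint angles come from a Groebner-basis inverse-kinematics formulation; as a simpler illustration of recovering hip and knee angles from foot placement, here is a closed-form planar two-link (thigh-shank) IK. The link lengths are illustrative averages, and this is not the authors' method or their subject data:

```python
import math

def planar_leg_ik(ankle_x, ankle_y, l_thigh=0.45, l_shank=0.43):
    """Hip and knee angles (radians) placing the ankle at (x, y).

    The hip sits at the origin with y pointing down along the standing
    leg; knee = 0 means a straight leg. Returns the knee-backward
    solution via the law of cosines.
    """
    d2 = ankle_x**2 + ankle_y**2
    cos_knee = (d2 - l_thigh**2 - l_shank**2) / (2 * l_thigh * l_shank)
    cos_knee = max(-1.0, min(1.0, cos_knee))   # clamp for numerical safety
    knee = math.acos(cos_knee)
    hip = math.atan2(ankle_x, ankle_y) - math.atan2(
        l_shank * math.sin(knee), l_thigh + l_shank * math.cos(knee))
    return hip, knee
```

Running this over a sequence of tracked ankle positions yields hip and knee angle trajectories whose variance and range of motion can then be compared across loaded and unloaded walking, in the spirit of the analysis above.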
NASA Astrophysics Data System (ADS)
Okuno, Keisuke; Inamura, Tetsunari
A robotic coaching system can improve humans' learning of motions by the intelligent use of emphatic motions and adverbial expressions according to user reactions. In robotics, however, methods for controlling both the motions and the expressions, and for binding them together, had not been adequately discussed from an engineering point of view. In this paper, we propose a method for controlling and binding emphatic motions and adverbial expressions using two scalar parameters in a phase space. In this phase space, a variety of motion patterns and verbal expressions are connected and can be expressed as static points. We show the feasibility of the proposed method through experiments on actual sport coaching tasks for beginners. From the results of participants' improvements in motion learning, we confirmed the feasibility of the method for controlling and binding emphatic motions and adverbial expressions, as well as the contribution of the emphatic motions and the positive correlation of the adverbial expressions with participants' improvements in motion learning. Based on these results, we introduce the hypothesis that an individually optimized method for binding adverbial expressions is required.
Applying Simulated In Vivo Motions to Measure Human Knee and ACL Kinetics
Herfat, Safa T.; Boguszewski, Daniel V.; Shearn, Jason T.
2013-01-01
Patients frequently experience anterior cruciate ligament (ACL) injuries but current ACL reconstruction strategies do not restore the native biomechanics of the knee, which can contribute to the early onset of osteoarthritis in the long term. To design more effective treatments, investigators must first understand normal in vivo knee function for multiple activities of daily living (ADLs). While the 3D kinematics of the human knee have been measured for various ADLs, the 3D kinetics cannot be directly measured in vivo. Alternatively, the 3D kinetics of the knee and its structures can be measured in an animal model by simulating and applying subject-specific in vivo joint motions to a joint using robotics. However, a suitable biomechanical surrogate should first be established. This study was designed to apply a simulated human in vivo motion to human knees to measure the kinetics of the human knee and ACL. In pursuit of establishing a viable biomechanical surrogate, a simulated in vivo ovine motion was also applied to human knees to compare the loads produced by the human and ovine motions. The motions from the two species produced similar kinetics in the human knee and ACL. The only significant difference was the intact knee compression force produced by the two input motions. PMID:22227973
Gated Sensor Fusion: A way to Improve the Precision of Ambulatory Human Body Motion Estimation.
Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, Gonzalo
2014-01-01
Human body motion is usually variable in intensity and, therefore, any inertial measurement unit attached to a subject will measure both low and high angular rates and accelerations. This can be a problem for the accuracy of orientation estimation algorithms based on adaptive filters such as the Kalman filter, since the variances of both the process noise and the measurement noise are set at the beginning of the algorithm and remain constant during its execution. Fixed noise parameters burden the adaptation capability of the filter if the intensity of the motion changes rapidly. In this work we present a novel conjoint algorithm that uses a motion intensity detector to dynamically vary the noise statistical parameters of different variants of the Kalman filter. Results show that the precision of the estimated orientation in terms of the RMSE can be improved by up to 29% with respect to the standard fixed-parameter approaches.
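The gating idea can be sketched in one dimension: a crude motion-intensity detector (recent signal variability) switches the measurement-noise variance between a "static" and a "dynamic" value. All thresholds and variances below are illustrative, not the paper's:

```python
import numpy as np

def gated_kalman(measurements, r_static=0.01, r_dynamic=1.0,
                 q=1e-3, intensity_thresh=0.5, window=5):
    """1-D Kalman filter whose measurement-noise variance R is gated
    by recent signal variability, mimicking a motion intensity
    detector. Assumes a constant-state process model."""
    x, p = float(measurements[0]), 1.0
    out = []
    for i, z in enumerate(measurements):
        recent = measurements[max(0, i - window):i + 1]
        # Gate: trust measurements less while motion is intense.
        r = r_dynamic if np.std(recent) > intensity_thresh else r_static
        p += q                      # predict
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)
```

During quiet segments the filter tracks the sensor closely (small R); during vigorous motion it leans on the process model, which is the behavior the fixed-parameter filter cannot switch into.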
Sarvi, Majid
2017-01-01
Introduction: Understanding the collective behavior of moving organisms and how interactions between individuals govern their collective motion has triggered a growing number of studies. Similarities have been observed between the scale-free behavioral aspects of various systems (i.e. groups of fish, ants, and mammals). Investigation of such connections between the collective motion of non-human organisms and that of humans, however, has been relatively scarce. The problem demands particular attention in the context of emergency escape motion, for which innovative experimentation with panicking ants has recently been employed as a relatively inexpensive and non-invasive approach. However, little empirical evidence has been provided as to the relevance and reliability of this approach as a model of human behaviour. Methods: This study explores pioneering experiments on emergency escape to tackle this question and to connect two forms of experimental observations that investigate collective movement at the macroscopic level. A large number of experiments with humans and panicking ants are conducted, representing the escape behavior of these systems in crowded spaces. The experiments share similar architectural structures in which two streams of crowd flow merge with one another. Measures such as discharge flow rates and the probability distribution of passage headways are extracted and compared between the two systems. Findings: Our findings displayed an unexpected degree of similarity between the collective patterns that emerged from both observation types, particularly based on aggregate measures. Experiments with ants and humans commonly indicated how significantly the efficiency of motion and the rate of discharge depend on the architectural design of the movement environment. 
Practical applications: Our findings contribute to the accumulation of evidence needed to identify the borders of applicability of experimentation with crowds of non-human entities as models of human collective motion, as well as the level of measurement (i.e. macroscopic or microscopic) and the type of contexts in which reliable inferences can be drawn. This particularly has implications in the context of experiments on evacuation behaviour, for which recruiting human subjects may face ethical restrictions. The findings, at minimum, offer promise as to the potential benefit of piloting such experiments with non-human crowds, thereby forming better-informed hypotheses. PMID:28854221
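The aggregate measures compared in the study are straightforward to compute from exit timestamps; a minimal sketch (hypothetical data, not the study's recordings):

```python
import numpy as np

def discharge_stats(passage_times_s):
    """Passage headways (s) and mean discharge flow rate (persons/s)
    from the times at which individuals pass the exit."""
    t = np.sort(np.asarray(passage_times_s, dtype=float))
    headways = np.diff(t)                    # gaps between successive exits
    flow = (len(t) - 1) / (t[-1] - t[0])     # persons per second
    return headways, flow
```

Comparing the headway distributions (rather than only the mean flow) is what lets the study contrast merging-flow geometries between ant and human crowds.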
Shahhoseini, Zahra; Sarvi, Majid
2017-01-01
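The aggregate measures compared above (discharge flow rates and passage headways) can be computed directly from a list of exit passage timestamps. The sketch below is a minimal illustration with hypothetical data and names, not tied to the study's datasets:

```python
# Illustrative sketch: aggregate escape measures from exit passage
# timestamps (seconds). Data and function names are hypothetical.

def passage_headways(times):
    """Time gaps between consecutive passages through the exit."""
    ts = sorted(times)
    return [b - a for a, b in zip(ts, ts[1:])]

def discharge_flow_rate(times):
    """Mean number of passages per second over the observation window."""
    ts = sorted(times)
    duration = ts[-1] - ts[0]
    return (len(ts) - 1) / duration if duration > 0 else float("nan")

exit_times = [0.0, 0.8, 1.5, 2.9, 3.4, 4.6]   # hypothetical recording
headways = passage_headways(exit_times)
flow = discharge_flow_rate(exit_times)
```

The headway list is what would be histogrammed into the probability distribution compared between the two systems.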
Kesler, Kyle; Dillon, Neal P; Fichera, Loris; Labadie, Robert F
2017-09-01
Objectives Document human motions associated with cochlear implant electrode insertion at different speeds and determine the lower limit of continuous insertion speed by a human. Study Design Observational. Setting Academic medical center. Subjects and Methods Cochlear implant forceps were coupled to a frame containing reflective fiducials, which enabled optical tracking of the forceps' tip position in real time. Otolaryngologists (n = 14) performed mock electrode insertions at different speeds based on recommendations from the literature: "fast" (96 mm/min), "stable" (as slow as possible without stopping), and "slow" (15 mm/min). For each insertion, the following metrics were calculated from the tracked position data: percentage of time at prescribed speed, percentage of time the surgeon stopped moving forward, and number of direction reversals (ie, going from forward to backward motion). Results Fast insertion trials resulted in better adherence to the prescribed speed (45.4% of the overall time), no motion interruptions, and no reversals, as compared with slow insertions (18.6% of time at prescribed speed, 15.7% stopped time, and an average of 18.6 reversals per trial). These differences were statistically significant for all metrics (P < .01). The metrics for the fast and stable insertions were comparable; however, stable insertions were performed 44% slower on average. The mean stable insertion speed was 52 ± 19.3 mm/min. Conclusion Results indicate that continuous insertion of a cochlear implant electrode at 15 mm/min is not feasible for human operators. The lower limit of continuous forward insertion is 52 mm/min on average. Guidelines on manual insertion kinematics should consider this practical limit of human motion.
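The three kinematic metrics above can be derived from sampled tip positions. The sketch below is an illustration under simplifying assumptions (uniform sampling, a hypothetical 25% tolerance band for "at prescribed speed"), not the authors' exact implementation:

```python
# Illustrative sketch of the three insertion metrics, computed from
# uniformly sampled forward tip positions (mm) at sampling rate fs (Hz).
# The 25% tolerance band for "at prescribed speed" is an assumption.

def insertion_metrics(pos_mm, fs, target_mm_per_min, tol=0.25):
    """Return (% time near target speed, % time stopped, reversal count)."""
    target = target_mm_per_min / 60.0                      # mm/s
    vel = [(b - a) * fs for a, b in zip(pos_mm, pos_mm[1:])]
    n = len(vel)
    at_speed = sum(1 for v in vel if abs(v - target) <= tol * target)
    stopped = sum(1 for v in vel if abs(v) < 1e-6)
    reversals = sum(1 for u, v in zip(vel, vel[1:]) if u > 0 > v)
    return 100.0 * at_speed / n, 100.0 * stopped / n, reversals
```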
Gertz, Hanna; Hilger, Maximilian; Hegele, Mathias; Fiehler, Katja
2016-09-01
Previous studies have shown that beliefs about the human origin of a stimulus are capable of modulating the coupling of perception and action. Such beliefs can be based on top-down recognition of the identity of an actor or bottom-up observation of the behavior of the stimulus. Instructed human agency has been shown to lead to superior tracking performance of a moving dot as compared to instructed computer agency, especially when the dot followed a biological velocity profile and thus matched the predicted movement, whereas a violation of instructed human agency by a nonbiological dot motion impaired oculomotor tracking (Zwickel et al., 2012). This suggests that the instructed agency biases the selection of predictive models on the movement trajectory of the dot motion. The aim of the present fMRI study was to examine the neural correlates of top-down and bottom-up modulations of perception-action couplings by manipulating the instructed agency (human action vs. computer-generated action) and the observable behavior of the stimulus (biological vs. nonbiological velocity profile). To this end, participants performed an oculomotor tracking task in an MRI environment. Oculomotor tracking activated areas of the eye movement network. A right-hemisphere occipito-temporal cluster comprising the motion-sensitive area V5 showed a preference for the biological as compared to the nonbiological velocity profile. Importantly, a mismatch between instructed human agency and a nonbiological velocity profile primarily activated medial-frontal areas comprising the frontal pole, the paracingulate gyrus, and the anterior cingulate gyrus, as well as the cerebellum and the supplementary eye field as part of the eye movement network. This mismatch effect was specific to the instructed human agency and did not occur in conditions with a mismatch between instructed computer agency and a biological velocity profile. 
Our results support the hypothesis that humans activate a specific predictive model for biological movements based on their own motor expertise. A violation of this predictive model causes costs as the movement needs to be corrected in accordance with incoming (nonbiological) sensory information. Copyright © 2016 Elsevier Inc. All rights reserved.
Self-motion facilitates echo-acoustic orientation in humans
Wallmeier, Ludwig; Wiegrebe, Lutz
2014-01-01
The ability of blind humans to navigate complex environments through echolocation has received rapidly increasing scientific interest. However, technical limitations have precluded a formal quantification of the interplay between echolocation and self-motion. Here, we use a novel virtual echo-acoustic space technique to formally quantify the influence of self-motion on echo-acoustic orientation. We show that both the vestibular and proprioceptive components of self-motion contribute significantly to successful echo-acoustic orientation in humans: specifically, our results show that vestibular input induced by whole-body self-motion resolves orientation-dependent biases in echo-acoustic cues. Fast head motions, relative to the body, provide additional proprioceptive cues which allow subjects to effectively assess echo-acoustic space referenced against the body orientation. These psychophysical findings clearly demonstrate that human echolocation is well suited to drive precise locomotor adjustments. Our data shed new light on the sensory–motor interactions, and on possible optimization strategies underlying echolocation in humans. PMID:26064556
Motion perception: behavior and neural substrate.
Mather, George
2011-05-01
Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. 
WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. For further resources related to this article, please visit the WIREs website. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
Applications of Phase-Based Motion Processing
NASA Technical Reports Server (NTRS)
Branch, Nicholas A.; Stewart, Eric C.
2018-01-01
Image pyramids provide useful information in determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but it requires the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still produces large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.
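The principle underlying phase-based motion analysis, that a sub-pixel shift appears as a proportional change in the phase of a spatial-frequency component, can be illustrated with a toy 1-D example. This sketches the idea only, not the paper's steerable-pyramid pipeline:

```python
# Toy illustration: a 0.5-sample shift of a sinusoid changes the phase
# of its DFT component in proportion to the displacement. All values
# here are synthetic; this is not the steerable-pyramid method itself.
import cmath, math

def dft_phase(signal, k):
    """Phase of the k-th DFT coefficient of a real 1-D signal."""
    n = len(signal)
    coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(signal))
    return cmath.phase(coeff)

n, k, shift = 64, 3, 0.5                       # 0.5-sample sub-pixel shift
frame0 = [math.cos(2 * math.pi * k * i / n) for i in range(n)]
frame1 = [math.cos(2 * math.pi * k * (i - shift) / n) for i in range(n)]
dphi = dft_phase(frame1, k) - dft_phase(frame0, k)
recovered = -dphi * n / (2 * math.pi * k)      # phase change -> displacement
```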
Learning by Demonstration for Motion Planning of Upper-Limb Exoskeletons
Lauretti, Clemente; Cordella, Francesca; Ciancio, Anna Lisa; Trigili, Emilio; Catalan, Jose Maria; Badesa, Francisco Javier; Crea, Simona; Pagliara, Silvio Marcello; Sterzi, Silvia; Vitiello, Nicola; Garcia Aracil, Nicolas; Zollo, Loredana
2018-01-01
The reference joint position of upper-limb exoskeletons is typically obtained by means of Cartesian motion planners and inverse kinematics algorithms based on the inverse Jacobian; this approach allows exploiting the available Degrees of Freedom (DoFs) of the robot kinematic chain to achieve the desired end-effector pose; however, when used to operate non-redundant exoskeletons, it does not ensure that anthropomorphic criteria are satisfied in the whole human-robot workspace. This paper proposes a motion planning system, based on Learning by Demonstration, for upper-limb exoskeletons that allows patients to be successfully assisted during Activities of Daily Living (ADLs) in unstructured environments, while ensuring that anthropomorphic criteria are satisfied in the whole human-robot workspace. The motion planning system combines Learning by Demonstration with the computation of Dynamic Motion Primitives and machine learning techniques to construct task- and patient-specific joint trajectories based on the learnt trajectories. System validation was carried out in simulation and in a real setting with a 4-DoF upper-limb exoskeleton, a 5-DoF wrist-hand exoskeleton and four patients with Limb Girdle Muscular Dystrophy. Validation aimed to (i) compare the performance of the proposed motion planning with traditional methods and (ii) assess the generalization capabilities of the proposed method with respect to environment variability. Three ADLs were chosen to validate the system: drinking, pouring and lifting a light sphere. The achieved results showed a 100% success rate in task fulfillment, with a high level of generalization with respect to environment variability. Moreover, an anthropomorphic configuration of the exoskeleton is always ensured. PMID:29527161
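A minimal sketch of a discrete Dynamic Movement Primitive of the kind combined with Learning by Demonstration above: a spring-damper system pulled toward the goal plus a forcing term driven by a decaying canonical phase. The gains and the zero forcing term are illustrative placeholders; in the learnt case the forcing term encodes the demonstrated trajectory:

```python
# Sketch of one scalar DMP rollout (semi-implicit Euler integration).
# With forcing() left at zero the state simply converges to the goal;
# gains k, d and phase decay alpha are illustrative, not the paper's.
def rollout_dmp(x0, goal, k=25.0, d=10.0, dt=0.01, steps=500,
                forcing=lambda s: 0.0):
    x, v, s, alpha = x0, 0.0, 1.0, 4.0
    traj = [x]
    for _ in range(steps):
        a = k * (goal - x) - d * v + forcing(s)   # spring-damper + forcing
        v += a * dt
        x += v * dt
        s += -alpha * s * dt                      # canonical phase 1 -> 0
        traj.append(x)
    return traj
```

A task-specific DMP would replace `forcing` with a weighted sum of basis functions fitted to the demonstrated joint trajectory.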
Biofidelic Human Activity Modeling and Simulation with Large Variability
2014-11-25
A systematic approach was developed for biofidelic human activity modeling and simulation by using body scan data and motion capture data to ... replicate a human activity in 3D space. Since technologies for simultaneously capturing human motion and dynamic shapes are not yet ready for practical use, a ... that can replicate a human activity in 3D space with the true shape and true motion of a human. Using this approach, a model library was built to ...
Simulation of Human-induced Vibrations Based on the Characterized In-field Pedestrian Behavior
Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter
2016-01-01
For slender and lightweight structures, vibration serviceability is a matter of growing concern, often constituting the critical design requirement. With designs governed by the dynamic performance under human-induced loads, a strong demand exists for the verification and refinement of currently available load models. The present contribution uses a 3D inertial motion tracking technique for the characterization of in-field pedestrian behavior. The technique is first tested in laboratory experiments with simultaneous registration of the corresponding ground reaction forces. The experiments include walking persons as well as rhythmical human activities such as jumping and bobbing. It is shown that the registered motion allows for the identification of the time-variant pacing rate of the activity. Together with the weight of the person and the application of generalized force models available in the literature, the identified time-variant pacing rate allows the human-induced loads to be characterized. In addition, time synchronization among the wireless motion trackers allows the synchronization rate among the participants to be identified. Subsequently, the technique is used on a real footbridge where both the motion of the persons and the induced structural vibrations are registered. It is shown how the characterized in-field pedestrian behavior can be applied to simulate the induced structural response. It is demonstrated that the in situ identified pacing rate and synchronization rate constitute an essential input for the simulation and verification of the human-induced loads. The main potential applications of the proposed methodology are the estimation of human-structure interaction phenomena and the development of suitable models for the correlation among pedestrians in real traffic conditions. PMID:27167309
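Identifying a pacing rate from registered motion can be sketched as a dominant-frequency search on the body acceleration signal. The example below uses a synthetic 2 Hz "walking" signal and a plain DFT peak search; it illustrates the idea, not the authors' identification procedure:

```python
# Sketch: pacing rate as the dominant non-DC frequency of a sampled
# acceleration signal. Pure-Python DFT on synthetic data (illustrative).
import math

def dominant_frequency(samples, fs):
    """Frequency (Hz) of the largest non-DC DFT magnitude."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n)
                 for i, x in enumerate(samples))
        im = sum(x * math.sin(2 * math.pi * k * i / n)
                 for i, x in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

fs, pacing = 50.0, 2.0                          # 2 steps per second
acc = [math.sin(2 * math.pi * pacing * t / fs) for t in range(200)]
rate = dominant_frequency(acc, fs)              # identified pacing rate
```

A time-variant pacing rate would be obtained by repeating this over short sliding windows.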
Macro-motion detection using ultra-wideband impulse radar.
Xin Li; Dengyu Qiao; Ye Li
2014-01-01
Radar has the advantage of being able to detect hidden individuals, which can be used in homeland security, disaster rescue, and healthcare monitoring applications. Human macro-motion detection using ultra-wideband impulse radar is studied in this paper. First, a frequency-domain analysis is carried out to show that macro-motion yields a bandpass signal in slow-time. Second, the FTFW (fast-time frequency windowing), which has the advantage of avoiding a reduction in measuring range, and the HLF (high-pass linear-phase filter), which can preserve the motion signal effectively, are proposed to preprocess the radar echo. Finally, a threshold decision method based on the energy detector structure is presented.
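The preprocessing-plus-detection chain can be sketched as follows, with a plain first-difference along slow-time standing in for the proposed HLF and a hypothetical threshold. This illustrates the energy-detector structure only, not the paper's implementation:

```python
# Sketch: slow-time high-pass filtering followed by an energy detector.
# frames: list of fast-time echo vectors, one per slow-time instant.
# The first-difference filter and threshold are simplifying assumptions.

def highpass_slow_time(frames):
    """Frame-to-frame difference along slow-time suppresses static clutter."""
    return [[b - a for a, b in zip(f0, f1)]
            for f0, f1 in zip(frames, frames[1:])]

def motion_detected(frames, threshold):
    """Energy-detector decision on the clutter-suppressed echoes."""
    filtered = highpass_slow_time(frames)
    energy = sum(v * v for frame in filtered for v in frame)
    return energy > threshold
```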
Analyzing the Effects of Human-Aware Motion Planning on Close-Proximity Human–Robot Collaboration
Shah, Julie A.
2015-01-01
Objective: The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. Background: The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human–robot interaction. Method: We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. Results: When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. Conclusion: People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human–robot team fluency and human worker satisfaction. Application: Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human–robot collaboration. PMID:25790568
Paths of Movement for Selected Body Segments During Typical Pilot Tasks
1976-03-01
[Contents fragment: Scope; Past Human Motion Investigations; Experimental Techniques in Human ...] ... of literature has been generated during the past few decades in the field of human-motion recording and analysis. However, in most of these studies body ... to meet the COMBIMAN model requirements. Past Human Motion Investigations: The 15th-century artist-scientist Leonardo da Vinci is generally credited ...
Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes
Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide
2017-01-01
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889
Human body contour data based activity recognition.
Myagmarbayar, Nergui; Yuki, Yoshida; Imamoglu, Nevrez; Gonzalez, Jose; Otake, Mihoko; Yu, Wenwei
2013-01-01
This research aims to develop autonomous bio-monitoring mobile robots capable of tracking and measuring patients' motions, recognizing the patients' behavior based on observation data, and calling for medical personnel in emergency situations in the home environment. The robots to be developed will bring cost-effective, safe and easier at-home rehabilitation to most motor-function impaired patients (MIPs). In our previous research, a full framework was established towards this research goal. In this research, we aimed at improving human activity recognition by using contour data of the tracked human subject, extracted from depth images, as the signal source, instead of the lower-limb joint angle data used in the previous research, which are more likely to be affected by the motion of the robot and the human subjects. Several geometric parameters, such as the ratio of height to width of the tracked human subject and the distance (in pixels) between the centroid points of the upper and lower parts of the human body, were calculated from the contour data and used as features for activity recognition. A Hidden Markov Model (HMM) is employed to classify different human activities from the features. Experimental results showed that human activity recognition could be achieved with a high rate of correct classification.
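Contour-derived geometric features of the kind mentioned above can be sketched with illustrative geometry: a bounding-box height-to-width ratio and the distance between the centroids of the upper and lower halves of the contour. The exact definitions here are assumptions, not the authors':

```python
# Sketch of two contour features for HMM-based activity recognition.
# points: (x, y) pixel coordinates sampled on the tracked body contour.
import math

def contour_features(points):
    """Return (height/width ratio, upper-lower centroid distance)."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    ratio = (max(ys) - min(ys)) / (max(xs) - min(xs))
    mid = (max(ys) + min(ys)) / 2.0                 # split into halves
    upper = [p for p in points if p[1] >= mid]
    lower = [p for p in points if p[1] < mid]
    cu = (sum(p[0] for p in upper) / len(upper),
          sum(p[1] for p in upper) / len(upper))
    cl = (sum(p[0] for p in lower) / len(lower),
          sum(p[1] for p in lower) / len(lower))
    return ratio, math.dist(cu, cl)
```

Per-frame feature vectors like these would form the observation sequence fed to the HMM classifier.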
A Single Camera Motion Capture System for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Okada, Ryuzo; Stenger, Björn
This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e. the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when a body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.
NASA Astrophysics Data System (ADS)
Wickenheiser, Adam; Garcia, Ephrahim
2010-04-01
In much of the vibration-based energy harvesting literature, devices are modeled, designed, and tested for dissipating energy across a resistive load at a single base excitation frequency. This paper presents several practical scenarios germane to tracking, sensing, and wireless communication on humans and land vehicles. Measured vibrational data from these platforms are used to provide a time-varying, broadband input to the energy harvesting system. Optimal power considerations are given for several circuit topologies, including a passive rectifier circuit and active, switching methods. Under various size and mass constraints, the optimal design is presented for two scenarios: walking and idling a car. The frequency response functions are given alongside time histories of the power harvested using the experimental base accelerations recorded. The issues involved in designing an energy harvester for practical (i.e. time-varying, non-sinusoidal) applications are discussed.
Do rhesus monkeys (Macaca mulatta) perceive illusory motion?
Agrillo, Christian; Gori, Simone; Beran, Michael J
2015-07-01
During the last decade, visual illusions have been used repeatedly to understand similarities and differences in visual perception of human and non-human animals. However, nearly all studies have focused only on illusions not related to motion perception, and to date, it is unknown whether non-human primates perceive any kind of motion illusion. In the present study, we investigated whether rhesus monkeys (Macaca mulatta) perceived one of the most popular motion illusions in humans, the Rotating Snake illusion (RSI). To this purpose, we set up four experiments. In Experiment 1, subjects initially were trained to discriminate static versus dynamic arrays. Once reaching the learning criterion, they underwent probe trials in which we presented the RSI and a control stimulus identical in overall configuration with the exception that the order of the luminance sequence was changed in a way that no apparent motion is perceived by humans. The overall performance of monkeys indicated that they spontaneously classified RSI as a dynamic array. Subsequently, we tested adult humans in the same task with the aim of directly comparing the performance of human and non-human primates (Experiment 2). In Experiment 3, we found that monkeys can be successfully trained to discriminate between the RSI and a control stimulus. Experiment 4 showed that a simple change in luminance sequence in the two arrays could not explain the performance reported in Experiment 3. These results suggest that some rhesus monkeys display a human-like perception of this motion illusion, raising the possibility that the neurocognitive systems underlying motion perception may be similar between human and non-human primates.
Automated video-based assessment of surgical skills for training and evaluation in medical schools.
Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Ploetz, Thomas; Clements, Mark A; Essa, Irfan
2016-09-01
Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainee in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All of these approaches, however, are still very time-consuming and involve human bias. In this paper, we present an automated system for surgical skills assessment that analyzes video data of surgical activities. We compare different techniques for video-based surgical skill evaluation: techniques that capture motion information at a coarser granularity using symbols or words, that extract motion dynamics using textural patterns in a frame kernel matrix, and that analyze fine-grained motion information using frequency analysis. We were able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective at capturing the skill-relevant information in surgical videos. Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol-/word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity, as demonstrated by our results on two challenging video datasets.
Visual Motion Perception and Visual Attentive Processes.
1988-04-01
88-0551. Visual Motion Perception and Visual Attentive Processes. George Sperling, New York University. Grant AFOSR 85-0364. ... Sperling. HIPS: A Unix-based image processing system. Computer Vision, Graphics, and Image Processing, 1984, 25, 331-347. (HIPS is the Human Information Processing Laboratory's Image Processing System.) 1985: van Santen, Jan P. H., and George Sperling. Elaborated Reichardt detectors. Journal of the Optical ...
DNA Encoding Training Using 3D Gesture Interaction.
Nicola, Stelian; Handrea, Flavia-Laura; Crişan-Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara
2017-01-01
The work described in this paper summarizes the development process and presents the results of a human genetics training application for studying the 20 amino acids encoded by combinations of three DNA nucleotides, targeting mainly medical and bioinformatics students. Existing applications using hand gestures recognized by the Leap Motion sensor are used for controlling molecules, learning the Mendeleev (periodic) table, or visualizing the animated reactions of specific molecules with water. The novelty of the current application consists in creating new gestures for application control with the Leap Motion sensor and a tag-based algorithm corresponding to each amino acid, depending on the position in 3D virtual space of the four DNA nucleotides and their type. The team proposes a 3D application based on the Unity editor and the Leap Motion sensor in which the user has the liberty of forming different combinations of the 20 amino acids. The results confirm that this new way of studying medicine/biochemistry, using the Leap Motion sensor to handle amino acids, is suitable for students. The application is original and interactive, and users can create their own amino acid structures in a 3D-like environment, which they could not do using traditional pen and paper.
Dimensional coordinate measurements: application in characterizing cervical spine motion
NASA Astrophysics Data System (ADS)
Zheng, Weilong; Li, Linan; Wang, Shibin; Wang, Zhiyong; Shi, Nianke; Xue, Yuan
2014-06-01
The cervical spine is a complicated part of the human body, and its movements are diverse. The motion of each vertebral segment is three-dimensional, reflected in changes of the angle between two joints and in displacements along different directions. Under normal conditions, the cervical spine can flex, extend, laterally flex, and rotate. Since there is no relative motion between measurement markers fixed on one segment of a cervical vertebra, a vertebra with three marked points can be treated as a rigid body, and a rigid body's motion in space can be decomposed into a translation and a rotation around a base point. This study concerns the calculation of the three-dimensional coordinates of marked points attached to the skin over the cervical spine by an optical method; these measurements then allow the calculation of motion parameters for every spine segment. We chose a three-dimensional measurement method based on binocular stereo vision: the object with marked points is placed in front of two CCD cameras mounted in parallel, each shot yields two parallax images taken from the different cameras, and three-dimensional measurement is realized according to the principle of binocular vision. This paper describes the layout of the experimental system and the mathematical model used to obtain the coordinates.
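The binocular principle the abstract relies on can be sketched as classic parallel-camera triangulation (illustrative only; the paper's actual mathematical model and calibration are more involved, and the numbers below are invented):

```python
import numpy as np

def triangulate_parallel(xl, xr, y, f, baseline):
    """Recover 3D coordinates of a marked point from a parallel stereo pair.
    xl, xr: horizontal image coordinates in the left/right camera (same
    units as f); y: vertical image coordinate; f: focal length;
    baseline: distance between the two camera centers."""
    disparity = xl - xr
    Z = f * baseline / disparity        # depth from disparity
    X = xl * Z / f                      # back-project to 3D
    Y = y * Z / f
    return np.array([X, Y, Z])

# A point ~2 m away seen by cameras 0.1 m apart with f = 0.01 m.
p = triangulate_parallel(xl=0.002, xr=0.0015, y=0.001, f=0.01, baseline=0.1)
```

With three such reconstructed points per vertebra, the segment's rigid-body translation and rotation follow directly.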
Methodology for estimating human perception to tremors in high-rise buildings
NASA Astrophysics Data System (ADS)
Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien
2017-07-01
Human perception of tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception of tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. Recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level based on a proposed ground motion intensity parameter: the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive tremors at a given ground motion intensity. Furthermore, the models are validated against two recent tremor events reportedly felt in Singapore. The estimated results match reasonably well with reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.
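A minimal sketch of the proposed intensity parameter, assuming a response spectrum sampled at discrete periods (the spectrum below is hypothetical, not from the paper's ground motion set):

```python
import numpy as np

def avg_spectrum_intensity(periods, sa, t_min=0.1, t_max=2.0):
    """Average spectral ordinate over [t_min, t_max]: the trapezoidal
    integral of the spectrum divided by the width of the period range."""
    m = (periods >= t_min) & (periods <= t_max)
    p, s = periods[m], sa[m]
    integral = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(p))
    return integral / (p[-1] - p[0])

periods = np.linspace(0.05, 4.0, 80)   # sampled periods (s)
sa = 1.0 / (1.0 + periods)             # a hypothetical, decaying spectrum
intensity = avg_spectrum_intensity(periods, sa)
```

The regression models in the paper then map such an intensity value to a predicted percentage of occupants perceiving the tremor.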
Review-Research on the physical training model of human body based on HQ.
Junjie, Liu
2016-11-01
Health quotient (HQ) is one of the newest health concepts of the 21st century, while the analysis of human body sports models is not yet mature; the purpose of this paper is to integrate the two subjects, the health quotient and the sports model. The paper first concludes that physical training and education in colleges and universities can improve the health quotient, giving students a healthier body and mind. It then simulates human physical exercise through a new rigid-body sports model, and carries out an in-depth study of the dynamic model of human body movement on the basis of the established matrices and equations. Simulation results for bicycle riding and pole throwing show that human joint movement can be simulated and that the approach has a certain operability. From these simulations we conclude that the simulated motion laws of the ankle, knee, and hip joints are essentially the same as the real motions, which further verifies the accuracy of the motion model and lays a foundation for research on other movement models; the study of such movement models is an important method for studying human health in the future.
Whole-Motion Model of Perception during Forward- and Backward-Facing Centrifuge Runs
Holly, Jan E.; Vrublevskis, Arturs; Carlson, Lindsay E.
2009-01-01
Illusory perceptions of motion and orientation arise during human centrifuge runs without vision. Asymmetries have been found between acceleration and deceleration, and between forward-facing and backward-facing runs. Perceived roll tilt has been studied extensively during upright fixed-carriage centrifuge runs, and other components have been studied to a lesser extent. Certain, but not all, perceptual asymmetries in acceleration-vs-deceleration and forward-vs-backward motion can be explained by existing analyses. The immediate acceleration-deceleration roll-tilt asymmetry can be explained by the three-dimensional physics of the external stimulus; in addition, longer-term data has been modeled in a standard way using physiological time constants. However, the standard modeling approach is shown in the present research to predict forward-vs-backward-facing symmetry in perceived roll tilt, contradicting experimental data, and to predict perceived sideways motion, rather than forward or backward motion, around a curve. The present work develops a different whole-motion-based model taking into account the three-dimensional form of perceived motion and orientation. This model predicts perceived forward or backward motion around a curve, and predicts additional asymmetries such as the forward-backward difference in roll tilt. This model is based upon many of the same principles as the standard model, but includes an additional concept of familiarity of motions as a whole. PMID:19208962
Motion Planning and Synthesis of Human-Like Characters in Constrained Environments
NASA Astrophysics Data System (ADS)
Zhang, Liangjun; Pan, Jia; Manocha, Dinesh
We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems incrementally, and a local path refinement algorithm to compute collision-free paths in tight spaces while satisfying the static stability constraint on the center of mass (CoM). We also present a hybrid algorithm that generates plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.
Detecting persons concealed in a vehicle
Tucker, Jr., Raymond W.
2005-03-29
An improved method for detecting the presence of humans or animals concealed within a vehicle uses a combination of the continuous wavelet transform and a ratio-based energy calculation to determine whether the motion detected by seismic sensors placed on the vehicle is due to a heartbeat within the vehicle or is the result of motion caused by external factors such as the wind. The method performs well in the presence of light to moderate ambient wind, producing far fewer false alarms. The new method significantly improves the range of ambient environmental conditions under which human presence detection systems can reliably operate.
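The ratio-based energy test can be sketched as follows; for brevity, a plain FFT band-energy ratio stands in for the patent's continuous wavelet transform, and the band limits, threshold, and signals are all illustrative:

```python
import numpy as np

def heartbeat_energy_ratio(signal, fs, band=(0.8, 3.0)):
    """Ratio of seismic-signal energy in a heartbeat-like frequency band
    to total energy. (The actual method pairs this kind of ratio test
    with a continuous wavelet transform; an FFT stands in here.)"""
    x = signal - np.mean(signal)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()

fs = 100.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
heartbeat = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # ~72 bpm component
wind = 0.5 * np.sin(2 * np.pi * 0.1 * t)        # slow wind-like sway
noise = 0.05 * rng.standard_normal(t.size)
r_occupied = heartbeat_energy_ratio(heartbeat + noise, fs)
r_empty = heartbeat_energy_ratio(wind + noise, fs)
assert r_occupied > 0.8 > r_empty   # a threshold separates the two cases
```

Wind-driven motion concentrates its energy at very low frequencies, so the ratio stays small when no heartbeat is present.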
NASA Astrophysics Data System (ADS)
Jin, Xiao; Chan, Chung; Mulnix, Tim; Panin, Vladimir; Casey, Michael E.; Liu, Chi; Carson, Richard E.
2013-08-01
Whole-body PET/CT scanners are important clinical and research tools to study tracer distribution throughout the body. In whole-body studies, respiratory motion results in image artifacts. We have previously demonstrated for brain imaging that, when provided with accurate motion data, event-by-event correction has better accuracy than frame-based methods. Therefore, the goal of this work was to develop a list-mode reconstruction with novel physics modeling for the Siemens Biograph mCT with event-by-event motion correction, based on the MOLAR platform (Motion-compensation OSEM List-mode Algorithm for Resolution-Recovery Reconstruction). Application of MOLAR for the mCT required two algorithmic developments. First, in routine studies, the mCT collects list-mode data in 32 bit packets, where averaging of lines-of-response (LORs) by axial span and angular mashing reduced the number of LORs so that 32 bits are sufficient to address all sinogram bins. This degrades spatial resolution. In this work, we proposed a probabilistic LOR (pLOR) position technique that addresses axial and transaxial LOR grouping in 32 bit data. Second, two simplified approaches for 3D time-of-flight (TOF) scatter estimation were developed to accelerate the computationally intensive calculation without compromising accuracy. The proposed list-mode reconstruction algorithm was compared to the manufacturer's point spread function + TOF (PSF+TOF) algorithm. Phantom, animal, and human studies demonstrated that MOLAR with pLOR gives slightly faster contrast recovery than the PSF+TOF algorithm that uses the average 32 bit LOR sinogram positioning. Moving phantom and a whole-body human study suggested that event-by-event motion correction reduces image blurring caused by respiratory motion. 
We conclude that list-mode reconstruction with pLOR positioning provides a platform to generate high quality images for the mCT, and to recover fine structures in whole-body PET scans through event-by-event motion correction.
Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.
Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K
2013-03-01
Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.
Evaluation of human dynamic balance in Grassmann manifold
NASA Astrophysics Data System (ADS)
Michalczuk, Agnieszka; Wereszczyński, Kamil; Mucha, Romualda; Świtoński, Adam; Josiński, Henryk; Wojciechowski, Konrad
2017-07-01
The authors present an application of Grassmann manifold to the evaluation of human dynamic balance based on the time series representing movements of hip, knee and ankle joints in the sagittal, frontal and transverse planes. Time series were extracted from gait sequences which were recorded in the Human Motion Laboratory (HML) of the Polish-Japanese Academy of Information Technology in Bytom, Poland using the Vicon system.
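A standard piece of machinery such a Grassmann-manifold evaluation relies on is the distance between subspaces via principal angles; a minimal sketch (not the authors' full pipeline) is:

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance on the Grassmann manifold between the column
    spans of A and B: the 2-norm of the principal angles, obtained from
    the singular values of Qa^T Qb."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    angles = np.arccos(np.clip(s, -1.0, 1.0))
    return np.linalg.norm(angles)

# Two 2D subspaces of R^3: the xy-plane and the xz-plane share one axis,
# so the principal angles are 0 and pi/2.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
d = grassmann_distance(A, B)
```

In the gait setting, each joint-angle time series (or a window of it) would be summarized by a subspace, and balance would be assessed from distances between such subspaces.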
Su, Hao; Dickstein-Fischer, Laurie; Harrington, Kevin; Fu, Qiushi; Lu, Weina; Huang, Haibo; Cole, Gregory; Fischer, Gregory S
2010-01-01
This paper presents the development of a new prismatic actuation approach and its application to the design of a human-safe humanoid head. To reduce actuator output impedance and mitigate unexpected external shocks, the prismatic actuation method uses cables to drive a piston with a preloaded spring. By leveraging the advantages of parallel manipulators and cable-driven mechanisms, the developed neck has a parallel-manipulator embodiment with two cable-driven limbs embedded with preloaded springs and one passive limb. The eye mechanism is adapted for a low-cost webcam with a succinct "ball-in-socket" structure. Based on human head anatomy and biomimetics, the neck has 3 degrees of freedom (DOF): pan, tilt, and one decoupled roll, while each eye has independent pan and synchronous tilt motion (3-DOF eyes). A Kalman-filter-based face tracking algorithm is implemented for interaction with humans. This neck and eye structure is translatable to other human-safe humanoid robots. The robot's appearance reflects a non-threatening image of a penguin, which can be translated into a possible therapeutic intervention for children with Autism Spectrum Disorders.
Ma, Yingliang; Paterson, Helena M; Pollick, Frank E
2006-02-01
We present the methods that were used in capturing a library of human movements for use in computer-animated displays of human movement. The library is an attempt to systematically tap into and represent the wide range of personal properties, such as identity, gender, and emotion, that are available in a person's movements. The movements of a total of 30 nonprofessional actors (15 of them female) were captured while they performed walking, knocking, lifting, and throwing actions, as well as their combination, in angry, happy, neutral, and sad affective styles. From the raw motion capture data, a library of 4,080 movements was obtained using techniques based on Character Studio (plug-ins for 3D Studio MAX; Autodesk, Inc.), MATLAB (The MathWorks, Inc.), or a combination of the two. For the knocking, lifting, and throwing actions, 10 repetitions of the simple action unit were obtained for each affect, and for the other actions, two longer movement recordings were obtained for each affect. We discuss the potential use of the library for computational and behavioral analyses of movement variability, for human character animation, and for studying how gender, emotion, and identity are encoded and decoded from human movement.
Ding, Yichun; Yang, Jack; Tolle, Charles R; Zhu, Zhengtao
2018-05-09
Flexible and wearable pressure sensor may offer convenient, timely, and portable solutions to human motion detection, yet it is a challenge to develop cost-effective materials for pressure sensor with high compressibility and sensitivity. Herein, a cost-efficient and scalable approach is reported to prepare a highly flexible and compressible conductive sponge for piezoresistive pressure sensor. The conductive sponge, poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS)@melamine sponge (MS), is prepared by one-step dip coating the commercial melamine sponge (MS) in an aqueous dispersion of poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). Due to the interconnected porous structure of MS, the conductive PEDOT:PSS@MS has a high compressibility and a stable piezoresistive response at the compressive strain up to 80%, as well as good reproducibility over 1000 cycles. Thereafter, versatile pressure sensors fabricated using the conductive PEDOT:PSS@MS sponges are attached to the different parts of human body; the capabilities of these devices to detect a variety of human motions including speaking, finger bending, elbow bending, and walking are evaluated. Furthermore, prototype tactile sensory array based on these pressure sensors is demonstrated.
An Integrated Framework for Human-Robot Collaborative Manipulation.
Sheng, Weihua; Thobbi, Anand; Gu, Ye
2015-10-01
This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purpose. The proposed framework is split into two phases: 1) phase I-learning to grasp the table and 2) phase II-learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
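The confidence-driven leader/follower switch can be sketched as below; the function name, commands, and threshold are illustrative, not the paper's exact control law:

```python
def blend_control(reactive_cmd, proactive_cmd, confidence, threshold=0.7):
    """Select between reactive (follower) and proactive (leader) control
    actions based on the motion predictor's confidence in [0, 1].
    All names and the threshold value are illustrative."""
    if confidence >= threshold:
        # Trust the human-motion prediction: act proactively as leader.
        return proactive_cmd
    # Low prediction confidence: fall back to the reactive controller,
    # which simply tries to keep the table horizontal (follower).
    return reactive_cmd

assert blend_control(0.0, 1.0, confidence=0.9) == 1.0  # leader
assert blend_control(0.0, 1.0, confidence=0.3) == 0.0  # follower
```

Because the confidence measure is produced continuously by the predictor, the robot can switch roles autonomously mid-task, as the abstract describes.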
NASA Astrophysics Data System (ADS)
Yagi, Eiichi; Harada, Daisuke; Kobayashi, Masaaki
Power assist systems have lately attracted considerable attention as a means of lifting objects without low back pain. We have been developing power assist systems with pneumatic actuators for the elbow and shoulder to support farm work such as lifting a 30 kg bag of rice. This paper describes the mechanism and control method of this power assist system. A pneumatic rotary actuator supports shoulder motion, and an air cylinder supports elbow motion. In this control method, surface electromyogram (EMG) signals are used as the input to the controller. The human joint support torques are calculated based on the antigravity term of the necessary joint torques, which are estimated from the dynamics of an approximated link model of the human body. The experimental results show the effectiveness of the proposed mechanism and control method of the power assist system.
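The antigravity support term can be illustrated for a single elbow joint; the masses, lengths, and assist ratio below are invented for the sketch (the paper estimates torques from a full human link model driven by EMG):

```python
import numpy as np

def elbow_support_torque(theta, m_load=30.0, m_forearm=1.5,
                         l_forearm=0.25, assist_ratio=0.8, g=9.81):
    """Antigravity term of the elbow torque for a forearm holding a load
    at the hand, scaled by an assist ratio. theta is the forearm angle
    from horizontal (rad). All parameter values are illustrative."""
    # Gravity torque: forearm weight acting at the link midpoint,
    # load weight acting at the hand.
    tau_gravity = g * np.cos(theta) * (m_forearm * l_forearm / 2.0
                                       + m_load * l_forearm)
    return assist_ratio * tau_gravity

tau = elbow_support_torque(theta=0.0)  # horizontal forearm: worst case
```

The actuator would then be commanded to supply this fraction of the gravity torque, leaving the remainder (and all dynamic terms) to the wearer.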
Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka
2014-02-21
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system.
Scavenging energy from the motion of human lower limbs via a piezoelectric energy harvester
NASA Astrophysics Data System (ADS)
Fan, Kangqi; Yu, Bo; Zhu, Yingmin; Liu, Zhaohui; Wang, Liansong
2017-03-01
Scavenging energy from human motion through piezoelectric transduction has been considered as a feasible alternative to batteries for powering portable devices and realizing self-sustained devices. To date, most piezoelectric energy harvesters (PEHs) developed can only collect energy from the uni-directional mechanical vibration. This deficiency severely limits their applicability to human motion energy harvesting because the human motion involves diverse mechanical motions. In this paper, a novel PEH is proposed to harvest energy from the motion of human lower limbs. This PEH is composed of two piezoelectric cantilever beams, a sleeve and a ferromagnetic ball. The two beams are designed to sense the vibration along the tibial axis and conduct piezoelectric conversion. The ball senses the leg swing and actuates the two beams to vibrate via magnetic coupling. Theoretical and experimental studies indicate that the proposed PEH can scavenge energy from both the vibration and the swing. During each stride, the PEH can produce multiple peaks in voltage output, which is attributed to the superposition of different excitations. Moreover, the root-mean-square (RMS) voltage output of the PEH increases when the walking speed ranges from 2 to 8 km/h. In addition, the ultra-low frequencies of human motion are also up-converted by the proposed design.
Highly Stretchable Multifunctional Wearable Devices Based on Conductive Cotton and Wool Fabrics.
Souri, Hamid; Bhattacharyya, Debes
2018-06-05
The demand for stretchable, flexible, and wearable multifunctional devices based on conductive nanomaterials is rapidly increasing considering their interesting applications including human motion detection, robotics, and human-machine interface. There still exists a great challenge to manufacture stretchable, flexible, and wearable devices through a scalable and cost-effective fabrication method. Herein, we report a simple method for the mass production of electrically conductive textiles, made of cotton and wool, by hybridization of graphene nanoplatelets and carbon black particles. Conductive textiles incorporated into a highly elastic elastomer are utilized as highly stretchable and wearable strain sensors and heaters. The electromechanical characterizations of our multifunctional devices establish their excellent performance as wearable strain sensors to monitor various human motions, such as finger, wrist, and knee joint movements, and to recognize sound with high durability. Furthermore, the electrothermal behavior of our devices shows their potential application as stretchable and wearable heaters working at a maximum temperature of 103 °C powered with 20 V.
Liarokapis, Minas V; Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J; Manolakos, Elias S
2013-09-01
A learning scheme based on random forests is used to discriminate between different reach-to-grasp movements in 3-D space, based on the myoelectric activity of human muscles of the upper arm and the forearm. Task specificity for motion decoding is introduced at two levels: the subspace to move toward and the object to be grasped. The discrimination between the different reach-to-grasp strategies is accomplished with machine learning techniques for classification. The classification decision is then used to trigger an EMG-based, task-specific motion decoding model. Task-specific models outperform "general" models, providing better estimation accuracy. Thus, the proposed scheme takes advantage of a framework incorporating both a classifier and a regressor that cooperate advantageously in order to split the task space. The proposed learning scheme can easily be used in a range of EMG-based interfaces that must operate in real time, providing data-driven capabilities for the multiclass problems that occur in complex everyday environments.
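The classifier-then-regressor split can be sketched on synthetic data; here a nearest-centroid classifier and least-squares decoders stand in for the paper's random forests and EMG-based decoding models, and all data is invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for EMG features: two "tasks" whose feature-to-motion
# mappings differ (real features would come from muscle electrodes).
n, d = 200, 8
W = {0: rng.standard_normal((d, 2)), 1: rng.standard_normal((d, 2))}
X = {c: rng.standard_normal((n, d)) + 3 * c for c in (0, 1)}
Y = {c: X[c] @ W[c] for c in (0, 1)}

# Train: class centroids for classification, one least-squares motion
# decoder per task (the "task-specific model").
centroids = {c: X[c].mean(axis=0) for c in (0, 1)}
decoders = {c: np.linalg.lstsq(X[c], Y[c], rcond=None)[0] for c in (0, 1)}

def decode(x):
    """Classify the task first, then apply that task's motion decoder."""
    c = min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))
    return x @ decoders[c]

x_test = rng.standard_normal(d) + 3      # a sample drawn near task 1
err = np.linalg.norm(decode(x_test) - x_test @ W[1])
assert err < 1e-6                        # correct model was selected
```

A single "general" decoder fit to the pooled data would blur the two mappings together, which is the accuracy gap the abstract reports.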
A 4DCT imaging-based breathing lung model with relative hysteresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. - Highlights: • We developed a breathing human lung CFD model based on 4D-dynamic CT images. • The 4DCT-based breathing lung model is able to capture lung relative hysteresis. • A new boundary condition for lung model based on one static CT image was proposed. • The difference between lung models based on 4D and static CT images was quantified.
New Exoskeleton Arm Concept Design And Actuation For Haptic Interaction With Virtual Objects
NASA Astrophysics Data System (ADS)
Chakarov, D.; Veneva, I.; Tsveov, M.; Tiankov, T.
2014-12-01
This paper presents the conceptual design and actuation of a new upper-limb exoskeleton. The device is designed for applications in which both motion tracking and force feedback are required, such as human interaction with virtual environments or rehabilitation tasks. A mechanical structure kinematically equivalent to the structure of the human arm is chosen. An actuation system is selected based on braided pneumatic muscle actuators. An antagonistic drive system for each joint is shown, using pulley and cable transmissions. Force/displacement diagrams are presented for two antagonistically acting muscles. Kinematic and dynamic estimations are performed for the combined exoskeleton and upper-limb system. The selected parameters ensure joint torque regulation in the antagonistic scheme and cover the human arm's range of motion.
NASA Astrophysics Data System (ADS)
Jiao, Jieqing; Salinas, Cristian A.; Searle, Graham E.; Gunn, Roger N.; Schnabel, Julia A.
2012-02-01
Dynamic Positron Emission Tomography is a powerful tool for quantitative imaging of in vivo biological processes. The long scan durations necessitate motion correction, to maintain the validity of the dynamic measurements, which can be particularly challenging due to the low signal-to-noise ratio (SNR) and spatial resolution, as well as the complex tracer behaviour in the dynamic PET data. In this paper we develop a novel automated expectation-maximisation image registration framework that incorporates temporal tracer kinetic information to correct for inter-frame subject motion during dynamic PET scans. We employ the Zubal human brain phantom to simulate dynamic PET data using SORTEO (a Monte Carlo-based simulator), in order to validate the proposed method for its ability to recover imposed rigid motion. We have conducted a range of simulations using different noise levels, and corrupted the data with a range of rigid motion artefacts. The performance of our motion correction method is compared with pairwise registration using normalised mutual information as a voxel similarity measure (an approach conventionally used to correct for dynamic PET inter-frame motion based solely on intensity information). To quantify registration accuracy, we calculate the target registration error across the images. The results show that our new dynamic image registration method based on tracer kinetics yields better realignment of the simulated datasets, halving the target registration error when compared to the conventional method at small motion levels, as well as yielding smaller residuals in translation and rotation parameters. We also show that our new method is less affected by the low signal in the first few frames, which the conventional method based on normalised mutual information fails to realign.
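Target registration error, the accuracy measure used in this evaluation, can be sketched as the mean landmark displacement between the true (imposed) and estimated rigid transforms; the landmark coordinates and translations below are invented:

```python
import numpy as np

def target_registration_error(landmarks, true_transform, est_transform):
    """Mean distance between landmarks mapped by the true motion and by
    the estimated (recovered) motion; both transforms are 4x4 rigid
    homogeneous matrices, landmarks an (N, 3) array."""
    pts = np.hstack([landmarks, np.ones((len(landmarks), 1))])
    diff = pts @ true_transform.T - pts @ est_transform.T
    return np.linalg.norm(diff[:, :3], axis=1).mean()

def rigid(tx=0.0, ty=0.0, tz=0.0):
    """Pure-translation rigid transform (rotation omitted for brevity)."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
# True motion: 2 mm shift in x; the registration recovers only 1.5 mm.
tre = target_registration_error(pts, rigid(tx=2.0), rigid(tx=1.5))
```

Averaging this quantity over landmarks spread across the image volume is what lets the paper report that kinetics-aware registration roughly halves the error at small motion levels.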
2012-12-01
autonomy helped to maximize a Mars day journey, because humans could only plan the first portion of the journey based on images sent from the rover...safe trajectory based on its sensors [1]. The distance between Mars and Earth ranges from 100-200 million miles [1] and at this distance, the time...This feature worked for the pre- planned maneuvers, which were planned by humans the day before based on available sensory and visual inputs. Once the
Adaptive Animation of Human Motion for E-Learning Applications
ERIC Educational Resources Information Center
Li, Frederick W. B.; Lau, Rynson W. H.; Komura, Taku; Wang, Meng; Siu, Becky
2007-01-01
Human motion animation has been one of the major research topics in the field of computer graphics for decades. Techniques developed in this area help present human motions in various applications. This is crucial for enhancing the realism as well as promoting the user interest in the applications. To carry this merit to e-learning applications,…
Integrating a Motion Base into a CAVE Automatic Virtual Environment: Phase 1
2001-07-01
this, a CAVE system must perform well in the following motion-related areas: visual gaze stability, simulator sickness, realism (or face validity...and performance validity. Visual Gaze Stability Visual gaze stability, the ability to maintain eye fixation on a particular target, depends upon human...reflexes such as the vestibulo-ocular reflex (VOR) and the optokinetic nystagmus (OKN). VOR is a reflex that counter-rotates the eye relative to the
Motion sickness: a negative reinforcement model.
Bowins, Brad
2010-01-15
Theories pertaining to the "why" of motion sickness are in short supply relative to those detailing the "how." Considering the profoundly disturbing and dysfunctional symptoms of motion sickness, it is difficult to conceive of why this condition is so strongly biologically based in humans and most other mammalian and primate species. It is posited that motion sickness evolved as a potent negative reinforcement system designed to terminate motion involving sensory conflict or postural instability. During our evolution and that of many other species, motion of this type would have impaired evolutionary fitness via injury and/or signaling weakness and vulnerability to predators. The symptoms of motion sickness strongly motivate the individual to terminate the offending motion by early avoidance, cessation of movement, or removal of oneself from the source. The motion sickness negative reinforcement mechanism functions much like pain to strongly motivate evolutionary fitness preserving behavior. Alternative why theories focusing on the elimination of neurotoxins and the discouragement of motion programs yielding vestibular conflict suffer from several problems, foremost that neither can account for the rarity of motion sickness in infants and toddlers. The negative reinforcement model proposed here readily accounts for the absence of motion sickness in infants and toddlers, in that providing strong motivation to terminate aberrant motion does not make sense until a child is old enough to act on this motivation.
Posture-based processing in visual short-term memory for actions.
Vicary, Staci A; Stevens, Catherine J
2014-01-01
Visual perception of human action involves both form and motion processing, which may rely on partially dissociable neural networks. If form and motion are dissociable during visual perception, then they may also be dissociable during their retention in visual short-term memory (VSTM). To elicit form-plus-motion and form-only processing of dance-like actions, individual action frames can be presented in the correct or incorrect order. The former appears coherent and should elicit action perception, engaging both form and motion pathways, whereas the latter appears incoherent and should elicit posture perception, engaging form pathways alone. It was hypothesized that, if form and motion are dissociable in VSTM, then recognition of static body posture should be better after viewing incoherent than after viewing coherent actions. However, as VSTM is capacity limited, posture-based encoding of actions may be ineffective with increased number of items or frames. Using a behavioural change detection task, recognition of a single test posture was significantly more likely after studying incoherent than after studying coherent stimuli. However, this effect only occurred for spans of two (but not three) items and for stimuli with five (but not nine) frames. As in perception, posture and motion are dissociable in VSTM.
Interactions between motion and form processing in the human visual system.
Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara
2013-01-01
The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.
Hand interception of occluded motion in humans: a test of model-based vs. on-line control
Zago, Myrka; Lacquaniti, Francesco
2015-01-01
Two control schemes have been hypothesized for the manual interception of fast visual targets. In the model-free on-line control, extrapolation of target motion is based on continuous visual information, without resorting to physical models. In the model-based control, instead, a prior model of target motion predicts the future spatiotemporal trajectory. To distinguish between the two hypotheses in the case of projectile motion, we asked participants to hit a ball that rolled down an incline at 0.2 g and then fell in air at 1 g along a parabola. By varying starting position, ball velocity and trajectory differed between trials. Motion on the incline was always visible, whereas parabolic motion was either visible or occluded. We found that participants were equally successful at hitting the falling ball in both visible and occluded conditions. Moreover, in different trials the intersection points were distributed along the parabolic trajectories of the ball, indicating that subjects were able to extrapolate an extended segment of the target trajectory. Remarkably, this trend was observed even at the very first repetition of movements. These results are consistent with the hypothesis of model-based control, but not with on-line control. Indeed, ball path and speed during the occlusion could not be extrapolated solely from the kinematic information obtained during the preceding visible phase. The only way to extrapolate ball motion correctly during the occlusion was to assume that the ball would fall under gravity and air drag when hidden from view. Such an assumption had to be derived from prior experience. PMID:26133803
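The abstract above does not give the authors' implementation; as an illustration of the model-based hypothesis, the following minimal Python sketch (all names and the drag coefficient are hypothetical assumptions) extrapolates occluded ball motion under an internal model of gravity plus quadratic air drag, using simple Euler integration:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def extrapolate(p0, v0, dt, t_occluded, k_drag=0.05):
    """Extrapolate ball motion during visual occlusion under a prior
    internal model of gravity plus quadratic air drag (Euler steps).
    k_drag is an assumed drag coefficient per unit mass."""
    x, y = p0
    vx, vy = v0
    t = 0.0
    while t < t_occluded:
        speed = math.hypot(vx, vy)
        ax = -k_drag * speed * vx          # drag opposes motion
        ay = -G - k_drag * speed * vy      # gravity plus drag
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return (x, y), (vx, vy)
```

The point of the experiment is that such a forward model, seeded only with the state at occlusion onset and prior knowledge of gravity and drag, predicts the hidden parabolic segment, whereas pure on-line extrapolation of the last visible kinematics cannot.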
Strategies of Healthy Adults Walking on a Laterally Oscillating Treadmill
NASA Technical Reports Server (NTRS)
Brady, Rachel A.; Peters, Brian T.; Bloomberg, Jacob J.
2008-01-01
We mounted a treadmill on top of a six degree-of-freedom motion base platform to investigate locomotor responses produced by healthy adults introduced to a dynamic walking surface. The experiment examined self-selected strategies employed by participants when exposed to continuous, sinusoidal lateral motion of the support surface while walking. Torso translation and step width were used to classify responses used to stabilize gait in a novel, dynamic environment. Two response categories emerged. Participants tended to either fix themselves in space (FIS), allowing the treadbelt to move laterally beneath them, or they fixed themselves to the base (FTB), moving laterally as the motion base oscillated. The degree of fixation in both extremes varied across participants. This finding suggests that normal adults have innate and varied preferences for reacquiring gait stability, some depending more heavily on vision (FIS group) and others on proprioception (FTB group). Keywords: Human locomotion, Unstable surface, Treadmill, Adaptation, Stability
Data-driven approach to human motion modeling with Lua and gesture description language
NASA Astrophysics Data System (ADS)
Hachaj, Tomasz; Koptyra, Katarzyna; Ogiela, Marek R.
2017-03-01
The aim of this paper is to present a novel human motion modelling and recognition approach that enables real-time MoCap signal evaluation. By motion (action) recognition we mean classification. The goal of this approach is to propose a syntactic description procedure that can be easily understood, learnt, and used in various motion modelling and recognition tasks in any MoCap system, whether vision based or wearable-sensor based. To do so, we have prepared an extension of the Gesture Description Language (GDL) methodology that enables movement description and real-time recognition using not only the positional coordinates of body joints but virtually any type of discretely measured MoCap output signal, such as accelerometer, magnetometer, or gyroscope data. We have also prepared and evaluated a cross-platform implementation of this approach using the Lua scripting language and JAVA technology, called Data-Driven GDL (DD-GDL). In the tested scenarios the average execution speed is above 100 frames per second, which matches the acquisition rate of many popular MoCap solutions.
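The GDL scripts themselves are not reproduced in the abstract; a minimal Python sketch of the underlying idea, with hypothetical names, is a sequential rule that fires when an ordered list of key-pose predicates is satisfied by the frame stream within a bounded gap:

```python
def recognize(frames, key_poses, max_gap=10):
    """Scan a MoCap frame stream for an ordered sequence of key poses
    (a GDL-style sequential rule). Each key pose is a predicate
    frame -> bool; the rule resets if too many frames pass between
    consecutive key poses."""
    i = 0    # index of the next key pose to match
    gap = 0  # frames elapsed since the last matched key pose
    for frame in frames:
        if key_poses[i](frame):
            i += 1
            gap = 0
            if i == len(key_poses):
                return True
        else:
            gap += 1
            if i > 0 and gap > max_gap:
                i, gap = 0, 0  # sequence broken: start over
    return False
```

Because the predicates only see one frame dictionary at a time, the same rule works whether the frame comes from a vision-based skeleton tracker or from wearable-sensor channels, which is the portability the abstract claims.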
The validation of a human force model to predict dynamic forces resulting from multi-joint motions
NASA Technical Reports Server (NTRS)
Pandya, Abhilash K.; Maida, James C.; Aldridge, Ann M.; Hasson, Scott M.; Woolford, Barbara J.
1992-01-01
This work examines the development and validation of a dynamic strength model for humans, based on empirical data. The shoulder, elbow, and wrist joints were characterized in terms of maximum isolated torque as a function of position and velocity in all rotational planes. These data were reduced by a least-squares regression technique into a table of single-variable second-degree polynomial equations determining torque as a function of position and velocity. The isolated-joint torque equations were then used to compute forces resulting from a composite motion, in this case a ratchet wrench push-and-pull operation. A comparison of the model's predictions with the values measured for the composite motion indicates that forces derived from a composite motion of joints (ratcheting) can be predicted from isolated-joint measures. Calculated T values comparing model versus measured values for 14 subjects were well within statistically acceptable limits, and regression analysis revealed coefficients of variation between actual and measured values of 0.72 to 0.80.
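The regression step described above, fitting a single-variable second-degree polynomial by least squares, can be sketched in a few lines of Python (names hypothetical; the paper's actual reduction pipeline is not given in the abstract). The normal equations for y = c0 + c1*x + c2*x^2 are built from power sums and solved by Gaussian elimination:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x**2 via the 3x3
    normal equations, solved by Gaussian elimination with pivoting."""
    S = [sum(x ** k for x in xs) for k in range(5)]          # power sums
    A = [[S[i + j] for j in range(3)] for i in range(3)]     # normal matrix
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                      # back substitution
        coef[r] = (b[r] - sum(A[r][k] * coef[k]
                              for k in range(r + 1, 3))) / A[r][r]
    return coef
```

One such fit per joint, plane, and independent variable (position or velocity) yields the lookup table of torque equations the abstract describes.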
Anand, Sulekha; Bridgeman, Bruce
2002-02-01
Perception of image displacement is suppressed during saccadic eye movements. We probed the source of saccadic suppression of displacement by testing whether it selectively affects chromatic- or luminance-based motion information. Human subjects viewed a stimulus in which chromatic and luminance cues provided conflicting information about displacement direction. Apparent motion occurred during either fixation or a 19.5 degree saccade. Subjects detected motion and discriminated displacement direction in each trial. They reported motion in over 90% of fixation trials and over 70% of saccade trials. During fixation, the probability of perceiving the direction carried by chromatic cues decreased as luminance contrast increased. During saccades, subjects tended to perceive the direction indicated by luminance cues when luminance contrast was high. However, when luminance contrast was low, subjects showed no preference for the chromatic- or luminance-based direction. Thus magnocellular channels are suppressed, while stimulation of parvocellular channels is below threshold, so that neither channel drives motion perception during saccades. These results confirm that magnocellular inhibition is the source of saccadic suppression.
Flight Simulator and Training Human Factors Validation
NASA Technical Reports Server (NTRS)
Glaser, Scott T.; Leland, Richard
2009-01-01
Loss of control has been identified as the leading cause of aircraft accidents in recent years. Efforts have been made to better equip pilots to deal with these types of events, commonly referred to as upsets. A major challenge in these endeavors has been recreating the motion environments found in flight, as the majority of upsets take place well beyond the normal operating envelope of large aircraft. The Environmental Tectonics Corporation has developed a simulator motion base, called GYROLAB, that is capable of recreating the sustained accelerations, or G-forces, and motions of flight. A two-part research study was conducted that coupled NASA's Generic Transport Model with a GYROLAB device. The goal of the study was to characterize physiological effects of the upset environment and to demonstrate that a sustained-motion simulator can be an effective means for upset recovery training. Two groups of 25 Air Transport Pilots participated in the study. The results showed reliable signs of pilot arousal at specific stages of similar upsets. Further validation also demonstrated that sustained motion technology was successful in improving pilot performance during recovery following an extensive training program using GYROLAB technology.
Zanotti-Fregonara, Paolo; Liow, Jeih-San; Comtat, Claude; Zoghbi, Sami S; Zhang, Yi; Pike, Victor W; Fujita, Masahiro; Innis, Robert B
2012-09-01
Image-derived input function (IDIF) from carotid arteries is an elegant alternative to full arterial blood sampling for brain PET studies. However, a recent study using blood-free IDIFs found that this method is particularly vulnerable to patient motion. The present study used both simulated and clinical [11C](R)-rolipram data to assess the robustness of a blood-based IDIF method (a method that is ultimately normalized with blood samples) with regard to motion artifacts. The impact of motion on the accuracy of IDIF was first assessed with an analytical simulation of a high-resolution research tomograph using a numerical phantom of the human brain, equipped with internal carotids. Different degrees of translational (from 1 to 20 mm) and rotational (from 1 to 15°) motions were tested. The impact of motion was then tested on the high-resolution research tomograph dynamic scans of three healthy volunteers, reconstructed with and without an online motion correction system. IDIFs and Logan-distribution volume (VT) values derived from simulated and clinical scans with motion were compared with those obtained from the scans with motion correction. In the phantom scans, the difference in the area under the curve (AUC) for the carotid time-activity curves was up to 19% for rotations and up to 66% for translations compared with the motionless simulation. However, for the final IDIFs, which were fitted to blood samples, the AUC difference was 11% for rotations and 8% for translations. Logan-VT errors were always less than 10%, except for the maximum translation of 20 mm, in which the error was 18%. Errors in the clinical scans without motion correction appeared to be minor, with differences in AUC and Logan-VT always less than 10% compared with scans with motion correction. When a blood-based IDIF method is used for neurological PET studies, the motion of the patient affects IDIF estimation and kinetic modeling only minimally.
NASA Astrophysics Data System (ADS)
Lee, Taek-Soo; Frey, Eric C.; Tsui, Benjamin M. W.
2015-04-01
This paper presents two 4D mathematical observer models for the detection of motion defects in 4D gated medical images. Their performance was compared with results from human observers in detecting a regional motion abnormality in simulated 4D gated myocardial perfusion (MP) SPECT images. The first 4D mathematical observer model extends the conventional channelized Hotelling observer (CHO) based on a set of 2D spatial channels and the second is a proposed model that uses a set of 4D space-time channels. Simulated projection data were generated using the 4D NURBS-based cardiac-torso (NCAT) phantom with 16 gates/cardiac cycle. The activity distribution modelled uptake of 99mTc MIBI with normal perfusion and a regional wall motion defect. An analytical projector was used in the simulation and the filtered backprojection (FBP) algorithm was used in image reconstruction followed by spatial and temporal low-pass filtering with various cut-off frequencies. Then, we extracted 2D image slices from each time frame and reorganized them into a set of cine images. For the first model, we applied 2D spatial channels to the cine images and generated a set of feature vectors that were stacked for the images from different slices of the heart. The process was repeated for each of the 1,024 noise realizations, and CHO and receiver operating characteristics (ROC) analysis methodologies were applied to the ensemble of the feature vectors to compute areas under the ROC curves (AUCs). For the second model, a set of 4D space-time channels was developed and applied to the sets of cine images to produce space-time feature vectors to which the CHO methodology was applied. The AUC values of the second model showed better agreement (Spearman’s rank correlation (SRC) coefficient = 0.8) to human observer results than those from the first model (SRC coefficient = 0.4). 
The agreement with human observers indicates the proposed 4D mathematical observer model provides a good predictor of the performance of human observers in detecting regional motion defects in 4D gated MP SPECT images. The result supports the use of the observer model in the optimization and evaluation of 4D image reconstruction and compensation methods for improving the detection of motion abnormalities in 4D gated MP SPECT images.
Sugita, Norihiro; Yoshizawa, Makoto; Abe, Makoto; Tanaka, Akira; Watanabe, Takashi; Chiba, Shigeru; Yambe, Tomoyuki; Nitta, Shin-ichi
2007-09-28
Computer graphics and virtual reality techniques are useful for developing automatic and effective rehabilitation systems. However, virtual environments that include unstable visual images presented on a wide-field screen or a head-mounted display tend to induce motion sickness. Motion sickness induced while using a rehabilitation system not only inhibits effective training but may also harm patients' health. Few studies have objectively evaluated the effects of repeated exposure to these stimuli on humans. The purpose of this study is to investigate adaptation to visually induced motion sickness using physiological data. An experiment was carried out in which the same video image was presented to human subjects three times. We evaluated changes in the intensity of the motion sickness they experienced using a subjective score and the physiological index rho(max), defined as the maximum cross-correlation coefficient between heart rate and pulse wave transmission time, which is considered to reflect autonomic nervous activity. The results showed adaptation to visually induced motion sickness under repeated presentation of the same image in both the subjective and objective indices. However, there were some subjects whose intensity of sickness increased. It was also possible to identify the parts of the video image related to motion sickness by analyzing changes in rho(max) over time. The physiological index rho(max) should be a good index for assessing the adaptation process to visually induced motion sickness and may be useful for checking the safety of rehabilitation systems that use new image technologies.
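The index rho(max) is defined only verbally in the abstract; a plain-Python sketch of one plausible computation (function names and the lag range are assumptions) takes the maximum Pearson correlation between the two physiological series over a set of candidate lags:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def rho_max(hr, pwtt, max_lag=10):
    """Maximum cross-correlation coefficient between a heart-rate
    series and a pulse-wave-transmission-time series over lags in
    [-max_lag, max_lag] samples."""
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = hr[lag:], pwtt[:len(pwtt) - lag]
        else:
            a, b = hr[:len(hr) + lag], pwtt[-lag:]
        if len(a) > 2:
            best = max(best, pearson(a, b))
    return best
```

Tracking this scalar over successive exposures is what lets the study quantify adaptation objectively alongside the subjective sickness score.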
Energy harvesting from dancing: for broadening in participation in STEM fields
NASA Astrophysics Data System (ADS)
Hamidi, Armita; Tadesse, Yonas
2016-04-01
Energy harvesting from structural vibration, human motion, or environmental sources has been a focus of researchers over the past few decades. This paper proposes a novel design suitable for harvesting energy from human motions such as dancing or physical exercise, and uses the device to engage young students in Science, Technology, Engineering and Math (STEM) fields and outreach activities. The energy harvester (EH) was designed for the dominant human operational frequency range of 1-5 Hz and is wearable. We propose to couple different genres of music with energy harvesting technologies for motivation and energy generation. Students will learn science and art together, since energy harvesting requires understanding basic physical phenomena, while the art enables the physical movements that impart the largest motion transfer to the EH device; the two systems are thus coupled to each other. Young people follow music updates more than robotics or energy harvesting research, and the most popular videos on YouTube and VEVO are viewed more than 100 million times. Integrating energy harvesting research with music or physical exercise might therefore enhance students' engagement in science, and this needs investigation. A multimodal energy harvester consisting of piezoelectric and electromagnetic subsystems, wearable on the leg, is proposed in this study. Three piezoelectric cantilever beams with permanent magnets at their ends are connected to a base through a slip ring. Stationary electromagnetic coils are installed in the base and connected in series. Whenever the device is driven by an oscillation parallel to the base, the unbalanced rotor rotates, generating energy across the stationary coils in the base. If the device is instead driven by an oscillation perpendicular to the base, stress is induced within the cantilever beams, generating energy across the piezoelectric materials.
A Miniature Electromechanical Generator Design Utilizing Human Motion
2010-09-01
Master's thesis by Nicholas G. Hoffman, September 2010 (thesis co-advisors: Alexander L. Julian…). Excerpt: "In the previous chapter, it was mentioned that the EMF induced from the generator was related to a time-changing magnetic…"
Computerized method to compensate for breathing body motion in dynamic chest radiographs
NASA Astrophysics Data System (ADS)
Matsuda, H.; Tanaka, R.; Sanada, S.
2017-03-01
Dynamic chest radiography combined with computer analysis allows quantitative analysis of pulmonary function and rib motion. The accuracy of the kinematic analysis is directly linked to diagnostic accuracy, so body motion compensation is a major concern. Our purpose in this study was to develop a computerized method to reduce breathing body motion in dynamic chest radiographs. Dynamic chest radiographs of 56 patients were obtained using a dynamic flat-panel detector. The images were divided into 1 cm squares, and the squares on the body contour were used to detect body motion. A velocity vector was measured on the body contour using a cross-correlation method, and the body motion was then determined from the summation of the motion vectors. The body motion was compensated by shifting the images according to the measured vector. Using our method, body motion was accurately detected to within a few pixels in clinical cases, with a mean of 82.5% in the right and left directions. In addition, our method detected slight body motion that could not be identified by human observation. We confirmed that our method worked effectively in kinematic analysis of rib motion. The present method should be useful for reducing breathing body motion in dynamic chest radiography.
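The abstract's detect-then-shift pipeline can be illustrated with a one-dimensional Python sketch (the real method operates on 1 cm image squares; these function names and the toy profiles are assumptions): estimate the displacement that maximizes cross-correlation against a reference profile, then shift the frame back by that amount:

```python
def detect_shift(ref, cur, max_shift=5):
    """Estimate the integer displacement of `cur` relative to `ref`
    by maximising the cross-correlation over candidate shifts
    (1-D analogue of block matching on body-contour squares)."""
    def score(s):
        return sum(ref[i] * cur[i + s] for i in range(len(ref))
                   if 0 <= i + s < len(cur))
    return max(range(-max_shift, max_shift + 1), key=score)

def compensate(frame, shift):
    """Undo the detected displacement by shifting the profile back,
    zero-padding the vacated samples."""
    if shift > 0:
        return frame[shift:] + [0] * shift
    if shift < 0:
        return [0] * (-shift) + frame[:len(frame) + shift]
    return frame
```

In two dimensions the same idea runs per square on the body contour, and the per-square vectors are summed into the single compensation vector the abstract describes.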
Event Recognition for Contactless Activity Monitoring Using Phase-Modulated Continuous Wave Radar.
Forouzanfar, Mohamad; Mabrouk, Mohamed; Rajan, Sreeraman; Bolic, Miodrag; Dajani, Hilmi R; Groza, Voicu Z
2017-02-01
The use of remote sensing technologies such as radar is gaining popularity as a technique for contactless detection of physiological signals and analysis of human motion. This paper presents a methodology for classifying different events in a collection of phase-modulated continuous wave radar returns. The primary application of interest is to monitor inmates, where the presence of human vital signs amidst different interferences needs to be identified. A comprehensive set of features is derived through time and frequency domain analyses of the radar returns. The Bhattacharyya distance is used to preselect the features with highest class separability as the candidate features for use in the classification process. Uncorrelated linear discriminant analysis is performed to decorrelate, denoise, and reduce the dimension of the candidate feature set. Linear and quadratic Bayesian classifiers are designed to distinguish breathing, different human motions, and nonhuman motions. The performance of these classifiers is evaluated on a pilot dataset of radar returns containing different events, including breathing, stopped breathing, simple human motions, and movement of a fan and water. Our proposed pattern classification system achieved accuracies of up to 93% in stationary subject detection, 90% in stopped-breathing detection, and 86% in interference detection, accurately distinguishing the predefined events amidst interferences. Besides inmate monitoring and suicide attempt detection, this approach can be extended to other radar applications such as home-based monitoring of elderly people, apnea detection, and home occupancy detection.
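The Bhattacharyya preselection step has a simple closed form when each feature is modelled per class as a univariate Gaussian; a Python sketch (names hypothetical, Gaussian assumption as stated) ranks features by the resulting distance:

```python
import math

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Bhattacharyya distance between univariate Gaussians
    N(m1, v1) and N(m2, v2), where v denotes variance."""
    return (0.25 * math.log(0.25 * (v1 / v2 + v2 / v1 + 2))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

def rank_features(class_a, class_b):
    """Rank feature columns by class separability, most separable
    first. class_a / class_b are lists of samples, each sample a
    list of feature values."""
    def stats(col):
        m = sum(col) / len(col)
        return m, sum((x - m) ** 2 for x in col) / len(col)
    scores = []
    for j in range(len(class_a[0])):
        ma, va = stats([row[j] for row in class_a])
        mb, vb = stats([row[j] for row in class_b])
        scores.append((bhattacharyya_gauss(ma, va, mb, vb), j))
    return [j for _, j in sorted(scores, reverse=True)]
```

The top-ranked columns become the candidate set that the paper then passes to uncorrelated linear discriminant analysis.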
Correspondences between What Infants See and Know about Causal and Self-Propelled Motion
ERIC Educational Resources Information Center
Cicchino, Jessica B.; Aslin, Richard N.; Rakison, David H.
2011-01-01
The associative learning account of how infants identify human motion rests on the assumption that this knowledge is derived from statistical regularities seen in the world. Yet, no catalog exists of what visual input infants receive of human motion, and of causal and self-propelled motion in particular. In this manuscript, we demonstrate that the…
Mechanisms of time-based figure-ground segregation.
Kandil, Farid I; Fahle, Manfred
2003-11-01
Figure-ground segregation can rely on purely temporal information, that is, on short temporal delays between positional changes of elements in figure and ground (Kandil, F.I. & Fahle, M. (2001) Eur. J. Neurosci., 13, 2004-2008). Here, we investigate the underlying mechanisms by measuring temporal segregation thresholds for various kinds of motion cues. Segregation can rely on monocular first-order motion (based on luminance modulation) and second-order motion cues (contrast modulation) with a high temporal resolution of approximately 20 ms. The mechanism can also use isoluminant motion with a reduced temporal resolution of 60 ms. Figure-ground segregation can be achieved even at presentation frequencies too high for human subjects to inspect successive frames individually. In contrast, when stimuli are presented dichoptically, i.e. separately to both eyes, subjects are unable to perceive any segregation, irrespective of temporal frequency. We propose that segregation in these displays is detected by a mechanism consisting of at least two stages. On the first level, standard motion or flicker detectors signal local positional changes (flips). On the second level, a segregation mechanism combines the local activities of the low-level detectors with high temporal precision. Our findings suggest that the segregation mechanism can rely on monocular detectors but not on binocular mechanisms. Moreover, the results oppose the idea that segregation in these displays is achieved by motion detectors of a higher order (motion-from-motion), but favour mechanisms sensitive to short temporal delays even without activation of higher-order motion detectors.
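The proposed second-stage mechanism, grouping elements whose positional changes fall within a short temporal window, can be sketched in Python (names and the clustering scheme are illustrative assumptions, not the authors' model): given the flip time of each display element, elements are clustered whenever consecutive flips are closer than the window:

```python
def segregate(flip_times, window=0.02):
    """Group element indices whose positional changes ('flips')
    occur within `window` seconds of each other; a new group starts
    whenever the gap between consecutive flips exceeds the window."""
    if not flip_times:
        return []
    order = sorted(range(len(flip_times)), key=lambda i: flip_times[i])
    groups, current = [], [order[0]]
    for prev, nxt in zip(order, order[1:]):
        if flip_times[nxt] - flip_times[prev] <= window:
            current.append(nxt)
        else:
            groups.append(current)
            current = [nxt]
    groups.append(current)
    return groups
```

With the roughly 20 ms resolution reported for luminance-defined motion, figure elements flipping slightly before ground elements land in separate groups, which is exactly the figure-ground split the displays elicit.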
Surface EMG signals based motion intent recognition using multi-layer ELM
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Qi, Lin; Wang, Xiao
2017-11-01
The upper-limb rehabilitation robot is regarded as a useful tool to help patients with hemiplegia perform repetitive exercise. Surface electromyography (sEMG) signals contain motion information, as these electric signals are generated by and related to nerve-muscle activity. The sEMG signals, representing a human's intentions of active motion, are introduced into the rehabilitation robot system to recognize upper-limb movements. Traditionally, feature extraction is an indispensable part of drawing significant information from the original signals, a tedious task requiring rich, relevant experience. This paper employs a deep learning scheme to extract the internal features of the sEMG signals using an advanced Extreme Learning Machine based auto-encoder (ELM-AE). The information contained in the multi-layer structure of the ELM-AE is used as a high-level representation of the internal features of the sEMG signals, and a simple ELM then post-processes the extracted features, forming the entire multi-layer ELM (ML-ELM) algorithm. The method is subsequently employed for sEMG-based neural intention recognition. Case studies show that the adopted deep learning algorithm (ELM-AE) yields higher classification accuracy than a Principal Component Analysis (PCA) scheme across 5 different types of upper-limb motion, indicating the effectiveness and learning capability of the ML-ELM in such motion intent recognition applications.
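The defining trait of an ELM auto-encoder is that the hidden layer is random and only the output weights are solved in closed form; a self-contained pure-Python sketch follows (class and helper names, the sigmoid choice, and the ridge term are assumptions, not the paper's implementation):

```python
import math, random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def solve(A, B):
    """Solve A X = B (A square) by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + brow[:] for row, brow in zip(A, B)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [[M[r][n + j] / M[r][r] for j in range(len(B[0]))]
            for r in range(n)]

class ELMAutoEncoder:
    """Minimal ELM auto-encoder: random input weights, sigmoid hidden
    layer, output weights found by ridge least squares in one shot."""
    def __init__(self, n_in, n_hidden, seed=0, ridge=1e-3):
        rng = random.Random(seed)
        self.W = [[rng.uniform(-1, 1) for _ in range(n_in)]
                  for _ in range(n_hidden)]
        self.b = [rng.uniform(-1, 1) for _ in range(n_hidden)]
        self.ridge = ridge
        self.beta = None

    def hidden(self, X):
        Z = matmul(X, transpose(self.W))
        return [[1 / (1 + math.exp(-(z + bi)))
                 for z, bi in zip(row, self.b)] for row in Z]

    def fit(self, X):
        H = self.hidden(X)
        Ht = transpose(H)
        HtH = matmul(Ht, H)
        for i in range(len(HtH)):        # ridge regularisation
            HtH[i][i] += self.ridge
        self.beta = solve(HtH, matmul(Ht, X))   # closed-form training
        return self

    def reconstruct(self, X):
        return matmul(self.hidden(X), self.beta)
```

Because training is a single linear solve rather than iterative backpropagation, stacking such auto-encoders (as in ML-ELM) stays fast, which is what makes the approach attractive for real-time sEMG intent recognition.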
NASA Astrophysics Data System (ADS)
Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.
1997-12-01
This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable for the user. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot with a human user, and the use of this testbed for a human directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.
A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors
Mishra, Abhishek; Ghosh, Rohan; Principe, Jose C.; Thakor, Nitish V.; Kukreja, Sunil L.
2017-01-01
Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event-based sensors are low-power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly less computation. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches to object tracking using neuromorphic sensors perform well while the sensor is static or when a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates the computation of scene statistics and the characterization of objects within it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with a maximum accuracy of 92%. PMID:28316563
Pu, Xianjie; Guo, Hengyu; Chen, Jie; Wang, Xue; Xi, Yi; Hu, Chenguo; Wang, Zhong Lin
2017-07-01
Mechnosensational human-machine interfaces (HMIs) can greatly extend communication channels between humans and external devices in a natural way. Mechnosensational HMIs based on biopotential signals have developed slowly owing to their low signal-to-noise ratio and poor stability. In eye motion, the corneal-retinal potential caused by hyperpolarization and depolarization is very weak, yet the mechanical micromotion of the skin around the corners of the eyes has never been considered as a good trigger signal source. We report a novel triboelectric nanogenerator (TENG)-based micromotion sensor enabled by the coupling of triboelectricity and electrostatic induction. By using an indium tin oxide electrode and two opposite tribomaterials, the proposed flexible and transparent sensor is capable of effectively capturing eye blink motion with a super-high signal level (~750 mV) compared with the traditional electrooculogram approach (~1 mV). The sensor is fixed on a pair of glasses and applied in two real-time mechnosensational HMIs: a smart home control system and a wireless hands-free typing system, with the advantages of super-high sensitivity, stability, easy operation, and low cost. This TENG-based micromotion sensor is distinct and unique in its fundamental mechanism, which provides a novel design concept for intelligent sensor techniques and shows great potential for application in mechnosensational HMIs.
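Because the reported blink pulses (~750 mV) dwarf the ~1 mV electrooculogram background, the HMI trigger logic can be as simple as amplitude thresholding with a refractory window. A hypothetical Python sketch; the threshold, refractory length, and sample indexing are illustrative choices, not taken from the paper:

```python
def detect_blinks(signal_mv, threshold_mv=300.0, refractory=50):
    """Return sample indices where a blink pulse begins.

    A new event is registered when the signal exceeds the threshold and
    at least `refractory` samples have passed since the previous event,
    so one wide pulse is counted once.
    """
    events, last = [], -refractory
    for i, v in enumerate(signal_mv):
        if v > threshold_mv and i - last >= refractory:
            events.append(i)
            last = i
    return events
```

With a ~750:1 signal-to-background ratio, the threshold placement is uncritical, which is the practical advantage the abstract emphasizes over EOG-based triggering.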
Influence of the model's degree of freedom on human body dynamics identification.
Maita, Daichi; Venture, Gentiane
2013-01-01
In the fields of sports and rehabilitation, opportunities for using motion analysis of the human body have dramatically increased. To analyze motion dynamics, a number of subject-specific parameters and measurements are required; for example, contact force measurements and the inertial parameters of each segment of the human body are necessary to compute the joint torques. In this study, in order to perform accurate dynamic analysis, we propose to identify the inertial parameters of the human body and to evaluate the influence of the model's number of degrees of freedom (DoF) on the results. We use a method that estimates the inertial parameters without torque sensors, using the generalized coordinates of the base link, joint angles, and external force information. We consider a 34-DoF model and a 58-DoF model, as well as the case when the human is manipulating a tool (here a tennis racket). We compare the obtained results in terms of contact force estimation.
Design of a 6-DOF upper limb rehabilitation exoskeleton with parallel actuated joints.
Chen, Yanyan; Li, Ge; Zhu, Yanhe; Zhao, Jie; Cai, Hegao
2014-01-01
In this paper, a 6-DOF wearable upper limb exoskeleton with parallel actuated joints, designed to closely mimic human motion, is proposed. The upper limb exoskeleton assists the movement of physically weak people. Whereas existing upper limb exoskeletons are mostly designed with a serial structure, giving a large movement space but low stiffness and poor wearability, our design develops a prototype for motion assistance based on the structure of human anatomy. Moreover, the design adopts balls instead of bearings to save space, which simplifies the structure and reduces the cost of the mechanism. The proposed design also employs deceleration processes to ensure that the transmission ratio of each joint is consistent.
Multi-model approach to characterize human handwriting motion.
Chihi, I; Abdelkrim, A; Benrejeb, M
2016-02-01
This paper deals with characterization and modelling of human handwriting motion from two forearm muscle activity signals, called electromyography signals (EMG). In this work, an experimental approach was used to record the coordinates of a pen tip moving on the (x, y) plane and EMG signals during the handwriting act. The main purpose is to design a new mathematical model which characterizes this biological process. Based on a multi-model approach, this system was originally developed to generate letters and geometric forms written by different writers. A Recursive Least Squares algorithm is used to estimate the parameters of each sub-model of the multi-model basis. Simulations show good agreement between predicted results and the recorded data.
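The Recursive Least Squares step mentioned above has a standard closed form. Below is a minimal NumPy sketch of RLS with a forgetting factor, as one plausible realization of the sub-model parameter estimation; the forgetting factor and covariance initialization are assumptions, not the paper's settings.

```python
import numpy as np

class RecursiveLeastSquares:
    """RLS with forgetting factor lam, estimating theta in y = phi @ theta."""

    def __init__(self, n_params, lam=0.99, delta=1e3):
        self.theta = np.zeros(n_params)    # parameter estimate
        self.P = delta * np.eye(n_params)  # inverse-correlation matrix (large init)
        self.lam = lam

    def update(self, phi, y):
        """One recursion: fold in regressor phi and measurement y."""
        phi = np.asarray(phi, dtype=float)
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)   # gain vector
        err = y - phi @ self.theta           # a priori prediction error
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return err
```

In the multi-model setting, one such estimator would be run per sub-model, with the regressor built from the recorded EMG signals and past pen-tip coordinates.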
Exploring the Solar System with a Human Orrery
NASA Astrophysics Data System (ADS)
Newbury, Peter
2010-12-01
One of the fundamental learning goals of introductory astronomy is for the students to gain some perspective on the scale and structure of the solar system. Many astronomy teachers have laid out the planets along a long strip of paper or across school grounds or a campus. Other activities that investigate the motion of the planets are often computer based, hiding the awe-inspiring distances between the planets. Our human orrery activity, adapted from the design at the Armagh Observatory in Ireland, combines the best of both approaches by creating a working model of the solar system that mimics both the scale and the motion of the planets.
2011-01-01
Background: Biomechanical energy harvesting from human motion presents a promising clean alternative to electrical power supplied by batteries for portable electronic devices and for computerized and motorized prosthetics. We present the theory of energy harvesting from the human body and describe the amount of energy that can be harvested from body heat and from motions of various parts of the body during walking, such as heel strike; ankle, knee, hip, shoulder, and elbow joint motion; and center of mass vertical motion. Methods: We evaluated major motions performed during walking and identified the amount of work the body expends and the portion of recoverable energy. During walking, there are phases of the motion at the joints where muscles act as brakes and energy is lost to the surroundings. During those phases of motion, the required braking force or torque can be replaced by an electrical generator, allowing energy to be harvested at the cost of only minimal additional effort. The amount of energy that can be harvested was estimated experimentally and from literature data. Recommendations for future directions are made on the basis of our results in combination with a review of state-of-the-art biomechanical energy harvesting devices and energy conversion methods. Results: For a device that uses center of mass motion, the maximum amount of energy that can be harvested is approximately 1 W per kilogram of device weight. For a person weighing 80 kg and walking at approximately 4 km/h, the power generation from the heel strike is approximately 2 W. For a joint-mounted device based on generative braking, the joints generating the most power are the knees (34 W) and the ankles (20 W). Conclusions: Our theoretical calculations align well with current device performance data. Our results suggest that the most energy can be harvested from the lower limb joints, but to do so efficiently, an innovative and light-weight mechanical design is needed.
We also compared the option of carrying batteries to the metabolic cost of harvesting the energy, and examined the advantages of methods for conversion of mechanical energy into electrical energy. PMID:21521509
A Bayesian model of stereopsis depth and motion direction discrimination.
Read, J C A
2002-02-01
The extraction of stereoscopic depth from retinal disparity, and motion direction from two-frame kinematograms, requires the solution of a correspondence problem. In previous psychophysical work [Read and Eagle (2000) Vision Res 40: 3345-3358], we compared the performance of the human stereopsis and motion systems with correlated and anti-correlated stimuli. We found that, although the two systems performed similarly for narrow-band stimuli, broadband anti-correlated kinematograms produced a strong perception of reversed motion, whereas the stereograms appeared merely rivalrous. I now model these psychophysical data with a computational model of the correspondence problem based on the known properties of visual cortical cells. Noisy retinal images are filtered through a set of Fourier channels tuned to different spatial frequencies and orientations. Within each channel, a Bayesian analysis incorporating a prior preference for small disparities is used to assess the probability of each possible match. Finally, information from the different channels is combined to arrive at a judgement of stimulus disparity. Each model system--stereopsis and motion--has two free parameters: the amount of noise they are subject to, and the strength of their preference for small disparities. By adjusting these parameters independently for each system, qualitative matches are produced to psychophysical data, for both correlated and anti-correlated stimuli, across a range of spatial frequency and orientation bandwidths. The motion model is found to require much higher noise levels and a weaker preference for small disparities. This makes the motion model more tolerant of poor-quality reverse-direction false matches encountered with anti-correlated stimuli, matching the strong perception of reversed motion that humans experience with these stimuli. 
In contrast, the lower noise level and tighter prior preference used with the stereopsis model means that it performs close to chance with anti-correlated stimuli, in accordance with human psychophysics. Thus, the key features of the experimental data can be reproduced assuming that the motion system experiences more effective noise than the stereoscopy system and imposes a less stringent preference for small disparities.
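The core computation described here, scoring candidate matches by a likelihood combined with a prior that prefers small disparities, can be illustrated in a toy one-dimensional form. This sketch is not the paper's multi-channel model: the Gaussian likelihood, integer-shift matching, and parameter values are all assumptions for illustration.

```python
import numpy as np

def disparity_posterior(left, right, max_d, sigma_prior, noise):
    """Posterior over integer disparities for two 1-D 'retinal' signals.

    Likelihood: Gaussian match error between left and the shifted right
    signal; prior: zero-mean Gaussian preferring small disparities.
    """
    ds = np.arange(-max_d, max_d + 1)
    log_post = np.empty(len(ds))
    for i, d in enumerate(ds):
        shifted = np.roll(right, d)              # candidate match at disparity d
        err = np.mean((left - shifted) ** 2)     # match quality
        log_post[i] = -err / (2 * noise ** 2) - d ** 2 / (2 * sigma_prior ** 2)
    post = np.exp(log_post - log_post.max())     # exponentiate stably
    return ds, post / post.sum()
```

Raising `noise` flattens the likelihood and tightening `sigma_prior` penalizes large shifts, which is exactly the pair of free parameters the paper adjusts independently for the motion and stereopsis systems.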
Riemer, Raziel; Shapiro, Amir
2011-04-26
Biomechanical energy harvesting from human motion presents a promising clean alternative to electrical power supplied by batteries for portable electronic devices and for computerized and motorized prosthetics. We present the theory of energy harvesting from the human body and describe the amount of energy that can be harvested from body heat and from motions of various parts of the body during walking, such as heel strike; ankle, knee, hip, shoulder, and elbow joint motion; and center of mass vertical motion. We evaluated major motions performed during walking and identified the amount of work the body expends and the portion of recoverable energy. During walking, there are phases of the motion at the joints where muscles act as brakes and energy is lost to the surroundings. During those phases of motion, the required braking force or torque can be replaced by an electrical generator, allowing energy to be harvested at the cost of only minimal additional effort. The amount of energy that can be harvested was estimated experimentally and from literature data. Recommendations for future directions are made on the basis of our results in combination with a review of state-of-the-art biomechanical energy harvesting devices and energy conversion methods. For a device that uses center of mass motion, the maximum amount of energy that can be harvested is approximately 1 W per kilogram of device weight. For a person weighing 80 kg and walking at approximately 4 km/h, the power generation from the heel strike is approximately 2 W. For a joint-mounted device based on generative braking, the joints generating the most power are the knees (34 W) and the ankles (20 W). Our theoretical calculations align well with current device performance data. Our results suggest that the most energy can be harvested from the lower limb joints, but to do so efficiently, an innovative and light-weight mechanical design is needed. 
We also compared the option of carrying batteries to the metabolic cost of harvesting the energy, and examined the advantages of methods for conversion of mechanical energy into electrical energy.
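As a back-of-envelope check, the per-source figures quoted in the abstract can be summed for a hypothetical harvester configuration. In this Python sketch the 1.5 kg device mass is invented, and treating the knee and ankle figures as totals rather than per-limb values is an interpretation, not a claim from the paper.

```python
# Figures reported in the abstract.
COM_SPECIFIC_POWER_W_PER_KG = 1.0   # center-of-mass device, per kg of device
HEEL_STRIKE_W = 2.0                 # 80 kg person walking at ~4 km/h
KNEES_W, ANKLES_W = 34.0, 20.0      # joint-mounted generative braking

def com_device_power(device_mass_kg):
    """Maximum harvest for a center-of-mass device of the given mass."""
    return COM_SPECIFIC_POWER_W_PER_KG * device_mass_kg

# Hypothetical rig: a 1.5 kg center-of-mass harvester plus heel-strike,
# knee and ankle devices, summing the abstract's per-source estimates.
total_w = com_device_power(1.5) + HEEL_STRIKE_W + KNEES_W + ANKLES_W
```

Even this optimistic sum sits in the tens of watts, which is why the abstract singles out the lower-limb joints as the dominant opportunity.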
Autogenic-feedback training - A treatment for motion and space sickness
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.
1990-01-01
A training method for preventing the occurrence of motion sickness in humans, called autogenic-feedback training (AFT), is described. AFT is based on a combination of biofeedback and autogenic therapy which involves training physiological self-regulation as an alternative to pharmacological management. AFT was used to reliably increase tolerance to motion-sickness-inducing tests in both men and women ranging in age from 18 to 54 years. The effectiveness of AFT is found to be significantly higher than that of protective adaptation training. Data obtained show that there is no apparent effect from AFT on measures of vestibular perception and no side effects.
Visual information for judging temporal range
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Mowafy, Lyn
1993-01-01
Work in our laboratory suggests that pilots can extract temporal range information (i.e., the time to pass a given waypoint) directly from out-the-window motion information. This extraction does not require the use of velocity or distance, but rather operates solely on a 2-D motion cue. In this paper, we present the mathematical derivation of this information, psychophysical evidence of human observers' sensitivity, and possible advantages and limitations of basing vehicle control on this parameter.
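The classic instance of such a purely optical timing variable is tau: the ratio of an object's visual angle to that angle's rate of change, which yields time-to-passage without any distance or speed estimate. A minimal sketch of the generic textbook form; the paper derives its own 2-D motion parameter, which this does not reproduce.

```python
def time_to_passage(theta, theta_dot):
    """Generic tau estimate: time until an object is reached or passed,
    computed from the optical angle theta (rad) and its rate of change
    theta_dot (rad/s) alone."""
    return theta / theta_dot
```

For an object of size s at distance d approached at speed v, theta is approximately s/d and theta_dot is approximately s*v/d**2, so tau reduces to d/v, the true time to passage, even though neither d nor v is ever measured.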
An MRI-compatible platform for one-dimensional motion management studies in MRI.
Nofiele, Joris; Yuan, Qing; Kazem, Mohammad; Tatebe, Ken; Torres, Quinn; Sawant, Amit; Pedrosa, Ivan; Chopra, Rajiv
2016-08-01
Abdominal MRI remains challenging because of respiratory motion. Motion compensation strategies are difficult to compare clinically because of the variability across human subjects. The goal of this study was to evaluate a programmable system for one-dimensional motion management MRI research. A system comprising a programmable motorized linear stage and a computer was assembled and tested in the MRI environment. Tests of the mutual interference between the platform and a whole-body MRI were performed. Organ trajectories generated from a high-temporal-resolution scan of a healthy volunteer were used in phantom tests to evaluate the effects of motion on image quality and quantitative MRI measurements. No interference between the motion platform and the MRI was observed, and reliable motion could be produced across a wide range of imaging conditions. Motion-related artifacts commensurate with motion amplitude, frequency, and waveform were observed. T2 measurement of a kidney lesion in an abdominal phantom showed that its value decreased by 67% with physiologic motion, but could be partially recovered with navigator-based motion compensation. The motion platform can produce reliable linear motion within a whole-body MRI. The system can serve as a foundation for a research platform to investigate and develop motion management approaches for MRI. Magn Reson Med 76:702-712, 2016. © 2015 Wiley Periodicals, Inc.
Using Three-Dimensional Interactive Graphics To Teach Equipment Procedures.
ERIC Educational Resources Information Center
Hamel, Cheryl J.; Ryan-Jones, David L.
1997-01-01
Focuses on how three-dimensional graphical and interactive features of computer-based instruction can enhance learning and support human cognition during technical training of equipment procedures. Presents guidelines for using three-dimensional interactive graphics to teach equipment procedures based on studies of the effects of graphics, motion,…
Peelen, Marius V; Wiggett, Alison J; Downing, Paul E
2006-03-16
Accurate perception of the actions and intentions of other people is essential for successful interactions in a social environment. Several cortical areas that support this process respond selectively in fMRI to static and dynamic displays of human bodies and faces. Here we apply pattern-analysis techniques to arrive at a new understanding of the neural response to biological motion. Functionally defined body-, face-, and motion-selective visual areas all responded significantly to "point-light" human motion. Strikingly, however, only body selectivity was correlated, on a voxel-by-voxel basis, with biological motion selectivity. We conclude that (1) biological motion, through the process of structure-from-motion, engages areas involved in the analysis of the static human form; (2) body-selective regions in posterior fusiform gyrus and posterior inferior temporal sulcus overlap with, but are distinct from, face- and motion-selective regions; (3) the interpretation of region-of-interest findings may be substantially altered when multiple patterns of selectivity are considered.
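The voxel-by-voxel analysis described here amounts to correlating two per-voxel selectivity maps within a region of interest. A minimal sketch; the input arrays (e.g., per-voxel contrast values) and the use of Pearson correlation are illustrative assumptions about a generic pipeline, not the study's exact procedure.

```python
import numpy as np

def voxelwise_selectivity_correlation(sel_a, sel_b):
    """Pearson correlation across voxels between two selectivity maps.

    sel_a, sel_b: 1-D arrays of per-voxel selectivity values (for example,
    body selectivity and biological-motion selectivity within one ROI).
    """
    return np.corrcoef(sel_a, sel_b)[0, 1]
```

A high correlation indicates that the same voxels carry both forms of selectivity, which is the kind of evidence the study uses to link body-selective regions with the response to point-light biological motion.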
A stretchable strain sensor based on a metal nanoparticle thin film for human motion detection
NASA Astrophysics Data System (ADS)
Lee, Jaehwan; Kim, Sanghyeok; Lee, Jinjae; Yang, Daejong; Park, Byong Chon; Ryu, Seunghwa; Park, Inkyu
2014-09-01
Wearable strain sensors for human motion detection are being highlighted in various fields such as medical, entertainment and sports industry. In this paper, we propose a new type of stretchable strain sensor that can detect both tensile and compressive strains and can be fabricated by a very simple process. A silver nanoparticle (Ag NP) thin film patterned on the polydimethylsiloxane (PDMS) stamp by a single-step direct transfer process is used as the strain sensing material. The working principle is the change in the electrical resistance caused by the opening/closure of micro-cracks under mechanical deformation. The fabricated stretchable strain sensor shows highly sensitive and durable sensing performances in various tensile/compressive strains, long-term cyclic loading and relaxation tests. We demonstrate the applications of our stretchable strain sensors such as flexible pressure sensors and wearable human motion detection devices with high sensitivity, response speed and mechanical robustness. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr03295k
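A common way to summarize the sensitivity of such a crack-based resistive sensor is the gauge factor: the relative resistance change per unit strain. A minimal sketch with invented resistance and strain values (the paper's specific numbers are not reproduced here):

```python
def gauge_factor(r_unstrained, r_strained, strain):
    """GF = (dR / R0) / strain, the standard sensitivity figure of merit.

    Crack-opening sensors achieve large GF because small strains produce
    large relative resistance changes as micro-cracks open and close.
    """
    return (r_strained - r_unstrained) / r_unstrained / strain
```

For example, a sensor whose resistance rises from 100 to 102 ohms under 0.1% strain has a gauge factor of 20, an order of magnitude above a conventional metal-foil gauge (GF around 2).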
Appearance-based human gesture recognition using multimodal features for human computer interaction
NASA Astrophysics Data System (ADS)
Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun
2011-03-01
The use of gesture as a natural interface plays a vitally important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous work has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions, covering neutral, negative and positive meanings, from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused in a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
Sensing human hand motions for controlling dexterous robots
NASA Technical Reports Server (NTRS)
Marcus, Beth A.; Churchill, Philip J.; Little, Arthur D.
1988-01-01
The Dexterous Hand Master (DHM) system is designed to control dexterous robot hands such as the UTAH/MIT and Stanford/JPL hands. It is the first commercially available device which makes it possible to accurately and comfortably track the complex motion of the human finger joints. The DHM is adaptable to a wide variety of human hand sizes and shapes, throughout their full range of motion.
A cardioid oscillator with asymmetric time ratio for establishing CPG models.
Fu, Q; Wang, D H; Xu, L; Yuan, G
2018-01-13
Nonlinear oscillators are usually utilized by bionic scientists for establishing central pattern generator models that imitate rhythmic motions. In the natural world, many rhythmic motions possess asymmetric time ratios, meaning that the forward and backward motions of an oscillating process last different times within one period. In order to model rhythmic motions with asymmetric time ratios, nonlinear oscillators with asymmetric forward and backward trajectories within one period should be studied. In this paper, based on the property of the invariant set, a method is proposed to design a closed curve in the phase plane of a dynamic system as its limit cycle: by making the derivative of the closed-curve function equal to zero along trajectories, the curve becomes the system's limit cycle. Utilizing this method, and considering that a cardioid is a kind of asymmetric closed curve, a cardioid oscillator with an asymmetric time ratio is proposed and realized according to the global invariant set theory. On this basis, numerical simulations are conducted to analyze the behavior of the cardioid oscillator, including an example in which the established oscillator simulates rhythmic motions of the hip joint of a human body in the sagittal plane. The results of the numerical simulations indicate that, whatever the initial condition is and without any outside input, the proposed cardioid oscillator possesses the following properties: (1) it generates a series of periodic, anti-interference, self-exciting trajectories; (2) the generated trajectories possess an asymmetric time ratio; and (3) the time ratio can be regulated by adjusting the oscillator's parameters.
Furthermore, the comparison between the trajectories simulated by the established cardioid oscillator and the measured hip-angle trajectories of a human body shows that the proposed oscillator is fit for imitating the rhythmic motions of the human hip with asymmetric time ratios.
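The "closed curve as limit cycle" idea can be illustrated with a toy polar-coordinate oscillator whose radius relaxes onto the cardioid r = a(1 - cos(theta)). This is not the paper's formulation (which designs the dynamics from the curve's invariant-set condition and produces the asymmetric time ratio); here the phase advances at a constant rate and all parameter values are assumptions.

```python
import numpy as np

def cardioid_oscillator(a=1.0, mu=200.0, omega=2 * np.pi, dt=1e-3,
                        steps=5000, r0=0.2, th0=0.0):
    """Forward-Euler oscillator attracted to the cardioid r = a(1 - cos th).

    The radius relaxes onto the curve (rate mu) while the phase advances
    at constant rate omega, so the cardioid acts as the limit cycle:
    trajectories from any initial radius converge onto it.
    """
    r, th = r0, th0
    traj = []
    for _ in range(steps):
        target = a * (1.0 - np.cos(th))   # cardioid radius at current phase
        r += mu * (target - r) * dt       # pull the state onto the curve
        th += omega * dt
        traj.append((r, th))
    return np.array(traj)
```

Making the phase rate itself depend on theta is one way to obtain the asymmetric time ratio the paper targets, since the state would then traverse different arcs of the curve at different speeds.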
Khan, Hassan Aqeel; Gore, Amit; Ashe, Jeff; Chakrabartty, Shantanu
2017-07-01
Physical activities are known to introduce motion artifacts in electrical impedance plethysmographic (EIP) sensors. The existing literature considers motion artifacts a nuisance and generally discards the artifact-containing portion of the sensor output. This paper examines the notion of exploiting motion artifacts to detect the underlying physical activities which give rise to the artifacts in question. In particular, we investigate whether the artifact pattern associated with a physical activity is unique, and whether it varies from one human subject to another. Data were recorded from 19 adult human subjects while conducting 5 distinct, artifact-inducing activities. A set of novel features based on the time-frequency signatures of the sensor outputs is then constructed. Our analysis demonstrates that these features enable high-accuracy detection of the underlying physical activity. Using an SVM classifier we are able to differentiate between 5 distinct physical activities (coughing, reaching, walking, eating and rolling-on-bed) with an average accuracy of 85.46%. Classification is performed solely using features designed specifically to capture the time-frequency signatures of different physical activities. This enables us to measure both respiratory and motion information using only one type of sensor, in contrast to conventional approaches to physical activity monitoring, which rely on additional hardware such as accelerometers to capture activity information.
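A crude version of a "time-frequency signature" feature vector can be built from short-time Fourier band energies. The frame length, hop, band count, and windowing below are assumptions for illustration; the paper's actual features are not specified in the abstract.

```python
import numpy as np

def tf_features(signal, frame=64, hop=32, bands=4):
    """Mean log band-energy of a power spectrogram: a crude
    time-frequency signature of a 1-D sensor trace."""
    starts = range(0, len(signal) - frame + 1, hop)
    frames = np.array([signal[i:i + frame] for i in starts]) * np.hanning(frame)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2     # power spectrogram
    edges = np.linspace(0, spec.shape[1], bands + 1, dtype=int)
    return np.array([np.log1p(spec[:, a:b].mean())
                     for a, b in zip(edges, edges[1:])])
```

Feature vectors of this kind, one per recorded segment, would then be fed to a classifier such as the SVM used in the paper to separate the five activity classes.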
Cheng, Jeffrey Tao; Hamade, Mohamad; Merchant, Saumil N.; Rosowski, John J.; Harrington, Ellery; Furlong, Cosme
2013-01-01
Sound-induced motions of the surface of the tympanic membrane (TM) were measured using stroboscopic holography in cadaveric human temporal bones at frequencies between 0.2 and 18 kHz. The results are consistent with the combination of standing-wave-like modal motions and traveling-wave-like motions on the TM surface. The holographic techniques also quantified sound-induced displacements of the umbo of the malleus, as well as volume velocity of the TM. These measurements were combined with sound-pressure measurements near the TM to compute middle-ear input impedance and power reflectance at the TM. The results are generally consistent with other published data. A phenomenological model that behaved qualitatively like the data was used to quantify the relative magnitude and spatial frequencies of the modal and traveling-wave-like displacement components on the TM surface. This model suggests the modal magnitudes are generally larger than those of the putative traveling waves, and the computed wave speeds are much slower than wave speeds predicted by estimates of middle-ear delay. While the data are inconsistent with simple modal displacements of the TM, an alternate model based on the combination of modal motions in a lossy membrane can also explain these measurements without invoking traveling waves. PMID:23363110
Adaptive control of center of mass (global) motion and its joint (local) origin in gait.
Yang, Feng; Pai, Yi-Chung
2014-08-22
Dynamic gait stability can be quantified by the relationship of the motion state (i.e. the position and velocity) between the body center of mass (COM) and its base of support (BOS). Humans learn to adaptively control stability by regulating the absolute COM motion state (i.e. its position and velocity) and/or by controlling the BOS (through stepping) in a predictable manner, or by doing both simultaneously following an external perturbation that disrupts their regular relationship. After repeated-slip perturbation training, for instance, older adults learned to shift their COM position forward while walking with a reduced step length, hence reducing their likelihood of slip-induced falls. How, and to what extent, each individual joint influences such adaptive alterations is mostly unknown. A three-dimensional individualized human kinematic model was established. Based on this model, sensitivity analysis was used to systematically quantify the influence of each lower limb joint on the COM position relative to the BOS and on the step length during gait. It was found that the leading foot had the greatest effect on regulating the COM position relative to the BOS, and both hips bear the most influence on the step length. These findings could guide cost-effective yet efficient fall-reduction training paradigms for the older population. Copyright © 2014 Elsevier Ltd. All rights reserved.
A Subject-Specific Kinematic Model to Predict Human Motion in Exoskeleton-Assisted Gait.
Torricelli, Diego; Cortés, Camilo; Lete, Nerea; Bertelsen, Álvaro; Gonzalez-Vargas, Jose E; Del-Ama, Antonio J; Dimbwadyo, Iris; Moreno, Juan C; Florez, Julian; Pons, Jose L
2018-01-01
The relative motion between human and exoskeleton is a crucial factor that has remarkable consequences on the efficiency, reliability and safety of human-robot interaction. Unfortunately, its quantitative assessment has been largely overlooked in the literature. Here, we present a methodology that allows predicting the motion of the human joints from the knowledge of the angular motion of the exoskeleton frame. Our method combines a subject-specific skeletal model with a kinematic model of a lower limb exoskeleton (H2, Technaid), imposing specific kinematic constraints between them. To calibrate the model and validate its ability to predict the relative motion in a subject-specific way, we performed experiments on seven healthy subjects during treadmill walking tasks. We demonstrate a prediction accuracy lower than 3.5° globally, and around 1.5° at the hip level, which represents an improvement of up to 66% compared to the traditional approach assuming no relative motion between the user and the exoskeleton.
PMID:29755336
Holistic processing of static and moving faces.
Zhao, Mintao; Bülthoff, Isabelle
2017-07-01
Humans' face-processing ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect one core aspect of this ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the different sources of information supporting holistic face processing interact, and why facial motion may affect face recognition and holistic face processing differently. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Miniature low-power inertial sensors: promising technology for implantable motion capture systems.
Lambrecht, Joris M; Kirsch, Robert F
2014-11-01
Inertial and magnetic sensors are valuable for untethered, self-contained human movement analysis. Very recently, complete integration of inertial sensors, magnetic sensors, and processing into single packages has resulted in miniature, low-power devices that could feasibly be employed in an implantable motion capture system. We developed a wearable sensor system based on a commercially available system-in-package inertial and magnetic sensor. We characterized the accuracy of the system in measuring 3-D orientation, with and without magnetometer-based heading compensation, relative to a research-grade optical motion capture system. The root mean square error was less than 4° in dynamic and static conditions about all axes. Using four sensors, recording of seven degrees of freedom of the upper limb (shoulder, elbow, wrist) was demonstrated in one subject during reaching motions. Very high correlation and low error were found across all joints relative to the optical motion capture system. Findings were similar to previous publications using inertial sensors, but at a fraction of the power consumption and size of the sensors. Such ultra-small, low-power sensors provide exciting new avenues for movement monitoring for various movement disorders, movement-based command interfaces for assistive devices, and implementation of kinematic feedback systems for assistive interventions like functional electrical stimulation.
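The orientation error reported above (RMS below 4°) is conventionally computed as the angle of the relative rotation between the sensor-derived and optical reference orientations. A minimal sketch of that comparison, assuming Hamilton-convention unit quaternions (the function names are illustrative, not from the paper):

```python
import math

def quat_mul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def angular_error_deg(q_imu, q_ref):
    """Smallest rotation angle (degrees) between two unit quaternions.
    abs() handles the q / -q double cover of rotations."""
    w = quat_mul(quat_conj(q_ref), q_imu)[0]
    return math.degrees(2.0 * math.acos(min(1.0, abs(w))))
```

An RMS over a trial would then be the root mean square of `angular_error_deg` evaluated at each synchronized sample pair.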
Raabe, D; Harrison, A; Ireland, A; Alemzadeh, K; Sandy, J; Dogramadzi, S; Melhuish, C; Burgess, S
2012-03-01
This paper presents a new in vitro wear simulator based on spatial parallel kinematics and a biologically inspired implicit force/position hybrid controller to replicate chewing movements and dental wear formations on dental components, such as crowns, bridges or a full set of teeth. The human mandible, guided by passive structures such as posterior teeth and the two temporomandibular joints, moves with up to 6 degrees of freedom (DOF) in Cartesian space. The currently available wear simulators lack the ability to perform these chewing movements. In many cases, their lack of sufficient DOF enables them only to replicate the sliding motion of a single occlusal contact point by neglecting rotational movements and the motion along one Cartesian axis. The motion and forces of more than one occlusal contact point cannot accurately be replicated by these instruments. Furthermore, the majority of wear simulators are unable to control simultaneously the main wear-affecting parameters, considering abrasive mechanical wear, which are the occlusal sliding motion and bite forces in the constraint contact phase of the human chewing cycle. It has been shown that such discrepancies between the true in vivo and the simulated in vitro condition influence the outcome and the quality of wear studies. This can be improved by implementing biological features of the human masticatory system such as tooth compliance realized through the passive action of the periodontal ligament and active bite force control realized through the central nervous system using feedback from periodontal receptors. The simulator described in this paper can be used for single- and multi-occlusal contact testing due to its kinematics and ability to exactly replicate human translational and rotational mandibular movements with up to 6 DOF without neglecting movements along or around the three Cartesian axes. Recorded human mandibular motion and occlusal force data are the reference inputs of the simulator. Experimental studies of wear using this simulator demonstrate that integrating the biological feature of combined force/position hybrid control in dental material testing improves the linearity and reduces the variability of results. In addition, it has been shown that present biaxially operated dental wear simulators are likely to provide misleading results in comparative in vitro/in vivo one-contact studies due to neglecting the occlusal sliding motion in one plane, which could introduce an error of up to 49%, since occlusal sliding motion D and volumetric wear loss V(loss) are proportional.
Hierarchical Shared Control of Cane-Type Walking-Aid Robot
Tao, Chunjing
2017-01-01
A hierarchical shared-control method for a walking-aid robot, covering both human motion intention recognition and obstacle emergency avoidance based on an artificial potential field (APF), is proposed in this paper. The human motion intention is obtained from the interaction force measurements of a sensory system composed of four force-sensing resistors (FSRs) and a torque sensor. Meanwhile, a forward-facing laser range finder (LRF) is applied to detect obstacles and guide the operator based on the repulsive force calculated from the artificial potential field. An obstacle emergency-avoidance method comprising different control strategies is also proposed according to the different states of obstacles or emergency cases. To ensure the user's safety, the hierarchical shared-control method combines the intention recognition method with the obstacle emergency-avoidance method based on the distance between the walking-aid robot and the obstacles. Finally, experiments validate the effectiveness of the proposed hierarchical shared-control method. PMID:29093805
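The abstract does not specify the APF formulation; a common choice is the Khatib-style repulsive potential, whose force magnitude grows steeply as the robot approaches an obstacle and vanishes outside an influence radius. A sketch under that assumption (`eta` and `d0` are hypothetical tuning parameters, not values from the paper):

```python
def repulsive_force(dist, d0=1.0, eta=0.5):
    """Magnitude of a Khatib-style APF repulsive force for an obstacle at
    distance `dist`; zero outside the influence radius d0."""
    if dist >= d0 or dist <= 0.0:
        return 0.0
    # gradient magnitude of U = 0.5 * eta * (1/dist - 1/d0)^2
    return eta * (1.0 / dist - 1.0 / d0) / (dist * dist)
```

The guidance force applied to the operator would point away from the obstacle with this magnitude, summed over all detected obstacles.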
Window of visibility - A psychophysical theory of fidelity in time-sampled visual motion displays
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.; Farrell, J. E.
1986-01-01
A film of an object in motion presents on the screen a sequence of static views, while the human observer sees the object moving smoothly across the screen. Questions related to the perceptual identity of continuous and stroboscopic displays are examined. Time-sampled moving images are considered along with the contrast distribution of continuous motion, the contrast distribution of stroboscopic motion, the frequency spectrum of continuous motion, the frequency spectrum of stroboscopic motion, the approximation of the limits of human visual sensitivity to spatial and temporal frequencies by a window of visibility, the critical sampling frequency, the contrast distribution of staircase motion and the frequency spectrum of this motion, and the spatial dependence of the critical sampling frequency. Attention is given to apparent motion, models of motion, image recording, and computer-generated imagery.
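The critical sampling frequency discussed above follows from the window-of-visibility idea: time sampling replicates the moving image's spectrum at multiples of the sample rate, sheared by image velocity, so sampling becomes perceptually invisible once the nearest replica falls outside the visible spatial and temporal limits. A rough sketch under assumed corner limits of 30 cycles/deg and 60 Hz (the paper's actual window boundary differs in detail):

```python
def critical_sampling_hz(speed_deg_s, u_limit_cpd=30.0, w_limit_hz=60.0):
    """Sample rate (Hz) above which stroboscopic replicas fall outside an
    assumed rectangular window of visibility with spatial limit u_limit_cpd
    (cycles/deg) and temporal limit w_limit_hz (Hz), for an image moving
    at speed_deg_s (deg/s)."""
    return w_limit_hz + speed_deg_s * u_limit_cpd
```

For a static image this reduces to the temporal limit alone; faster motion demands proportionally higher frame rates.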
NASA Technical Reports Server (NTRS)
Beutter, B. R.; Mulligan, J. B.; Stone, L. S.; Hargens, Alan R. (Technical Monitor)
1995-01-01
We have shown that moving a plaid in an asymmetric window biases the perceived direction of motion (Beutter, Mulligan & Stone, ARVO 1994). We now explore whether these biased motion signals might also drive the smooth eye-movement response by comparing the perceived and tracked directions. The human smooth oculomotor response to moving plaids appears to be driven by the perceived rather than the veridical direction of motion. This suggests that human motion perception and smooth eye movements share underlying neural motion-processing substrates as has already been shown to be true for monkeys.
NASA Astrophysics Data System (ADS)
Wang, Siqi; Li, Decai
2015-09-01
This paper describes the design and characterization of a plane vibration-based electromagnetic generator that is capable of converting low-frequency vibration energy into electrical energy. A magnetic spring is formed by a magnetic attractive force between fixed and movable permanent magnets. The ferrofluid is employed on the bottom of the movable permanent magnet to suspend it and reduce the mechanical damping as a fluid lubricant. When the electromagnetic generator with a ferrofluid of 0.3 g was operated under a resonance condition, the output power reached 0.27 mW, and the power density of the electromagnetic generator was 5.68 µW/cm2. The electromagnetic generator was also used to harvest energy from human motion. The measured average load powers of the electromagnetic generator from human waist motion were 0.835 mW and 1.3 mW during walking and jogging, respectively.
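As a quick consistency check on the figures above, dividing the resonant output power by the reported power density gives the effective device area those two numbers imply (an inference, not a value stated in the abstract):

```python
p_out_w = 0.27e-3            # 0.27 mW output power at resonance
density_w_per_cm2 = 5.68e-6  # 5.68 uW/cm^2 reported power density
implied_area_cm2 = p_out_w / density_w_per_cm2
print(round(implied_area_cm2, 1))  # prints 47.5
```

So the generator's effective area works out to roughly 48 cm².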
An EMG-Based Control for an Upper-Limb Power-Assist Exoskeleton Robot.
Kiguchi, K; Hayashi, Y
2012-08-01
Many kinds of power-assist robots have been developed in order to assist self-rehabilitation and/or daily life motions of physically weak persons. Several kinds of control methods have been proposed to control the power-assist robots according to the user's motion intention. In this paper, an electromyogram (EMG)-based impedance control method for an upper-limb power-assist exoskeleton robot is proposed to control the robot in accordance with the user's motion intention. The proposed method is simple, easy to design, humanlike, and adaptable to any user. A neurofuzzy matrix modifier is applied to make the controller adaptable to any user. Not only the characteristics of EMG signals but also the characteristics of the human body are taken into account in the proposed method. The effectiveness of the proposed method was evaluated experimentally.
XCAT/DRASIM: a realistic CT/human-model simulation package
NASA Astrophysics Data System (ADS)
Fung, George S. K.; Stierstorfer, Karl; Segars, W. Paul; Taguchi, Katsuyuki; Flohr, Thomas G.; Tsui, Benjamin M. W.
2011-03-01
The aim of this research is to develop a complete CT/human-model simulation package by integrating the 4D eXtended CArdiac-Torso (XCAT) phantom, a computer generated NURBS surface based phantom that provides a realistic model of human anatomy and respiratory and cardiac motions, and the DRASIM (Siemens Healthcare) CT-data simulation program. Unlike other CT simulation tools which are based on simple mathematical primitives or voxelized phantoms, this new simulation package has the advantages of utilizing a realistic model of human anatomy and physiological motions without voxelization and with accurate modeling of the characteristics of clinical Siemens CT systems. First, we incorporated the 4D XCAT anatomy and motion models into DRASIM by implementing a new library which consists of functions to read-in the NURBS surfaces of anatomical objects and their overlapping order and material properties in the XCAT phantom. Second, we incorporated an efficient ray-tracing algorithm for line integral calculation in DRASIM by computing the intersection points of the rays cast from the x-ray source to the detector elements through the NURBS surfaces of the multiple XCAT anatomical objects along the ray paths. Third, we evaluated the integrated simulation package by performing a number of sample simulations of multiple x-ray projections from different views followed by image reconstruction. The initial simulation results were found to be promising by qualitative evaluation. In conclusion, we have developed a unique CT/human-model simulation package which has great potential as a tool in the design and optimization of CT scanners, and the development of scanning protocols and image reconstruction methods for improving CT image quality and reducing radiation dose.
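Once the ray-NURBS intersection points are known, the per-ray computation described above reduces to accumulating attenuation coefficient times path length over the segments a ray spends inside each object. A simplified sketch of that accumulation step (the segment values here are hypothetical; the actual package derives them from NURBS surface intersections):

```python
def ray_line_integral(segments):
    """Accumulate mu * path_length over the object segments a ray crosses.
    segments: list of (entry_t, exit_t, mu) along the ray parameter, as
    produced by intersecting the ray with each object's bounding surfaces."""
    return sum(mu * (t_exit - t_entry) for t_entry, t_exit, mu in segments)
```

Each detector element's projection value is then this integral for the ray from the x-ray source to that element.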
Hand interception of occluded motion in humans: a test of model-based vs. on-line control.
La Scaleia, Barbara; Zago, Myrka; Lacquaniti, Francesco
2015-09-01
Two control schemes have been hypothesized for the manual interception of fast visual targets. In the model-free on-line control, extrapolation of target motion is based on continuous visual information, without resorting to physical models. In the model-based control, instead, a prior model of target motion predicts the future spatiotemporal trajectory. To distinguish between the two hypotheses in the case of projectile motion, we asked participants to hit a ball that rolled down an incline at 0.2 g and then fell in air at 1 g along a parabola. By varying starting position, ball velocity and trajectory differed between trials. Motion on the incline was always visible, whereas parabolic motion was either visible or occluded. We found that participants were equally successful at hitting the falling ball in both visible and occluded conditions. Moreover, in different trials the intersection points were distributed along the parabolic trajectories of the ball, indicating that subjects were able to extrapolate an extended segment of the target trajectory. Remarkably, this trend was observed even at the very first repetition of movements. These results are consistent with the hypothesis of model-based control, but not with on-line control. Indeed, ball path and speed during the occlusion could not be extrapolated solely from the kinematic information obtained during the preceding visible phase. The only way to extrapolate ball motion correctly during the occlusion was to assume that the ball would fall under gravity and air drag when hidden from view. Such an assumption had to be derived from prior experience. Copyright © 2015 the American Physiological Society.
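Model-based extrapolation of the occluded segment amounts to propagating the last visible state forward under a physical model. The study's internal model included air drag as well as gravity; the drag-free sketch below illustrates the idea only (the function name and state encoding are illustrative):

```python
def extrapolate_ballistic(p0, v0, t, g=9.81):
    """Predict position t seconds into an occlusion from the last visible
    state (p0, v0), assuming drag-free projectile motion under gravity.
    p0 and v0 are (x, y) tuples with y pointing up."""
    x = p0[0] + v0[0] * t
    y = p0[1] + v0[1] * t - 0.5 * g * t * t
    return (x, y)
```

An on-line controller, by contrast, would have no principled way to produce this parabolic continuation from the pre-occlusion incline motion alone.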
Development of single leg version of HAL for hemiplegia.
Kawamoto, Hiroaki; Hayashi, Tomohiro; Sakurai, Takeru; Eguchi, Kiyoshi; Sankai, Yoshiyuki
2009-01-01
Our goal is to enhance the QoL of persons with hemiplegia by means of an active motion support system based on the HAL's technology. The HAL (Hybrid Assistive Limb) in its standard version is an exoskeleton-based robot suit to support and enhance the human motor functions. The purpose of the research presented in this paper is the development of a new version of the HAL to be used as an assistive device providing walking motion support to persons with hemiplegia. It includes the realization of the single leg version of the HAL and the redesign of the original HAL's Autonomous Controller to execute human-like walking motions in an autonomous way. Clinical trials were conducted in order to assess the effectiveness of the developed system. The first stage of the trials described in this paper involved the participation of one hemiplegic patient who had difficulty flexing his right knee. As a result, the knee flexion support for walking provided by the HAL appeared to improve the subject's walking (longer stride and faster steps). The first evaluation of the system with one subject showed promising results for the future developments.
Hasnain, Zaki; Li, Ming; Dorff, Tanya; Quinn, David; Ueno, Naoto T; Yennu, Sriram; Kolatkar, Anand; Shahabi, Cyrus; Nocera, Luciano; Nieva, Jorge; Kuhn, Peter; Newton, Paul K
2018-05-18
Biomechanical characterization of human performance with respect to fatigue and fitness is relevant in many settings, but it is usually limited to either fully qualitative assessments or invasive methods that require a significant experimental setup consisting of numerous sensors, force plates, and motion detectors. Qualitative assessments are difficult to standardize due to their intrinsic subjective nature; invasive methods, on the other hand, provide reliable metrics but are not feasible for large-scale applications. Presented here is a dynamical toolset for detecting performance groups using a non-invasive system based on the Microsoft Kinect motion capture sensor, and a case study of 37 cancer patients performing two clinically monitored tasks before and after therapy regimens. Dynamical features are extracted from the motion time series data and evaluated based on their ability to i) cluster patients into coherent fitness groups using unsupervised learning algorithms and to ii) predict Eastern Cooperative Oncology Group performance status via supervised learning. The unsupervised patient clustering is comparable to clustering based on physician-assigned Eastern Cooperative Oncology Group status in that they both have similar concordance with change in weight before and after therapy as well as unexpected hospitalizations throughout the study. The extracted dynamical features can predict physician, coordinator, and patient Eastern Cooperative Oncology Group status with an accuracy of approximately 80%. The non-invasive Microsoft Kinect sensor and the proposed dynamical toolset, comprising data preprocessing, feature extraction, dimensionality reduction, and machine learning, offer a low-cost and general method for performance segregation and can complement existing qualitative clinical assessments. Copyright © 2018 Elsevier Ltd. All rights reserved.
Chang, Minsu; Kim, Yeongmin; Lee, Yoseph; Jeon, Doyoung
2017-07-01
This paper proposes a method of detecting the postural stability of a person wearing a lower limb exoskeletal robot with the HAT (Head-Arm-Trunk) model. Previous studies have shown that human posture is stable when the CoM (Center of Mass) of the body is placed over the BoS (Base of Support). In the case of the lower limb exoskeletal robot, the motion data used for CoM estimation are acquired by sensors in the robot. The upper body, however, does not have sensors in each segment, which may introduce error into the CoM estimation. In this paper, the HAT model, which combines head, arms, and torso into a single segment, is considered because the motions of the head and arms are unknown due to the lack of sensors. To verify the feasibility of the HAT model, reflective markers were attached to each segment of the whole body and exact motion data were acquired with a Vicon system to compare the CoM of the full-body model and the HAT model. The difference between the CoM of the full-body model and that of the HAT model is within 20 mm for various motions of the head and arms. Based on the HAT model, the XCoM (Extrapolated Center of Mass), which includes the velocity of the CoM, is used for prediction of postural stability. An experiment inducing unstable postures shows that the XCoM of the whole body based on the HAT model can detect the instant of postural instability 20-250 ms earlier than the CoM. This result may be used by the lower limb exoskeletal robot to prepare an action to prevent falling.
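The XCoM used above is Hof's extrapolated center of mass: the CoM position plus its velocity divided by the inverted-pendulum eigenfrequency sqrt(g/l). A one-dimensional sketch of the resulting stability test (the leg length and BoS bounds below are hypothetical example values):

```python
import math

def xcom(com_pos, com_vel, leg_length, g=9.81):
    """Extrapolated center of mass (Hof's XCoM): CoM position plus velocity
    scaled by the inverted-pendulum eigenfrequency w0 = sqrt(g / l)."""
    w0 = math.sqrt(g / leg_length)
    return com_pos + com_vel / w0

def is_stable(com_pos, com_vel, bos_min, bos_max, leg_length):
    # Posture is deemed stable when the XCoM stays inside the BoS interval.
    return bos_min <= xcom(com_pos, com_vel, leg_length) <= bos_max
```

Because the velocity term shifts the XCoM ahead of the CoM in the direction of motion, this test flags instability earlier than a position-only CoM check, consistent with the 20-250 ms lead reported above.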
IMU-Based Joint Angle Measurement for Gait Analysis
Seel, Thomas; Raisch, Jorg; Schauer, Thomas
2014-01-01
This contribution is concerned with joint angle calculation based on inertial measurement data in the context of human motion analysis. Unlike most robotic devices, the human body lacks even surfaces and right angles. Therefore, we focus on methods that avoid assuming certain orientations in which the sensors are mounted with respect to the body segments. After a review of available methods that may cope with this challenge, we present a set of new methods for: (1) joint axis and position identification; and (2) flexion/extension joint angle measurement. In particular, we propose methods that use only gyroscopes and accelerometers and, therefore, do not rely on a homogeneous magnetic field. We provide results from gait trials of a transfemoral amputee in which we compare the inertial measurement unit (IMU)-based methods to an optical 3D motion capture system. Unlike most authors, we place the optical markers on anatomical landmarks instead of attaching them to the IMUs. Root mean square errors of the knee flexion/extension angles are found to be less than 1° on the prosthesis and about 3° on the human leg. For the plantar/dorsiflexion of the ankle, both deviations are about 1°. PMID:24743160
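The paper's own contribution is identifying joint axes and positions from gyroscope and accelerometer constraints; as background, the generic way to fuse those two signals into a drift-free flexion/extension angle is a complementary filter, sketched below. This is a textbook scheme, not the authors' algorithm, and `alpha` is a hypothetical blend weight:

```python
def flexion_angle_stream(gyro_rad_s, acc_angles_rad, dt, alpha=0.98):
    """Complementary filter: integrate the gyro rate for short-term accuracy
    while pulling toward the accelerometer-derived angle to cancel drift."""
    angle = acc_angles_rad[0]  # initialize from the accelerometer estimate
    out = []
    for rate, acc_angle in zip(gyro_rad_s, acc_angles_rad):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
        out.append(angle)
    return out
```

The sensor-to-segment alignment problem the paper addresses is exactly what makes the gyro rate and accelerometer angle in this sketch nontrivial to obtain from arbitrarily mounted IMUs.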
Auditory motion processing after early blindness
Jiang, Fang; Stecker, G. Christopher; Fine, Ione
2014-01-01
Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion. PMID:25378368
Slow motion increases perceived intent
Caruso, Eugene M.; Burns, Zachary C.; Converse, Benjamin A.
2016-01-01
To determine the appropriate punishment for a harmful action, people must often make inferences about the transgressor’s intent. In courtrooms and popular media, such inferences increasingly rely on video evidence, which is often played in “slow motion.” Four experiments (n = 1,610) involving real surveillance footage from a murder or broadcast replays of violent contact in professional football demonstrate that viewing an action in slow motion, compared with regular speed, can cause viewers to perceive an action as more intentional. This slow motion intentionality bias occurred, in part, because slow motion video caused participants to feel like the actor had more time to act, even when they knew how much clock time had actually elapsed. Four additional experiments (n = 2,737) reveal that allowing viewers to see both regular speed and slow motion replay mitigates the bias, but does not eliminate it. We conclude that an empirical understanding of the effect of slow motion on mental state attribution should inform the life-or-death decisions that are currently based on tacit assumptions about the objectivity of human perception. PMID:27482091
Self-organizing neural integration of pose-motion features for human action recognition
Parisi, German I.; Weber, Cornelius; Wermter, Stefan
2015-01-01
The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323
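The growth rule of a Growing When Required network, mentioned above, is: when the best-matching unit responds weakly to an input (low activity) even though it is already well trained (low habituation), insert a new node between them; otherwise adapt the winner toward the input. A compact sketch of that rule only (parameter values are hypothetical, and the full algorithm also maintains edges and neighbor updates, omitted here):

```python
import math
import random

class GrowingWhenRequired:
    """Minimal GWR sketch: grow a node when the best-matching unit (BMU)
    fires weakly for an input despite being well habituated."""

    def __init__(self, dim, a_t=0.85, h_t=0.1):
        self.a_t, self.h_t = a_t, h_t  # activity / habituation thresholds
        self.nodes = [[random.random() for _ in range(dim)] for _ in range(2)]
        self.habit = [1.0, 1.0]        # decays toward 0 as a node fires

    def _dist(self, w, x):
        return math.sqrt(sum((wi - xi) ** 2 for wi, xi in zip(w, x)))

    def train(self, x, eps_b=0.1):
        b = min(range(len(self.nodes)),
                key=lambda i: self._dist(self.nodes[i], x))
        activity = math.exp(-self._dist(self.nodes[b], x))
        if activity < self.a_t and self.habit[b] < self.h_t:
            # novel input in a well-trained region: grow a node halfway
            self.nodes.append([(wi + xi) / 2
                               for wi, xi in zip(self.nodes[b], x)])
            self.habit.append(1.0)
        else:
            # otherwise adapt the BMU toward the input and habituate it
            self.nodes[b] = [wi + eps_b * self.habit[b] * (xi - wi)
                             for wi, xi in zip(self.nodes[b], x)]
            self.habit[b] = max(0.0, self.habit[b] - 0.05)
```

In the paper's architecture, networks like this are stacked so that later layers cluster prototypical pose-motion trajectories rather than raw frames.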
[Comparative analysis of light sensitivity, depth and motion perception in animals and humans].
Schaeffel, F
2017-11-01
This study examined how humans perform regarding light sensitivity, depth perception and motion vision in comparison to various animals. The parameters that limit the performance of the visual system for these different functions were examined. This study was based on literature studies (search in PubMed) and own results. Light sensitivity is limited by the brightness of the retinal image, which in turn is determined by the f‑number of the eye. Furthermore, it is limited by photon noise, thermal decay of rhodopsin, noise in the phototransduction cascade and neuronal processing. In invertebrates, impressive optical tricks have been developed to increase the number of photons reaching the photoreceptors. Furthermore, the spontaneous decay of the photopigment is lower in invertebrates at the cost of higher energy consumption. For depth perception at close range, stereopsis is the most precise but is available only to a few vertebrates. In contrast, motion parallax is used by many species including vertebrates as well as invertebrates. In a few cases, accommodation or chromatic aberration is used for depth measurements. In motion vision the temporal resolution of the eye is most important. The flicker fusion frequency correlates in vertebrates with metabolic turnover and body temperature but also has very high values in insects. Apart from that, the flicker fusion frequency generally declines with increasing body weight. Compared to animals the performance of the visual system in humans is among the best regarding light sensitivity, is the best regarding depth resolution and in the middle range regarding motion resolution.
Rogers' pattern manifestations and health in adolescents.
Yarcheski, A; Mahon, N E
1995-08-01
The purpose of this exploratory study was to examine four manifestations of human-environmental field patterning--human field motion, human field rhythms, creativity, and sentience--in relation to perceived health status in 106 early, 111 middle, and 113 late adolescents. Participants responded to the Perceived Field Motion Instrument (a measure of human field motion), the Human Field Rhythms Scale, the Sentience Scale, the General Health Rating Index (a measure of perceived health status), and a brief demographic data sheet in classroom settings. Data were analyzed using Pearson correlations. Statistically significant positive correlations were found between perceived field motion and perceived health status in early, middle, and late adolescents, between human field rhythms and perceived health status in late adolescents only, and between creativity and perceived health status in late adolescents only. The inverse relationship found between sentience and perceived health status in early, middle, and late adolescents was not statistically significant. The findings are interpreted within a Rogerian framework.
Investigation of anti-motion sickness drugs in the squirrel monkey
NASA Technical Reports Server (NTRS)
Cheung, B. S.; Money, K. E.; Kohl, R. L.; Kinter, L. B.
1992-01-01
Early attempts to develop an animal model for anti-motion sickness drugs, using dogs and cats, were unsuccessful. Dogs did not show a beneficial effect of scopolamine (probably the best single anti-motion sickness drug for humans thus far) and the findings in cats were not definitive. The authors have developed an animal model using the squirrel monkey (Saimiri sciureus) of the Bolivian phenotype. Unrestrained monkeys in a small lucite cage were tested in an apparatus that induces motion sickness by combining vertical oscillation and horizontal rotation in a visually unrestricted laboratory environment. Signs of motion sickness were scored using a rating scale. Ten susceptible monkeys (weighing 800-1000 g) were given a total of five tests each, to establish the baseline susceptibility level. Based on the anticholinergic activity of scopolamine, the sensitivity of the squirrel monkey to scopolamine was investigated, and the appropriate dose of scopolamine for this species was determined. Then various anti-motion sickness preparations were administered in subsequent tests: 100 µg scopolamine per monkey; 140 µg dexedrine; 50 µg scopolamine plus 70 µg dexedrine; 100 µg scopolamine plus 140 µg dexedrine; 3 mg promethazine; 3 mg promethazine plus 3 mg ephedrine. All these preparations were significantly effective in preventing motion sickness in the monkeys. Ephedrine, by itself, which is marginally effective in humans, was ineffective in the monkeys at the doses tried (0.3-6.0 mg). The squirrel monkey appears to be a good animal model for anti-motion sickness drugs. Peripherally acting antihistamines such as astemizole and terfenadine were found to be ineffective, whereas flunarizine, and an arginine vasopressin V1 antagonist, showed significant activity in preventing motion sickness.
Jeon, Hyungkook; Hong, Seong Kyung; Kim, Min Seo; Cho, Seong J; Lim, Geunbae
2017-12-06
Here, we report an omni-purpose stretchable strain sensor (OPSS sensor) based on a nanocracking structure for monitoring whole-body motions including both joint-level and skin-level motions. By controlling and optimizing the nanocracking structure, inspired by the spider sensory system, the OPSS sensor is endowed with both high sensitivity (gauge factor ≈ 30) and a wide working range (strain up to 150%) under great linearity (R 2 = 0.9814) and fast response time (<30 ms). Furthermore, the fabrication process of the OPSS sensor has advantages of being extremely simple, patternable, integrated circuit-compatible, and reliable in terms of reproducibility. Using the OPSS sensor, we detected various human body motions including both moving of joints and subtle deforming of skin such as pulsation. As specific medical applications of the sensor, we also successfully developed a glove-type hand motion detector and a real-time Morse code communication system for patients with general paralysis. Therefore, considering the outstanding sensing performances, great advantages of the fabrication process, and successful results from a variety of practical applications, we believe that the OPSS sensor is a highly suitable strain sensor for whole-body motion monitoring and has potential for a wide range of applications, such as medical robotics and wearable healthcare devices.
2012-08-01
respiratory motions using 4D tagged magnetic resonance imaging (MRI) data and 4D high-resolution respiratory-gated CT data, respectively. […] T2-weighted images were acquired after 24 hours with a 3T MRI scanner using a turbo spin-echo sequence.
Non-actual motion: phenomenological analysis and linguistic evidence.
Blomberg, Johan; Zlatev, Jordan
2015-09-01
Sentences with motion verbs describing static situations have been seen as evidence that language and cognition are geared toward dynamism and change (Talmy in Toward a cognitive semantics, MIT Press, Cambridge, 2000; Langacker in Concept, image, and symbol: the cognitive basis of grammar, Mouton de Gruyter, Berlin and New York, 1990). Different concepts have been used in the literature, e.g., fictive motion, subjective motion and abstract motion to denote this. Based on phenomenological analysis, we reinterpret such concepts as reflecting different motivations for the use of such constructions (Blomberg and Zlatev in Phenom Cogn Sci 13(3):395-418, 2014). To highlight the multifaceted character of the phenomenon, we propose the concept non-actual motion (NAM), which we argue is more compatible with the situated cognition approach than explanations such as "mental simulation" (e.g., Matlock in Studies in linguistic motivation, Mouton de Gruyter, Berlin, 2004). We investigate the expression of NAM by means of a picture-based elicitation task with speakers of Swedish, French and Thai. Pictures represented figures that either afford human motion or not (±afford); crossed with this, the figure extended either across the picture from a third-person perspective (3 pp) or from a first-person perspective (1 pp). All picture types elicited NAM-sentences with the combination [+afford, 1 pp] producing most NAM-sentences in all three languages. NAM-descriptions also conformed to language-specific patterns for the expression of actual motion. We conclude that NAM shows interaction between pre-linguistic motivations and language-specific conventions.
Rantalainen, Timo; Chivers, Paola; Beck, Belinda R; Robertson, Sam; Hart, Nicolas H; Nimphius, Sophia; Weeks, Benjamin K; McIntyre, Fleur; Hands, Beth; Siafarikas, Aris
Most imaging methods, including peripheral quantitative computed tomography (pQCT), are susceptible to motion artifacts particularly in fidgety pediatric populations. Methods currently used to address motion artifact include manual screening (visual inspection) and objective assessments of the scans. However, previously reported objective methods either cannot be applied on the reconstructed image or have not been tested for distal bone sites. Therefore, the purpose of the present study was to develop and validate motion artifact classifiers to quantify motion artifact in pQCT scans. Whether textural features could provide adequate motion artifact classification performance in 2 adolescent datasets with pQCT scans from tibial and radial diaphyses and epiphyses was tested. The first dataset was split into training (66% of sample) and validation (33% of sample) datasets. Visual classification was used as the ground truth. Moderate to substantial classification performance (J48 classifier, kappa coefficients from 0.57 to 0.80) was observed in the validation dataset with the novel texture-based classifier. In applying the same classifier to the second cross-sectional dataset, a slight-to-fair (κ = 0.01-0.39) classification performance was observed. Overall, this novel textural analysis-based classifier provided a moderate-to-substantial classification of motion artifact when the classifier was specifically trained for the measurement device and population. Classification based on textural features may be used to prescreen obviously acceptable and unacceptable scans, with a subsequent human-operated visual classification of any remaining scans. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
Badachhape, Andrew A; Okamoto, Ruth J; Johnson, Curtis L; Bayly, Philip V
2018-05-17
The objective of this study was to characterize the relationships between motion in the scalp, skull, and brain. In vivo estimates of motion transmission from the skull to the brain may illuminate the mechanics of traumatic brain injury. Because of challenges in directly sensing skull motion, it is useful to know how well motion of soft tissue of the head, i.e., the scalp, can approximate skull motion or predict brain tissue deformation. In this study, motion of the scalp and brain were measured using magnetic resonance elastography (MRE) and separated into components due to rigid-body displacement and dynamic deformation. Displacement estimates in the scalp were calculated using low motion-encoding gradient strength in order to reduce "phase wrapping" (an ambiguity in displacement estimates caused by the 2 π-periodicity of MRE phase contrast). MRE estimates of scalp and brain motion were compared to skull motion estimated from three tri-axial accelerometers. Comparison of the relative amplitudes and phases of harmonic motion in the scalp, skull, and brain of six human subjects indicate that data from scalp-based sensors should be used with caution to estimate skull kinematics, but that fairly consistent relationships exist between scalp, skull, and brain motion. In addition, the measured amplitude and phase relationships of scalp, skull, and brain can be used to evaluate and improve mathematical models of head biomechanics. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, Ying; Zhao, Yunong; Wang, Yang; Guo, Xiaohui; Zhang, Yangyang; Liu, Ping; Liu, Caixia; Zhang, Yugang
2018-03-01
Strain sensors used as flexible and wearable electronic devices have improved prospects in the fields of artificial skin, robotics, human-machine interfaces, and healthcare. This work introduces a highly stretchable fiber-based strain sensor with a laminated structure made up of a graphene nanoplatelet layer and a carbon black/single-walled carbon nanotube synergetic conductive network layer. An ultrathin, flexible, and elastic two-layer polyurethane (PU) yarn substrate was successively deposited by a novel chemical bonding-based layered dip-coating process. These strain sensors demonstrated high stretchability (~350%), little hysteresis, and long-term durability (over 2400 cycles) due to the favorable tensile properties of the PU substrate. The linearity of the strain sensor could reach an adjusted R-squared of 0.990 at 100% strain, which is better than most recently reported strain sensors. Meanwhile, the strain sensor exhibited good sensitivity, rapid response, and a low detection limit. The low detection limit benefited from the hydrogen bond-assisted laminated structure and continuous conductive path. Finally, a series of experiments was carried out based on the special features of the PU strain sensor to show its capacity to detect and monitor tiny human motions.
Human torso phantom for imaging of heart with realistic modes of cardiac and respiratory motion
Boutchko, Rostyslav; Balakrishnan, Karthikayan; Gullberg, Grant T; O'Neil, James P
2013-09-17
A human torso phantom and its construction, wherein the phantom mimics respiratory and cardiac cycles in a human allowing acquisition of medical imaging data under conditions simulating patient cardiac and respiratory motion.
Developing Educational Computer Animation Based on Human Personality Types
ERIC Educational Resources Information Center
Musa, Sajid; Ziatdinov, Rushan; Sozcu, Omer Faruk; Griffiths, Carol
2015-01-01
Computer animation in the past decade has become one of the most noticeable features of technology-based learning environments. By its definition, it refers to simulated motion pictures showing movement of drawn objects, and is often defined as the art in movement. Its educational application known as educational computer animation is considered…
A Kinect-Based Assessment System for Smart Classroom
ERIC Educational Resources Information Center
Kumara, W. G. C. W.; Wattanachote, Kanoksak; Battulga, Batbaatar; Shih, Timothy K.; Hwang, Wu-Yuin
2015-01-01
With the advancements of the human computer interaction field, nowadays it is possible for the users to use their body motions, such as swiping, pushing and moving, to interact with the content of computers or smart phones without traditional input devices like mouse and keyboard. With the introduction of gesture-based interface Kinect from…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, Y; Fan, Z; Yang, W
Purpose: 4D-CT is often limited by motion artifacts, low temporal resolution, and poor phase-based target definition. We recently developed a novel k-space self-gated 4D-MRI technique with high spatial and temporal resolution. The goal here is to geometrically validate 4D-MRI using an MRI-CT compatible respiratory motion phantom and comparison to 4D-CT. Methods: 4D-MRI was acquired using 3T spoiled gradient echo-based 3D projection sequences. Respiratory phases were resolved using self-gated k-space lines as the motion surrogate. Images were reconstructed into 10 temporal bins with 1.56×1.56×1.56 mm³. An MRI-CT compatible phantom was designed with a 23 mm diameter ball target filled with high-concentration gadolinium (Gd) gel embedded in a 35×40×63 mm³ plastic box stabilized with low-concentration Gd gel. The whole phantom was driven by an air pump. Human respiratory motion was mimicked using the controller from a commercial dynamic phantom (RSD). Four breathing settings (rates/depths: 10s/20mm, 6s/15mm, 4s/10mm, 3s/7mm) were scanned with 4D-MRI and 4D-CT (slice thickness 1.25 mm). Motion ground-truth was obtained from input signals and real-time video recordings. Reconstructed images were imported into Eclipse (Varian) for target contouring. Volumes and target positions were compared with ground-truth. An initial human study was performed on a liver patient. Results: 4D-MRI and 4D-CT scans for the different breathing cycles were reconstructed with 10 phases. Target volume in each phase was measured for both 4D-CT and 4D-MRI. Volume percentage difference for the 6.37 ml target ranged from 6.67±5.33 to 11.63±5.57 for 4D-CT and from 1.47±0.52 to 2.12±1.60 for 4D-MRI. The Mann-Whitney U-test shows that 4D-MRI is significantly superior to 4D-CT (p=0.021) for phase-based target definition. Centroid motion error ranges were 1.35–1.25mm (4D-CT) and 0.31–0.12mm (4D-MRI).
Conclusion: The k-space self-gated 4D-MRI we recently developed can accurately determine phase-based target volume while avoiding typical motion artifacts found in 4D-CT, and is being further studied for use in GI targeting and motion management. This work was supported in part by grant 1R03CA173273-01.
An ultrasensitive strain sensor with a wide strain range based on graphene armour scales.
Yang, Yi-Fan; Tao, Lu-Qi; Pang, Yu; Tian, He; Ju, Zhen-Yi; Wu, Xiao-Ming; Yang, Yi; Ren, Tian-Ling
2018-06-12
An ultrasensitive strain sensor with a wide strain range based on graphene armour scales is demonstrated in this paper. The sensor shows an ultra-high gauge factor (GF, up to 1054) and a wide strain range (ε = 26%), both of which present an advantage compared to most other flexible sensors. Moreover, the sensor is developed by a simple fabrication process. Due to the excellent performance, this strain sensor can meet the demands of subtle, large and complex human motion monitoring, which indicates its tremendous application potential in health monitoring, mechanical control, real-time motion monitoring and so on.
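The gauge factor quoted above relates fractional resistance change to applied strain, GF = (ΔR/R₀)/ε. A minimal sketch of that calculation (the resistance values below are illustrative assumptions, not measurements from the paper):

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (dR/R0) / strain for a resistive strain sensor."""
    return ((r - r0) / r0) / strain

# Illustrative values only: what a GF of 1054 implies at 2% strain
r0 = 100.0                       # hypothetical unstrained resistance, ohms
strain = 0.02                    # 2% applied strain
r = r0 * (1 + 1054 * strain)     # resistance implied by GF = 1054
print(gauge_factor(r0, r, strain))  # ≈ 1054.0
```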
Modelling of the Human Knee Joint Supported by Active Orthosis
NASA Astrophysics Data System (ADS)
Musalimov, V.; Monahov, Y.; Tamre, M.; Rõbak, D.; Sivitski, A.; Aryassov, G.; Penkov, I.
2018-02-01
The article discusses the motion of a healthy knee joint in the sagittal plane and the motion of an injured knee joint supported by an active orthosis. A kinematic scheme of a mechanism for the simulation of knee joint motion is developed, and the motions of healthy and injured knee joints are modelled in Matlab. The angles between the links, which simulate the femur and tibia, are controlled by a Simulink Model Predictive Control (MPC) block. The simulation results have been compared with several samples of real human knee joint motion obtained from motion capture systems. On the basis of these analyses, and of the analysis of the forces created in the human lower limbs during motion, an active smart orthosis is developed. The orthosis design was optimized to achieve an energy-saving system with a good anatomical fit, the necessary reliability, ease of use, and low cost. With the orthosis it is possible to unload the knee joint and to partially or fully compensate for the muscle forces required to bend the lower limb.
A unified probabilistic framework for spontaneous facial action modeling and understanding.
Tong, Yan; Chen, Jixu; Ji, Qiang
2010-02-01
Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the Dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
An EMG-based robot control scheme robust to time-varying EMG signal features.
Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J
2010-05-01
Human-robot control interfaces have received increased attention during the past decades. With the introduction of robots in everyday life, especially in providing services to people with special needs (i.e., elderly, people with impairments, or people with disabilities), there is a strong necessity for simple and natural control interfaces. In this paper, electromyographic (EMG) signals from muscles of the human upper limb are used as the control interface between the user and a robot arm. EMG signals are recorded using surface EMG electrodes placed on the user's skin, making the user's upper limb free of bulky interface sensors or machinery usually found in conventional human-controlled systems. The proposed interface allows the user to control in real time an anthropomorphic robot arm in 3-D space, using upper limb motion estimates based only on EMG recordings. Moreover, the proposed interface is robust to EMG changes with respect to time, mainly caused by muscle fatigue or adjustments of contraction level. The efficiency of the method is assessed through real-time experiments, including random arm motions in the 3-D space with variable hand speed profiles.
Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah
2015-01-01
The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent of which is Particle Swarm Optimization (PSO). However, classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting tracking accuracy. To overcome these drawbacks, we have developed a method based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem in which the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both silhouette and edge likelihoods are used in the fitness function. Experiments using the Brown and HumanEva-II datasets demonstrated that H-MCPSO performs better than two leading alternative approaches: the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support these claims.
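The tracker above searches a 34-dimensional pose space by swarm optimization. A minimal global-best PSO sketch on a toy objective illustrates the basic mechanism that the hierarchical multi-swarm variant builds on (the sphere objective, bounds, and coefficients are illustrative stand-ins, not the paper's fitness or settings):

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    pbest_f = [fitness(p) for p in pos]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy objective: sphere function, global minimum 0 at the origin
best, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_f)  # converges near 0 on this toy problem
```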
Human Motion Capture Data Tailored Transform Coding.
Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He
2015-07-01
Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
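The core idea above, computing a data-dependent orthogonal basis per clip and coding quantized coefficients, can be sketched with a truncated SVD standing in for the paper's basis computation (entropy coding is omitted, and the toy clip and quantization step are illustrative assumptions):

```python
import numpy as np

def transform_code(clip, k, step=0.05):
    """Sketch of data-dependent transform coding for one mocap clip.

    clip : (frames x channels) matrix of joint values.
    A truncated SVD supplies the data-dependent orthogonal basis; the
    top-k coefficients are uniformly quantized (entropy coding omitted).
    """
    u, s, vt = np.linalg.svd(clip, full_matrices=False)
    coeffs = u[:, :k] * s[:k]          # transform coefficients per frame
    q = np.round(coeffs / step)        # uniform quantization
    recon = (q * step) @ vt[:k]        # dequantize and inverse-transform
    return q, vt[:k], recon

# Toy "motion": smooth sinusoidal channels, well captured by a few bases
t = np.linspace(0, 2 * np.pi, 100)
clip = np.stack([np.sin(t), np.cos(t), np.sin(2 * t)], axis=1)
q, basis, recon = transform_code(clip, k=3)
print(np.max(np.abs(clip - recon)))  # small, bounded by the quantization step
```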
Elasticity of the living abdominal wall in laparoscopic surgery.
Song, Chengli; Alijani, Afshin; Frank, Tim; Hanna, George; Cuschieri, Alfred
2006-01-01
Laparoscopic surgery requires inflation of the abdominal cavity, and this offers a unique opportunity to measure the mechanical properties of the living abdominal wall. We used a motion analysis system to study the abdominal wall motion of 18 patients undergoing laparoscopic surgery, and found that the mean Young's modulus was 27.7 ± 4.5 kPa for males and 21.0 ± 3.7 kPa for females. During inflation, the abdominal wall changed from a cylindrical to a dome shape. The average expansion of the abdominal wall surface was 20%, and a working space of 1.27 × 10⁻³ m³ was created by expansion and reshaping of the abdominal wall and by diaphragmatic movement. For the first time, the elasticity of the human abdominal wall was obtained from patients undergoing laparoscopic surgery, and a 3D simulation model of the human abdominal wall has been developed to analyse the motion pattern in laparoscopic surgery. Based on this study, a mechanical abdominal wall lift and a surgical simulator for safe/ergonomic port placements are under development.
Centralized Networks to Generate Human Body Motions
Vakulenko, Sergei; Radulescu, Ovidiu; Morozov, Ivan
2017-01-01
We consider continuous-time recurrent neural networks as dynamical models for the simulation of human body motions. These networks consist of a few centers and many satellites connected to them. The centers evolve in time as periodical oscillators with different frequencies. The center states define the satellite neurons’ states by a radial basis function (RBF) network. To simulate different motions, we adjust the parameters of the RBF networks. Our network includes a switching module that allows for turning from one motion to another. Simulations show that this model allows us to simulate complicated motions consisting of many different dynamical primitives. We also use the model for learning human body motion from markers’ trajectories. We find that center frequencies can be learned from a small number of markers and can be transferred to other markers, such that our technique seems to be capable of correcting for missing information resulting from sparse control marker settings. PMID:29240694
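The center-satellite architecture described above can be sketched as periodic oscillator centers feeding a Gaussian RBF readout that sets the satellite states (the frequencies, reference points, and weights below are made-up illustrations, not learned values from the paper):

```python
import math

def rbf_satellites(t, freqs, centers_ref, weights, width=1.0):
    """Sketch of the center-satellite scheme: oscillator centers drive
    satellite neurons through a radial basis function (RBF) readout.

    freqs       : oscillation frequency (Hz) of each center
    centers_ref : RBF reference points in center-state space
    weights     : weights[j][i] mixes RBF activation i into satellite j
    """
    # Center states: periodic oscillators with distinct frequencies
    c = [math.sin(2 * math.pi * f * t) for f in freqs]
    # Gaussian RBF activations around each reference point
    acts = [math.exp(-sum((ci - ri) ** 2 for ci, ri in zip(c, ref)) / width ** 2)
            for ref in centers_ref]
    # Satellite states are weighted sums of RBF activations
    return [sum(w * a for w, a in zip(ws, acts)) for ws in weights]

sat = rbf_satellites(t=0.25, freqs=[1.0, 2.0],
                     centers_ref=[[1.0, 0.0], [0.0, 1.0]],
                     weights=[[1.0, 0.0], [0.5, 0.5]])
print(sat)
```

Switching motions then amounts to swapping in a different set of RBF weights, which is the role of the switching module mentioned in the abstract.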
The neurophysiology of biological motion perception in schizophrenia
Jahshan, Carol; Wynn, Jonathan K; Mathis, Kristopher I; Green, Michael F
2015-01-01
Introduction The ability to recognize human biological motion is a fundamental aspect of social cognition that is impaired in people with schizophrenia. However, little is known about the neural substrates of impaired biological motion perception in schizophrenia. In the current study, we assessed event-related potentials (ERPs) to human and nonhuman movement in schizophrenia. Methods Twenty-four subjects with schizophrenia and 18 healthy controls completed a biological motion task while their electroencephalography (EEG) was simultaneously recorded. Subjects watched clips of point-light animations containing 100%, 85%, or 70% biological motion, and were asked to decide whether the clip resembled human or nonhuman movement. Three ERPs were examined: P1, N1, and the late positive potential (LPP). Results Behaviorally, schizophrenia subjects identified significantly fewer stimuli as human movement compared to healthy controls in the 100% and 85% conditions. At the neural level, P1 was reduced in the schizophrenia group but did not differ among conditions in either group. There were no group differences in N1 but both groups had the largest N1 in the 70% condition. There was a condition × group interaction for the LPP: Healthy controls had a larger LPP to 100% versus 85% and 70% biological motion; there was no difference among conditions in schizophrenia subjects. Conclusions Consistent with previous findings, schizophrenia subjects were impaired in their ability to recognize biological motion. The EEG results showed that biological motion did not influence the earliest stage of visual processing (P1). Although schizophrenia subjects showed the same pattern of N1 results relative to healthy controls, they were impaired at a later stage (LPP), reflecting a dysfunction in the identification of human form in biological versus nonbiological motion stimuli. PMID:25722951
Creating stimuli for the study of biological-motion perception.
Dekeyser, Mathias; Verfaillie, Karl; Vanrie, Jan
2002-08-01
In the perception of biological motion, the stimulus information is confined to a small number of lights attached to the major joints of a moving person. Despite this drastic degradation of the stimulus information, the human visual apparatus organizes the swarm of moving dots into a vivid percept of a moving biological creature. Several techniques have been proposed to create point-light stimuli: placing dots at strategic locations on photographs or films, video recording a person with markers attached to the body, computer animation based on artificial synthesis, and computer animation based on motion-capture data. A description is given of the technique we are currently using in our laboratory to produce animated point-light figures. The technique is based on a combination of motion capture and three-dimensional animation software (Character Studio, Autodesk, Inc., 1998). Some of the advantages of our approach are that the same actions can be shown from any viewpoint, that point-light versions, as well as versions with a full-fleshed character, can be created of the same actions, and that point lights can indicate the center of a joint (thereby eliminating several disadvantages associated with other techniques).
Sensing human physiological response using wearable carbon nanotube-based fabrics
NASA Astrophysics Data System (ADS)
Wang, Long; Loh, Kenneth J.; Koo, Helen S.
2016-04-01
Flexible and wearable sensors for human monitoring have received increased attention. Besides detecting motion and physical activity, measuring human vital signals (e.g., respiration rate and body temperature) provides rich data for assessing a subject's physiological or psychological condition. Instead of using conventional, bulky sensing transducers, the objective of this study was to design and test a wearable, fabric-like sensing system. In particular, multi-walled carbon nanotube (MWCNT)-latex thin films of different MWCNT concentrations were first fabricated using spray coating. Freestanding MWCNT-latex films were then sandwiched between two layers of flexible fabric using iron-on adhesive to form the wearable sensor. Second, to characterize their strain sensing properties, the fabric sensors were subjected to uniaxial and cyclic tensile load tests, and they exhibited relatively stable electromechanical responses. Finally, the wearable sensors were placed on a human subject for monitoring simple motions and for validating their practical strain sensing performance. Overall, the wearable fabric sensor design exhibited advantages such as flexibility, ease of fabrication, light weight, low cost, noninvasiveness, and user comfort.
Experimental evaluation of a system for human life detection under debris
NASA Astrophysics Data System (ADS)
Joju, Reshma; Konica, Pimplapure Ramya T.; Alex, Zachariah C.
2017-11-01
Finding human beings trapped under debris, or behind walls in military applications, is difficult. Several rescue techniques, such as robotic systems, optical devices, and acoustic devices, have therefore been used, but these systems fail if the victim is unconscious. We conducted an experimental analysis of whether microwaves can detect the heartbeat and breathing signals of human beings trapped under collapsed debris. For our analysis we used a RADAR based on the Doppler shift effect. We calculated the minimum speed that the RADAR could detect. We measured the frequency variation by placing the RADAR at a fixed position and placing a moving object at different distances, and again by using objects of different materials as debris behind which the motion was made. Graphs of the different analyses were plotted.
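A continuous-wave Doppler radar of this kind senses target velocity through the two-way Doppler shift f_d = 2·v·f₀/c. A minimal sketch of the relation (the carrier frequency and chest-wall speed below are illustrative assumptions, not values from the experiment):

```python
def doppler_shift(v, f0, c=3.0e8):
    """Two-way Doppler shift (Hz) for a target moving at v m/s
    toward a continuous-wave radar with carrier frequency f0 (Hz)."""
    return 2.0 * v * f0 / c

# Chest-wall motion during breathing: ~5 mm/s at a 10 GHz carrier (illustrative)
print(doppler_shift(0.005, 10e9))  # ≈ 0.33 Hz
```

The sub-hertz shifts produced by breathing and heartbeat are why the minimum detectable speed of the radar matters in this setting.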
Musculoskeletal Simulation Model Generation from MRI Data Sets and Motion Capture Data
NASA Astrophysics Data System (ADS)
Schmid, Jérôme; Sandholm, Anders; Chung, François; Thalmann, Daniel; Delingette, Hervé; Magnenat-Thalmann, Nadia
Today computer models and computer simulations of the musculoskeletal system are widely used to study the mechanisms behind human gait and its disorders. The common way of creating musculoskeletal models is to use a generic musculoskeletal model based on data derived from anatomical and biomechanical studies of cadaverous specimens. To adapt this generic model to a specific subject, the usual approach is to scale it. This scaling has been reported to introduce several errors because it does not always account for subject-specific anatomical differences. As a result, a novel semi-automatic workflow is proposed that creates subject-specific musculoskeletal models from magnetic resonance imaging (MRI) data sets and motion capture data. Based on subject-specific medical data and a model-based automatic segmentation approach, an accurate modeling of the anatomy can be produced while avoiding the scaling operation. This anatomical model coupled with motion capture data, joint kinematics information, and muscle-tendon actuators is finally used to create a subject-specific musculoskeletal model.
Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning
Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron
2015-01-01
Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure- and patient-specific basis. In this paper, we present an algorithm that generates sets of concentric tube robot designs that can collectively maximize the reachable percentage of a given goal region in the human body. Our algorithm combines a search in the design space of a concentric tube robot using a global optimization method with a sampling-based motion planner in the robot’s configuration space in order to find sets of designs that enable motions to goal regions while avoiding contact with anatomical obstacles. We demonstrate the effectiveness of our algorithm in a simulated scenario based on lung anatomy. PMID:26951790
Rafii-Tari, Hedyeh; Liu, Jindong; Payne, Christopher J; Bicknell, Colin; Yang, Guang-Zhong
2014-01-01
Despite increased use of remote-controlled steerable catheter navigation systems for endovascular intervention, most current designs are based on master configurations which tend to alter natural operator tool interactions. This introduces problems to both ergonomics and shared human-robot control. This paper proposes a novel cooperative robotic catheterization system based on learning-from-demonstration. By encoding the higher-level structure of a catheterization task as a sequence of primitive motions, we demonstrate how to achieve prospective learning for complex tasks whilst incorporating subject-specific variations. A hierarchical Hidden Markov Model is used to model each movement primitive as well as their sequential relationship. This model is applied to generation of motion sequences, recognition of operator input, and prediction of future movements for the robot. The framework is validated by comparing catheter tip motions against the manual approach, showing significant improvements in the quality of catheterization. The results motivate the design of collaborative robotic systems that are intuitive to use, while reducing the cognitive workload of the operator.
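The primitive-sequence recognition step described above can be illustrated with a plain (non-hierarchical) HMM decoded by the Viterbi algorithm. The two primitives, probabilities, and observation stream below are invented for illustration; the paper models real catheter data with a hierarchical HMM.

```python
import numpy as np

# Hedged sketch: Viterbi decoding of a primitive sequence from an HMM.
def viterbi(obs, pi, A, B):
    """obs: observation indices; pi: initial, A: transition, B: emission probs."""
    logp = np.log(pi) + np.log(B[:, obs[0]])
    back = []
    for o in obs[1:]:
        trans = logp[:, None] + np.log(A)   # score of each state->state move
        back.append(trans.argmax(axis=0))   # best predecessor per state
        logp = trans.max(axis=0) + np.log(B[:, o])
    path = [int(logp.argmax())]
    for bp in reversed(back):               # trace the best path backwards
        path.append(int(bp[path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])                  # states: 0 = "advance", 1 = "rotate"
A  = np.array([[0.8, 0.2], [0.3, 0.7]])    # primitives tend to persist
B  = np.array([[0.9, 0.1], [0.2, 0.8]])    # obs 0 ~ advance, obs 1 ~ rotate
print(viterbi([0, 0, 1, 1, 1], pi, A, B))  # [0, 0, 1, 1, 1]
```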
NASA Astrophysics Data System (ADS)
Westfeld, Patrick; Maas, Hans-Gerd; Bringmann, Oliver; Gröllich, Daniel; Schmauder, Martin
2013-11-01
The paper shows techniques for the determination of structured motion parameters from range camera image sequences. The core contribution of the work presented here is the development of an integrated least squares 3D tracking approach based on amplitude and range image sequences to calculate dense 3D motion vector fields. Geometric primitives of a human body model are fitted to time series of range camera point clouds using these vector fields as additional information. Body poses and motion information for individual body parts are derived from the model fit. On the basis of these pose and motion parameters, critical body postures are detected. The primary aim of the study is to automate ergonomic studies for risk assessments regulated by law, identifying harmful movements and awkward body postures in a workplace.
Motion-oriented 3D analysis of body measurements
NASA Astrophysics Data System (ADS)
Loercher, C.; Morlock, S.; Schenk, A.
2017-10-01
The aim of this project is to develop an ergonomically based and motion-oriented size system. New concepts are required to deal competently with the complex requirements of function-oriented workwear and personal protective equipment (PPE). Body dimensions change with movement, and these changes are the basis for motion-optimized clothing development; they affect both fit and ergonomic comfort. The situation has to be researched fundamentally in order to derive well-founded anthropometric body data that take into account the kinematic requirements of humans, and to define functional dimensions for the clothing industry. The research focus shall be on the ergonomic design of workwear and PPE. There are large differences in body forms, proportions, and muscle development between genders. As a result, an improved knowledge base can be provided, supporting both the development and the sales of motion-oriented, well-fitting clothing for garment manufacturers.
A restrained-torque-based motion instructor: forearm flexion/extension-driving exoskeleton
NASA Astrophysics Data System (ADS)
Nishimura, Takuya; Nomura, Yoshihiko; Sakamoto, Ryota
2013-01-01
When learning complicated movements by ourselves, we encounter problems such as self-confirmation: relying only on our own judgment lacks detail and objectivity, and may cause us to miss, or even distort, the essence of a motion. Thus, we sometimes fall into the habit of performing inappropriate motions. To solve these problems, or to alleviate them as much as possible, we have been developing mechanical human-machine interfaces to support the learning of motions such as cultural gestures and sports forms. One of the promising interfaces is a wearable exoskeleton mechanical system. As a first try, we have made a prototype of a 2-link, 1-DOF rotational elbow-joint interface applied to teaching forearm extension-flexion operations, and have found its potential for teaching the initiation and continuation of elbow flexion motions.
A 4DCT imaging-based breathing lung model with relative hysteresis
Miyawaki, Shinjiro; Choi, Sanghun; Hoffman, Eric A.; Lin, Ching-Long
2016-01-01
To reproduce realistic airway motion and airflow, the authors developed a deforming lung computational fluid dynamics (CFD) model based on four-dimensional (4D, space and time) dynamic computed tomography (CT) images. A total of 13 time points within controlled tidal volume respiration were used to account for realistic and irregular lung motion in human volunteers. Because of the irregular motion of 4DCT-based airways, we identified an optimal interpolation method for airway surface deformation during respiration, and implemented a computational solid mechanics-based moving mesh algorithm to produce smooth deforming airway mesh. In addition, we developed physiologically realistic airflow boundary conditions for both models based on multiple images and a single image. Furthermore, we examined simplified models based on one or two dynamic or static images. By comparing these simplified models with the model based on 13 dynamic images, we investigated the effects of relative hysteresis of lung structure with respect to lung volume, lung deformation, and imaging methods, i.e., dynamic vs. static scans, on CFD-predicted pressure drop. The effect of imaging method on pressure drop was 24 percentage points due to the differences in airflow distribution and airway geometry. PMID:28260811
NASA Astrophysics Data System (ADS)
Kaida, Yukiko; Murakami, Toshiyuki
A wheelchair is an important mobility apparatus for people with disabilities. Power-assist motion in an electric wheelchair expands the operator's field of activities. This paper describes force-sensorless detection of the human input torque: a reaction torque estimation observer first calculates the total disturbance torque, and the human input torque is then extracted from the estimated disturbance. In power-assist motion, the assist torque is synthesized as the product of an assist gain and the average of the right and left input torques. Finally, the proposed method is verified through experiments of power-assist motion.
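The assist-torque synthesis described above reduces to a small computation. A minimal sketch, assuming a simple modeled-friction subtraction and an illustrative assist gain (both hypothetical, not from the paper):

```python
# Sketch of the scheme above: the human input torque is taken as the
# disturbance-observer estimate minus modeled friction/gravity terms, and the
# assist torque is the assist gain times the average of the right- and
# left-wheel human inputs. Gain and friction values are hypothetical.
def human_torque(estimated_disturbance, modeled_friction):
    """Extract the human input from the observer's total disturbance estimate."""
    return estimated_disturbance - modeled_friction

def assist_torque(tau_right, tau_left, assist_gain=1.5):
    """Assist torque = gain x average of right/left human input torques."""
    return assist_gain * 0.5 * (tau_right + tau_left)

tau_r = human_torque(estimated_disturbance=2.4, modeled_friction=0.4)  # ~2.0 Nm
tau_l = human_torque(estimated_disturbance=1.6, modeled_friction=0.6)  # ~1.0 Nm
print(assist_torque(tau_r, tau_l))  # ~2.25 Nm
```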
NASA Astrophysics Data System (ADS)
Jeon, S. M.; Jang, G. H.; Choi, H. C.; Park, S. H.; Park, J. O.
2012-04-01
Different magnetic navigation systems (MNSs) have been investigated for the wireless manipulation of microrobots in human blood vessels. Here we propose an MNS and a methodology for generating both precise helical and translational motions of a microrobot to improve its maneuverability in complex human blood vessels. We then present experiments demonstrating the helical and translational motions of a spiral-type microrobot to verify the proposed MNS.
Evaluating Suit Fit Using Performance Degradation
NASA Technical Reports Server (NTRS)
Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar
2011-01-01
The Mark III suit has multiple sizes of suit components (arm, leg, and gloves) as well as sizing inserts to tailor the fit of the suit to an individual. This study sought to determine a way to identify the point at which an ideal suit fit transforms into a bad fit, and how to quantify this breakdown using mobility-based physical performance data. The study examined the changes in human physical performance via degradation of the elbow and wrist range of motion of the planetary suit prototype (Mark III) with respect to changes in sizing, as well as how to apply that knowledge to suit sizing options and improvements in suit fit. The methods implemented in this study focused on changes in elbow and wrist mobility due to incremental suit sizing modifications. This incremental sizing was within a range that included both optimum and poor fit. Suited range-of-motion data was collected using a motion analysis system for nine isolated and functional tasks encompassing the elbow and wrist joints. A total of four subjects were tested with motions involving both arms simultaneously as well as the right arm only. The results were then compared across sizing configurations. The results of this study indicate that range of motion may be used as a viable parameter to quantify at what stage suit sizing causes a detriment in performance; however, the human performance decrement appeared to be based on the interaction of multiple joints along a limb, not a single joint angle. The study was able to identify a preliminary method to quantify the impact of size on performance and to develop a means to gauge tolerances around optimal size. More work is needed to improve the assessment of optimal fit and to compensate for multiple joint interactions.
Doppler Radar Vital Signs Detection Method Based on Higher Order Cyclostationary.
Yu, Zhibin; Zhao, Duo; Zhang, Zhiqiang
2017-12-26
Owing to their non-contact nature, Doppler radar sensors for detecting vital signs such as the heart and respiration rates of a human subject are attracting growing attention. However, detection-method research faces many challenges due to electromagnetic interference, clutter, and random motion. In this paper, a novel third-order cyclic cumulant (TOCC) detection method, which is insensitive to Gaussian interference and non-cyclic signals, is proposed to estimate the heart and respiration rates from continuous-wave Doppler radar. The k-th order cyclostationary properties of the radar signal with hidden periodicities and random motions are analyzed, and the third-order cyclostationary detection theory for the heart and respiration rates is studied. Experimental results show that the third-order cyclostationary approach achieves better estimation accuracy when detecting vital signs from the received radar signal under low SNR, strong clutter noise, and random motion interference.
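The core statistic behind this class of detectors can be sketched in a few lines. The estimator below and its robustness to Gaussian interference are standard cyclostationarity results; the respiratory signal model, lags, and frequencies are invented for illustration and are not taken from the paper.

```python
import numpy as np

# For a zero-mean signal, the third-order cyclic cumulant at cycle frequency
# alpha (cycles/sample) and lags (t1, t2) can be estimated as
# (1/N) * sum_n x[n] x[n+t1] x[n+t2] exp(-2j*pi*alpha*n).
# Gaussian noise has vanishing third-order cumulants, which is what makes such
# detectors robust to Gaussian interference.
def third_order_cyclic_cumulant(x, alpha, t1, t2):
    x = x - x.mean()                  # zero mean: third moment = third cumulant
    n = np.arange(len(x) - max(t1, t2))
    prod = x[n] * x[n + t1] * x[n + t2]
    return np.mean(prod * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(0)
fs, f0, N = 100.0, 0.3, 20000                 # 0.3 Hz ~ 18 breaths per minute
t = np.arange(N) / fs
breath = np.maximum(np.sin(2 * np.pi * f0 * t), 0.0)  # asymmetric periodic wave
x = breath + rng.normal(0.0, 1.0, N)                  # buried in Gaussian noise

c_peak = abs(third_order_cyclic_cumulant(x, f0 / fs, 1, 2))
c_off = abs(third_order_cyclic_cumulant(x, 0.7 / fs, 1, 2))
print(c_peak > 2 * c_off)   # the cyclic cumulant peaks at the respiratory rate
```

Note the waveform must be asymmetric (here half-wave rectified): a pure sinusoid has a vanishing third-order cumulant.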
Defense on the Move: Ant-Based Cyber Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fink, Glenn A.; Haack, Jereme N.; McKinnon, Archibald D.
Many common cyber defenses (like firewalls and IDS) are as static as trench warfare, allowing the attacker freedom to probe them at will. The concept of Moving Target Defense (MTD) adds dynamism to the defender side, but puts the systems to be defended themselves in motion, potentially at great cost to the defender. An alternative approach is a mobile resilient defense that removes attackers’ ability to rely on prior experience without requiring motion in the protected infrastructure itself. The defensive technology absorbs most of the cost of motion, is resilient to attack, and is unpredictable to attackers. The Ant-Based Cyber Defense (ABCD) is a mobile resilient defense providing a set of roaming, bio-inspired, digital-ant agents working with stationary agents in a hierarchy headed by a human supervisor. The ABCD approach provides a resilient, extensible, and flexible defense that can scale to large, multi-enterprise infrastructures like the smart electric grid.
Decreased reward value of biological motion among individuals with autistic traits.
Williams, Elin H; Cross, Emily S
2018-02-01
The Social Motivation Theory posits that a reduced sensitivity to the value of social stimuli, specifically faces, can account for social impairments in Autism Spectrum Disorders (ASD). Research has demonstrated that typically developing (TD) individuals preferentially orient towards another type of salient social stimulus, namely biological motion. Individuals with ASD, however, do not show this preference. While the reward value of faces to both TD and ASD individuals has been well-established, the extent to which individuals from these populations also find human motion to be rewarding remains poorly understood. The present study investigated the value assigned to biological motion by TD participants in an effort task, and further examined whether these values differed among individuals with more autistic traits. The results suggest that TD participants value natural human motion more than rigid, machine-like motion or non-human control motion, but this preference is attenuated among individuals reporting more autistic traits. This study provides the first evidence to suggest that individuals with more autistic traits find a broader conceptualisation of social stimuli less rewarding compared to individuals with fewer autistic traits. By quantifying the social reward value of human motion, the present findings contribute an important piece to our understanding of social motivation in individuals with and without social impairments. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
The Responsiveness of Biological Motion Processing Areas to Selective Attention Towards Goals
Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert
2012-01-01
A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas – particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. PMID:22796987
On Inertial Body Tracking in the Presence of Model Calibration Errors
Miezal, Markus; Taetz, Bertram; Bleser, Gabriele
2016-01-01
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. 
I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266
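The "inertial data as control input" motion model compared above can be sketched as direct gyroscope-driven orientation prediction, in which the measured angular rate drives the quaternion update instead of being estimated as part of the state. The (w, x, y, z) convention and the sample rates below are illustrative only.

```python
import numpy as np

# Hedged sketch: quaternion prediction driven by the gyroscope measurement.
def quat_multiply(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def predict_orientation(q, gyro, dt):
    """Integrate the body-frame angular rate (rad/s) over dt into quaternion q."""
    angle = np.linalg.norm(gyro) * dt
    if angle < 1e-12:
        return q
    axis = gyro / np.linalg.norm(gyro)
    dq = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    return quat_multiply(q, dq)

q = np.array([1.0, 0.0, 0.0, 0.0])            # identity orientation
for _ in range(100):                          # 1 s of 90 deg/s roll at 100 Hz
    q = predict_orientation(q, np.array([np.pi / 2, 0.0, 0.0]), 0.01)
print(np.round(q, 3))                         # 90 deg rotation about x
```

In an EKF this prediction would be paired with accelerometer/magnetometer corrections; only the prediction step is shown here.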
Development of real-time motion capture system for 3D on-line games linked with virtual character
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck
2004-10-01
Motion tracking is becoming an essential part of entertainment, medical, sports, education, and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interface devices such as mice, joysticks, and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optic sensors, and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.
Manipulating the content of dynamic natural scenes to characterize response in human MT/MST.
Durant, Szonya; Wall, Matthew B; Zanker, Johannes M
2011-09-09
Optic flow is one of the most important sources of information for enabling human navigation through the world. A striking finding from single-cell studies in monkeys is the rapid saturation of response of MT/MST areas with the density of optic flow type motion information. These results are reflected psychophysically in human perception in the saturation of motion aftereffects. We began by comparing responses to natural optic flow scenes in human visual brain areas to responses to the same scenes with inverted contrast (photo negative). This changes scene familiarity while preserving local motion signals. This manipulation had no effect; however, the response was only correlated with the density of local motion (calculated by a motion correlation model) in V1, not in MT/MST. To further investigate this, we manipulated the visible proportion of natural dynamic scenes and found that areas MT and MST did not increase in response over a 16-fold increase in the amount of information presented, i.e., response had saturated. This makes sense in light of the sparseness of motion information in natural scenes, suggesting that the human brain is well adapted to exploit a small amount of dynamic signal and extract information important for survival.
Contrast and assimilation in motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-09-01
The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
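The two computations identified above, contrast for perception and assimilation for pursuit, can be stated in a few lines, with hypothetical velocities in deg/s:

```python
# Minimal sketch of the two computations described in the abstract.
def perceived_velocity(target, context):
    return target - context          # motion contrast: context is subtracted

def pursuit_velocity(target, context):
    return 0.5 * (target + context)  # motion assimilation: motions are averaged

target, context = 10.0, 4.0          # hypothetical velocities during perturbation
print(perceived_velocity(target, context))  # 6.0
print(pursuit_velocity(target, context))    # 7.0
```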
Correlation between external and internal respiratory motion: a validation study.
Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim
2012-05-01
In motion-compensated image-guided radiotherapy, accurate tracking of the target region is required. This tracking process includes building a correlation model between external surrogate motion and the motion of the target region. A novel correlation method is presented and compared with the commonly used polynomial model. The CyberKnife system (Accuray, Inc., Sunnyvale/CA) uses a polynomial correlation model to relate externally measured surrogate data (optical fibres on the patient's chest emitting red light) to infrequently acquired internal measurements (X-ray data). A new correlation algorithm based on ε-Support Vector Regression (SVR) was developed. Validation and comparison testing were done with human volunteers using live 3D ultrasound and externally measured infrared light-emitting diodes (IR LEDs). Seven data sets (5:03-6:27 min long) were recorded from six volunteers. Polynomial correlation algorithms were compared to the SVR-based algorithm demonstrating an average increase in root mean square (RMS) accuracy of 21.3% (0.4 mm). For three signals, the increase was more than 29% and for one signal as much as 45.6% (corresponding to more than 1.5 mm RMS). Further analysis showed the improvement to be statistically significant. The new SVR-based correlation method outperforms traditional polynomial correlation methods for motion tracking. This method is suitable for clinical implementation and may improve the overall accuracy of targeted radiotherapy.
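A minimal sketch of such an ε-SVR correlation model, assuming scikit-learn's `SVR` as a stand-in for the authors' implementation and entirely synthetic surrogate/internal pairs (the paper fits real optical and ultrasound data):

```python
import numpy as np
from sklearn.svm import SVR

# Hedged sketch: learn the mapping from an external surrogate (e.g. a chest
# marker) to the internal target position from paired samples, then predict
# internal motion from surrogate readings alone. Data and parameters invented.
rng = np.random.default_rng(1)
surrogate = np.sin(2 * np.pi * 0.25 * np.linspace(0, 30, 300))   # chest signal
internal = 12.0 * surrogate + 3.0 * surrogate**3                 # target (mm)
internal = internal + 0.1 * rng.normal(size=internal.shape)      # sensor noise

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(surrogate[:200, None], internal[:200])    # train on paired samples
pred = model.predict(surrogate[200:, None])
rmse = float(np.sqrt(np.mean((pred - internal[200:]) ** 2)))
print(f"hold-out RMS error: {rmse:.2f} mm")
```

A nonlinear surrogate-to-target mapping like this is where ε-SVR can outperform low-order polynomial models; hysteresis (different inhale/exhale paths) would additionally require a time-aware model.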
Dove, Erica; Astell, Arlene J
2017-01-11
The number of people living with dementia and mild cognitive impairment (MCI) is increasing substantially. Although there are many research efforts directed toward the prevention and treatment of dementia and MCI, it is also important to learn more about supporting people to live well with dementia or MCI through cognitive, physical, and leisure means. While past research suggests that technology can be used to support positive aging for people with dementia or MCI, the use of motion-based technology has not been thoroughly explored with this population. The aim of this study was to identify and synthesize the current literature involving the use of motion-based technology for people living with dementia or MCI by identifying themes while noting areas requiring further research. A systematic review of studies involving the use of motion-based technology for human participants living with dementia or MCI was conducted. A total of 31 articles met the inclusion criteria. Five questions are addressed concerning (1) context of use; (2) population included (ie, dementia, MCI, or both); (3) hardware and software selection; (4) use of motion-based technology in a group or individual setting; and (5) details about the introduction, teaching, and support methods applied when using the motion-based technology with people living with dementia or MCI. The findings of this review confirm the potential of motion-based technology to improve the lives of people living with dementia or MCI. The use of this technology also spans across several contexts including cognitive, physical, and leisure; all of which support multidimensional well-being. The literature provides evidence that people living with dementia or MCI can learn how to use this technology and that they enjoy doing so. However, there is a lack of information provided in the literature regarding the introduction, training, and support methods applied when using this form of technology with this population. 
Future research should address the appropriate introduction, teaching, and support required for people living with dementia or MCI to use the motion-based technology. In addition, it is recommended that the diverse needs of these specific end-users be considered in the design and development of this technology. ©Erica Dove, Arlene J Astell. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 11.01.2017.
Evidence for auditory-visual processing specific to biological motion.
Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F
2012-01-01
Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensorial processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modality is specific for biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modality yield longer reaction times in comparison to consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1 this suggests that conflicting auditory-visual motion information of an intact human walker leads to interference and thereby delaying the response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Subashi, E; Yin, F
Purpose: Current retrospective 4D-MRI provides superior tumor-to-tissue contrast and accurate respiratory motion information for radiotherapy motion management. Existing 4D-MRI techniques based on 2D-MRI image sorting require a high frame rate from the MR sequences; however, several MRI sequences provide excellent image quality but have low frame rates. This study aims at developing a novel retrospective 3D k-space sorting 4D-MRI technique using radial k-space acquisition MRI sequences to improve 4D-MRI image quality and temporal resolution for imaging irregular organ/tumor respiratory motion. Methods: The method is based on an RF-spoiled, steady-state, gradient-recalled sequence with minimal echo time. A 3D radial k-space data acquisition trajectory was used for sampling the datasets. Each radial spoke readout data line starts from the 3D center of the field of view. A respiratory signal can be extracted from the k-space center data point of each spoke. The spoke data were sorted based on this self-synchronized respiratory signal using phase sorting. Subsequently, 3D reconstruction was conducted to generate the time-resolved 4D-MRI images. As a feasibility study, this technique was implemented on the digital human phantom XCAT. The respiratory motion was controlled by an irregular motion profile. To validate using k-space center data as a respiratory surrogate, we compared it with the XCAT input controlling breathing profile. Tumor motion trajectories measured on the reconstructed 4D-MRI were compared to the average input trajectory. The mean absolute amplitude difference (D) was calculated. Results: The signal extracted from the k-space center data matches well with the input controlling respiratory profile of XCAT. The relative amplitude error was 8.6% and the relative phase error was 3.5%. XCAT 4D-MRI demonstrated a clear motion pattern with few serration artifacts. D of the tumor trajectories was 0.21 mm, 0.23 mm and 0.23 mm in the SI, AP and ML directions, respectively.
Conclusion: A novel retrospective 3D k-space sorting 4D-MRI technique has been developed and evaluated on a digital human phantom. NIH (1R21CA165384-01A1)
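The self-gated phase-sorting step described above can be sketched on synthetic data (an illustrative reconstruction, not the authors' implementation; the peak detection and binning choices are assumptions):

```python
import numpy as np

def phase_sort_spokes(center_signal, n_phases=10):
    """Sketch of retrospective phase sorting: the k-space-center magnitude
    of each radial spoke serves as a self-gated respiratory surrogate.
    Peaks mark end-inspiration; each spoke receives a phase in [0, 1) from
    its fractional position between neighbouring peaks and is binned."""
    s = np.asarray(center_signal, dtype=float)
    # crude local-maximum detection (assumption: a clean surrogate signal)
    peaks = [i for i in range(1, len(s) - 1)
             if s[i] >= s[i - 1] and s[i] > s[i + 1]]
    bins = np.full(len(s), -1)
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        for i in range(p0, p1):
            phase = (i - p0) / (p1 - p0)
            bins[i] = int(phase * n_phases)
    return bins  # -1 marks spokes outside a complete respiratory cycle

# synthetic surrogate: three respiratory cycles sampled over 300 spokes
t = np.linspace(0, 3 * 2 * np.pi, 300)
bins = phase_sort_spokes(np.cos(t))
```

Each phase bin would then collect its spokes for a separate 3D reconstruction, yielding one volume per respiratory phase.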
An examination of the degrees of freedom of human jaw motion in speech and mastication.
Ostry, D J; Vatikiotis-Bateson, E; Gribble, P L
1997-12-01
The kinematics of human jaw movements were assessed in terms of the three orientation angles and three positions that characterize the motion of the jaw as a rigid body. The analysis focused on the identification of the jaw's independent movement dimensions, and was based on an examination of jaw motion paths that were plotted in various combinations of linear and angular coordinate frames. Overall, both behaviors were characterized by independent motion in four degrees of freedom. In general, when jaw movements were plotted to show orientation in the sagittal plane as a function of horizontal position, relatively straight paths were observed. In speech, the slopes and intercepts of these paths varied depending on the phonetic material. The vertical position of the jaw was observed to shift up or down so as to displace the overall form of the sagittal plane motion path of the jaw. Yaw movements were small but independent of pitch, and vertical and horizontal position. In mastication, the slope and intercept of the relationship between pitch and horizontal position were affected by the type of food and its size. However, the range of variation was less than that observed in speech. When vertical jaw position was plotted as a function of horizontal position, the basic form of the path of the jaw was maintained but could be shifted vertically. In general, larger bolus diameters were associated with lower jaw positions throughout the movement. The timing of pitch and yaw motion differed. The most common pattern involved changes in pitch angle during jaw opening followed by a phase predominated by lateral motion (yaw). Thus, in both behaviors there was evidence of independent motion in pitch, yaw, horizontal position, and vertical position. This is consistent with the idea that motions in these degrees of freedom are independently controlled.
Recurrence plots and recurrence quantification analysis of human motion data
NASA Astrophysics Data System (ADS)
Josiński, Henryk; Michalczuk, Agnieszka; Świtoński, Adam; Szczesna, Agnieszka; Wojciechowski, Konrad
2016-06-01
The authors present exemplary application of recurrence plots, cross recurrence plots and recurrence quantification analysis for the purpose of exploration of experimental time series describing selected aspects of human motion. Time series were extracted from treadmill gait sequences which were recorded in the Human Motion Laboratory (HML) of the Polish-Japanese Academy of Information Technology in Bytom, Poland by means of the Vicon system. Analysis was focused on the time series representing movements of hip, knee, ankle and wrist joints in the sagittal plane.
Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.
Frick, Eric; Rahmatalla, Salam
2018-04-04
The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated ( r > 0.82) with the true, time-varying joint center solution.
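For context, the classic algebraic pivot fit that methods such as SFO improve upon can be sketched in a few lines (a common least-squares baseline, not the paper's SFO method; the marker geometry below is synthetic and noise-free):

```python
import numpy as np

def fit_joint_center(points):
    """Algebraic least-squares pivot fit: find a centre c such that all
    points lie at a constant radius. From |p - c|^2 = r^2 we get the
    linear system 2 p.c + (r^2 - |c|^2) = |p|^2 in unknowns (c, k)."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    rhs = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return sol[:3]  # joint-centre estimate

# synthetic markers on a sphere about a known centre (no soft-tissue artifact)
rng = np.random.default_rng(0)
centre = np.array([0.1, -0.2, 0.3])
dirs = rng.normal(size=(50, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = centre + 0.4 * dirs
est = fit_joint_center(pts)
```

On clean data this recovers the centre exactly; the paper's contribution is precisely that such single-vector fits degrade under soft-tissue artifact, motivating a time-varying estimate.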
Ride quality evaluation. IV - Models of subjective reaction to aircraft motion
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Richards, L. G.
1978-01-01
The paper examines models of human reaction to the motions typically experienced on short-haul aircraft flights. Data are taken on the regularly scheduled flights of four commercial airlines - three airplanes and one helicopter. The data base consists of: (1) a series of motion recordings distributed over each flight, each including all six degrees of freedom of motion; temperature, pressure, and noise are also recorded; (2) ratings of perceived comfort and satisfaction from the passengers on each flight; (3) moment-by-moment comfort ratings from a test subject assigned to each airplane; and (4) overall comfort ratings for each flight from the test subjects. Regression models are obtained for prediction of rated comfort from rms values for six degrees of freedom of motion. It is shown that the model C = 2.1 + 17.1 T + 17.2 V (T = transverse acceleration, V = vertical acceleration) gives a good fit to the airplane data but is less acceptable for the helicopter data.
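The reported regression model is directly computable; a minimal sketch (T and V assumed to be rms accelerations in g, as is conventional for such ride-quality models):

```python
def comfort_rating(transverse_g, vertical_g):
    """Regression model from the study: C = 2.1 + 17.1*T + 17.2*V,
    with T and V the rms transverse and vertical accelerations."""
    return 2.1 + 17.1 * transverse_g + 17.2 * vertical_g

c = comfort_rating(0.05, 0.05)  # -> 3.815 on the comfort scale
```

The near-equal coefficients indicate that transverse and vertical acceleration contribute almost equally to rated discomfort in the airplane data.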
Mapping cardiogenic oscillations using synchrotron-based phase contrast CT imaging
NASA Astrophysics Data System (ADS)
Thurgood, Jordan; Dubsky, Stephen; Siu, Karen K. W.; Wallace, Megan; Siew, Melissa; Hooper, Stuart; Fouras, Andreas
2012-10-01
In many animals, including humans, the lungs encase the majority of the heart; thus the motion of each organ affects the other. The effects of the motion of the heart on the lungs potentially provide information regarding both lung and heart health. We present a novel technique that is capable of measuring the effect of the heart on the surrounding lung tissue through the use of advanced synchrotron imaging techniques and recently developed X-ray velocimetry methods. This technique generates 2D frequency response maps of the lung tissue motion at multiple projection angles from projection X-ray images. These frequency response maps are subsequently used to generate 3D reconstructions of the lung tissue exhibiting motion at the frequency of ventilation and the lung tissue exhibiting motion at the frequency of the heart. This technique has a combined spatial and temporal resolution sufficient to observe the dynamic and complex 3D nature of lung-heart interactions.
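The per-pixel frequency-mapping idea can be sketched with an FFT over an image time series (an illustrative reconstruction on a synthetic sequence, not the authors' velocimetry pipeline; the frame rate and frequencies are assumed):

```python
import numpy as np

def frequency_response_maps(frames, fs, f_vent, f_heart):
    """Per-pixel FFT of an image time series, returning amplitude maps at
    the ventilation and cardiac frequencies (nearest FFT bins; assumes
    the two frequencies are spectrally resolvable)."""
    stack = np.asarray(frames, dtype=float)        # (time, y, x)
    spec = np.abs(np.fft.rfft(stack, axis=0))
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fs)
    vent_bin = int(np.argmin(np.abs(freqs - f_vent)))
    heart_bin = int(np.argmin(np.abs(freqs - f_heart)))
    return spec[vent_bin], spec[heart_bin]

# synthetic: left half oscillates at 0.25 Hz (ventilation), right at 1.25 Hz (cardiac)
t = np.arange(256) / 16.0                          # 16 Hz frame rate, 16 s
frames = np.zeros((256, 4, 4))
frames[:, :, :2] = np.sin(2 * np.pi * 0.25 * t)[:, None, None]
frames[:, :, 2:] = np.sin(2 * np.pi * 1.25 * t)[:, None, None]
vent_map, heart_map = frequency_response_maps(frames, 16.0, 0.25, 1.25)
```

The two amplitude maps separate tissue moving with ventilation from tissue moving with the heartbeat, which is the basis for the 3D reconstructions described above.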
Schwenke, Michael; Georgii, Joachim; Preusser, Tobias
2017-07-01
Focused ultrasound (FUS) is rapidly gaining clinical acceptance for several target tissues in the human body. Yet, treating liver targets is not clinically applied due to the high complexity of the procedure (noninvasiveness, target motion, complex anatomy, blood cooling effects, shielding by ribs, and limited image-based monitoring). To reduce this complexity, numerical FUS simulations can be utilized for both treatment planning and execution. These use cases demand highly accurate and computationally efficient simulations. We propose a numerical method for the simulation of abdominal FUS treatments during respiratory motion of the organs and target. In particular, a novel approach is proposed to simulate the heating during motion by solving Pennes' bioheat equation in a computational reference space, i.e., the equation is mathematically transformed to the reference. The approach allows for motion discontinuities, e.g., the sliding of the liver along the abdominal wall. Implementing the solver completely on the graphics processing unit and combining it with an atlas-based ultrasound simulation approach yields a simulation performance faster than real time (less than 50 s of computing time for 100 s of treatment time) on a modern off-the-shelf laptop. The simulation method is incorporated into a treatment planning demonstration application that allows simulation of real patient cases including respiratory motion. The high performance of the presented simulation method opens the door to clinical applications. The methods bear the potential to enable the application of FUS for moving organs.
Yuan, Hongwei; Poeggel, Sven; Newe, Thomas; Lewis, Elfed; Viphavakit, Charusluk; Leen, Gabriel
2017-03-10
A comprehensive study of the effect of a wide range of controlled human subject motion on photoplethysmographic (PPG) signals is reported. The investigation includes testing of two separate groups of 5 and 18 subjects who were asked to undertake set exercises whilst simultaneously monitoring a wide range of physiological parameters including breathing rate, heart rate and localised blood pressure using commercial clinical sensing systems. The unique finger-mounted PPG probe equipped with miniature three-axis accelerometers for undertaking this investigation was a purpose-built in-house version designed to facilitate reproducible application to a wide range of human subjects and the study of motion. The subjects were required to undertake several motion-based exercises including standing, sitting and lying down and transitions between these states. They were also required to undertake set arm movements including arm-swinging and wrist rotation. A comprehensive set of experimental results corresponding to all motion-inducing exercises has been recorded and analysed, including the baseline (BL) value (DC component) and the amplitude of the oscillation of the PPG. All physiological parameters were also recorded as simultaneous time-varying waveforms. The effects of the motion, and specifically the localised blood pressure (BP), have been studied and related to possible influences of the autonomic nervous system (ANS) and hemodynamic pressure variations. It is envisaged that a comprehensive study of the effect of motion and the localised pressure fluctuations will provide valuable information for the future minimisation of motion artefact effects on the PPG signals of this probe and allow the accurate assessment of total haemoglobin concentration, which is the primary function of the probe.
Neural representations of kinematic laws of motion: evidence for action-perception coupling.
Dayan, Eran; Casile, Antonino; Levit-Binnun, Nava; Giese, Martin A; Hendler, Talma; Flash, Tamar
2007-12-18
Behavioral and modeling studies have established that curved and drawing human hand movements obey the 2/3 power law, which dictates a strong coupling between movement curvature and velocity. Human motion perception seems to reflect this constraint. The functional MRI study reported here demonstrates that the brain's response to this law of motion is much stronger and more widespread than to other types of motion. Compliance with this law is reflected in the activation of a large network of brain areas subserving motor production, visual motion processing, and action observation functions. Hence, these results strongly support the notion of similar neural coding for motion perception and production. These findings suggest that cortical motion representations are optimally tuned to the kinematic and geometrical invariants characterizing biological actions.
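The two-thirds power law itself is a one-line relation between speed and curvature; a minimal sketch (the gain K is an arbitrary illustrative constant):

```python
def speed_two_thirds(curvature, gain=1.0):
    """Tangential speed prescribed by the two-thirds power law,
    v = K * kappa^(-1/3); equivalently, angular velocity
    A = K * C^(2/3), hence the law's name."""
    return gain * curvature ** (-1.0 / 3.0)

# sharper curvature -> slower movement, as in natural drawing
v_flat = speed_two_thirds(1.0)
v_sharp = speed_two_thirds(8.0)
```

Trajectories violating this coupling are exactly the "other types of motion" against which the fMRI responses were contrasted.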
1990-02-07
performance assessment, human intervention, or operator training. Algorithms on different levels are allowed to deal with the world with different degrees...have on the decisions made by the driver are a complex combination of human factors, driving experience, mission objectives, tactics, etc., and...motion. The distinction here is that the decision-making program may not necessarily make its decisions based on the same factors as the human
Human error identification for laparoscopic surgery: Development of a motion economy perspective.
Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong
2015-09-01
This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) to their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Total knee replacement with natural rollback.
Wachowski, Martin Michael; Walde, Tim Alexander; Balcarek, Peter; Schüttrumpf, Jan Philipp; Frosch, Stephan; Stauffenberg, Caspar; Frosch, Karl-Heinz; Fiedler, Christoph; Fanghänel, Jochen; Kubein-Meesenburg, Dietmar; Nägerl, Hans
2012-03-20
A novel class of total knee replacement (AEQUOS G1) is introduced which features a unique design of the articular surfaces. Based on the anatomy of the human knee, and differing from all other prostheses, the lateral tibial "plateau" is convexly curved and the lateral femoral condyle is posteriorly shifted in relation to the medial femoral condyle. Under compressive forces, the configuration of the articular surfaces of human knees constrains the relative motion of femur and tibia in flexion/extension. This constrained motion is equivalent to that of a four-bar linkage, the four virtual pivots of which are given by the centres of curvature of the articulating surfaces. The dimensions of the four-bar linkage were optimized so that the constrained motion of the total knee replacement (TKR) follows the flexional motion of the human knee in close approximation, particularly during gait. In pilot studies, lateral X-ray images have demonstrated that the AEQUOS G1 can reproduce the natural rollback in vivo. Rollback relieves the load on the patello-femoral joint and minimizes retropatellar pressure. This mechanism should reduce the prevalence of anterior knee pain. The articulating surfaces roll predominantly in the stance phase; consequently, sliding friction is replaced by the lesser rolling friction under load. Producing rollback should minimize material wear due to friction and maximize the lifetime of the prosthesis. Definitive confirmation of these hypotheses must await the long-term results. Copyright © 2011 Elsevier GmbH. All rights reserved.
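The four-bar-linkage constraint underlying the design can be illustrated with a standard planar position solve via circle intersection (the link lengths below are illustrative, not the AEQUOS G1 dimensions):

```python
import math

def fourbar_rocker_pivot(theta, ground=0.4, crank=0.1,
                         coupler=0.35, rocker=0.15):
    """Planar four-bar position solve: crank pivot at the origin, ground
    pivot at (ground, 0). For crank angle theta, the moving rocker pivot B
    is the intersection of a circle of radius `coupler` about the crank
    tip A and a circle of radius `rocker` about the ground pivot D.
    Returns None if the loop cannot close at this angle."""
    ax, ay = crank * math.cos(theta), crank * math.sin(theta)  # crank tip A
    dx, dy = ground - ax, -ay                                  # vector A -> D
    d = math.hypot(dx, dy)
    if not abs(coupler - rocker) <= d <= coupler + rocker:
        return None
    a = (coupler ** 2 - rocker ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(coupler ** 2 - a ** 2, 0.0))
    mx, my = ax + a * dx / d, ay + a * dy / d
    return (mx - h * dy / d, my + h * dx / d)                  # one branch

b = fourbar_rocker_pivot(math.radians(60))
```

Sweeping theta traces the constrained coupler motion; in the knee analogy, the four pivots correspond to the centres of curvature of the articulating surfaces.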
NASA Astrophysics Data System (ADS)
Wagner, Martin G.; Laeseke, Paul F.; Schubert, Tilman; Slagowski, Jordan M.; Speidel, Michael A.; Mistretta, Charles A.
2017-03-01
Fluoroscopic image guidance for minimally invasive procedures in the thorax and abdomen suffers from respiratory and cardiac motion, which can cause severe subtraction artifacts and inaccurate image guidance. This work proposes novel techniques for respiratory motion tracking in native fluoroscopic images as well as a model-based estimation of vessel deformation. This would allow compensation for respiratory motion during the procedure and therefore simplify the workflow for minimally invasive procedures such as liver embolization. The method first establishes dynamic motion models for both the contrast-enhanced vasculature and curvilinear background features based on a native (non-contrast) and a contrast-enhanced image sequence acquired prior to device manipulation, under free-breathing conditions. The model of vascular motion is generated by applying the diffeomorphic demons algorithm to an automatic segmentation of the subtraction sequence. The model of curvilinear background features is based on feature tracking in the native sequence. The two models establish the relationship between the respiratory state, which is inferred from curvilinear background features, and the vascular morphology during that same respiratory state. During subsequent fluoroscopy, curvilinear feature detection is applied to determine the appropriate vessel mask to display. The result is a dynamic motion-compensated vessel mask superimposed on the fluoroscopic image. Quantitative evaluation of the proposed methods was performed using a digital 4D CT-phantom (XCAT), which provides realistic human anatomy including sophisticated respiratory and cardiac motion models. Four groups of datasets were generated, where different parameters (cycle length, maximum diaphragm motion and maximum chest expansion) were modified within each image sequence.
Each group contains 4 datasets consisting of the initial native and contrast-enhanced sequences as well as a sequence in which the respiratory motion is tracked. The respiratory motion tracking error was between 1.00% and 1.09%. The estimated dynamic vessel masks yielded a Sørensen-Dice coefficient between 0.94 and 0.96. Finally, the accuracy of the vessel contours was measured in terms of the 99th percentile of the error, which ranged between 0.64 and 0.96 mm. The presented results show that the approach is feasible for respiratory motion tracking and compensation and could therefore considerably improve the workflow of minimally invasive procedures in the thorax and abdomen.
Double-Windows-Based Motion Recognition in Multi-Floor Buildings Assisted by a Built-In Barometer.
Liu, Maolin; Li, Huaiyu; Wang, Yuan; Li, Fei; Chen, Xiuwan
2018-04-01
Accelerometers, gyroscopes and magnetometers in smartphones are often used to recognize human motions. Since it is difficult to distinguish between vertical motions and horizontal motions in the data provided by these built-in sensors, the vertical motion recognition accuracy is relatively low. The emergence of built-in barometers in smartphones improves the accuracy of motion recognition in the vertical direction. However, there is a lack of quantitative analysis and modelling of barometer signals, which is the basis of the barometer's application to motion recognition, and a problem of imbalanced data also exists. This work focuses on using the barometers inside smartphones for vertical motion recognition in multi-floor buildings through modelling and feature extraction of pressure signals. A novel double-windows pressure feature extraction method, which adopts two sliding time windows of different lengths, is proposed to balance recognition accuracy and response time. Then, a random forest classifier correlation rule is further designed to weaken the impact of imbalanced data on recognition accuracy. The results demonstrate that the recognition accuracy can reach 95.05% when the pressure features and the improved random forest classifier are adopted. Specifically, the recognition accuracy of the stair and elevator motions is significantly improved with enhanced response time. The proposed approach proves effective and accurate, providing a robust strategy for increasing the recognition accuracy of vertical motions.
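The double-window idea can be sketched as follows (an illustrative reconstruction; the window lengths and pressure threshold are assumptions, not the paper's tuned values):

```python
from collections import deque

def make_double_window_classifier(short_n=8, long_n=40, thresh_hpa=0.06):
    """Sketch of double-window vertical-motion detection: a short window
    reacts quickly while a long window confirms the trend; vertical
    motion is declared only when both windows agree on the sign of the
    pressure change. Near sea level, pressure falls ~0.12 hPa per metre
    of ascent, so the threshold corresponds to roughly half a metre."""
    short_w = deque(maxlen=short_n)
    long_w = deque(maxlen=long_n)

    def classify(pressure_hpa):
        short_w.append(pressure_hpa)
        long_w.append(pressure_hpa)
        ds = short_w[-1] - short_w[0]   # change over the short window
        dl = long_w[-1] - long_w[0]     # change over the long window
        if ds < -thresh_hpa and dl < -thresh_hpa:
            return "ascending"          # pressure falls while going up
        if ds > thresh_hpa and dl > thresh_hpa:
            return "descending"
        return "level"

    return classify

clf = make_double_window_classifier()
labels = [clf(1013.25 - 0.01 * i) for i in range(60)]  # steady climb
```

The short window bounds the response delay while the long window suppresses spurious transitions from sensor noise, which is the accuracy/latency balance the abstract describes.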
Hayakawa, Tomohiro; Kunihiro, Takeshi; Ando, Tomoko; Kobayashi, Seiji; Matsui, Eriko; Yada, Hiroaki; Kanda, Yasunari; Kurokawa, Junko; Furukawa, Tetsushi
2014-12-01
In this study, we used high-speed video microscopy with motion vector analysis to investigate the contractile characteristics of hiPS-CM monolayers, in addition to further characterizing the motion with the extracellular field potential (FP), traction force and the Ca(2+) transient. Results of our traction force microscopy demonstrated that the force development of hiPS-CMs correlated well with the cellular deformation detected by video microscopy with motion vector analysis. In the presence of verapamil and isoproterenol, the contractile motion of hiPS-CMs altered in accordance with the changes in the fluorescence peak of the Ca(2+) transient, i.e., upstroke, decay, amplitude and full-width at half-maximum. Simultaneously recorded hiPS-CM motion and FP showed that there was a linear correlation between changes in the motion and field potential duration in response to verapamil (30-150nM), isoproterenol (0.1-10μM) and E-4031 (10-50nM). In addition, the tetrodotoxin (3-30μM)-induced delay of the sodium current corresponded with a delayed contraction onset in hiPS-CMs. These results indicate that the electrophysiological and functional behaviors of hiPS-CMs are quantitatively reflected in the contractile motion detected by this image-based technique. In the presence of 100nM E-4031, the occurrence of early afterdepolarization-like negative deflections in the FP was also detected in the hiPS-CM motion as a characteristic two-step relaxation pattern. These findings offer insights into the interpretation of the motion kinetics of hiPS-CMs, and are relevant for understanding the electrical and mechanical relationship in hiPS-CMs. Copyright © 2014. Published by Elsevier Ltd.
Understanding Human Motion Skill with Peak Timing Synergy
NASA Astrophysics Data System (ADS)
Ueno, Ken; Furukawa, Koichi
The careful observation of motion phenomena is important in understanding skillful human motion. However, this is a difficult task due to the complexities in timing involved in the skillful control of anatomical structures. To investigate the dexterity of human motion, we decided to concentrate on timing with respect to motion, and we have proposed a method to extract the peak timing synergy from multivariate motion data. The peak timing synergy is defined as a frequent ordered graph with time stamps, whose nodes consist of turning points in motion waveforms. A proposed algorithm, PRESTO, automatically extracts the peak timing synergy. PRESTO comprises the following three processes: (1) detecting peak sequences with polygonal approximation; (2) generating peak-event sequences; and (3) finding frequent peak-event sequences using a sequential pattern mining method, generalized sequential patterns (GSP). Here, we measured right arm motion during the task of cello bowing and prepared a data set of right shoulder and arm motion. We successfully extracted the peak timing synergy on the cello bowing data set using the PRESTO algorithm, which captured both skills common among cellists and personal skill differences. To evaluate the sequential pattern mining algorithm GSP in PRESTO, we compared the peak timing synergy obtained using GSP with that obtained using filtering by reciprocal voting (FRV), a non-time-series method. We found that the support was 95-100% with GSP versus 83-96% with FRV, and that GSP reproduced human motion better than FRV. Therefore, the sequential pattern mining approach is more effective at extracting the peak timing synergy than a non-time-series approach.
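The first two PRESTO stages, turning-point detection and frequent ordered patterns, can be sketched as follows (a toy stand-in: the pair-counting step simplifies GSP to ordered channel pairs, and the data are invented):

```python
from itertools import combinations

def peak_events(channels):
    """Turning-point detection per channel: local maxima of each motion
    waveform become timestamped events (index, channel), merged in time
    order -- the raw material of the peak timing synergy."""
    events = []
    for name, xs in channels.items():
        for i in range(1, len(xs) - 1):
            if xs[i - 1] < xs[i] >= xs[i + 1]:
                events.append((i, name))
    return [name for _, name in sorted(events)]

def frequent_order_pairs(sequences, min_support=0.8):
    """Toy stand-in for GSP: ordered channel pairs (a peaks before b)
    occurring in at least min_support of the trial sequences."""
    counts = {}
    for seq in sequences:
        for pair in {p for p in combinations(seq, 2)}:
            counts[pair] = counts.get(pair, 0) + 1
    n = len(sequences)
    return {p for p, c in counts.items() if c / n >= min_support}

trial = {"shoulder": [0, 1, 2, 1, 0, 0, 0], "wrist": [0, 0, 0, 1, 2, 1, 0]}
seqs = [peak_events(trial)] * 3   # three identical toy trials
patterns = frequent_order_pairs(seqs)
```

The full GSP algorithm generalizes this from pairs to arbitrary-length frequent subsequences while pruning candidates by support.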
Chen, Weihai; Cui, Xiang; Zhang, Jianbin; Wang, Jianhua
2015-06-01
Rehabilitation technologies have great potentials in assisted motion training for stroke patients. Considering that wrist motion plays an important role in arm dexterous manipulation of activities of daily living, this paper focuses on developing a cable-driven wrist robotic rehabilitator (CDWRR) for motion training or assistance to subjects with motor disabilities. The CDWRR utilizes the wrist skeletal joints and arm segments as the supporting structure and takes advantage of cable-driven parallel design to build the system, which brings the properties of flexibility, low-cost, and low-weight. The controller of the CDWRR is designed typically based on a virtual torque-field, which is to plan "assist-as-needed" torques for the spherical motion of wrist responding to the orientation deviation in wrist motion training. The torque-field controller can be customized to different levels of rehabilitation training requirements by tuning the field parameters. Additionally, a rapidly convergent parameter self-identification algorithm is developed to obtain the uncertain parameters automatically for the floating wearable structure of the CDWRR. Finally, experiments on a healthy subject are carried out to demonstrate the performance of the controller and the feasibility of the CDWRR on wrist motion training or assistance.
Victim Simulator for Victim Detection Radar
NASA Technical Reports Server (NTRS)
Lux, James P.; Haque, Salman
2013-01-01
Testing of victim detection radars has traditionally used human subjects who volunteer to be buried in, or climb into a space within, a rubble pile. This is not only uncomfortable, but can be hazardous or impractical when typical disaster scenarios are considered, including fire, mud, or liquid waste. Human subjects are also inconsistent from day to day (i.e., they do not have the same radar properties), so quantitative performance testing is difficult. Finally, testing a multiple-victim scenario is difficult and expensive because of the need for multiple human subjects who must all be coordinated. The solution is an anthropomorphic dummy with dielectric properties that replicate those of a human, and that has motions comparable to human motions for breathing and heartbeat. Two air-filled bladders filled and drained by solenoid valves provide the underlying motion for vinyl bags filled with a dielectric gel with realistic properties. The entire assembly is contained within a neoprene wetsuit serving as a "skin." The solenoids are controlled by a microcontroller, which can generate a variety of heart and breathing patterns, as well as being reprogrammable for more complex activities. Previous electromagnetic simulators or RF phantoms have been oriented towards assessing RF safety, e.g., the measurement of specific absorption rate (SAR) from a cell phone signal, or to provide a calibration target for diagnostic techniques (e.g., MRI). They are optimized for precise dielectric performance, and are typically rigid and immovable. This device is movable and "positionable," and has motion that replicates the small-scale motion of humans. It is soft (much as human tissue is) and has programmable motions.
Foot-mounted inertial measurement unit for activity classification.
Ghobadi, Mostafa; Esfahani, Ehsan T
2014-01-01
This paper proposes a classification technique for daily activity recognition for human monitoring during physical therapy at home. The proposed method estimates foot motion using a single inertial measurement unit, then segments the motion into steps and classifies them by template matching as walking, stairs-up or stairs-down steps. The results show a high accuracy of activity recognition. Unlike previous works, which are limited to activity recognition, the proposed approach is more qualitative, providing a similarity index of any activity to its desired template, which can be used to assess subjects' improvement.
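The segment-then-match step can be sketched as follows (illustrative templates and distance measure, not the paper's trained templates; real steps would be multi-axis IMU segments):

```python
def resample(xs, n=20):
    """Linear resampling so steps of different durations are comparable."""
    if len(xs) == 1:
        return [xs[0]] * n
    out = []
    for k in range(n):
        pos = k * (len(xs) - 1) / (n - 1)
        i = int(pos)
        frac = pos - i
        j = min(i + 1, len(xs) - 1)
        out.append(xs[i] * (1 - frac) + xs[j] * frac)
    return out

def classify_step(step, templates):
    """Template-matching sketch: label a segmented step by the template
    with the smallest mean squared distance; that distance doubles as a
    similarity index for tracking a subject's progress over time."""
    s = resample(step)
    def dist(t):
        r = resample(t)
        return sum((a - b) ** 2 for a, b in zip(s, r)) / len(s)
    best = min(templates, key=lambda name: dist(templates[name]))
    return best, dist(templates[best])

templates = {
    "walking":   [0.0, 0.5, 1.0, 0.5, 0.0],
    "stairs_up": [0.0, 1.0, 2.0, 2.5, 3.0],
}
label, similarity = classify_step([0.0, 0.4, 1.1, 0.6, 0.1], templates)
```

Reporting `similarity` alongside the label is what makes the approach "more qualitative" than a bare classifier: a shrinking distance to the desired template indicates improvement.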
NASA Technical Reports Server (NTRS)
Young, L. R.; Oman, C. M.; Curry, R. E.
1977-01-01
Vestibular perception and integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving fields and those produced by actual body tilt is discussed. Linear vection studies were included, and the application of the vestibular model for perception of orientation based on motion cues is presented. Other areas of examination include visual cues in the approach to landing, and a comparison of linear and nonlinear washout filters using a model of the human vestibular system is given.
Ida, Hirofumi; Fukuhara, Kazunobu; Kusubori, Seiji; Ishii, Motonobu
2011-09-01
Computer graphics of digital human models can be used to display human motions as visual stimuli. This study presents our technique for manipulating human motion with a forward kinematics calculation without violating anatomical constraints. A motion modulation of the upper extremity was conducted by proportionally modulating the anatomical joint angular velocities calculated by motion analysis. The effect of this manipulation was examined in a tennis situation, that is, the receiver's performance in predicting ball direction when viewing a digital model of the server's motion derived by modulating the angular velocity of the forearm or of the elbow during the forward swing. The results showed that the faster the server's forearm pronated, the more the receiver's anticipation of the ball direction tended to the left side of the serve box. In contrast, the faster the server's elbow extended, the more the receiver's anticipation of the ball direction tended to the right. This suggests that tennis players are sensitive to the motion modulation of their opponent's racket arm.
“What Women Like”: Influence of Motion and Form on Esthetic Body Perception
Cazzato, Valentina; Siega, Serena; Urgesi, Cosimo
2012-01-01
Several studies have shown the distinct contribution of motion and form to the esthetic evaluation of female bodies. Here, we investigated how variations of implied motion and body size interact in the esthetic evaluation of female and male bodies in a sample of young healthy women. Participants provided attractiveness, beauty, and liking ratings for the shape and posture of virtual renderings of human bodies with variable body size and implied motion. The esthetic judgments for both shape and posture of human models were influenced by body size and implied motion, with a preference for thinner and more dynamic stimuli. Implied motion, however, attenuated the impact of extreme body size on the esthetic evaluation of body postures, while body size variations did not affect the preference for more dynamic stimuli. Results show that body form and action cues interact in esthetic perception, but the final esthetic appreciation of human bodies is predicted by a mixture of perceptual and affective evaluative components. PMID:22866044
Design and analysis of an underactuated anthropomorphic finger for upper limb prosthetics.
Omarkulov, Nurdos; Telegenov, Kuat; Zeinullin, Maralbek; Begalinova, Ainur; Shintemirov, Almas
2015-01-01
This paper presents the design of a linkage-based finger mechanism ensuring an extended range of anthropomorphic gripping motions. The finger design uses a path-point generation method based on the geometrical dimensions and motion of a typical human index finger. Following the design description and its kinematic analysis, an experimental evaluation of the finger's gripping performance is presented using a 3D-printed prototype. Underactuation is achieved with a mechanical linkage system consisting of two crossed four-bar linkage mechanisms. It is shown that the proposed finger design can be used to design a five-fingered anthropomorphic hand and has potential for upper limb prosthesis development.
NASA Technical Reports Server (NTRS)
Lee, Mun Wai
2015-01-01
Crew exercise is important during long-duration space flight not only for maintaining health and fitness but also for preventing adverse health problems, such as losses in muscle strength and bone density. Monitoring crew exercise via motion capture and kinematic analysis aids understanding of the effects of microgravity on exercise and helps ensure that exercise prescriptions are effective. Intelligent Automation, Inc., has developed ESPRIT to monitor exercise activities, detect body markers, extract image features, and recover three-dimensional (3D) kinematic body poses. The system relies on prior knowledge and modeling of the human body and on advanced statistical inference techniques to achieve robust and accurate motion capture. In Phase I, the company demonstrated motion capture of several exercises, including walking, curling, and dead lifting. Phase II efforts focused on enhancing algorithms and delivering an ESPRIT prototype for testing and demonstration.
Comparison of Flight Simulators Based on Human Motion Perception Metrics
NASA Technical Reports Server (NTRS)
Valente Pais, Ana R.; Correia Gracio, Bruno J.; Kelly, Lon C.; Houck, Jacob A.
2015-01-01
In flight simulation, motion filters are used to transform aircraft motion into simulator motion. When looking for the best match between visual and inertial amplitude in a simulator, researchers have found that there is a range of inertial amplitudes, rather than a single inertial value, that is perceived by subjects as optimal. This zone, hereafter referred to as the optimal zone, appears to correlate with the perceptual coherence zones measured in flight simulators. However, no studies were found in which these two zones were compared. This study investigates the relation between the optimal and the coherence zone measurements within and between different simulators. Results show that for the sway axis, the optimal zone lies within the lower part of the coherence zone. In addition, it was found that, whereas the width of the coherence zone depends on the visual amplitude and frequency, the width of the optimal zone remains constant.
Hierarchical information fusion for global displacement estimation in microsensor motion capture.
Meng, Xiaoli; Zhang, Zhi-Qiang; Wu, Jian-Kang; Wong, Wai-Choong
2013-07-01
This paper presents a novel hierarchical information fusion algorithm to obtain human global displacement for different gait patterns, including walking, running, and hopping, based on seven body-worn inertial and magnetic measurement units. In the first-level sensor fusion, the orientation of each segment is estimated by a complementary Kalman filter (CKF), which compensates for the orientation error of the inertial navigation system solution through its error state vector. For each foot segment, the displacement is also estimated by the CKF, and a zero velocity update is included to reduce drift in the foot displacement estimates. Based on the segment orientations and the left/right foot locations, two global displacement estimates can be acquired from the left and right lower limbs separately using a linked biomechanical model. In the second-level geometric fusion, another Kalman filter is deployed to compensate for the difference between the two estimates from the sensor fusion and obtain a more accurate overall global displacement estimate. The updated global displacement is transmitted back to the left/right foot through the lower-body biomechanical model to restrict drift in both feet's displacements. The experimental results show that the proposed method can accurately estimate human locomotion for the three gait patterns with respect to an optical motion tracker.
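The zero velocity update mentioned in the first fusion level can be illustrated in a few lines: whenever a stance phase is detected, the integrated foot velocity is reset so that accelerometer bias cannot accumulate between footfalls. This is an illustrative open-loop sketch assuming a precomputed stance mask; the paper itself folds the update into the Kalman filter's error state:

```python
import numpy as np

def zupt_velocity(acc, dt, still_mask):
    """Integrate foot acceleration to velocity, resetting to zero whenever a
    stance (zero-velocity) phase is detected. Sketch of the zero-velocity
    update idea only, not the paper's CKF formulation."""
    v = np.zeros_like(acc, dtype=float)
    for i in range(1, len(acc)):
        if still_mask[i]:
            v[i] = 0.0                    # stance: the foot is on the ground
        else:
            v[i] = v[i - 1] + acc[i] * dt  # swing: dead-reckon the velocity
    return v
```

Without the resets, even a small constant bias makes the integrated velocity (and thus displacement) grow without bound; with them, the error is confined to a single stride.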
Sabatini, Angelo Maria; Genovese, Vincenzo
2014-07-24
A sensor fusion method was developed for vertical channel stabilization by fusing inertial measurements from an Inertial Measurement Unit (IMU) and pressure altitude measurements from a barometric altimeter integrated in the same device (baro-IMU). An Extended Kalman Filter (EKF) estimated the quaternion from the sensor frame to the navigation frame; the sensed specific force was rotated into the navigation frame and compensated for gravity, yielding the vertical linear acceleration; finally, a complementary filter driven by the vertical linear acceleration and the measured pressure altitude produced estimates of height and vertical velocity. A method was also developed to condition the measured pressure altitude using a whitening filter, which helped to remove the short-term correlation due to environment-dependent pressure changes from raw pressure altitude. The sensor fusion method was implemented to work on-line using data from a wireless baro-IMU and tested for the capability of tracking low-frequency small-amplitude vertical human-like motions that can be critical for stand-alone inertial sensor measurements. Validation tests were performed in different experimental conditions, namely no motion, free-fall motion, forced circular motion and squatting. Accurate on-line tracking of height and vertical velocity was achieved, giving confidence to the use of the sensor fusion method for tracking typical vertical human motions: velocity Root Mean Square Error (RMSE) was in the range 0.04-0.24 m/s; height RMSE was in the range 5-68 cm, with statistically significant performance gains when the whitening filter was used by the sensor fusion method to track relatively high-frequency vertical motions.
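The complementary-filter stage described above can be sketched as a second-order loop that integrates the gravity-compensated vertical acceleration for the short term and steers the estimate toward the barometric altitude for the long term. This is a single-gain sketch under assumed units and gain; the full pipeline also includes the EKF attitude estimate and the whitening filter:

```python
import numpy as np

def complementary_height_filter(acc_z, baro_h, dt, k=0.5):
    """Fuse gravity-compensated vertical acceleration (m/s^2) with barometric
    altitude (m). The inertial channel provides bandwidth; the baro channel
    bounds the drift. Gain k (hypothetical value) sets the crossover."""
    h, v = float(baro_h[0]), 0.0
    heights, velocities = [], []
    for a, h_meas in zip(acc_z, baro_h):
        # Predict by integrating the inertial channel.
        v += a * dt
        h += v * dt
        # Correct toward the barometric altitude (complementary action);
        # the critically damped gains k and k^2/4 are an assumed choice.
        err = h_meas - h
        h += k * dt * err
        v += (k ** 2 / 4.0) * dt * err
        heights.append(h)
        velocities.append(v)
    return np.array(heights), np.array(velocities)
```

The design choice is the usual complementary trade: pure integration of `acc_z` drifts quadratically under bias, while the raw baro is noisy and slow; the loop keeps the low-frequency content of the baro and the high-frequency content of the accelerometer.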
The responsiveness of biological motion processing areas to selective attention towards goals.
Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert
2012-10-15
A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas, particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. Copyright © 2012 Elsevier Inc. All rights reserved.
Oguz, Ozgur S; Zhou, Zhehua; Glasauer, Stefan; Wollherr, Dirk
2018-04-03
Human motor control is highly efficient in generating accurate and appropriate motor behavior for a multitude of tasks. This paper examines how kinematic and dynamic properties of the musculoskeletal system are controlled to achieve such efficiency. Even though recent studies have shown that human motor control relies on multiple internal models, how the central nervous system (CNS) controls their combination is not fully addressed. In this study, we utilize an Inverse Optimal Control (IOC) framework to find the combination of those internal models and how this combination changes for different reaching tasks. We conducted an experiment in which participants executed a comprehensive set of free-space reaching motions. The results show that there is a trade-off between kinematics- and dynamics-based controllers depending on the reaching task. In addition, this trade-off depends on the initial and final arm configurations, which in turn affect the musculoskeletal load to be controlled. Given this insight, we further provide a discomfort metric to demonstrate its influence on the contribution of different inverse internal models. This formulation, together with our analysis, not only supports the multiple internal models (MIMs) hypothesis but also suggests a hierarchical framework for the control of human reaching motions by the CNS.
NASA Technical Reports Server (NTRS)
Zaychik, Kirill B.; Cardullo, Frank M.
2012-01-01
Results have been obtained using conventional techniques to model the generic human operator's control behavior; however, little research has been done to identify an individual based on control behavior. The hypothesis investigated is that different operators exhibit different control behavior when performing a given control task. Two enhancements to existing human operator models, which allow personalization of the modeled control behavior, are presented. One enhancement accounts for the testing control signals, which are introduced by an operator for more accurate control of the system and/or to adjust the control strategy. It uses an Artificial Neural Network, which can be fine-tuned to model the testing control. Another enhancement takes the form of an equiripple filter, which conditions the control system power spectrum. A novel automated parameter identification technique was developed to facilitate the identification of the parameters of the selected models. It utilizes a Genetic Algorithm-based optimization engine called the Bit-Climbing Algorithm. The enhancements were validated using experimental data obtained from three different sources: Manual Control Laboratory software experiments, an Unmanned Aerial Vehicle simulation, and NASA Langley Research Center Visual Motion Simulator studies. This manuscript also addresses applying human operator models to evaluate the effectiveness of motion feedback when simulating actual pilot control behavior in a flight simulator.
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Motions. 79.28 Section 79.28 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION PROGRAM FRAUD CIVIL REMEDIES § 79.28 Motions. (a) Any application to the ALJ for an order or ruling shall be by motion. Motions shall state the...
Synthesis of Speaker Facial Movement to Match Selected Speech Sequences
NASA Technical Reports Server (NTRS)
Scott, K. C.; Kagels, D. S.; Watson, S. H.; Rom, H.; Wright, J. R.; Lee, M.; Hussey, K. J.
1994-01-01
A system is described which allows for the synthesis of a video sequence of a realistic-appearing talking human head. A phonic based approach is used to describe facial motion; image processing rather than physical modeling techniques are used to create video frames.
Alternative Control Technologies: Human Factors Issues
1998-10-01
This removes the workload associated with having to remember which words... and, over a long period, apply painful pressure to the face. ...shown that phonetically-relevant orofacial motions can be estimated from the underlying EMG activity. (4.4. EMG-BASED CONTROL APPLICATION EXAMPLES)
Choi, Eunsuk; Sul, Onejae; Lee, Seung-Beck
2017-01-01
In this article, we report on a flexible sensor based on a sandpaper-molded elastomer that simultaneously detects planar displacement, rotation angle, and vertical contact pressure. When displacement, rotation, and contact pressure are applied, the contact area between the translating top elastomer electrode and the three stationary bottom electrodes changes characteristically depending on the movement, making it possible to distinguish between them. The sandpaper-molded undulating surface of the elastomer reduces friction at the contact, allowing the sensor not to affect the movement during measurement. The sensor showed a 0.25 mm−1 displacement sensitivity with ±33 μm accuracy, a 0.027 degree−1 rotation sensitivity with ~0.95 degree accuracy, and a 4.96 kPa−1 pressure sensitivity. For possible application to joint movement detection, we demonstrated that our sensor effectively detected the up-and-down motion of a human forefinger and the bending and straightening motion of a human arm. PMID:28878166
Sociability modifies dogs' sensitivity to biological motion of different social relevance.
Ishikawa, Yuko; Mills, Daniel; Willmott, Alexander; Mullineaux, David; Guo, Kun
2018-03-01
Preferential attention to living creatures is believed to be an intrinsic capacity of the visual system of several species, with perception of biological motion often studied and, in humans, it correlates with social cognitive performance. Although domestic dogs are exceptionally attentive to human social cues, it is unknown whether their sociability is associated with sensitivity to conspecific and heterospecific biological motion cues of different social relevance. We recorded video clips of point-light displays depicting a human or dog walking in either frontal or lateral view. In a preferential looking paradigm, dogs spontaneously viewed 16 paired point-light displays showing combinations of normal/inverted (control condition), human/dog and frontal/lateral views. Overall, dogs looked significantly longer at frontal human point-light display versus the inverted control, probably due to its clearer social/biological relevance. Dogs' sociability, assessed through owner-completed questionnaires, further revealed that low-sociability dogs preferred the lateral point-light display view, whereas high-sociability dogs preferred the frontal view. Clearly, dogs can recognize biological motion, but their preference is influenced by their sociability and the stimulus salience, implying biological motion perception may reflect aspects of dogs' social cognition.
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
Feature-Based Attention in Early Vision for the Modulation of Figure–Ground Segregation
Wagatsuma, Nobuhiko; Oki, Megumi; Sakai, Ko
2013-01-01
We investigated psychophysically whether feature-based attention modulates the perception of figure–ground (F–G) segregation and, based on the results, we investigated computationally the neural mechanisms underlying attention modulation. In the psychophysical experiments, the attention of participants was drawn to a specific motion direction and they were then asked to judge the side of figure in an ambiguous figure with surfaces consisting of distinct motion directions. The results of these experiments showed that the surface consisting of the attended direction of motion was more frequently observed as figure, with a degree comparable to that of spatial attention (Wagatsuma et al., 2008). These experiments also showed that perception was dependent on the distribution of feature contrast, specifically the motion direction differences. These results led us to hypothesize that feature-based attention functions in a framework similar to that of spatial attention. We proposed a V1–V2 model in which feature-based attention modulates the contrast of low-level feature in V1, and this modulation of contrast changes directly the surround modulation of border-ownership-selective cells in V2; thus, perception of F–G is biased. The model exhibited good agreement with human perception in the magnitude of attention modulation and its invariance among stimuli. These results indicate that early-level features that are modified by feature-based attention alter subsequent processing along afferent pathway, and that such modification could even change the perception of object. PMID:23515841
Barmpoutis, Angelos; Alzate, Jose; Beekhuizen, Samantha; Delgado, Horacio; Donaldson, Preston; Hall, Andrew; Lago, Charlie; Vidal, Kevin; Fox, Emily J
2016-01-01
In this paper a prototype system is presented for home-based physical tele-therapy using a wearable device for haptic feedback. The haptic feedback is generated as a sequence of vibratory cues from 8 vibrator motors equally spaced along an elastic wearable band. The motors guide the patients' movement as they perform a prescribed exercise routine in a way that replaces the physical therapists' haptic guidance in an unsupervised or remotely supervised home-based therapy session. A pilot study of 25 human subjects was performed that focused on: a) testing the capability of the system to guide the users in arbitrary motion paths in the space and b) comparing the motion of the users during typical physical therapy exercises with and without haptic-based guidance. The results demonstrate the efficacy of the proposed system.
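One way to picture the band's guidance logic is a mapping from a desired correction direction to the nearest of the eight equally spaced motors. This mapping is hypothetical, for illustration only; the abstract does not specify the actual cueing scheme:

```python
def motor_for_direction(angle_deg, n_motors=8):
    """Map a desired correction direction (degrees, 0 = an assumed reference
    point on the band) to the index of the nearest of n equally spaced
    vibration motors. Hypothetical indexing, not the authors' scheme."""
    sector = 360.0 / n_motors
    # Wrap the angle into [0, 360) and snap to the closest motor position.
    return round((angle_deg % 360) / sector) % n_motors
```

With eight motors the angular resolution of such a cue is 45 degrees, which suggests why vibratory guidance is best suited to coarse directional corrections rather than fine positioning.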
Training industrial robots with gesture recognition techniques
NASA Astrophysics Data System (ADS)
Piane, Jennifer; Raicu, Daniela; Furst, Jacob
2013-01-01
In this paper we propose to use gesture recognition approaches to track a human hand in 3D space and, without the use of special clothing or markers, accurately generate code for training an industrial robot to perform the same motion. The proposed hand tracking component includes three methods to detect the human hand: a color-thresholding model, naïve Bayes analysis, and a Support Vector Machine (SVM). Next, it performs stereo matching on the region where the hand was detected to find relative 3D coordinates. The list of coordinates returned is expectedly noisy because the human hand can alter its apparent shape while moving, human motion is inconsistent, and detection can fail in a cluttered environment. Therefore, the system analyzes the list of coordinates to determine a path for the robot to move, smoothing the data to reduce noise and looking for significant points used to determine the path the robot will ultimately take. The proposed system was applied to pairs of videos recording the motion of a human hand in a "real" environment to move the end-effector of a SCARA robot along the same path as the hand of the person in the video. The correctness of the robot motion was determined by observers indicating that the motion of the robot appeared to match the motion in the video.
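The smoothing step described above can be sketched as a simple moving average over the noisy list of 3D hand coordinates. This is an assumed minimal form of the noise reduction; the paper does not specify the exact filter used:

```python
import numpy as np

def smooth_path(points, window=5):
    """Moving-average smoothing of a noisy (N, 3) hand trajectory.
    Returns a shorter (N - window + 1, 3) array of smoothed points.
    Hypothetical cleanup step in the spirit of the paper's noise reduction."""
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(points[:, d], kernel, mode="valid")
                            for d in range(points.shape[1])])
```

Averaging over a short window suppresses frame-to-frame detection jitter at the cost of slightly rounding sharp corners, which is usually acceptable when the path is later reduced to a few significant waypoints for the robot.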
NASA Technical Reports Server (NTRS)
Jackson, Mariea Dunn; Dischinger, Charles; Stambolian, Damon; Henderson, Gena
2012-01-01
Spacecraft and launch vehicle ground processing activities require a variety of unique human activities. These activities are being documented in a primitive motion capture library. The library will be used by human factors engineering in the future to infuse real to life human activities into the CAD models to verify ground systems human factors requirements. As the primitive models are being developed for the library, the project has selected several current human factors issues to be addressed for the SLS and Orion launch systems. This paper explains how the motion capture of unique ground systems activities is being used to verify the human factors analysis requirements for ground systems used to process the SLS and Orion vehicles, and how the primitive models will be applied to future spacecraft and launch vehicle processing.
Postures and Motions Library Development for Verification of Ground Crew Human Factors Requirements
NASA Technical Reports Server (NTRS)
Stambolian, Damon; Henderson, Gena; Jackson, Mariea Dunn; Dischinger, Charles
2013-01-01
Spacecraft and launch vehicle ground processing activities require a variety of unique human activities. These activities are being documented in a primitive motion capture library. The library will be used by human factors engineering analysts to infuse real to life human activities into the CAD models to verify ground systems human factors requirements. As the primitive models are being developed for the library, the project has selected several current human factors issues to be addressed for the Space Launch System (SLS) and Orion launch systems. This paper explains how the motion capture of unique ground systems activities is being used to verify the human factors engineering requirements for ground systems used to process the SLS and Orion vehicles, and how the primitive models will be applied to future spacecraft and launch vehicle processing.
Sensitive and Flexible Polymeric Strain Sensor for Accurate Human Motion Monitoring
Khan, Hassan; Kottapalli, Ajay; Asadnia, Mohsen
2018-01-01
Flexible electronic devices offer the capability to integrate and adapt with the human body. These devices are mountable on surfaces of various shapes, which allows us to attach them to clothes or directly onto the body. This paper suggests a facile fabrication strategy via electrospinning to develop a stretchable and sensitive poly(vinylidene fluoride) (PVDF) nanofibrous strain sensor for human motion monitoring. A complete characterization of the single PVDF nanofiber has been performed. The charge generated by the electrospun PVDF strain sensor was employed as a parameter to control the finger motion of the robotic arm. As a proof of concept, we developed a smart glove with five sensors integrated into it to detect finger motions and transfer them to a robotic hand. Our results show that the proposed strain sensors are able to detect tiny finger motions and successfully drive the robotic hand. PMID:29389851
Control of joint motion simulators for biomechanical research
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.
1992-01-01
The authors present a hierarchical adaptive algorithm for controlling upper extremity human joint motion simulators. A joint motion simulator is a computer-controlled, electromechanical system which permits the application of forces to the tendons of a human cadaver specimen in such a way that the cadaver joint under study achieves a desired motion in a physiologic manner. The proposed control scheme does not require knowledge of the cadaver specimen dynamic model, and solves on-line the indeterminate problem which arises because human joints typically possess more actuators than degrees of freedom. Computer simulation results are given for an elbow/forearm system and wrist/hand system under hierarchical control. The results demonstrate that any desired normal joint motion can be accurately tracked with the proposed algorithm. These simulation results indicate that the controller resolved the indeterminate problem redundancy in a physiologic manner, and show that the control scheme was robust to parameter uncertainty and to sensor noise.
Cai, Feng; Yi, Changrui; Liu, Shichang; Wang, Yan; Liu, Lacheng; Liu, Xiaoqing; Xu, Xuming; Wang, Li
2016-03-15
Flexible sensors have attracted increasing attention as a fundamental component of anthropomorphic robot research, medical diagnosis, and physical health monitoring. Here, we constructed an ultrasensitive, passive flexible sensor with the advantages of low cost, lightness, wearability, electrical safety, and reliability. The fundamental mechanism of the sensor is the triboelectric effect, which induces electrostatic charges on the surfaces between two different materials. As in a plate capacitor, a current is generated when a small mechanical disturbance changes the distance between or size of the parallel plates, producing an output current/voltage. Typically, the passive sensor unambiguously monitors muscle motions, including hand motion (stretch-clench-stretch), mouth motion (open-bite-open), blinking, and respiration. Moreover, this sensor records the details of the consecutive phases of a cardiac cycle in the apex cardiogram, and identifies the percussion, tidal, and diastolic wave peaks of the radial pulse wave. By recording subtle human physiological signals, including the radial pulsilogram and apex cardiogram, with excellent signal-to-noise ratio, stability, and reproducibility, the sensor shows great potential for applications in medical diagnosis and daily health monitoring. Copyright © 2015 Elsevier B.V. All rights reserved.
Effects of Implied Motion and Facing Direction on Positional Preferences in Single-Object Pictures.
Palmer, Stephen E; Langlois, Thomas A
2017-07-01
Palmer, Gardner, and Wickens studied aesthetic preferences for pictures of single objects and found a strong inward bias: Right-facing objects were preferred left-of-center and left-facing objects right-of-center. They found no effect of object motion (people and cars showed the same inward bias as chairs and teapots), but the objects were not depicted as moving. Here we measured analogous inward biases with objects depicted as moving with an implied direction and speed by having participants drag-and-drop target objects into the most aesthetically pleasing position. In Experiment 1, human figures were shown diving or falling while moving forward or backward. Aesthetic biases were evident for both inward-facing and inward-moving figures, but the motion-based bias dominated so strongly that backward divers or fallers were preferred moving inward but facing outward. Experiment 2 investigated implied speed effects using images of humans, horses, and cars moving at different speeds (e.g., standing, walking, trotting, and galloping horses). Inward motion or facing biases were again present, and differences in their magnitude due to speed were evident. Unexpectedly, faster moving objects were generally preferred closer to frame center than slower moving objects. These results are discussed in terms of the combined effects of prospective, future-oriented biases, and retrospective, past-oriented biases.
Motion coherence affects human perception and pursuit similarly.
Beutter, B R; Stone, L S
2000-01-01
Pursuit and perception both require accurate information about the motion of objects. Recovering the motion of objects by integrating the motion of their components is a difficult visual task. Successful integration produces coherent global object motion, while a failure to integrate leaves the incoherent local motions of the components unlinked. We compared the ability of perception and pursuit to perform motion integration by measuring direction judgments and the concomitant eye-movement responses to line-figure parallelograms moving behind stationary rectangular apertures. The apertures were constructed such that only the line segments corresponding to the parallelogram's sides were visible; thus, recovering global motion required the integration of the local segment motion. We investigated several potential motion-integration rules by using stimuli with different object, vector-average, and line-segment terminator-motion directions. We used an oculometric decision rule to directly compare direction discrimination for pursuit and perception. For visible apertures, the percept was a coherent object, and both the pursuit and perceptual performance were close to the object-motion prediction. For invisible apertures, the percept was incoherently moving segments, and both the pursuit and perceptual performance were close to the terminator-motion prediction. Furthermore, both psychometric and oculometric direction thresholds were much higher for invisible apertures than for visible apertures. We constructed a model in which both perception and pursuit are driven by a shared motion-processing stage, with perception having an additional input from an independent static-processing stage. Model simulations were consistent with our perceptual and oculomotor data. Based on these results, we propose the use of pursuit as an objective and continuous measure of perceptual coherence. 
Our results support the view that pursuit and perception share a common motion-integration stage, perhaps within areas MT or MST.
NASA Astrophysics Data System (ADS)
Guo, Xiaohui; Huang, Ying; Zhao, Yunong; Mao, Leidong; Gao, Le; Pan, Weidong; Zhang, Yugang; Liu, Ping
2017-09-01
Flexible, stretchable, and wearable strain sensors have attracted significant attention for their potential applications in human movement detection and recognition. Here, we report a highly stretchable and flexible strain sensor based on a single-walled carbon nanotube (SWCNTs)/carbon black (CB) synergistic conductive network. The fabrication, synergistic conductive mechanism, and characterization of the sandwich-structured strain sensor were investigated. The experimental results show that the device exhibits high stretchability (120%), excellent flexibility, fast response (~60 ms), temperature independence, and superior stability and reproducibility over ~1100 stretching/releasing cycles. Furthermore, human activities such as the bending of a finger or elbow and gestures were monitored and recognized based on the strain sensor, indicating that the stretchable strain sensor based on the SWCNTs/CB synergistic conductive network could have promising applications in flexible and wearable devices for human motion monitoring.
Emergent Structural Mechanisms for High-Density Collective Motion Inspired by Human Crowds
NASA Astrophysics Data System (ADS)
Bottinelli, Arianna; Sumpter, David T. J.; Silverberg, Jesse L.
2016-11-01
Collective motion of large human crowds often depends on their density. In extreme cases like heavy metal concerts and Black Friday sales events, motion is dominated by physical interactions instead of conventional social norms. Here, we study an active matter model inspired by situations when large groups of people gather at a point of common interest. Our analysis takes an approach developed for jammed granular media and identifies Goldstone modes, soft spots, and stochastic resonance as structurally driven mechanisms for potentially dangerous emergent collective motion.
Observation and imitation of actions performed by humans, androids, and robots: an EMG study
Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.
2015-01-01
Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades, research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion, along with electromyography (EMG) to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm during both imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of both mechanical agents. There was also a relationship between the dynamics of the muscle activity and the motion dynamics of the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. 
Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations. PMID:26150782
An Evaluation of Automotive Interior Packages Based on Human Ocular and Joint Motor Properties
NASA Astrophysics Data System (ADS)
Tanaka, Yoshiyuki; Rakumatsu, Takeshi; Horiue, Masayoshi; Miyazaki, Tooru; Nishikawa, Kazuo; Nouzawa, Takahide; Tsuji, Toshio
This paper proposes a new method for evaluating an automotive interior package based on human oculomotor and joint-motor properties. Assuming a long-term driving situation on an expressway, three evaluation indices were designed: (i) the ratio of head motion when gazing at the driving items; (ii) the load torque for maintaining the standard driving posture; and (iii) the human force manipulability at the end-points of the extremities. Experiments were carried out for two different interior packages with four subjects who have specialist knowledge of automobile development. Evaluation results demonstrate that the proposed method can quantitatively analyze the driving interior in good agreement with generally accepted subjective opinion in the automobile industry.
ERIC Educational Resources Information Center
Wollner, Clemens; Deconinck, Frederik J. A.; Parkinson, Jim; Hove, Michael J.; Keller, Peter E.
2012-01-01
Aesthetic theories have long suggested perceptual advantages for prototypical exemplars of a given class of objects or events. Empirical evidence confirmed that morphed (quantitatively averaged) human faces, musical interpretations, and human voices are preferred over most individual ones. In this study, biological human motion was morphed and…
Dynamic three-dimensional model of the coronary circulation
NASA Astrophysics Data System (ADS)
Lehmann, Glen; Gobbi, David G.; Dick, Alexander J.; Starreveld, Yves P.; Quantz, M.; Holdsworth, David W.; Drangova, Maria
2001-05-01
A realistic numerical three-dimensional (3D) model of the dynamics of human coronary arteries has been developed. High-resolution 3D images of the coronary arteries of an excised human heart were obtained using a C-arm based computed tomography (CT) system. Cine bi-plane coronary angiograms were then acquired from a patient with similar coronary anatomy. These angiograms were used to determine the vessel motion, which was applied to the static 3D coronary tree. Corresponding arterial bifurcations were identified in the 3D CT image and in the 2D angiograms. The 3D positions of the angiographic landmarks, which were known throughout the cardiac cycle, were used to warp the 3D image via a non-linear thin-plate spline algorithm. The result was a set of 30 dynamic volumetric images sampling a complete cardiac cycle. To the best of our knowledge, the model presented here is the first dynamic 3D model that provides a true representation of both the geometry and motion of a human coronary artery tree. In the future, similar models can be generated to represent different coronary anatomy and motion. Such models are expected to become an invaluable tool during the development of dynamic imaging techniques such as MRI, multi-slice CT and 3D angiography.
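A landmark-driven thin-plate-spline warp of this kind can be sketched with SciPy's RBF interpolator; the landmark coordinates below are invented for illustration, not taken from the CT data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark correspondences: bifurcation positions in the
# static CT frame (src) vs. one cardiac phase from the angiograms (dst).
src = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.],
                [0., 0., 10.], [10., 10., 10.]])
dst = src + np.array([1.0, -0.5, 0.3])  # a simple shift, for illustration

# Thin-plate-spline deformation fitted to the landmarks, then applied
# to any other vessel point of the static model.
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')
points = np.array([[5., 5., 5.], [2., 8., 1.]])
moved = warp(points)
print(moved)
```

Because the thin-plate spline reproduces affine maps exactly, the illustrative shift above is recovered everywhere; real vessel motion would produce a smooth non-rigid deformation interpolating the landmark displacements.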
Human-like agents with posture planning ability
NASA Technical Reports Server (NTRS)
Jung, Moon R.; Badler, Norman
1992-01-01
Human body models are geometric structures which may be ultimately controlled by kinematically manipulating their joints, but for animation, it is desirable to control them in terms of task-level goals. We address a fundamental problem in achieving task-level postural goals: controlling massively redundant degrees of freedom. We reduce the degrees of freedom by introducing significant control points and vectors, e.g., the pelvis forward vector, palm up vector, and torso up vector. This reduced set of parameters is used to enumerate primitive motions and the motion dependencies among them, and thus to select from a small set of alternative postures (e.g., bend vs. squat to lower shoulder height). A plan for a given goal is found by incrementally constructing a goal/constraint set based on the given goal, motion dependencies, collision avoidance requirements, and discovered failures. Global postures satisfying a given goal/constraint set are determined with the help of incremental mental simulation which uses a robust inverse kinematics algorithm. The contributions of the present work are: (1) there is no need to specify beforehand the final goal configuration, which is unrealistic for the human body, and (2) the degrees-of-freedom problem becomes easier by representing body configurations in terms of 'lumped' control parameters, that is, control points and vectors.
Liu, Kai-Chun; Chan, Chia-Tai
2017-01-01
The proportion of the aging population is rapidly increasing around the world, which will cause stress on society and healthcare systems. In recent years, advances in technology have created new opportunities for automatic activities of daily living (ADL) monitoring to improve the quality of life and provide adequate medical service for the elderly. Such automatic ADL monitoring requires reliable ADL information on a fine-grained level, especially the status of interaction between body gestures and the environment in the real world. In this work, we propose a significant change spotting mechanism for periodic human motion segmentation during cleaning task performance. A novel approach is proposed based on the search for significant changes of gestures, which can manage critical technical issues in activity recognition, such as continuous data segmentation, individual variance, and category ambiguity. Three typical machine learning classification algorithms are utilized to identify significant change candidates: the Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and Naive Bayes (NB) algorithms. Overall, the proposed approach achieves an F1-score of 96.41% using the SVM classifier. The results show that the proposed approach can fulfill the requirement of fine-grained human motion segmentation for automatic ADL monitoring. PMID:28106853
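The classification step in such a change-spotting pipeline can be sketched with scikit-learn; the windowed features below are synthetic stand-ins for the paper's inertial data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic sliding-window features: class 1 = window containing a
# significant gesture change, class 0 = steady periodic motion.
n = 400
X0 = rng.normal(0.0, 1.0, size=(n, 6))
X1 = rng.normal(3.0, 1.0, size=(n, 6))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel='rbf').fit(Xtr, ytr)
print(f1_score(yte, clf.predict(Xte)))  # near-perfect on this separable data
```

Real inertial windows overlap in feature space far more than these synthetic clusters, which is why the reported F1-score sits below 100%.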
Time-lapse imaging of human heart motion with switched array UWB radar.
Brovoll, Sverre; Berger, Tor; Paichard, Yoann; Aardal, Øyvind; Lande, Tor Sverre; Hamran, Svein-Erik
2014-10-01
Radar systems for the detection of human heartbeats have mostly been single-channel systems with limited spatial resolution. In this paper, a radar system for ultra-wideband (UWB) imaging of the human heart is presented. To make the radar waves penetrate the human tissue, the antenna is placed very close to the body. The antenna is an array with eight elements, and an antenna switch system connects the radar to the individual elements in sequence to form an image. Successive images are used to build up time-lapse movies of the beating heart. Measurements on a human test subject are presented and the heart motion is estimated at different locations inside the body. The movies show rhythmic motion consistent with the beating heart, and the location and shape of the reflections correspond well with the expected response from the heart wall. The spatially dependent heart motion is compared to ECG recordings, and it is confirmed that heartbeat modulations are seen in the radar data. This work shows that radar imaging of the human heart may provide valuable information on the mechanical movement of the heart.
Human Interactive Triboelectric Nanogenerator as a Self-Powered Smart Seat.
Chandrasekhar, Arunkumar; Alluri, Nagamalleswara Rao; Saravanakumar, Balasubramaniam; Selvarajan, Sophia; Kim, Sang-Jae
2016-04-20
A lightweight, flexible, cost-effective, and robust, single-electrode-based Smart Seat-Triboelectric Nanogenerator (SS-TENG) is introduced as a promising eco-friendly approach for harvesting energy from the living environment, for use in integrated self-powered systems. An effective method for harvesting biomechanical energy from human motion such as walking, running, and sitting, utilizing widely adaptable everyday contact materials (newspaper, denim, polyethylene covers, and bus cards) is demonstrated. The working mechanism of the SS-TENG is based on the generation and transfer of triboelectric charge carriers between the active layer and user-friendly contact materials. The performance of SS-TENG (52 V and 5.2 μA for a multiunit SS-TENG) is systematically studied and demonstrated in a range of applications including a self-powered passenger seat number indicator and a STOP-indicator using LEDs, using a simple logical circuit. Harvested energy is used as a direct power source to drive 60 blue and green commercially available LEDs and a monochrome LCD. This feasibility study confirms that triboelectric nanogenerators are a suitable technology for energy harvesting from human motion during transportation, which could be used to operate a variety of wireless devices, GPS systems, electronic devices, and other sensors during travel.
Model of human visual-motion sensing
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.
1985-01-01
A model of how humans sense the velocity of moving images is proposed. The model exploits constraints provided by human psychophysics, notably that motion-sensing elements appear tuned for two-dimensional spatial frequency, and by the frequency spectrum of a moving image, namely, that its support lies in the plane in which the temporal frequency equals the dot product of the spatial frequency and the image velocity. The first stage of the model is a set of spatial-frequency-tuned, direction-selective linear sensors. The temporal frequency of the response of each sensor is shown to encode the component of the image velocity in the sensor direction. At the second stage, these components are resolved in order to measure the velocity of image motion at each of a number of spatial locations and spatial frequencies. The model has been applied to several illustrative examples, including apparent motion, coherent gratings, and natural image sequences. The model agrees qualitatively with human perception.
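The frequency-plane constraint cited in this abstract (the temporal frequency of a translating image equals the dot product of spatial frequency and image velocity) can be checked numerically with a drifting grating; the specific frequencies and velocities below are arbitrary illustrative values:

```python
import numpy as np

# Drifting grating sampled at one pixel: spatial frequency f = (fx, fy)
# in cycles/pixel, velocity v = (vx, vy) in pixels/frame. The constraint
# predicts a temporal frequency of f . v cycles/frame at any fixed pixel.
fx, fy = 0.10, 0.05
vx, vy = 1.0, 3.0
T = 64
t = np.arange(T)
x = y = 0.0  # observe the pixel at the origin
signal = np.cos(2 * np.pi * (fx * (x - vx * t) + fy * (y - vy * t)))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(T)  # cycles/frame
print(freqs[spectrum.argmax()], fx * vx + fy * vy)  # both ~0.25
```

The spectral peak lands exactly on f . v = 0.25 cycles/frame, which is the plane of support the model's second stage exploits to resolve velocity.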
Micro-patterned graphene-based sensing skins for human physiological monitoring
NASA Astrophysics Data System (ADS)
Wang, Long; Loh, Kenneth J.; Chiang, Wei-Hung; Manna, Kausik
2018-03-01
Ultrathin, flexible, conformal, and skin-like electronic transducers are emerging as promising candidates for noninvasive and nonintrusive human health monitoring. In this work, a wearable sensing membrane is developed by patterning a graphene-based solution onto ultrathin medical tape, which can then be attached to the skin for monitoring human physiological parameters and physical activity. Here, the sensor is validated for monitoring finger bending/movements and for recognizing hand motion patterns, thereby demonstrating its future potential for evaluating athletic performance, physical therapy, and designing next-generation human-machine interfaces. Furthermore, this study also quantifies the sensor’s ability to monitor eye blinking and radial pulse in real-time, which can find broader applications for the healthcare sector. Overall, the printed graphene-based sensing skin is highly conformable, flexible, lightweight, nonintrusive, mechanically robust, and is characterized by high strain sensitivity.
Identifying postural control and thresholds of instability utilizing a motion-based ATV simulator.
DOT National Transportation Integrated Search
2017-01-01
Our ATV simulator is currently the only one in existence that allows studies of human subjects engaged in active riding, a process that is necessary for ATV operators to perform in order to maintain vehicle control, in a virtual reality environ...
NASA Astrophysics Data System (ADS)
Lu, Zhong-Lin; Sperling, George
2002-10-01
Two theories are considered to account for the perception of motion of depth-defined objects in random-dot stereograms (stereomotion). In the Lu-Sperling three-motion-systems theory [J. Opt. Soc. Am. A 18, 2331 (2001)], stereomotion is perceived by the third-order motion system, which detects the motion of areas defined as figure (versus ground) in a salience map. Alternatively, in his comment [J. Opt. Soc. Am. A 19, 2142 (2002)], Patterson proposes a low-level motion-energy system dedicated to stereo depth. The critical difference between these theories is the preprocessing (figure-ground segmentation based on depth and other cues versus simply stereo depth) rather than the motion-detection algorithm itself (because the motion-extraction algorithm for third-order motion is undetermined). Furthermore, the ability of observers to perceive motion in alternating feature displays, in which stereo depth alternates with other features such as texture orientation, indicates that the third-order motion system can perceive stereomotion. This reduces the stereomotion question to: Is it third-order alone, or third-order plus dedicated depth-motion processing? Two new experiments intended to support the dedicated depth-motion processing theory are shown here to be perfectly accounted for by third-order motion, as are many older experiments that have previously been shown to be consistent with third-order motion. Cyclopean and rivalry images are shown to be a likely confound in stereomotion studies, rivalry motion being as strong as stereomotion. The phase dependence of superimposed same-direction stereomotion stimuli, rivalry stimuli, and isoluminant color stimuli indicates that these stimuli are processed in the same (third-order) motion system. The phase-dependence paradigm [Lu and Sperling, Vision Res. 35, 2697 (1995)] ultimately can resolve the question of which types of signals share a single motion detector. 
All the evidence accumulated so far is consistent with the three-motion-systems theory. © 2002 Optical Society of America
Video quality assessment method motivated by human visual perception
NASA Astrophysics Data System (ADS)
He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng
2016-11-01
Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive field of neurons in V1 for the motion perception of the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises a motion perception quality index and a spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from a difference-of-Gaussian filter bank, which produces the motion perception quality index, and a gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
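The spatial index based on gradient similarity can be sketched as a gradient-magnitude similarity map, a common formulation in the VQA literature; the Sobel operator and stability constant below are assumptions, since the paper's exact operator and constants are not given in the abstract:

```python
import numpy as np
from scipy import ndimage

def gradient_similarity(ref, dist, c=0.0026):
    """Mean gradient-magnitude similarity between two frames (illustrative)."""
    g_ref = np.hypot(ndimage.sobel(ref, 0), ndimage.sobel(ref, 1))
    g_dis = np.hypot(ndimage.sobel(dist, 0), ndimage.sobel(dist, 1))
    sim = (2 * g_ref * g_dis + c) / (g_ref**2 + g_dis**2 + c)
    return sim.mean()

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
print(gradient_similarity(frame, frame))                               # 1.0 for identical frames
print(gradient_similarity(frame, frame + 0.2 * rng.random((64, 64))))  # below 1.0 when distorted
```

A full VQA pipeline would pool such per-frame scores together with the temporal (motion-energy) index before regression.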
Motion capture for human motion measuring by using single camera with triangle markers
NASA Astrophysics Data System (ADS)
Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi
2005-12-01
This study aims to realize motion capture for measuring 3D human motion using a single camera. Although motion capture using multiple cameras is widely used in sports, medicine, engineering, and other fields, an optical motion capture method with one camera has not been established. In this paper, the authors achieve 3D motion capture with one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration estimates the 3D coordinate transformation parameters and a lens distortion parameter with the modified DLT method. The triangle markers enable calculation of the coordinate value in the depth direction of the camera coordinate system. Experiments on 3D position measurement using the MMC in a measurement space of a 2 m cube show that the average error in measuring the center of gravity of a triangle marker was less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by placing a triangle marker on each human joint, the MMC was able to capture walking, standing-up, and bending-and-stretching motions. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate the position of a marker from its measured velocity was proposed in order to improve the accuracy of the MMC.
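The depth recovery that the triangle markers enable can be sketched as solving for three depths along known viewing rays such that the inter-vertex distances match the marker's known side lengths; the ray directions and marker size below are illustrative, and the paper's modified-DLT calibration is not reproduced:

```python
import numpy as np
from scipy.optimize import least_squares

# Known side length of an equilateral triangle marker (10 cm).
L = 0.10

# Illustrative ground-truth vertices about 1 m from the camera; the unit
# viewing rays u are what the calibrated camera would actually provide.
true = np.array([[0.00, 0.00, 1.00],
                 [0.10, 0.00, 1.00],
                 [0.05, 0.0866, 1.00]])
u = true / np.linalg.norm(true, axis=1, keepdims=True)

def residuals(d):
    # Place each vertex at depth d[i] along its ray and compare distances.
    p = d[:, None] * u
    return [np.linalg.norm(p[0] - p[1]) - L,
            np.linalg.norm(p[1] - p[2]) - L,
            np.linalg.norm(p[0] - p[2]) - L]

d = least_squares(residuals, x0=np.ones(3)).x
print(d)  # recovered depths along the three rays
```

This is essentially a perspective-three-point problem; a good initial guess (here, roughly 1 m) keeps the solver on the physically correct branch.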
Human motion behavior while interacting with an industrial robot.
Bortot, Dino; Ding, Hao; Antonopolous, Alexandros; Bengler, Klaus
2012-01-01
Human workers and industrial robots both have specific strengths within industrial production. Advantageously, they complement each other perfectly, which has led to the development of human-robot interaction (HRI) applications. Bringing humans and robots together in the same workspace may lead to collisions, and avoiding them is a central safety requirement. It can be met with various sensor systems, all of them decelerating the robot when the distance to the human decreases alarmingly and applying the emergency stop when the distance becomes too small. As a consequence, the efficiency of the overall system suffers, because the robot has high idle times. Optimized path planning algorithms have to be developed to avoid this. The following study investigates human motion behavior in the proximity of an industrial robot. Three different kinds of encounters between the two entities were prompted under three robot speed levels. A motion tracking system was used to capture the motions. Results show that humans keep an average distance of about 0.5 m from the robot when an encounter occurs. Approach to the workbenches was influenced by the robot in ten of 15 cases. Furthermore, an increase in participants' walking velocity with higher robot velocities was observed.
Computational Hemodynamic Simulation of Human Circulatory System under Altered Gravity
NASA Technical Reports Server (NTRS)
Kim, Chang Sung; Kiris, Cetin; Kwak, Dochan
2003-01-01
A computational hemodynamics approach is presented to simulate the blood flow through the human circulatory system under altered gravity conditions. Numerical techniques relevant to hemodynamics are introduced, including non-Newtonian modeling of flow characteristics governed by red blood cells, distensible wall motion due to the heart pulse, and capillary bed modeling for the outflow boundary conditions. Gravitational body force terms are added to the Navier-Stokes equations to study the effects of gravity on internal flows. Six types of gravity benchmark problems are presented to provide a fundamental understanding of gravitational effects on the human circulatory system. For code validation, computed results are compared with steady and unsteady experimental data for non-Newtonian flows in a carotid bifurcation model and a curved circular tube, respectively. This computational approach is then applied to blood circulation in the human brain as a target problem. A three-dimensional, idealized Circle of Willis configuration is developed, with minor arteries truncated based on anatomical data. Demonstrated are not only the mechanism of the collateral circulation but also the effects of gravity on the distensible wall motion and the resultant flow patterns.
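The gravitational body-force modification mentioned in this abstract amounts to adding a term proportional to gravity to the incompressible momentum equation; a sketch of the standard formulation (not the paper's exact discretization), with a shear-rate-dependent stress for the non-Newtonian model:

```latex
\nabla \cdot \mathbf{u} = 0, \qquad
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
  + \mathbf{u} \cdot \nabla \mathbf{u}\right)
  = -\nabla p + \nabla \cdot \boldsymbol{\tau}(\dot{\gamma}) + \rho \mathbf{g}
```

Here \(\boldsymbol{\tau}(\dot{\gamma})\) is the shear-rate-dependent viscous stress capturing the red-blood-cell rheology, and \(\rho \mathbf{g}\) is the body force that is varied to represent altered-gravity conditions.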
Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2016-05-01
Motion video analysis is a challenging task, particularly if real-time analysis is required. How to provide suitable assistance for the human operator is therefore an important issue. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive, and motor load on the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface can help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
Binding biological motion and visual features in working memory.
Ding, Xiaowei; Zhao, Yangfan; Wu, Fan; Lu, Xiqian; Gao, Zaifeng; Shen, Mowei
2015-06-01
Working memory mechanisms for binding have been examined extensively in the last decade, yet few studies have explored bindings relating to human biological motion (BM). Human BM is the most salient and biologically significant kinetic information encountered in everyday life and is stored independently from other visual features (e.g., colors). The current study explored 3 critical issues of BM-related binding in working memory: (a) how many BM binding units can be retained in working memory, (b) whether involuntary object-based binding occurs during BM binding, and (c) whether the maintenance of BM bindings in working memory requires attention above and beyond that needed to maintain the constituent dimensions. We isolated motion signals of human BM from non-BM sources by using point-light displays as to-be-memorized BM and presented participants with colored BM in a change detection task. We found that working memory capacity for BM-color bindings is rather low; only 1 or 2 BM-color bindings could be retained in working memory regardless of the presentation format (Experiments 1-3). Furthermore, no object-based encoding took place for colored BM stimuli regardless of the processed dimensions (Experiments 4 and 5). Central executive attention contributes to the maintenance of BM-color bindings, yet maintaining BM bindings in working memory did not require more central attention than did maintaining the constituent dimensions in working memory (Experiment 6). Overall, these results suggest that keeping BM bindings in working memory is a fairly resource-demanding process, yet central executive attention does not play a special role in this cross-module binding. (c) 2015 APA, all rights reserved.
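Capacity estimates like the "1 or 2 bindings" reported above are commonly derived from change-detection hit and false-alarm rates via Cowan's K; the formula below is the standard single-probe estimator, not one stated in the abstract, and the example rates are made up:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (hit rate - false alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical session: set size 3, 70% hits, 20% false alarms.
print(cowan_k(3, 0.70, 0.20))  # → 1.5
```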
A Feasibility Study of View-independent Gait Identification
2012-03-01
[Abstract unavailable; only garbled indexing fragments were extracted. The fragments note that, for walking, footprint records for single pixels form clusters well separated in space and time, and cite works on gait analysis for human identification through manifold learning and HMMs, and a survey of computer-vision-based human motion capture.]
Dual-body magnetic helical robot for drilling and cargo delivery in human blood vessels
NASA Astrophysics Data System (ADS)
Lee, Wonseo; Jeon, Seungmun; Nam, Jaekwang; Jang, Gunhee
2015-05-01
We propose a novel dual-body magnetic helical robot (DMHR) manipulated by a magnetic navigation system. The proposed DMHR can generate helical motions to navigate in human blood vessels and to drill blood clots by an external rotating magnetic field. It can also generate release motions which are relative rotational motions between dual-bodies to release the carrying cargos to a target region by controlling the magnitude of an external magnetic field. Constraint equations were derived to selectively manipulate helical and release motions by controlling external magnetic fields. The DMHR was prototyped and various experiments were conducted to demonstrate its motions and verify its manipulation methods.
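The helical propulsion described above is driven by a field vector rotating in the plane perpendicular to the desired heading; the sketch below constructs such a field (the axis, magnitude, and frequency values are illustrative assumptions, and the paper's constraint equations for switching between drilling and release modes are not reproduced):

```python
import numpy as np

# Hedged sketch: external rotating magnetic field for helical propulsion.
# The field rotates in the plane perpendicular to the heading axis.
def rotating_field(axis, b_mag, freq_hz, t):
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    # Build two unit vectors spanning the plane perpendicular to the axis.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(axis[0]) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper)
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    phase = 2.0 * np.pi * freq_hz * t
    return b_mag * (np.cos(phase) * u + np.sin(phase) * v)

b = rotating_field([0.0, 0.0, 1.0], 10e-3, 5.0, 0.0)   # 10 mT, 5 Hz
print(np.linalg.norm(b))           # magnitude stays at b_mag
print(abs(np.dot(b, [0, 0, 1])))   # field stays perpendicular to the axis
```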
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim
2013-03-15
Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space.
Results: The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary. Conclusions: In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results on phantom, porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
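The TPS densification step (sparse MVF at control points interpolated to a dense MVF) can be sketched with SciPy's thin-plate-spline interpolator; the control points and displacement vectors below are random stand-ins for the surface-model output:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hedged sketch: densify a sparse 3D motion vector field with
# thin-plate-spline interpolation, in the spirit of the TPS method
# evaluated in the paper (control-point data here is synthetic).
rng = np.random.default_rng(0)
control_pts = rng.uniform(0, 100, size=(30, 3))   # sparse MVF positions (mm)
sparse_mvf = rng.normal(0, 2, size=(30, 3))       # displacement vectors (mm)

tps = RBFInterpolator(control_pts, sparse_mvf, kernel='thin_plate_spline')

query = rng.uniform(0, 100, size=(1000, 3))       # dense voxel centers
dense_mvf = tps(query)                            # interpolated dense MVF
print(dense_mvf.shape)                            # → (1000, 3)
```

With zero smoothing (the default), the interpolant reproduces the sparse vectors exactly at the control points.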
Discrimination of curvature from motion during smooth pursuit eye movements and fixation.
Ross, Nicholas M; Goettker, Alexander; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R
2017-09-01
Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ∼300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance. Oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ∼300 ms after pursuit onset. 
Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found in perceptual discrimination of curvature. Copyright © 2017 the American Physiological Society.
Human Sensibility Ergonomics Approach to Vehicle Simulator Based on Dynamics
NASA Astrophysics Data System (ADS)
Son, Kwon; Choi, Kyung-Hyun; Yoon, Ji-Sup
Simulators have been used to evaluate drivers' reactions to various transportation products. Most research, however, has concentrated on their technical performance. This paper considers the driver's motion perception on a vehicle simulator through the analysis of human sensibility ergonomics. A sensibility ergonomic method is proposed in order to improve the reliability of vehicle simulators. A passenger-vehicle simulator consists of three main modules: vehicle dynamics, virtual environment, and motion representation. To evaluate drivers' feedback, human perceptions are categorized into a set of verbal expressions, which were collected and investigated to find the most appropriate ones for translational and angular accelerations of the simulator. The cut-off frequency of the washout filter in the representation module is selected as one sensibility factor. Sensibility experiments were carried out to find a correlation between the expressions and the cut-off frequency of the filter. This study suggests a methodology to obtain an ergonomic database that can be applied to the sensibility evaluation of dynamic simulators.
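A classical washout filter is, at its core, a high-pass filter on the commanded accelerations, so that sustained cues decay and the platform drifts back to neutral; the sketch below shows this behavior, with a cut-off frequency and filter order chosen for illustration rather than taken from the study:

```python
import numpy as np
from scipy.signal import butter, lfilter

# Hedged sketch of a classical washout high-pass filter: a sustained
# acceleration step is "washed out" so the motion platform can return
# to its neutral position. The cut-off frequency is the sensibility
# factor varied in the study; 0.5 Hz here is illustrative.
def washout(accel, fs, cutoff_hz, order=2):
    b, a = butter(order, cutoff_hz, btype='highpass', fs=fs)
    return lfilter(b, a, accel)

fs = 100.0                           # sample rate (Hz)
t = np.arange(0, 10, 1 / fs)
accel = np.ones_like(t)              # sustained 1 m/s^2 step input
out = washout(accel, fs, cutoff_hz=0.5)
print(round(abs(out[-1]), 3))        # the step decays toward zero
```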
Goal attribution to inanimate moving objects by Japanese macaques (Macaca fuscata)
Atsumi, Takeshi; Koda, Hiroki; Masataka, Nobuo
2017-01-01
Humans interpret others’ goals based on motion information, and this capacity contributes to our mental reasoning. The present study sought to determine whether Japanese macaques (Macaca fuscata) perceive goal-directedness in chasing events depicted by two geometric particles. In Experiment 1, two monkeys and adult humans were trained to discriminate between Chasing and Random sequences. We then introduced probe stimuli with various levels of correlation between the particle trajectories to examine whether participants performed the task using higher correlation. Participants chose stimuli with the highest correlations by chance, suggesting that correlations were not the discriminative cue. Experiment 2 examined whether participants focused on particle proximity. Participants differentiated between Chasing and Control sequences; the distance between two particles was identical in both. Results indicated that, like humans, the Japanese macaques did not use physical cues alone to perform the discrimination task and integrated the cues spontaneously. This suggests that goal attribution resulting from motion information is a widespread cognitive phenotype in primate species. PMID:28053305
Dual gait generative models for human motion estimation from a single camera.
Zhang, Xin; Fan, Guoliang
2010-08-01
This paper presents a general gait representation framework for video-based human motion estimation. Specifically, we want to estimate the kinematics of an unknown gait from image sequences taken by a single camera. This approach involves two generative models, called the kinematic gait generative model (KGGM) and the visual gait generative model (VGGM), which represent the kinematics and appearances of a gait by a few latent variables, respectively. The concept of gait manifold is proposed to capture the gait variability among different individuals by which KGGM and VGGM can be integrated together, so that a new gait with unknown kinematics can be inferred from gait appearances via KGGM and VGGM. Moreover, a new particle-filtering algorithm is proposed for dynamic gait estimation, which is embedded with a segmental jump-diffusion Markov Chain Monte Carlo scheme to accommodate the gait variability in a long observed sequence. The proposed algorithm is trained from the Carnegie Mellon University (CMU) Mocap data and tested on the Brown University HumanEva data with promising results.
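A minimal bootstrap particle filter conveys the propagate-weight-resample loop underlying the dynamic estimation described above; this toy 1D version is a stand-in only, and does not reproduce the paper's segmental jump-diffusion MCMC scheme, gait-manifold state, or image likelihood:

```python
import numpy as np

# Hedged sketch: bootstrap particle filter for a 1D latent state with
# Gaussian process and observation noise (a toy stand-in for the
# gait-state filter in the paper).
rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=500, proc_std=0.5, obs_std=1.0):
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, proc_std, n_particles)   # propagate
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)   # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

true_state = np.linspace(0, 5, 50)                 # slowly drifting state
obs = true_state + rng.normal(0, 1.0, 50)          # noisy observations
est = particle_filter(obs)
print(round(float(est[-1]), 2))                    # estimate near 5.0
```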
On the Visual Input Driving Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean
1996-01-01
Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.
ERIC Educational Resources Information Center
Bidet-Ildei, Christel; Kitromilides, Elenitsa; Orliaguet, Jean-Pierre; Pavlova, Marina; Gentaz, Edouard
2014-01-01
In human newborns, spontaneous visual preference for biological motion is reported to occur at birth, but the factors underpinning this preference are still under debate. Using a standard visual preferential looking paradigm, 4 experiments were carried out in 3-day-old human newborns to assess the influence of translational displacement on perception…
2011-03-24
[Abstract unavailable; only garbled indexing fragments were extracted. The fragments describe a HOG-based dismount detector that cues off the presence of human skin to limit false detections and reduce search-space complexity, using shortwave infrared wavelengths in addition to the visible spectrum to identify human skin and selectively scan the image; they also note that limitations are expected on the humans' range of motion or stance depending on the angle of the acquisition camera.]
NASA Astrophysics Data System (ADS)
Bai, Yang; Tofel, Pavel; Hadas, Zdenek; Smilek, Jan; Losak, Petr; Skarvada, Pavel; Macku, Robert
2018-06-01
The capability of using a linear kinetic energy harvester, a cantilever-structured piezoelectric energy harvester, to harvest human motion in real-life activities is investigated. The full loop of design, simulation, fabrication and testing of the energy harvester is presented. With the smart wristband/watch-sized energy harvester, a root-mean-square output power of 50 μW is obtained from real-life hand-arm motion in daily life. Such a power is enough to make some low-power sensors self-powered. This paper provides a reliable comparison with harvesters based on nonlinear structures. It also helps designers decide whether to choose a nonlinear structure for a particular energy harvester based on different application scenarios.
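A root-mean-square power figure like the 50 μW above is computed from the harvester's output voltage across a resistive load as P = V_rms^2 / R; the sinusoidal voltage trace and load resistance below are synthetic values chosen so the arithmetic lands at 50 μW, not measurements from the paper:

```python
import numpy as np

# Hedged sketch: RMS power delivered to a resistive load, from a
# sampled output voltage trace (synthetic signal, illustrative load).
def rms_power(voltage_v, load_ohm):
    v_rms = np.sqrt(np.mean(np.square(voltage_v)))
    return v_rms ** 2 / load_ohm

t = np.linspace(0, 1, 1000, endpoint=False)
v = 0.5 * np.sin(2 * np.pi * 5 * t)       # 0.5 V amplitude, 5 Hz
p = rms_power(v, load_ohm=2500.0)
print(round(p * 1e6, 1))                  # → 50.0 (microwatts)
```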
NASA Astrophysics Data System (ADS)
Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter
2002-06-01
In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.
Development of an empirically based dynamic biomechanical strength model
NASA Technical Reports Server (NTRS)
Pandya, A.; Maida, J.; Aldridge, A.; Hasson, S.; Woolford, B.
1992-01-01
The focus here is on the development of a dynamic strength model for humans. Our model is based on empirical data. The shoulder, elbow, and wrist joints are characterized in terms of maximum isolated torque, position, and velocity in all rotational planes. This information is reduced by a least squares regression technique into a table of single variable second degree polynomial equations determining the torque as a function of position and velocity. The isolated joint torque equations are then used to compute forces resulting from a composite motion, which in this case is a ratchet wrench push and pull operation. What is presented here is a comparison of the computed or predicted results of the model with the actual measured values for the composite motion.
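The regression step described above (reducing measured isolated-joint torques to single-variable second-degree polynomials) can be sketched with a least-squares quadratic fit; the torque-velocity samples below are invented for illustration, not the paper's empirical data:

```python
import numpy as np

# Hedged sketch: fit torque as a second-degree polynomial of joint
# velocity at one fixed position (synthetic sample data).
velocity = np.array([-120.0, -60.0, 0.0, 60.0, 120.0])  # deg/s
torque = np.array([70.0, 62.0, 50.0, 36.0, 20.0])       # N*m (made up)

coeffs = np.polyfit(velocity, torque, deg=2)  # [a, b, c] for a*v^2 + b*v + c
predict = np.poly1d(coeffs)

# Such per-joint polynomial tables can then be queried during a
# composite motion to predict available torque.
print(round(float(predict(30.0)), 2))         # predicted torque at 30 deg/s
```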
Ricci, Luca; Formica, Domenico; Sparaci, Laura; Lasorsa, Francesca Romana; Taffoni, Fabrizio; Tamilia, Eleonora; Guglielmelli, Eugenio
2014-01-09
Recent advances in wearable sensor technologies for motion capture have produced devices, mainly based on magneto and inertial measurement units (M-IMU), that are now suitable for out-of-the-lab use with children. In fact, the reduced size, weight and the wireless connectivity meet the requirement of minimal obtrusiveness and allow scientists to analyze children's motion in daily life contexts. Typical use of M-IMU motion capture systems is based on attaching a sensing unit to each body segment of interest. The correct use of this setup requires a specific calibration methodology that allows mapping measurements from the sensors' frames of reference into useful kinematic information in the human limbs' frames of reference. The present work addresses this specific issue, presenting a calibration protocol to capture the kinematics of the upper limbs and thorax in typically developing (TD) children. The proposed method allows the construction, on each body segment, of a meaningful system of coordinates that are representative of real physiological motions and that are referred to as functional frames (FFs). We will also present a novel cost function for the Levenberg-Marquardt algorithm, to retrieve the rotation matrices between each sensor frame (SF) and the corresponding FF. Reported results on a group of 40 children suggest that the method is repeatable and reliable, opening the way to the extensive use of this technology for out-of-the-lab motion capture in children.
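The core of such a sensor-to-segment calibration is recovering a fixed rotation between paired vector measurements in the sensor frame and the functional frame. The paper uses a Levenberg-Marquardt fit with a custom cost; the SVD-based Kabsch solution below is a simpler stand-in for the same alignment idea, run on synthetic noiseless data:

```python
import numpy as np

# Hedged sketch: recover the rotation mapping sensor-frame vectors to
# functional-frame vectors via the Kabsch algorithm (a substitute for
# the paper's Levenberg-Marquardt formulation).
def kabsch(sensor_vecs, functional_vecs):
    h = sensor_vecs.T @ functional_vecs
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # maps sensor -> functional

rng = np.random.default_rng(2)
true_r = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # random rotation
if np.linalg.det(true_r) < 0:
    true_r[:, 0] *= -1
s = rng.normal(size=(20, 3))        # synthetic sensor-frame directions
f = s @ true_r.T                    # the same directions in the FF
est_r = kabsch(s, f)
print(np.allclose(est_r, true_r, atol=1e-8))  # → True
```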
Environmental criteria for human comfort. A study of the related literature
NASA Technical Reports Server (NTRS)
Jacobson, I. D.
1974-01-01
The data presented have for the most part been extracted from existing in-house reports and memoranda. The variables considered are motion, noise, temperature and pressure. The report is broken down into chapters for each of the environmental variables, and criteria are proposed based on the existing literature.
Fong, Daniel Tik-Pui; Chan, Yue-Yan
2010-01-01
Wearable motion sensors consisting of accelerometers, gyroscopes and magnetic sensors are readily available nowadays. The small size and low production costs of motion sensors make them a very good tool for human motions analysis. However, data processing and accuracy of the collected data are important issues for research purposes. In this paper, we aim to review the literature related to usage of inertial sensors in human lower limb biomechanics studies. A systematic search was done in the following search engines: ISI Web of Knowledge, Medline, SportDiscus and IEEE Xplore. Thirty nine full papers and conference abstracts with related topics were included in this review. The type of sensor involved, data collection methods, study design, validation methods and its applications were reviewed. PMID:22163542
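A common fusion technique in the studies reviewed above is a complementary filter, which combines the gyroscope's low-drift short-term rate with the accelerometer's drift-free tilt reference; the coefficient and sample data below are illustrative, not drawn from any reviewed study:

```python
import numpy as np

# Hedged sketch: complementary filter fusing gyroscope rate and
# accelerometer-derived tilt into one angle estimate.
def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
    angle = 0.0
    out = []
    for w, a in zip(gyro_rate, accel_angle):
        # trust the integrated gyro short-term, the accelerometer long-term
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return np.array(out)

dt = 0.01                           # 100 Hz sampling
gyro = np.zeros(500)                # stationary: no angular rate
accel = np.full(500, 10.0)          # accelerometer reads 10 deg of tilt
angles = complementary_filter(gyro, accel, dt)
print(round(angles[-1], 2))         # → 10.0 (converges to the reference)
```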
Expressive facial animation synthesis by learning speech coarticulation and expression spaces.
Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth
2006-01-01
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
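The PCA reduction behind an expression eigenspace like the PIEES can be sketched by stacking per-frame marker-motion vectors and keeping the leading principal components; the random matrix below stands in for the captured, time-warped expression signals:

```python
import numpy as np

# Hedged sketch: build a low-dimensional expression eigenspace by PCA
# over per-frame facial marker data (synthetic data as a stand-in).
rng = np.random.default_rng(3)
frames = rng.normal(size=(200, 90))      # 200 frames x 30 markers x 3D
mean = frames.mean(axis=0)
centered = frames - mean
u, s, vt = np.linalg.svd(centered, full_matrices=False)

k = 10                                   # retained principal components
eigenspace = vt[:k]                      # basis of the expression space
coeffs = centered @ eigenspace.T         # low-dimensional expression signal
reconstruction = coeffs @ eigenspace + mean
print(coeffs.shape)                      # → (200, 10)
```

New expression signals would then be synthesized in this low-dimensional coefficient space and mapped back through the basis.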
The mirror game as a paradigm for studying the dynamics of two people improvising motion together
Noy, Lior; Dekel, Erez; Alon, Uri
2011-01-01
Joint improvisation is the creative action of two or more people without a script or designated leader. Examples include improvisational theater and music, and day-to-day activities such as conversations. In joint improvisation, novel action is created, emerging from the interaction between people. Although central to creative processes and social interaction, joint improvisation remains largely unexplored due to the lack of experimental paradigms. Here we introduce a paradigm based on a theater practice called the mirror game. We measured the hand motions of two people mirroring each other at high temporal and spatial resolution. We focused on expert actors and musicians skilled in joint improvisation. We found that players can jointly create novel complex motion without a designated leader, synchronized to less than 40 ms. In contrast, we found that designating one player as leader deteriorated performance: The follower showed 2–3 Hz oscillation around the leader's smooth trajectory, decreasing synchrony and reducing the range of velocities reached. A mathematical model suggests a mechanism for these observations based on mutual agreement on future motion in mirrored reactive–predictive controllers. This is a step toward understanding the human ability to create novelty by improvising together. PMID:22160696
Torres, Luis G.; Kuntz, Alan; Gilbert, Hunter B.; Swaney, Philip J.; Hendrick, Richard J.; Webster, Robert J.; Alterovitz, Ron
2015-01-01
Concentric tube robots are thin, tentacle-like devices that can move along curved paths and can potentially enable new, less invasive surgical procedures. Safe and effective operation of this type of robot requires that the robot’s shaft avoid sensitive anatomical structures (e.g., critical vessels and organs) while the surgeon teleoperates the robot’s tip. However, the robot’s unintuitive kinematics makes it difficult for a human user to manually ensure obstacle avoidance along the entire tentacle-like shape of the robot’s shaft. We present a motion planning approach for concentric tube robot teleoperation that enables the robot to interactively maneuver its tip to points selected by a user while automatically avoiding obstacles along its shaft. We achieve automatic collision avoidance by precomputing a roadmap of collision-free robot configurations based on a description of the anatomical obstacles, which are attainable via volumetric medical imaging. We also mitigate the effects of kinematic modeling error in reaching the goal positions by adjusting motions based on robot tip position sensing. We evaluate our motion planner on a teleoperated concentric tube robot and demonstrate its obstacle avoidance and accuracy in environments with tubular obstacles. PMID:26413381
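The precomputation idea above, sampling collision-free configurations and connecting near neighbors into a roadmap, is the probabilistic roadmap (PRM) pattern; the sketch below replaces the concentric-tube kinematics and anatomy meshes with a toy 2D point robot and circular obstacles, and omits edge collision checking:

```python
import numpy as np

# Hedged sketch: PRM-style roadmap precomputation for a toy 2D robot.
rng = np.random.default_rng(4)
obstacles = [((5.0, 5.0), 2.0)]                    # (center, radius)

def collision_free(q):
    """True if configuration q lies outside every circular obstacle."""
    return all(np.hypot(q[0] - c[0], q[1] - c[1]) > r for c, r in obstacles)

# Sample configurations and keep the collision-free ones.
samples = [q for q in rng.uniform(0, 10, size=(300, 2)) if collision_free(q)]

# Connect nearby free configurations into roadmap edges
# (edge collision checks omitted for brevity).
edges = []
for i, qa in enumerate(samples):
    for j in range(i + 1, len(samples)):
        if np.linalg.norm(qa - samples[j]) < 1.5:
            edges.append((i, j))

print(len(samples) > 0 and len(edges) > 0)         # → True
```

At runtime, teleoperated tip goals would be served by searching this precomputed graph rather than planning from scratch.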