Human motion analysis with detection of subpart deformations
NASA Astrophysics Data System (ADS)
Wang, Juhui; Lorette, Guy; Bouthemy, Patrick
1992-06-01
One essential constraint used in 3-D motion estimation from optical projections is the rigidity assumption. Because of muscle deformations in human motion, this rigidity requirement is often violated for some regions of the human body, and global methods usually fail to yield stable solutions. This paper presents a model-based approach to combating the effect of muscle deformations in human motion analysis. The approach is based on two main stages. In the first stage, the human body is partitioned into different areas, where each area is consistent with a general motion model (not necessarily corresponding to a physically existing motion pattern). In the second stage, regions hypothesized not to be induced by a specific human motion pattern are eliminated; each hypothesis is generated using specific knowledge about human motion. A global method is then used to estimate the 3-D motion parameters on the basis of the valid segments. Experiments based on a cycling motion sequence are presented.
MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.
Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn
2013-12-01
We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in the presence of large motion data collections. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first visualization, users are able to search for interesting sub-sequences of motion using a query-by-example metaphor and to explore search results through details on demand. We developed MotionExplorer in close collaboration with its target users, researchers working on human motion synthesis and analysis, and evaluated it in a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.
Quantitative assessment of human motion using video motion analysis
NASA Technical Reports Server (NTRS)
Probe, John D.
1993-01-01
In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.
Quantitative assessment of human motion using video motion analysis
NASA Technical Reports Server (NTRS)
Probe, John D.
1990-01-01
In the study of the dynamics and kinematics of the human body, a wide variety of technologies was developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development coupled with recent advances in video technology have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System to develop data on shirt-sleeved and space-suited human performance in order to plan efficient on orbit intravehicular and extravehicular activities. The system is described.
Yu, Zhaoyuan; Yuan, Linwang; Luo, Wen; Feng, Linyao; Lv, Guonian
2015-01-01
Passive infrared (PIR) motion detectors, which can support long-term continuous observation, are widely used for human motion analysis. Extracting all possible trajectories from PIR sensor networks is important. Because the PIR sensor does not log location or individual information, none of the existing methods can generate all possible human motion trajectories that satisfy various spatio-temporal constraints from the sensor activation log data. In this paper, a geometric algebra (GA)-based approach is developed to generate all possible human trajectories from the PIR sensor network data. Firstly, the geographical network, the sensor activation response sequences and the human motion are represented as algebraic elements using GA. The human motion status at each sensor activation is labeled using GA-based trajectory tracking. Then, a matrix multiplication approach is developed to dynamically generate the human trajectories according to the sensor activation log and the spatio-temporal constraints. The method is tested with the MERL motion database. Experiments show that our method can flexibly extract the major statistical patterns of human motion. Compared with direct statistical analysis and the tracklet graph method, our method can effectively extract all possible trajectories of the human motion, which makes it more accurate. Our method is also likely to provide a new way to filter other passive sensor log data in sensor networks. PMID:26729123
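The matrix-multiplication idea described in this abstract can be illustrated with a much simpler path-propagation sketch. The following Python snippet is not the paper's geometric-algebra formulation; the adjacency matrix and activation log are made-up examples used only to show how candidate trajectories consistent with both the network topology and the activation sequence can be enumerated.

```python
# Minimal sketch (not the paper's GA method): enumerate all sensor-to-sensor
# trajectories that are consistent with an activation log and with the network
# adjacency, using repeated propagation over candidate paths.
import numpy as np

# adjacency[i, j] = 1 if a person can move from sensor zone i to zone j in one step
adjacency = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
])

# activation_log[t] = set of sensors that fired at time step t (hypothetical data)
activation_log = [{0}, {0, 1}, {1, 2}, {3}]

def enumerate_trajectories(adjacency, activation_log):
    """Return every sensor sequence that picks one active sensor per step
    and only moves between adjacent (or identical) zones."""
    paths = [[s] for s in activation_log[0]]
    for active in activation_log[1:]:
        paths = [p + [s] for p in paths for s in active if adjacency[p[-1], s]]
    return paths

for trajectory in enumerate_trajectories(adjacency, activation_log):
    print(trajectory)
```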
Recurrence plots and recurrence quantification analysis of human motion data
NASA Astrophysics Data System (ADS)
Josiński, Henryk; Michalczuk, Agnieszka; Świtoński, Adam; Szczesna, Agnieszka; Wojciechowski, Konrad
2016-06-01
The authors present an exemplary application of recurrence plots, cross recurrence plots and recurrence quantification analysis for the exploration of experimental time series describing selected aspects of human motion. The time series were extracted from treadmill gait sequences recorded in the Human Motion Laboratory (HML) of the Polish-Japanese Academy of Information Technology in Bytom, Poland, by means of the Vicon system. The analysis focused on time series representing movements of the hip, knee, ankle and wrist joints in the sagittal plane.
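As a hedged illustration of the technique named in this record, the sketch below builds a recurrence plot for a one-dimensional joint-angle signal and computes two basic recurrence quantification measures. The sinusoidal "gait" signal and the threshold are illustrative assumptions, not data from the HML.

```python
# Minimal sketch of a recurrence plot and two basic RQA measures (recurrence
# rate and determinism) for a one-dimensional joint-angle time series.
# No time-delay embedding is applied here, for clarity.
import numpy as np

t = np.linspace(0, 10 * np.pi, 500)
signal = np.sin(t) + 0.05 * np.random.randn(t.size)   # stand-in for a knee angle

# Recurrence matrix: R[i, j] = 1 when states i and j are closer than eps
eps = 0.1
dist = np.abs(signal[:, None] - signal[None, :])
R = (dist < eps).astype(int)

recurrence_rate = R.mean()

def determinism(R, lmin=2):
    """Fraction of recurrent points on diagonal lines of length >= lmin
    (the main diagonal is included here for simplicity)."""
    n = R.shape[0]
    on_lines = 0
    for k in range(-(n - 1), n):
        run = 0
        for v in np.diagonal(R, offset=k):
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
        if run >= lmin:
            on_lines += run
    return on_lines / max(R.sum(), 1)

print(f"recurrence rate = {recurrence_rate:.3f}, determinism = {determinism(R):.3f}")
```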
MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.
Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik
2016-01-01
Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
Paths of Movement for Selected Body Segments During Typical Pilot Tasks
1976-03-01
Scope ... Past Human Motion Investigations ... Experimental Techniques in Human ... of literature has been generated during the past few decades in the field of human-motion recording and analysis. However, in most of these studies body ... to meet the COMBIMAN model requirements. Past Human Motion Investigations: The 15th century artist-scientist, Leonardo da Vinci, is generally credited ...
Motion based parsing for video from observational psychology
NASA Astrophysics Data System (ADS)
Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray
2006-01-01
In Psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content based video analysis that allow automated parsing of video from one such study involving Dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.
Human Factors Vehicle Displacement Analysis: Engineering In Motion
NASA Technical Reports Server (NTRS)
Atencio, Laura Ashley; Reynolds, David; Robertson, Clay
2010-01-01
While positioned on the launch pad at the Kennedy Space Center, tall stacked launch vehicles are exposed to the natural environment. Varying directional winds and vortex shedding cause the vehicle to sway in an oscillating motion. The Human Factors team recognizes that vehicle sway may hinder ground crew operations, impact ground system designs, and ultimately affect launch availability. The objective of this study is to physically simulate the predicted oscillation envelopes identified by analysis and to conduct a human factors analysis assessing the ability to carry out essential Upper Stage (US) ground operator tasks under the predicted vehicle motion.
Hierarchical Aligned Cluster Analysis for Temporal Clustering of Human Motion.
Zhou, Feng; De la Torre, Fernando; Hodgins, Jessica K
2013-03-01
Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.
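The following sketch is a deliberately simplified stand-in for the temporal-clustering idea above, not an implementation of HACA: it cuts a signal into fixed-length windows, computes pairwise dynamic-time-warping distances, and groups the windows with hierarchical clustering. The synthetic signal, window length and cluster count are assumptions.

```python
# Simplified temporal clustering (not HACA itself): fixed windows + DTW distances
# + average-linkage hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Synthetic "motion" stream: a slow oscillation followed by a faster one
t = np.arange(1000)
signal = np.concatenate([np.sin(0.05 * t[:500]), np.sin(0.15 * t[500:])])

window = 50
segments = [signal[i:i + window] for i in range(0, len(signal) - window + 1, window)]

# Condensed pairwise distance matrix for scipy's linkage
n = len(segments)
pairwise = [dtw(segments[i], segments[j]) for i in range(n) for j in range(i + 1, n)]
labels = fcluster(linkage(pairwise, method="average"), t=2, criterion="maxclust")
print(labels)   # windows from the two motion regimes should fall into two clusters
```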
Full-motion video analysis for improved gender classification
NASA Astrophysics Data System (ADS)
Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.
2014-06-01
The ability of computer systems to perform gender classification using the dynamic motion of the human subject has important applications in medicine, human factors, and human-computer interface systems. Previous works in motion analysis have used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video motion capture provides a dataset with higher temporal and spatial resolution for the analysis of dynamic motion. Works using motion capture data have been limited by small datasets collected in controlled environments. In this paper, we apply machine learning techniques to a new dataset with a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on this larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
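A hedged sketch of the comparison reported above: leave-one-out cross-validation of a linear discriminant classifier versus an RBF-kernel SVM. The random feature matrix and labels stand in for the real motion descriptors, which are not available here, so the printed accuracies are only illustrative.

```python
# LDA vs. RBF SVM with leave-one-out cross-validation on a synthetic stand-in
# for the 98-trial motion feature set.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(98, 20))          # 98 trials x 20 motion features (assumed shape)
y = rng.integers(0, 2, size=98)        # 0 = female, 1 = male (synthetic labels)

loo = LeaveOneOut()
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("RBF SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)))]:
    acc = cross_val_score(clf, X, y, cv=loo).mean()
    print(f"{name}: leave-one-out accuracy = {acc:.2f}")
```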
Peelen, Marius V; Wiggett, Alison J; Downing, Paul E
2006-03-16
Accurate perception of the actions and intentions of other people is essential for successful interactions in a social environment. Several cortical areas that support this process respond selectively in fMRI to static and dynamic displays of human bodies and faces. Here we apply pattern-analysis techniques to arrive at a new understanding of the neural response to biological motion. Functionally defined body-, face-, and motion-selective visual areas all responded significantly to "point-light" human motion. Strikingly, however, only body selectivity was correlated, on a voxel-by-voxel basis, with biological motion selectivity. We conclude that (1) biological motion, through the process of structure-from-motion, engages areas involved in the analysis of the static human form; (2) body-selective regions in posterior fusiform gyrus and posterior inferior temporal sulcus overlap with, but are distinct from, face- and motion-selective regions; (3) the interpretation of region-of-interest findings may be substantially altered when multiple patterns of selectivity are considered.
A marker-free system for the analysis of movement disabilities.
Legrand, L; Marzani, F; Dusserre, L
1998-01-01
A major step toward improving the treatment of disabled persons may be achieved by using motion analysis equipment. We are developing such a system. It allows the analysis of planar human motion (e.g. gait) without the use of tracked markers. The system is composed of one fixed camera which acquires an image sequence of a human in motion. The processing is then divided into two steps: first, a large number of pixels belonging to the boundaries of the human body are extracted at each acquisition time. Secondly, a two-dimensional model of the human body, based on tapered superquadrics, is successively matched with the sets of pixels previously extracted; a specific fuzzy clustering process is used for this purpose. Moreover, an optical flow procedure gives a prediction of the model location at each acquisition time from its location at the previous time. Finally, we present some results of this process applied to a leg in motion.
Cell motion predicts human epidermal stemness
Toki, Fujio; Tate, Sota; Imai, Matome; Matsushita, Natsuki; Shiraishi, Ken; Sayama, Koji; Toki, Hiroshi; Higashiyama, Shigeki
2015-01-01
Image-based identification of cultured stem cells and noninvasive evaluation of their proliferative capacity advance cell therapy and stem cell research. Here we demonstrate that human keratinocyte stem cells can be identified in situ by analyzing cell motion during their cultivation. Modeling experiments suggested that the clonal type of cultured human clonogenic keratinocytes can be efficiently determined by analysis of early cell movement. Image analysis experiments demonstrated that keratinocyte stem cells indeed display a unique rotational movement that can be identified as early as the two-cell stage colony. We also demonstrate that α6 integrin is required for both rotational and collective cell motion. Our experiments provide, for the first time, strong evidence that cell motion and epidermal stemness are linked. We conclude that early identification of human keratinocyte stem cells by image analysis of cell movement is a valid parameter for quality control of cultured keratinocytes for transplantation. PMID:25897083
Vakanski, A; Ferguson, JM; Lee, S
2016-01-01
Objective: The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, will perform data analysis by comparing the performed motions to a reference model of prescribed motions, and will send the analysis results to the patient's physician with recommendations for improvement. Methods: The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to an exercise prescribed by a physiotherapist to a patient, recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results: The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion: The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation. The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons. PMID:28111643
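The evaluation idea, scoring a performance by its mean log-likelihood under a Gaussian mixture fitted to reference motions, can be sketched without the paper's recurrent autoencoder or mixture-density network. The snippet below uses a plain Gaussian mixture as a simplified stand-in, and the synthetic reference and test frames are assumptions.

```python
# Mean log-likelihood under a Gaussian mixture as a consistency metric
# (simplified stand-in for the paper's mixture-density network).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Reference exercise: frames of a 3-D reduced representation of correct motion
reference_frames = rng.normal(loc=0.0, scale=1.0, size=(2000, 3))

# Patient performance: similar motion with a systematic offset (simulated deviation)
patient_frames = rng.normal(loc=0.6, scale=1.0, size=(500, 3))

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(reference_frames)

ref_score = gmm.score(reference_frames)    # mean log-likelihood per frame
patient_score = gmm.score(patient_frames)
print(f"reference mean log-likelihood: {ref_score:.2f}")
print(f"patient   mean log-likelihood: {patient_score:.2f}  (lower = less consistent)")
```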
Low-cost human motion capture system for postural analysis onboard ships
NASA Astrophysics Data System (ADS)
Nocerino, Erica; Ackermann, Sebastiano; Del Pizzo, Silvio; Menna, Fabio; Troisi, Salvatore
2011-07-01
The study of human equilibrium, also known as postural stability, concerns different research sectors (medicine, kinesiology, biomechanics, robotics, sport) and is usually performed employing motion analysis techniques for recording human movements and posture. A wide range of techniques and methodologies has been developed, but the choice of instrumentation and sensors depends on the requirements of the specific application. Postural stability is a topic of great interest for the maritime community, since ship motions can make maintaining an upright stance demanding and difficult, with hazardous consequences for the safety of people onboard. The need to capture the motion of an individual standing on a ship during its daily service does not permit the use of the optical systems commonly employed for human motion analysis: these sensors are not designed to operate in disadvantageous environmental conditions (water, wetness, saltiness) or under suboptimal lighting. The solution proposed in this study consists of a motion acquisition system that can be easily used onboard ships. It makes use of two different methodologies: (I) motion capture with videogrammetry and (II) motion measurement with an Inertial Measurement Unit (IMU). The developed image-based motion capture system, made up of three low-cost, light and compact video cameras, was validated against a commercial optical system and then used for testing the reliability of the inertial sensors. In this paper, the whole process of planning, designing, calibrating, and assessing the accuracy of the motion capture system is reported and discussed. Results from laboratory tests and preliminary campaigns in the field are presented.
Analysis in Motion Initiative – Human Machine Intelligence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaha, Leslie
As computers and machines become more pervasive in our everyday lives, we are looking for ways for humans and machines to work more intelligently together. How can we help machines understand their users so the team can do smarter things together? The Analysis in Motion Initiative is advancing the science of human machine intelligence — creating human-machine teams that work better together to make correct, useful, and timely interpretations of data.
Discomfort Evaluation of Truck Ingress/Egress Motions Based on Biomechanical Analysis
Choi, Nam-Chul; Lee, Sang Hun
2015-01-01
This paper presents a quantitative discomfort evaluation method based on biomechanical analysis results for human body movement, as well as its application to an assessment of the discomfort for truck ingress and egress. In this study, the motions of a human subject entering and exiting truck cabins with different types, numbers, and heights of footsteps were first measured using an optical motion capture system and load sensors. Next, the maximum voluntary contraction (MVC) ratios of the muscles were calculated through a biomechanical analysis of the musculoskeletal human model for the captured motion. Finally, the objective discomfort was evaluated using the proposed discomfort model based on the MVC ratios. To validate this new discomfort assessment method, human subject experiments were performed to investigate the subjective discomfort levels through a questionnaire for comparison with the objective discomfort levels. The validation results showed that the correlation between the objective and subjective discomforts was significant and could be described by a linear regression model. PMID:26067194
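The evaluation logic of this record, summarizing muscle effort as maximum-voluntary-contraction (MVC) ratios, turning them into an objective discomfort score, and regressing subjective ratings on that score, can be sketched with made-up numbers. All values below are illustrative assumptions, and the simple mean-MVC discomfort model is not necessarily the authors' formulation.

```python
# Objective discomfort from MVC ratios, plus a linear regression against
# subjective questionnaire scores (all data synthetic).
import numpy as np
from scipy import stats

# MVC ratios (muscle force / maximum force), one row per ingress/egress trial
mvc_ratios = np.array([
    [0.20, 0.35, 0.15],
    [0.45, 0.50, 0.30],
    [0.60, 0.72, 0.55],
    [0.30, 0.40, 0.25],
    [0.75, 0.80, 0.65],
])

# A simple objective discomfort model: mean MVC ratio over the monitored muscles
objective = mvc_ratios.mean(axis=1)

# Subjective questionnaire scores for the same trials (hypothetical, 1-10 scale)
subjective = np.array([2.0, 4.5, 6.5, 3.0, 8.0])

slope, intercept, r, p, se = stats.linregress(objective, subjective)
print(f"subjective ~ {slope:.2f} * objective + {intercept:.2f}  (r = {r:.2f}, p = {p:.3f})")
```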
Modal-Power-Based Haptic Motion Recognition
NASA Astrophysics Data System (ADS)
Kasahara, Yusuke; Shimono, Tomoyuki; Kuwahara, Hiroaki; Sato, Masataka; Ohnishi, Kouhei
Motion recognition based on sensory information is important for enabling robots to provide assistance to humans. Several studies have been carried out on motion recognition based on image information. However, contact between a human and an object cannot be evaluated precisely by image-based recognition, because force information is essential for describing contact motion. In this paper, modal-power-based haptic motion recognition is proposed; modal power is considered to reveal information on both position and force, and is regarded as one of the defining features of human motion. A motion recognition algorithm based on linear discriminant analysis is proposed to distinguish between similar motions. Haptic information is extracted using a bilateral master-slave system. Then, the observed motion is decomposed in terms of primitive functions in a modal space. The experimental results show the effectiveness of the proposed method.
Ubiquitous human upper-limb motion estimation using wearable sensors.
Zhang, Zhi-Qiang; Wong, Wai-Choong; Wu, Jian-Kang
2011-07-01
Human motion capture technologies have been widely used in a wide spectrum of applications, including interactive game and learning, animation, film special effects, health care, navigation, and so on. The existing human motion capture techniques, which use structured multiple high-resolution cameras in a dedicated studio, are complicated and expensive. With the rapid development of microsensors-on-chip, human motion capture using wearable microsensors has become an active research topic. Because of the agility in movement, upper-limb motion estimation has been regarded as the most difficult problem in human motion capture. In this paper, we take the upper limb as our research subject and propose a novel ubiquitous upper-limb motion estimation algorithm, which concentrates on modeling the relationship between upper-arm movement and forearm movement. A link structure with 5 degrees of freedom (DOF) is proposed to model the human upper-limb skeleton structure. Parameters are defined according to Denavit-Hartenberg convention, forward kinematics equations are derived, and an unscented Kalman filter is deployed to estimate the defined parameters. The experimental results have shown that the proposed upper-limb motion capture and analysis algorithm outperforms other fusion methods and provides accurate results in comparison to the BTS optical motion tracker.
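Since this abstract mentions Denavit-Hartenberg parameters and forward kinematics for a 5-DOF upper-limb chain, a small sketch of that step may help. The DH table, joint assignments and link lengths below are plausible assumptions, not the parameters from the paper, and the unscented Kalman filter stage is omitted.

```python
# Forward kinematics for a hypothetical 5-DOF upper-limb chain using standard
# Denavit-Hartenberg transforms.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def wrist_position(joint_angles, upper_arm=0.30, forearm=0.25):
    """Chain the five joint transforms and return the wrist position in metres."""
    # (theta, d, a, alpha) per joint; this table is a hypothetical parameterization.
    dh_table = [
        (joint_angles[0], 0.0, 0.0, np.pi / 2),   # shoulder flexion/extension
        (joint_angles[1], 0.0, 0.0, np.pi / 2),   # shoulder abduction/adduction
        (joint_angles[2], 0.0, upper_arm, 0.0),   # shoulder internal rotation
        (joint_angles[3], 0.0, forearm, 0.0),     # elbow flexion
        (joint_angles[4], 0.0, 0.0, 0.0),         # forearm pronation/supination
    ]
    T = np.eye(4)
    for params in dh_table:
        T = T @ dh_transform(*params)
    return T[:3, 3]

print(wrist_position(np.deg2rad([30, 20, 10, 45, 0])))
```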
Time-frequency analysis of human motion during rhythmic exercises.
Omkar, S N; Vyas, Khushi; Vikranth, H N
2011-01-01
Biomechanical signals due to human movements during exercise are represented in the time-frequency domain using the Wigner Distribution Function (WDF). Analysis based on the WDF reveals instantaneous spectral and power changes during a rhythmic exercise. Investigations were carried out on 11 healthy subjects who performed 5 cycles of sun salutation, with a body-mounted Inertial Measurement Unit (IMU) as a motion sensor. The variance of the Instantaneous Frequency (I.F.) and Instantaneous Power (I.P.) for performance analysis of the subjects is estimated using a one-way ANOVA model. Results reveal that joint time-frequency analysis of biomechanical signals during motion facilitates a better understanding of grace and consistency during rhythmic exercise.
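As a simplified stand-in for the Wigner-distribution analysis above, the sketch below estimates instantaneous frequency and power from the analytic (Hilbert) signal of synthetic rhythmic-motion traces and compares subjects with a one-way ANOVA. The accelerometer-like signals and noise levels are assumptions.

```python
# Instantaneous frequency/power via the analytic signal, then one-way ANOVA
# across subjects (a simplified stand-in for the WDF-based analysis).
import numpy as np
from scipy.signal import hilbert
from scipy.stats import f_oneway

fs = 100.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)    # 30 s of rhythmic exercise

def instantaneous_features(signal, fs):
    analytic = hilbert(signal)
    inst_power = np.abs(analytic) ** 2
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    return inst_freq, inst_power

rng = np.random.default_rng(2)
# Three hypothetical subjects performing the same rhythm with different regularity
subjects = [np.sin(2 * np.pi * 0.17 * t) + noise * rng.standard_normal(t.size)
            for noise in (0.05, 0.15, 0.30)]

freq_samples = []
for s in subjects:
    inst_freq, _ = instantaneous_features(s, fs)
    freq_samples.append(inst_freq)

# One-way ANOVA on the instantaneous-frequency samples of the three subjects
F, p = f_oneway(*freq_samples)
print(f"F = {F:.2f}, p = {p:.3g}")
```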
Teaching Instrumentation and Methodology in Human Motion Analysis
2001-10-25
Teaching Instrumentation and Methodology in Human Motion Analysis. V. Medved, Faculty of Physical Education, University of Zagreb, Zagreb, Croatia ... the introduction of teaching curricula to implement the appropriate knowledge. Problems are discussed of educating professionals and disseminating ... University of Zagreb, undergraduate teaching of locomotion biomechanics is provided only at the Faculty of Physical Education. Following a need to teach ...
NASA Astrophysics Data System (ADS)
Takasugi, Shoji; Yamamoto, Tomohito; Muto, Yumiko; Abe, Hiroyuki; Miyake, Yoshihiro
The purpose of this study is to clarify the effects of timing control of utterance and body motion in human-robot interaction. Our previous study had already revealed a correlation between the timing of utterance and body motion in human-human communication. Here we propose a timing control model based on that research and estimate its influence on realizing human-like communication using a questionnaire method. The results showed a difference in effectiveness between communication with the timing control model and communication without it. In addition, elderly people evaluated the communication with timing control much more highly than younger people did. These results show not only the importance of the timing control of utterance and body motion in human communication but also its effectiveness for realizing human-like human-robot interaction.
Real-time marker-free motion capture system using blob feature analysis
NASA Astrophysics Data System (ADS)
Park, Chang-Joon; Kim, Sung-Eun; Kim, Hong-Seok; Lee, In-Ho
2005-02-01
This paper presents a real-time marker-free motion capture system which can reconstruct 3-dimensional human motions. The virtual character of the proposed system mimics the motion of an actor in real time. The proposed system captures human motions using three synchronized CCD cameras and detects the root and end-effectors of an actor, such as the head, hands, and feet, by exploiting blob feature analysis. The 3-dimensional positions of the end-effectors are then reconstructed and tracked using a Kalman filter. Finally, the positions of the intermediate joints are reconstructed using an anatomically constrained inverse kinematics algorithm. The proposed system was implemented under general lighting conditions, and we confirmed that it could stably reconstruct the motions of many people wearing various clothes in real time.
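The Kalman tracking stage mentioned above can be sketched with a constant-velocity filter for one 3-D end-effector. This is not the authors' code; the frame rate, noise covariances and simulated hand trajectory are illustrative assumptions.

```python
# Constant-velocity Kalman filter for one tracked end-effector in 3-D.
import numpy as np

dt = 1.0 / 30.0                     # 30 fps capture (assumed)
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)          # state = [x, y, z, vx, vy, vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # only position is observed
Q = 1e-3 * np.eye(6)                # process noise (assumed)
R = 1e-2 * np.eye(3)                # measurement noise (assumed)

x = np.zeros(6)
P = np.eye(6)

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the 3-D blob position z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

rng = np.random.default_rng(3)
for k in range(90):                 # 3 s of a hand moving along a straight line
    true_pos = np.array([0.01 * k, 0.5, 1.0])
    z = true_pos + 0.05 * rng.standard_normal(3)
    x, P = kalman_step(x, P, z)

print("filtered hand position:", x[:3], "velocity:", x[3:])
```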
Wearable Stretch Sensors for Motion Measurement of the Wrist Joint Based on Dielectric Elastomers.
Huang, Bo; Li, Mingyu; Mei, Tao; McCoul, David; Qin, Shihao; Zhao, Zhanfeng; Zhao, Jianwen
2017-11-23
Motion capture of the human body potentially holds great significance for exoskeleton robots, human-computer interaction, sports analysis, rehabilitation research, and many other areas. Dielectric elastomer sensors (DESs) are excellent candidates for wearable human motion capture systems because of their intrinsic softness, light weight, and compliance. In this paper, DESs were applied to measure all component motions of the wrist joint. Five sensors were mounted at different positions on the wrist, each corresponding to one component motion. To find the best positions to mount the sensors, the distribution of the muscles was analyzed. Even so, the component motions and the deformations of the sensors are coupled; therefore, a decoupling method was developed. With the decoupling algorithm, all component motions can be measured with a precision of 5°, which meets the requirements of general motion capture systems.
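One plausible way to decouple coupled sensor readings, not necessarily the authors' method, is to calibrate a linear map from the five sensor signals to the five wrist component motions by least squares and invert it at run time. The calibration data and coupling matrix below are synthetic.

```python
# Least-squares decoupling of coupled stretch-sensor readings (illustrative sketch).
import numpy as np

rng = np.random.default_rng(4)

# Ground-truth coupling: each sensor responds to a mix of the 5 component motions
true_coupling = np.eye(5) + 0.3 * rng.random((5, 5))

# Calibration: known wrist angles (deg) and the corresponding sensor readings
calib_angles = rng.uniform(-60, 60, size=(200, 5))
calib_sensors = calib_angles @ true_coupling.T + 0.5 * rng.standard_normal((200, 5))

# Least-squares estimate of the coupling matrix, sensors ~ angles @ C.T
C_est, *_ = np.linalg.lstsq(calib_angles, calib_sensors, rcond=None)
C_est = C_est.T

# Run time: recover the component motions from a new sensor reading
test_angles = np.array([10.0, -20.0, 5.0, 30.0, -15.0])
reading = test_angles @ true_coupling.T
recovered = np.linalg.solve(C_est, reading)
print("true:", test_angles, "\nrecovered:", np.round(recovered, 1))
```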
NASA Astrophysics Data System (ADS)
Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.
2012-02-01
Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
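The contour-curvature idea in this record can be illustrated with a small sketch: fit a periodic B-spline to 2-D contour points and compute the signed curvature along it, whose extrema suggest cut points between body parts. The elliptical "contour" and the threshold are synthetic stand-ins for a real silhouette.

```python
# Curvature of a B-spline fitted to a closed 2-D contour (illustrative sketch).
import numpy as np
from scipy import interpolate

# Synthetic closed contour (an ellipse with a bump) instead of a real silhouette
s = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x = 1.0 * np.cos(s) + 0.1 * np.cos(5 * s)
y = 2.0 * np.sin(s)

# Periodic parametric B-spline through the contour points
tck, u = interpolate.splprep([x, y], s=0.01, per=1)

uu = np.linspace(0, 1, 400)
dx, dy = interpolate.splev(uu, tck, der=1)
ddx, ddy = interpolate.splev(uu, tck, der=2)

# Signed curvature of a planar parametric curve
curvature = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

# Candidate part boundaries: strong curvature extrema (assumed threshold)
candidates = uu[np.abs(curvature) > 2.0 * np.abs(curvature).mean()]
print(f"{candidates.size} high-curvature contour locations found")
```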
New Directions in Assessment: Biomechanical Development.
ERIC Educational Resources Information Center
Fortney, Virginia
Biomechanical analysis of human movement seeks to relate observable motions or configurations of the body to the forces that act to produce those motions or maintain those configurations. These are identified as force-motion relationships. In assessing force-motion relationships in children's movement, the process type assessment, Ohio State…
Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments
Sun, Tongyang; Duan, Lihong; Wang, Yulong
2017-01-01
The diagnosis of the hemiplegic rehabilitation state performed by therapists can be biased by their subjective experience, which may deteriorate the rehabilitation effect. In order to improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of the human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analysis of the patient's motion ability. This method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared with therapists' qualitative estimations. Using a simplified mathematical model of the human body, the rotation angles of each lower limb joint are calculated from the input signals acquired by the inertial sensors. Finally, the rotation angle versus joint displacement curves are constructed, and the estimated values of joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, which proved that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575
Whole-Body Human Inverse Dynamics with Distributed Micro-Accelerometers, Gyros and Force Sensing †
Latella, Claudia; Kuppuswamy, Naveen; Romano, Francesco; Traversaro, Silvio; Nori, Francesco
2016-01-01
Human motion tracking is a powerful tool used in a large range of applications that require human movement analysis. Although it is a well-established technique, its main limitation is the lack of estimation of real-time kinetics information such as forces and torques during the motion capture. In this paper, we present a novel approach for a human soft wearable force tracking for the simultaneous estimation of whole-body forces along with the motion. The early stage of our framework encompasses traditional passive marker based methods, inertial and contact force sensor modalities and harnesses a probabilistic computational technique for estimating dynamic quantities, originally proposed in the domain of humanoid robot control. We present experimental analysis on subjects performing a two degrees-of-freedom bowing task, and we estimate the motion and kinetics quantities. The results demonstrate the validity of the proposed method. We discuss the possible use of this technique in the design of a novel soft wearable force tracking device and its potential applications. PMID:27213394
a Study on Impact Analysis of Side Kick in Taekwondo
NASA Astrophysics Data System (ADS)
Lee, Jung-Hyun; Lee, Young-Shin; Han, Kyu-Hyun
Taekwondo is a martial art and sport that uses the hands and feet for attack and defense. Basic taekwondo motion is composed of breaking, competition and poomsae motions. The side kick is one of the most important breaking motions. The side kick with the front foot can be made in two steps. In the first step, the front foot is extended forward from the back-stance free-fighting position. In the second step, the rear foot follows simultaneously, and the side kick is executed while the entire body weight rests on the rear foot. In this paper, an impact analysis on a human model in kicking posture was carried out. ADAMS/LifeMOD was used for the numerical modeling and simulation of the side kick. Numerical human models of the assailant and opponent in competition motion were developed. The maximum impact force on the human body was obtained by experiment and was applied in the impact simulation. As a result, the impact displacement and velocity of the numerical human model were investigated.
NASA Astrophysics Data System (ADS)
Radzicki, Vincent R.; Boutte, David; Taylor, Paul; Lee, Hua
2017-05-01
Radar based detection of human targets behind walls or in dense urban environments is an important technical challenge with many practical applications in security, defense, and disaster recovery. Radar reflections from a human can be orders of magnitude weaker than those from objects encountered in urban settings such as walls, cars, or possibly rubble after a disaster. Furthermore, these objects can act as secondary reflectors and produce multipath returns from a person. To mitigate these issues, processing of radar return data needs to be optimized for recognizing human motion features such as walking, running, or breathing. This paper presents a theoretical analysis on the modulation effects human motion has on the radar waveform and how high levels of multipath can distort these motion effects. From this analysis, an algorithm is designed and optimized for tracking human motion in heavily clutter environments. The tracking results will be used as the fundamental detection/classification tool to discriminate human targets from others by identifying human motion traits such as predictable walking patterns and periodicity in breathing rates. The theoretical formulations will be tested against simulation and measured data collected using a low power, portable see-through-the-wall radar system that could be practically deployed in real-world scenarios. Lastly, the performance of the algorithm is evaluated in a series of experiments where both a single person and multiple people are moving in an indoor, cluttered environment.
Human comfort response to random motions with a dominant pitching motion
NASA Technical Reports Server (NTRS)
Stone, R. W., Jr.
1980-01-01
The effects of random pitching velocities on passenger ride comfort response were examined on the NASA Langley Visual Motion Simulator. The effects of power spectral density shape and frequency ranges from 0 to 2 Hz were studied. The subjective rating data and the physical motion data obtained are presented. No attempt at interpretation or detailed analysis of the data is made. Motions in all degrees of freedom existed as well as the intended pitching motion, because of the characteristics of the simulator. These unwanted motions may have introduced some interactive effects on passenger responses which should be considered in any analysis of the data.
Emergent Structural Mechanisms for High-Density Collective Motion Inspired by Human Crowds
NASA Astrophysics Data System (ADS)
Bottinelli, Arianna; Sumpter, David T. J.; Silverberg, Jesse L.
2016-11-01
Collective motion of large human crowds often depends on their density. In extreme cases like heavy metal concerts and black Friday sales events, motion is dominated by physical interactions instead of conventional social norms. Here, we study an active matter model inspired by situations when large groups of people gather at a point of common interest. Our analysis takes an approach developed for jammed granular media and identifies Goldstone modes, soft spots, and stochastic resonance as structurally driven mechanisms for potentially dangerous emergent collective motion.
Numerical integration and optimization of motions for multibody dynamic systems
NASA Astrophysics Data System (ADS)
Aguilar Mayans, Joan
This thesis considers the optimization and simulation of motions involving rigid body systems. It does so in three distinct parts, with the following topics: optimization and analysis of human high-diving motions, efficient numerical integration of rigid body dynamics with contacts, and motion optimization of a two-link robot arm using Finite-Time Lyapunov Analysis. The first part introduces the concept of eigenpostures, which we use to simulate and analyze human high-diving motions. Eigenpostures are used in two different ways: first, to reduce the complexity of the optimal control problem that we solve to obtain such motions, and second, to generate an eigenposture space to which we map existing real world motions to better analyze them. The benefits of using eigenpostures are showcased through different examples. The second part reviews an extensive list of integration algorithms used for the integration of rigid body dynamics. We analyze the accuracy and stability of the different integrators in the three-dimensional space and the rotation space SO(3). Integrators with an accuracy higher than first order perform more efficiently than integrators with first order accuracy, even in the presence of contacts. The third part uses Finite-time Lyapunov Analysis to optimize motions for a two-link robot arm. Finite-Time Lyapunov Analysis diagnoses the presence of time-scale separation in the dynamics of the optimized motion and provides the information and methodology for obtaining an accurate approximation to the optimal solution, avoiding the complications that timescale separation causes for alternative solution methods.
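The "eigenposture" idea described in the first part of this thesis is essentially principal component analysis of joint-angle postures. The sketch below applies PCA via SVD to a synthetic posture matrix that stands in for real diving motions; the dimensions and number of hidden modes are assumptions.

```python
# Eigenpostures as PCA of a frames-by-joint-angles matrix (illustrative sketch).
import numpy as np

rng = np.random.default_rng(5)

# 1000 frames x 30 joint angles, generated from 3 hidden "posture modes"
modes = rng.standard_normal((3, 30))
weights = rng.standard_normal((1000, 3))
postures = weights @ modes + 0.05 * rng.standard_normal((1000, 30))

# PCA via singular value decomposition of the mean-centred posture matrix
mean_posture = postures.mean(axis=0)
U, S, Vt = np.linalg.svd(postures - mean_posture, full_matrices=False)

explained = S ** 2 / np.sum(S ** 2)
print("variance explained by first 3 eigenpostures:", np.round(explained[:3], 3))

# Reconstruct a frame from its first 3 eigenposture coefficients
coeffs = (postures[0] - mean_posture) @ Vt[:3].T
reconstruction = mean_posture + coeffs @ Vt[:3]
print("reconstruction error:", np.linalg.norm(reconstruction - postures[0]))
```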
Analysis of Human's Motions Based on Local Mean Decomposition in Through-wall Radar Detection
NASA Astrophysics Data System (ADS)
Lu, Qi; Liu, Cai; Zeng, Zhaofa; Li, Jing; Zhang, Xuebing
2016-04-01
Observation of human motions through a wall is an important issue in security applications and search-and-rescue. Radar has advantages in looking through walls where other sensors give low performance or cannot be used at all. Ultrawideband (UWB) radar has high spatial resolution as a result of employing ultranarrow pulses. It can distinguish closely positioned targets and provide time-lapse information about them. Moreover, UWB radar shows good performance in wall penetration, since the inherently short pulses spread their energy over a broad frequency range. Human motions show periodic features including respiration, swinging of the arms and legs, and fluctuations of the torso. Detection of human targets is based on the fact that there is always periodic motion due to breathing or other body movements such as walking. The radar gains reflections from each part of the human body and adds the reflections at each time sample. The periodic movements cause micro-Doppler modulation in the reflected radar signals. Time-frequency analysis methods, such as the short-time Fourier transform (STFT), the wavelet transform (WT), and the Hilbert-Huang transform (HHT), are considered effective tools for analyzing and extracting the micro-Doppler effects caused by periodic movements in the reflected radar signal. The local mean decomposition (LMD), initially developed by Smith (2005), decomposes amplitude- and frequency-modulated signals into a small set of product functions (PFs), each of which is the product of an envelope signal and a frequency-modulated signal from which a time-varying instantaneous phase and instantaneous frequency can be derived. Because it bypasses the Hilbert transform, the LMD has no demodulation error arising from window effects and involves no physically meaningless negative frequencies. Also, the instantaneous attributes obtained by LMD are more stable and precise than those obtained by the empirical mode decomposition (EMD), because LMD uses smoothed local means and local magnitudes that facilitate a more natural decomposition than the cubic spline approach of EMD. In this paper, we apply a UWB radar system to through-wall human detection and present a method to characterize human motions. We start with a walker's motion model, and periodic motion features are obtained from the analysis of the experimental data based on a combination of the LMD and the fast Fourier transform (FFT). The characteristics of human motions, including respiration, the swinging of arms and legs, and fluctuations of the torso, are extracted. Finally, we calculate the actual distance between the human and the wall. This work was supported in part by the National Natural Science Foundation of China under Grants 41574109 and 41430322.
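As a simplified stand-in for the decomposition-plus-Fourier step described above (not an LMD implementation), the sketch below extracts the periodic breathing component from a synthetic through-wall slow-time radar signal with the FFT. The pulse repetition frequency, breathing rate and noise levels are assumptions.

```python
# Breathing-rate extraction from a synthetic slow-time radar signal via FFT.
import numpy as np

prf = 50.0                                   # slow-time sampling rate, Hz (assumed)
t = np.arange(0, 40, 1 / prf)                # 40 s observation

# Chest motion: ~0.3 Hz breathing plus a small heartbeat ripple and clutter noise
rng = np.random.default_rng(6)
chest = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.sin(2 * np.pi * 1.2 * t)
slow_time = chest + 0.2 * rng.standard_normal(t.size)

# Spectrum of the slow-time signal; the dominant low-frequency peak is breathing
spectrum = np.abs(np.fft.rfft(slow_time - slow_time.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / prf)

band = (freqs > 0.1) & (freqs < 0.8)         # plausible breathing band
breathing_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated breathing rate: {breathing_hz:.2f} Hz "
      f"({breathing_hz * 60:.0f} breaths per minute)")
```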
Human detection and motion analysis at security points
NASA Astrophysics Data System (ADS)
Ozer, I. Burak; Lv, Tiehan; Wolf, Wayne H.
2003-08-01
This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real-time.
Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M
2017-02-19
Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanisms of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with a participant's tendency toward motion assimilation over motion contrast; however, this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in this suprasensory area are important for an individual's tendency to determine antagonistically perceived visual motion phenomena. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
Modelling of the Human Knee Joint Supported by Active Orthosis
NASA Astrophysics Data System (ADS)
Musalimov, V.; Monahov, Y.; Tamre, M.; Rõbak, D.; Sivitski, A.; Aryassov, G.; Penkov, I.
2018-02-01
The article discusses the motion of a healthy knee joint in the sagittal plane and the motion of an injured knee joint supported by an active orthosis. A kinematic scheme of a mechanism for the simulation of knee joint motion is developed, and the motions of healthy and injured knee joints are modelled in Matlab. The angles between the links that simulate the femur and tibia are controlled by a Simulink Model Predictive Control (MPC) block. The results of the simulation have been compared with several samples of real human knee joint motion obtained from motion capture systems. On the basis of these analyses, and of the analysis of the forces created in the human lower limbs during motion, an active smart orthosis is developed. The orthosis design was optimized to achieve an energy-saving system with sufficient anatomical fit, the necessary reliability, easy operation and low cost. With the orthosis it is possible to unload the knee joint and to partially or fully compensate for the muscle forces required for bending the lower limb.
Human Perception of Ambiguous Inertial Motion Cues
NASA Technical Reports Server (NTRS)
Zhang, Guan-Lu
2010-01-01
Human daily activities on Earth involve motions that elicit both tilt and translation components of the head (i.e. gazing and locomotion). With otolith cues alone, tilt and translation can be ambiguous, since both motions can potentially displace the otolithic membrane by the same magnitude and direction. Transitions between gravity environments (i.e. Earth, microgravity and lunar) have been demonstrated to alter the functions of the vestibular system and exacerbate the ambiguity between tilt and translational motion cues. Symptoms of motion sickness and spatial disorientation can impair human performance during critical mission phases. Specifically, Space Shuttle landing records show that particular cases of tilt-translation illusions have impaired the performance of seasoned commanders. This sensorimotor condition is one of many operational risks that may have dire implications for future human space exploration missions. The neural strategy with which the human central nervous system distinguishes ambiguous inertial motion cues remains the subject of intense research. A prevailing theory in the neuroscience field proposes that the human brain is able to formulate a neural internal model of ambiguous motion cues such that tilt and translation components can be perceptually decomposed in order to elicit the appropriate bodily response. The present work uses this theory, known as the GIF resolution hypothesis, as the framework for its experimental hypotheses. Specifically, two novel motion paradigms are employed to validate the neural capacity for ambiguous inertial motion decomposition in ground-based human subjects. The experimental setup involves the Tilt-Translation Sled at the Neuroscience Laboratory of NASA JSC. This two degree-of-freedom motion system is able to tilt subjects in the pitch plane and translate them along the fore-aft axis. Perception data will be gathered through subjects' verbal reports. Preliminary analysis of the perceptual data does not indicate that the GIF resolution hypothesis is completely valid for non-rotational periodic motions. Additionally, human perception of translation is impaired without a visual or spatial reference. The performance of ground-based subjects in estimating tilt after brief training is comparable with that of crewmembers without training.
A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine
Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun
2017-01-01
In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic-based motion synthesis algorithm and biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human–machine–environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines. PMID:28178184
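The motion-synthesis step named in this record, Gaussian process regression mapping design variables to squat trajectories, can be sketched as follows. The training trajectories, the choice of barbell load as the single independent variable, and the kernel settings are assumptions rather than the authors' setup.

```python
# Gaussian process regression that maps a design variable (assumed: barbell load)
# to a squat knee-angle trajectory, trained on synthetic example squats.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

time = np.linspace(0, 1, 50)                       # normalized squat cycle

def synthetic_squat(load_kg):
    """Knee-angle trajectory that gets slightly deeper with load (made up)."""
    depth = 90 + 0.3 * load_kg
    return depth * np.sin(np.pi * time)

loads = np.array([20.0, 40.0, 60.0, 80.0])         # training loads in kg (assumed)
trajectories = np.stack([synthetic_squat(l) for l in loads])

# One multi-output GP: input = load, output = the 50 samples of the knee trajectory
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(1e-2),
                               normalize_y=True)
gpr.fit(loads[:, None], trajectories)

# Synthesize a squat for an unseen load
new_trajectory = gpr.predict(np.array([[50.0]]))[0]
print("predicted peak knee angle at 50 kg:", new_trajectory.max())
```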
NASA Technical Reports Server (NTRS)
Badler, N. I.
1985-01-01
Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.
NASA Technical Reports Server (NTRS)
Jackson, Mariea Dunn; Dischinger, Charles; Stambolian, Damon; Henderson, Gena
2012-01-01
Spacecraft and launch vehicle ground processing activities require a variety of unique human activities. These activities are being documented in a Primitive motion capture library. The library will be used by human factors engineering in the future to infuse true-to-life human activities into CAD models to verify ground systems human factors requirements. As the Primitive models are being developed for the library, the project has selected several current human factors issues to be addressed for the SLS and Orion launch systems. This paper explains how motion capture of unique ground systems activities is being used to verify the human factors analysis requirements for ground systems used to process the SLS and Orion vehicles, and how the Primitive models will be applied to future spacecraft and launch vehicle processing.
DOT National Transportation Integrated Search
1971-03-01
An analysis was made of methods for measuring vehicle occupant motion during crash or impact conditions. The purpose of the measurements is to evaluate restraint performance using human, anthropometric dummy, or animal occupants. A detailed Fourier f...
Camera systems in human motion analysis for biomedical applications
NASA Astrophysics Data System (ADS)
Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.
2015-05-01
Human Motion Analysis (HMA) systems have been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera system of HMA and its taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating camera system considerations for HMA systems aimed specifically at biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system for HMA in biomedical applications.
Ida, Hirofumi; Fukuhara, Kazunobu; Kusubori, Seiji; Ishii, Motonobu
2011-09-01
Computer graphics of digital human models can be used to display human motions as visual stimuli. This study presents our technique for manipulating human motion with a forward kinematics calculation without violating anatomical constraints. A motion modulation of the upper extremity was conducted by proportionally modulating the anatomical joint angular velocity calculated by motion analysis. The effect of this manipulation was examined in a tennis situation--that is, the receiver's performance in predicting ball direction when viewing a digital model of the server's motion derived by modulating the angular velocities of the forearm or the elbow during the forward swing. The results showed that the faster the server's forearm pronated, the more the receiver's anticipation of the ball direction tended to the left side of the service box. In contrast, the faster the server's elbow extended, the more the receiver's anticipation of the ball direction tended to the right. This suggests that tennis players are sensitive to the motion modulation of their opponent's racket arm.
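A minimal sketch of the kind of manipulation described above: one joint's angular velocity is scaled proportionally, re-integrated to joint angles, and passed through a simple planar forward kinematics model. The segment lengths, sampling rate, and waveforms are illustrative assumptions, not values from the study.

```python
import numpy as np

def modulate_and_reconstruct(theta, gain, dt):
    """Scale a joint's angular velocity by `gain` and re-integrate to angles.

    theta : (n_frames,) joint angle time series from motion analysis [rad]
    """
    omega = np.gradient(theta, dt)          # anatomical joint angular velocity
    theta_mod = theta[0] + np.cumsum(gain * omega) * dt
    return theta_mod

def forward_kinematics_2link(shoulder, elbow, l_upper=0.30, l_fore=0.27):
    """Planar 2-link forward kinematics: returns wrist positions (n, 2)."""
    x = l_upper * np.cos(shoulder) + l_fore * np.cos(shoulder + elbow)
    y = l_upper * np.sin(shoulder) + l_fore * np.sin(shoulder + elbow)
    return np.stack([x, y], axis=1)

# Example: speed up elbow extension by 20% while keeping the shoulder motion.
dt = 1.0 / 200.0                             # 200 Hz capture, assumed
t = np.arange(0, 0.5, dt)
shoulder = 0.2 * np.sin(2 * np.pi * 1.0 * t)
elbow = 1.2 * (1 - np.cos(2 * np.pi * 1.0 * t)) / 2
elbow_fast = modulate_and_reconstruct(elbow, gain=1.2, dt=dt)
wrist_path = forward_kinematics_2link(shoulder, elbow_fast)
```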
Fong, Daniel Tik-Pui; Chan, Yue-Yan
2010-01-01
Wearable motion sensors consisting of accelerometers, gyroscopes and magnetic sensors are readily available nowadays. The small size and low production cost of motion sensors make them a very good tool for human motion analysis. However, data processing and the accuracy of the collected data are important issues for research purposes. In this paper, we aim to review the literature related to the usage of inertial sensors in human lower limb biomechanics studies. A systematic search was done in the following search engines: ISI Web of Knowledge, Medline, SportDiscus and IEEE Xplore. Thirty-nine full papers and conference abstracts with related topics were included in this review. The type of sensor involved, data collection methods, study design, validation methods and applications were reviewed. PMID:22163542
On the dynamics of a human body model.
NASA Technical Reports Server (NTRS)
Huston, R. L.; Passerello, C. E.
1971-01-01
Equations of motion for a model of the human body are developed. Basically, the model consists of an elliptical cylinder representing the torso, together with a system of frustums of elliptical cones representing the limbs. They are connected to the main body and to each other by hinges and ball-and-socket joints. Vector, tensor, and matrix methods provide a systematic organization of the geometry. The equations of motion are developed from the principles of classical mechanics. The solution of these equations then provides the displacement and rotation of the main body when the external forces and relative limb motions are specified. Three simple example motions are studied to illustrate the method. The first is an analysis and comparison of simple lifting on the earth and on the moon. The second is an elementary approach to underwater swimming, including both viscous and inertia effects. The third is an analysis of kicking motion and its effect upon a vertically suspended man such as a parachutist.
Contrast and assimilation in motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-09-01
The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
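The two computations contrasted above can be written down in a few lines. This is only a schematic with hypothetical weights; the study estimated these relationships psychophysically rather than assuming fixed weights.

```python
import numpy as np

def perceived_velocity(target_v, context_v, w=1.0):
    """Motion contrast: context motion is subtracted from target motion."""
    return target_v - w * context_v

def pursuit_velocity(target_v, context_v, w=0.5):
    """Motion assimilation: pursuit follows a weighted average of the motions."""
    return (1 - w) * target_v + w * context_v

# Target briefly speeds up while the context slows down (deg/s, illustrative).
target_v, context_v = 12.0, -2.0
print(perceived_velocity(target_v, context_v))  # larger perceived speed
print(pursuit_velocity(target_v, context_v))    # pulled toward the average
```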
Motion Direction Biases and Decoding in Human Visual Cortex
Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy
2014-01-01
Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, and V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297
Gait Analysis by High School Students
ERIC Educational Resources Information Center
Heck, Andre; van Dongen, Caroline
2008-01-01
Human walking is a complicated motion. Movement scientists have developed various research methods to study gait. This article describes how a high school student collected and analysed high quality gait data in much the same way that movement scientists do, via the recording and measurement of motions with a video analysis tool and via…
Large-eddy simulation of human-induced contaminant transport in room compartments.
Choi, J-I; Edwards, J R
2012-02-01
A large-eddy simulation is used to investigate contaminant transport owing to complex human and door motions and vent-system activity in room compartments where a contaminated room and a clean room are connected by a vestibule. Human and door motions are simulated with an immersed boundary procedure. We demonstrate the details of contaminant transport owing to human- and door-motion-induced wake development during a short-duration event involving the movement of a person (or persons) from a contaminated room, through a vestibule, into a clean room. Parametric studies that capture the effects of human walking pattern, door operation, over-pressure level, and vestibule size are systematically conducted. A faster walking speed results in less mass transport from the contaminated room into the clean room. The net effect of increasing the volume of the vestibule is to reduce the contaminant transport. The results show that swinging-door motion is the dominant transport mechanism and that human-induced wake motion enhances compartment-to-compartment transport. The effect of human activity on contaminant transport may be important in the design and operation of clean or isolation rooms in the chemical or pharmaceutical industries and intensive care units for airborne infectious disease control in a hospital. The present simulations demonstrate details of contaminant transport in such indoor environments during human motion events and show that simulation-based sensitivity analysis can be utilized for the diagnosis of contaminant infiltration and for better environmental protection. © 2011 John Wiley & Sons A/S.
NASA Technical Reports Server (NTRS)
Searcy, Brittani
2017-01-01
Using virtual environments to assess complex large scale human tasks provides timely and cost effective results to evaluate designs and to reduce operational risks during assembly and integration of the Space Launch System (SLS). NASA's Marshall Space Flight Center (MSFC) uses a suite of tools to conduct integrated virtual analysis during the design phase of the SLS Program. Siemens Jack is a simulation tool that allows engineers to analyze human interaction with CAD designs by placing a digital human model into the environment to test different scenarios and assess the design's compliance to human factors requirements. Engineers at MSFC are using Jack in conjunction with motion capture and virtual reality systems in MSFC's Virtual Environments Lab (VEL). The VEL provides additional capability beyond standalone Jack to record and analyze a person performing a planned task to assemble the SLS at Kennedy Space Center (KSC). The VEL integrates the Vicon Blade motion capture system, Siemens Jack, Oculus Rift, and other virtual tools to perform human factors assessments. By using motion capture and virtual reality, a more accurate breakdown and understanding of how an operator will perform a task can be gained. Through virtual analysis, engineers are able to determine if a specific task can be safely performed by both a 5th-percentile (approximately 5 ft) female and a 95th-percentile (approximately 6 ft 1 in) male. In addition, the analysis will help identify any tools or other accommodations that may be needed to complete the task. These assessments are critical for the safety of ground support engineers and for keeping launch operations on schedule. Motion capture allows engineers to save and examine human movements on a frame by frame basis, while virtual reality gives the actor (the person performing a task in the VEL) an immersive view of the task environment. This presentation will discuss the need for human factors analysis for SLS and the benefits of analyzing tasks in NASA MSFC's VEL.
NASA Astrophysics Data System (ADS)
Hahn, Markus; Barrois, Björn; Krüger, Lars; Wöhler, Christian; Sagerer, Gerhard; Kummert, Franz
2010-09-01
This study introduces an approach to model-based 3D pose estimation and instantaneous motion analysis of the human hand-forearm limb in the application context of safe human-robot interaction. 3D pose estimation is performed using two approaches: The Multiocular Contracting Curve Density (MOCCD) algorithm is a top-down technique based on pixel statistics around a contour model projected into the images from several cameras. The Iterative Closest Point (ICP) algorithm is a bottom-up approach which uses a motion-attributed 3D point cloud to estimate the object pose. Due to their orthogonal properties, a fusion of these algorithms is shown to be favorable. The fusion is performed by a weighted combination of the extracted pose parameters in an iterative manner. The analysis of object motion is based on the pose estimation result and the motion-attributed 3D points belonging to the hand-forearm limb using an extended constraint-line approach which does not rely on any temporal filtering. A further refinement is obtained using the Shape Flow algorithm, a temporal extension of the MOCCD approach, which estimates the temporal pose derivative based on the current and the two preceding images, corresponding to temporal filtering with a short response time of two or at most three frames. Combining the results of the two motion estimation stages provides information about the instantaneous motion properties of the object. Experimental investigations are performed on real-world image sequences displaying several test persons performing different working actions typically occurring in an industrial production scenario. In all example scenes, the background is cluttered, and the test persons wear various kinds of clothes. For evaluation, independently obtained ground truth data are used.
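A minimal sketch of the weighted pose-parameter combination idea: two 6-DOF estimates (one from the top-down MOCCD stage, one from the bottom-up ICP stage) are merged with per-parameter weights. The actual fusion in the paper is iterative and its weighting scheme is not reproduced here; the vectors and weights below are hypothetical.

```python
import numpy as np

def fuse_pose(pose_moccd, pose_icp, w_moccd, w_icp):
    """One confidence-weighted combination step of two pose estimates.

    pose_* : (6,) pose parameter vectors [x, y, z, roll, pitch, yaw]
    w_*    : (6,) per-parameter weights, e.g. inverse estimate variances
    """
    return (w_moccd * pose_moccd + w_icp * pose_icp) / (w_moccd + w_icp)

# Hypothetical estimates from the top-down (MOCCD) and bottom-up (ICP) stages.
p_moccd = np.array([0.42, 0.10, 1.05, 0.02, 0.30, 0.01])
p_icp   = np.array([0.45, 0.12, 1.00, 0.05, 0.25, 0.03])
w_moccd = np.array([4.0, 4.0, 4.0, 1.0, 1.0, 1.0])   # stronger on position
w_icp   = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])   # stronger on orientation
print(fuse_pose(p_moccd, p_icp, w_moccd, w_icp))
```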
NASA Astrophysics Data System (ADS)
Rab, George T.
1988-02-01
Three-dimensional human motion analysis has been used for complex kinematic description of abnormal gait in children with neuromuscular disease. Multiple skin markers estimate skeletal segment position, and a sorting and smoothing routine provides marker trajectories. The position and orientation of the moving skeleton in space are derived mathematically from the marker positions, and joint motions are calculated from the Eulerian transformation matrix between linked proximal and distal skeletal segments. Reproducibility has been excellent, and the technique has proven to be a useful adjunct to surgical planning.
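The core step, extracting joint motion from the transformation between linked proximal and distal segments, can be sketched as follows. The marker-to-segment orientation step is omitted, and the angle sequence and example orientations are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def joint_angles(r_proximal, r_distal, sequence="ZXY"):
    """Joint rotation of the distal segment relative to the proximal one.

    r_proximal, r_distal : (3, 3) segment orientation matrices in the lab
    frame, e.g. built from three skin-marker positions per segment.
    Returns Euler/Cardan angles [deg] in the chosen sequence.
    """
    r_joint = R.from_matrix(r_proximal.T @ r_distal)
    return r_joint.as_euler(sequence, degrees=True)

# Hypothetical thigh and shank orientations for one frame of gait data.
thigh = R.from_euler("ZXY", [5.0, 2.0, 1.0], degrees=True).as_matrix()
shank = R.from_euler("ZXY", [25.0, 3.0, 2.0], degrees=True).as_matrix()
print(joint_angles(thigh, shank))   # roughly 20 deg about the first axis
```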
Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis
Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan
2015-01-01
Remote monitoring service for elderly persons is important as the aged populations in most developed countries continue to grow. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we have made the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for elderly persons and (2) we design a novel motion-history- and energy-image-based algorithm for motion object tracking. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real-time. Experimental results show that our technique can improve the data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate can reach 98.6% on average. PMID:25978761
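A generic motion history image (MHI) update, one plausible reading of the motion-history/energy-image idea above, might look like the following sketch; the decay constant, threshold, and toy frames are illustrative, not the authors' parameters.

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=30, thresh=25):
    """Update a motion history image (MHI) from two consecutive grey frames.

    Pixels that changed are set to `tau`; unchanged pixels decay by one,
    so brighter values mark more recent motion.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion_mask = diff > thresh
    return np.where(motion_mask, tau, np.maximum(mhi - 1, 0))

# Toy example: a bright block moves one column to the right between frames.
f0 = np.zeros((6, 6), dtype=np.uint8); f0[2:4, 1:3] = 200
f1 = np.zeros((6, 6), dtype=np.uint8); f1[2:4, 2:4] = 200
mhi = np.zeros((6, 6), dtype=np.int16)
mhi = update_mhi(mhi, f0, f1)
print(mhi)   # tau where motion occurred, zeros elsewhere
```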
3D Data Acquisition Platform for Human Activity Understanding
2016-03-02
The project addresses fundamental research problems of representation and invariant description of 3D data, human motion modeling and applications of human activity analysis, and computational optimization of large-scale 3D data.
Human error identification for laparoscopic surgery: Development of a motion economy perspective.
Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong
2015-09-01
This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) to their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Laban movement analysis to classify emotions from motion
NASA Astrophysics Data System (ADS)
Dewan, Swati; Agarwal, Shubham; Singh, Navjyoti
2018-04-01
In this paper, we present the study of Laban Movement Analysis (LMA) to understand basic human emotions from nonverbal human behaviors. While there are a lot of studies on understanding behavioral patterns based on natural language processing and speech processing applications, understanding emotions or behavior from non-verbal human motion is still a very challenging and unexplored field. LMA provides a rich overview of the scope of movement possibilities. These basic elements can be used for generating movement or for describing movement. They provide an inroad to understanding movement and for developing movement efficiency and expressiveness. Each human being combines these movement factors in his/her own unique way and organizes them to create phrases and relationships which reveal personal, artistic, or cultural style. In this work, we build a motion descriptor based on a deep understanding of Laban theory. The proposed descriptor builds on previous work and encodes experiential features by using temporal windows. We present a more conceptually elaborate formulation of Laban theory and test it in a relatively new domain of behavioral research with applications in human-machine interaction. The recognition of affective human communication may be used to provide developers with a rich source of information for creating systems that are capable of interacting well with humans. We test our algorithm on the UCLIC dataset, which consists of body motions of 13 non-professional actors portraying anger, fear, happiness and sadness. We achieve an accuracy of 87.30% on this dataset.
Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.
Seymour, Kiley J; Clifford, Colin W G
2012-05-01
Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, thus a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.
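The decoding step described above, training a linear classifier on voxel activation patterns and testing by cross-validation, can be sketched as follows, with synthetic data standing in for the fMRI response estimates.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Hypothetical voxel patterns: n_trials x n_voxels BOLD response estimates,
# labels 0/1 for the two motion-disparity conjunction stimuli.
rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 400
labels = np.repeat([0, 1], n_trials // 2)
signal = np.outer(labels - 0.5, rng.normal(scale=0.3, size=n_voxels))
patterns = signal + rng.normal(size=(n_trials, n_voxels))

clf = LinearSVC(C=1.0, max_iter=10000)
acc = cross_val_score(clf, patterns, labels, cv=8)   # 8-fold cross-validation
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```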
Human Modeling for Ground Processing Human Factors Engineering Analysis
NASA Technical Reports Server (NTRS)
Stambolian, Damon B.; Lawrence, Brad A.; Stelges, Katrine S.; Steady, Marie-Jeanne O.; Ridgwell, Lora C.; Mills, Robert E.; Henderson, Gena; Tran, Donald; Barth, Tim
2011-01-01
There have been many advancements and accomplishments over the last few years using human modeling for human factors engineering analysis for the design of spacecraft. The key methods used for this are motion capture and computer-generated human models. The focus of this paper is to explain the human modeling currently used at Kennedy Space Center (KSC) and to explain the plans for human modeling for future spacecraft designs.
Rapacchi, Stanislas; Wen, Han; Viallon, Magalie; Grenier, Denis; Kellman, Peter; Croisille, Pierre; Pai, Vinay M
2011-12-01
Diffusion-weighted imaging (DWI) using low b-values permits imaging of intravoxel incoherent motion in tissues. However, low b-value DWI of the human heart has been considered too challenging because of additional signal loss due to physiological motion, which reduces both signal intensity and the signal-to-noise ratio (SNR). We address these signal loss concerns by analyzing cardiac motion during a heartbeat to determine the time-window during which cardiac bulk motion is minimal. Using this information to optimize the acquisition of DWI data and combining it with a dedicated image processing approach has enabled us to develop a novel low b-value diffusion-weighted cardiac magnetic resonance imaging approach, which significantly reduces intravoxel incoherent motion measurement bias introduced by motion. Simulations from displacement encoded motion data sets permitted the delineation of an optimal time-window with minimal cardiac motion. A number of single-shot repetitions of low b-value DWI cardiac magnetic resonance imaging data were acquired during this time-window under free-breathing conditions with bulk physiological motion corrected for by using nonrigid registration. Principal component analysis (PCA) was performed on the registered images to improve the SNR, and temporal maximum intensity projection (TMIP) was applied to recover signal intensity from time-fluctuant motion-induced signal loss. This PCATMIP method was validated with experimental data, and its benefits were evaluated in volunteers before being applied to patients. Optimal time-window cardiac DWI in combination with PCATMIP postprocessing yielded significant benefits for signal recovery, contrast-to-noise ratio, and SNR in the presence of bulk motion for both numerical simulations and human volunteer studies. Analysis of mean apparent diffusion coefficient (ADC) maps showed homogeneous values among volunteers and good reproducibility between free-breathing and breath-hold acquisitions. The PCATMIP DWI approach also indicated its potential utility by detecting ADC variations in acute myocardial infarction patients. Studying cardiac motion may provide an appropriate strategy for minimizing the impact of bulk motion on cardiac DWI. Applying PCATMIP image processing improves low b-value DWI and enables reliable analysis of ADC in the myocardium. The use of a limited number of repetitions in a free-breathing mode also enables easier application in clinical conditions.
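A schematic of the PCATMIP idea, PCA denoising across registered repetitions followed by a temporal maximum intensity projection, is given below. It is a simplified reading of the described pipeline, with random data in place of diffusion-weighted images.

```python
import numpy as np

def pca_tmip(images, n_components=3):
    """PCA denoising across repetitions followed by a temporal maximum
    intensity projection (TMIP), for co-registered single-shot DW images.

    images : (n_rep, ny, nx) stack of registered repetitions
    """
    n_rep, ny, nx = images.shape
    x = images.reshape(n_rep, -1)
    mean = x.mean(axis=0)
    u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    s[n_components:] = 0                      # keep leading components only
    denoised = ((u * s) @ vt + mean).reshape(n_rep, ny, nx)
    return denoised.max(axis=0)               # TMIP recovers lost signal

# Hypothetical stack of 10 registered low-b-value repetitions.
stack = np.random.default_rng(2).normal(100, 5, size=(10, 64, 64))
adc_ready = pca_tmip(stack)
```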
Human comfort response to dominant random motions in longitudinal modes of aircraft motion
NASA Technical Reports Server (NTRS)
Stone, R. W., Jr.
1980-01-01
The effects of random vertical and longitudinal accelerations and pitching velocity on passenger ride comfort responses were examined on the NASA Langley Visual Motion Simulator. Effects of power spectral density shape were studied for motions where the peak was between 0 and 2 Hz. The subjective rating data and the physical motion data obtained are presented without interpretation or detailed analysis. Motions existed in all other degrees of freedom as well as in the particular pair of longitudinal airplane motions studied. These unwanted motions, caused by the characteristics of the simulator, may have introduced some interactive effects on passenger responses.
Human movement analysis with image processing in real time
NASA Astrophysics Data System (ADS)
Fauvet, Eric; Paindavoine, Michel; Cannard, F.
1991-04-01
In the field of the human sciences, many applications need to know the kinematic characteristics of human movements. Psychology associates these characteristics with the control mechanism, while sport and biomechanics associate them with the performance of the sportsman or the patient. Thus trainers or doctors can correct the gesture of the subject to obtain a better performance if they know the motion properties. Roherton's studies show the evolution of children's motion. Several investigation methods are able to measure human movement, but now most of the studies are based on image processing. Often the systems work at the TV standard (50 frames per second), which permits the study of only very slow gestures. A human operator manually analyses the digitized film sequence, a very expensive, long and imprecise operation. On these grounds many human movement analysis systems were implemented. They consist of: markers, which are fixed to the anatomically interesting points on the subject in motion; and image compression, which is the art of coding picture data. Generally the compression is limited to calculating the centroid coordinates for each marker. These systems differ from one another in image acquisition and marker detection.
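The centroid computation that such marker-based compression reduces to can be sketched in a few lines; the thresholding, connectivity, and toy frame below are assumptions for illustration.

```python
import numpy as np

def marker_centroids(image, thresh=200):
    """Return the centroid of each bright marker blob in a grey-level image.

    A very small sketch: pixels above `thresh` are grouped by 4-connectivity
    with a simple flood fill, then averaged to centroid coordinates (y, x).
    """
    mask = image > thresh
    labels = np.zeros(image.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        stack = [seed]
        while stack:
            y, x = stack.pop()
            if 0 <= y < mask.shape[0] and 0 <= x < mask.shape[1] \
                    and mask[y, x] and not labels[y, x]:
                labels[y, x] = current
                stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return [np.mean(np.argwhere(labels == k), axis=0)
            for k in range(1, current + 1)]

# Toy frame with two reflective markers.
frame = np.zeros((20, 20)); frame[4:6, 4:6] = 255; frame[14:17, 10:12] = 255
print(marker_centroids(frame))   # approx. [4.5, 4.5] and [15.0, 10.5]
```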
Head Motion Modeling for Human Behavior Analysis in Dyadic Interaction
Xiao, Bo; Georgiou, Panayiotis; Baucom, Brian; Narayanan, Shrikanth S.
2015-01-01
This paper presents a computational study of head motion in human interaction, notably of its role in conveying interlocutors’ behavioral characteristics. Head motion is physically complex and carries rich information; current modeling approaches based on visual signals, however, are still limited in their ability to adequately capture these important properties. Guided by the methodology of kinesics, we propose a data driven approach to identify typical head motion patterns. The approach follows the steps of first segmenting motion events, then parametrically representing the motion by linear predictive features, and finally generalizing the motion types using Gaussian mixture models. The proposed approach is experimentally validated using video recordings of communication sessions from real couples involved in a couples therapy study. In particular we use the head motion model to classify binarized expert judgments of the interactants’ specific behavioral characteristics where entrainment in head motion is hypothesized to play a role: Acceptance, Blame, Positive, and Negative behavior. We achieve accuracies in the range of 60% to 70% for the various experimental settings and conditions. In addition, we describe a measure of motion similarity between the interaction partners based on the proposed model. We show that the relative change of head motion similarity during the interaction significantly correlates with the expert judgments of the interactants’ behavioral characteristics. These findings demonstrate the effectiveness of the proposed head motion model, and underscore the promise of analyzing human behavioral characteristics through signal processing methods. PMID:26557047
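One way to sketch the segment-then-parameterize-then-cluster pipeline described above is shown below, with least-squares linear-prediction coefficients as features and a Gaussian mixture model for the motion types. The segment lengths, model order, and synthetic head-pitch data are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def lpc_features(signal, order=4):
    """Least-squares linear-prediction coefficients of a 1-D motion signal."""
    A = np.array([signal[i:i + order] for i in range(len(signal) - order)])
    b = signal[order:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

# Hypothetical head-pitch segments (one per detected motion event).
rng = np.random.default_rng(3)
segments = [np.sin(np.linspace(0, f, 80)) + 0.05 * rng.normal(size=80)
            for f in rng.uniform(2, 12, size=60)]
features = np.array([lpc_features(s) for s in segments])

# Group events into a small set of typical head-motion types.
gmm = GaussianMixture(n_components=4, random_state=0).fit(features)
motion_type = gmm.predict(features)
```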
Precision and repeatability of the Optotrak 3020 motion measurement system.
States, R A; Pappas, E
2006-01-01
Several motion analysis systems are used by researchers to quantify human motion and to perform accurate surgical procedures. The Optotrak 3020 is one of these systems, and despite its widespread use there is no published information on its precision and repeatability. We used a repeated-measures design to evaluate the precision and repeatability of the Optotrak 3020 by measuring distance and angle in three sessions, four distances and three conditions (motion, static vertical, and static tilted). Precision and repeatability were found to be excellent for both angle and distance, although they decreased with increasing distance from the sensors and with tilt from the plane of the sensors. Motion did not have a significant effect on the precision of the measurements. In conclusion, the measurement error of the Optotrak is minimal. Further studies are needed to evaluate its precision and repeatability under human motion conditions.
NASA Technical Reports Server (NTRS)
Gallenstein, J.; Huston, R. L.
1973-01-01
This paper presents an analysis of swimming motion with specific attention given to the flutter kick, the breast-stroke kick, and the breast stroke. The analysis is completely theoretical. It employs a mathematical model of the human body consisting of frustums of elliptical cones. Dynamical equations are written for this model including both viscous and inertia forces. These equations are then applied with approximated swimming strokes and solved numerically using a digital computer. The procedure is to specify the input of the swimming motion. The computer solution then provides the output displacement, velocity, and rotation or body roll of the swimmer.
Occupant Motion Sensors : Methods of Detection and Analysis
DOT National Transportation Integrated Search
1971-07-01
A study has been made of methods for measuring occupant motion within a vehicle during crash or impact conditions. The purpose of the measurements is to evaluate restraint systems, using anthropometric dummy, animal, or human occupants. A list of gen...
Li, Zhi; Milutinović, Dejan; Rosen, Jacob
2017-05-01
Reach-to-grasp arm postures differ from those in pure reaching because they are affected by grasp position/orientation, rather than simple transport to a position during a reaching motion. This paper investigates this difference via an analysis of experimental data collected on reaching and reach-to-grasp motions. A seven-degree-of-freedom (DOF) kinematic arm model with the swivel angle is used for the motion analysis. Compared to a widely used anatomical arm model, this model clearly distinguishes the four grasping-relevant DOFs (GR-DOFs) that are affected by positions and orientations of the objects to be grasped. These four GR-DOFs include the swivel angle, which measures the elbow rotation about the shoulder-wrist axis, and three wrist joint angles. For each GR-DOF, we quantify a position-versus-orientation task-relevance bias that measures how much the DOF is affected by the grasping position versus orientation. The swivel angle and forearm supination have similar bias, and the analysis of their motion suggests two hypotheses regarding the synergistic coordination of the macro- and micro-structures of the human arm: (1) DOFs with similar task-relevance are synergistically coordinated; and (2) such synergy breaks when a task-relevant DOF is close to its joint limit without necessarily reaching the limit. This study provides a motion analysis method to reduce the control complexity for reach-to-grasp tasks, and suggests using dynamic coupling to coordinate the hand and arm of upper-limb exoskeletons.
Training industrial robots with gesture recognition techniques
NASA Astrophysics Data System (ADS)
Piane, Jennifer; Raicu, Daniela; Furst, Jacob
2013-01-01
In this paper we propose to use gesture recognition approaches to track a human hand in 3D space and, without the use of special clothing or markers, to accurately generate code for training an industrial robot to perform the same motion. The proposed hand tracking component includes three methods: a color-thresholding model, naïve Bayes analysis and a Support Vector Machine (SVM) to detect the human hand. Next, it performs stereo matching on the region where the hand was detected to find relative 3D coordinates. The list of coordinates returned is expectedly noisy due to the way the human hand can alter its apparent shape while moving, the inconsistencies in human motion and detection failures in the cluttered environment. Therefore, the system analyzes the list of coordinates to determine a path for the robot to move, by smoothing the data to reduce noise and looking for significant points used to determine the path the robot will ultimately take. The proposed system was applied to pairs of videos recording the motion of a human hand in a 'real' environment to move the end-effector of a SCARA robot along the same path as the hand of the person in the video. The correctness of the robot motion was determined by observers indicating that the motion of the robot appeared to match the motion of the video.
Finite element analysis of moment-rotation relationships for human cervical spine.
Zhang, Qing Hang; Teo, Ee Chon; Ng, Hong Wan; Lee, Vee Sin
2006-01-01
A comprehensive, geometrically accurate, nonlinear C0-C7 FE model of the head and cervical spine based on the actual geometry of a human cadaver specimen was developed. The motions of each cervical vertebral level under pure moment loading of 1.0 Nm, applied incrementally to the skull to simulate the movements of the head and cervical spine under flexion, extension, axial rotation and lateral bending with the inferior surface of the C7 vertebral body fully constrained, were analysed. The predicted range of motion (ROM) for each motion segment was computed and compared with published experimental data. The model predicted the nonlinear moment-rotation relationship of the human cervical spine. Under the same loading magnitude, the model predicted the largest rotation in extension, followed by flexion and axial rotation, and the least ROM in lateral bending. The upper cervical spine is more flexible than the lower cervical levels. The motions of the two uppermost motion segments account for half (or even more) of the whole cervical spine motion under rotational loadings. The differences in the ROMs among the lower cervical segments (C3-C7) were relatively small. The FE-predicted segmental motions effectively reflect the behavior of the human cervical spine and were in agreement with the experimental data. The C0-C7 FE model offers potential for biomedical and injury studies.
Development of Skylab experiment T-013 crew/vehicle disturbances
NASA Technical Reports Server (NTRS)
Conway, B. A.; Woolley, C. T.; Kurzhals, P. R.; Reynolds, R. B.
1972-01-01
A Skylab experiment to determine the characteristics and effects of crew-motion disturbances was developed. The experiment will correlate data from histories of specified astronaut body motions, the disturbance forces and torques produced by these motions, and the resultant spacecraft control system response to the disturbances. Primary application of crew-motion disturbance data will be to the sizing and design of future manned spacecraft control and stabilization systems. The development of the crew/vehicle disturbances experiment is described, and a mathematical model of human body motion which may be used for analysis of a variety of man-motion activities is derived.
Computerized method to compensate for breathing body motion in dynamic chest radiographs
NASA Astrophysics Data System (ADS)
Matsuda, H.; Tanaka, R.; Sanada, S.
2017-03-01
Dynamic chest radiography combined with computer analysis allows quantitative analysis of pulmonary function and rib motion. The accuracy of kinematic analysis is directly linked to diagnostic accuracy, and thus body motion compensation is a major concern. Our purpose in this study was to develop a computerized method to reduce breathing body motion in dynamic chest radiographs. Dynamic chest radiographs of 56 patients were obtained using a dynamic flat-panel detector. The images were divided into 1-cm squares, and the squares on the body contour were used to detect body motion. Velocity vectors were measured using a cross-correlation method on the body contour, and the body motion was then determined on the basis of the summation of the motion vectors. The body motion was then compensated by shifting the images based on the measured vector. Using our method, body motion was accurately detected to within a few pixels in clinical cases, with a mean of 82.5% in the right and left directions. In addition, our method detected slight body motion which could not be identified by human observation. We confirmed that our method worked effectively in kinematic analysis of rib motion. The present method would be useful for the reduction of breathing body motion in dynamic chest radiography.
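A compact sketch of the shift-estimation-and-compensation idea follows. It uses FFT-based phase correlation rather than the windowed cross-correlation described above, but illustrates the same estimate-then-shift principle; patch sizes and data are illustrative.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (dy, dx) body-motion shift between two frames
    (or contour patches) by phase correlation."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    size = np.array(ref.shape)
    peak[peak > size // 2] -= size[peak > size // 2]   # wrap to signed shifts
    return peak

def compensate(frame, shift):
    """Shift the frame back by the measured body-motion vector."""
    return np.roll(frame, tuple(-shift), axis=(0, 1))

# Toy example: the current frame is the previous one displaced by (2, -1).
rng = np.random.default_rng(4)
prev = rng.normal(size=(64, 64))
curr = np.roll(prev, (2, -1), axis=(0, 1))
shift = estimate_shift(prev, curr)
print(shift)                       # -> [ 2 -1 ]
stabilized = compensate(curr, shift)
```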
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2017-05-01
Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support, once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections up to 62% and reducing workload at the same time.
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi
2016-01-01
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003
Understanding Human Motion Skill with Peak Timing Synergy
NASA Astrophysics Data System (ADS)
Ueno, Ken; Furukawa, Koichi
The careful observation of motion phenomena is important in understanding skillful human motion. However, this is a difficult task due to the complexities in timing involved in the skillful control of anatomical structures. To investigate the dexterity of human motion, we decided to concentrate on timing with respect to motion, and we have proposed a method to extract the peak timing synergy from multivariate motion data. The peak timing synergy is defined as a frequent ordered graph with time stamps, which has nodes consisting of turning points in motion waveforms. A proposed algorithm, PRESTO, automatically extracts the peak timing synergy. PRESTO comprises the following three processes: (1) detecting peak sequences with polygonal approximation; (2) generating peak-event sequences; and (3) finding frequent peak-event sequences using a sequential pattern mining method, generalized sequential patterns (GSP). Here, we measured right arm motion during the task of cello bowing and prepared a data set of right shoulder and arm motion. We successfully extracted the peak timing synergy on the cello bowing data set using the PRESTO algorithm, which consisted of skills common among cellists and personal skill differences. To evaluate the sequential pattern mining algorithm GSP in PRESTO, we compared the peak timing synergy obtained using the GSP algorithm with that obtained using a filtering-by-reciprocal-voting (FRV) algorithm as a non-time-series method. We found that support was 95-100% for GSP versus 83-96% for FRV, and that GSP reproduced human motion better than FRV. Therefore we show that a sequential pattern mining approach is more effective for extracting the peak timing synergy than a non-time-series analysis approach.
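A sketch of the first PRESTO stage, turning multichannel waveforms into a time-stamped peak-event sequence ready for sequential pattern mining, might look like the following; the channel names, sampling rate, and waveforms are hypothetical, and the GSP mining step itself is not shown.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_events(signals, dt):
    """Turn multichannel motion waveforms into a time-ordered event sequence.

    signals : dict mapping channel name -> 1-D angle/position waveform.
    Returns a list of (time [s], channel, 'max'|'min') events, the raw
    material for mining frequent ordered patterns (e.g. with GSP).
    """
    events = []
    for name, x in signals.items():
        for idx in find_peaks(x)[0]:
            events.append((idx * dt, name, "max"))
        for idx in find_peaks(-x)[0]:
            events.append((idx * dt, name, "min"))
    return sorted(events)

# Hypothetical shoulder/elbow/wrist angles during one bowing stroke (100 Hz).
t = np.arange(0, 2, 0.01)
signals = {
    "shoulder": np.sin(2 * np.pi * 0.5 * t),
    "elbow": np.sin(2 * np.pi * 0.5 * t - 0.4),
    "wrist": np.sin(2 * np.pi * 0.5 * t - 0.8),
}
for ev in peak_events(signals, dt=0.01)[:6]:
    print(ev)   # proximal-to-distal ordering of the peak times
```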
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neely, Jason C.; Sturgis, Beverly Rainwater; Byrne, Raymond Harry
This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications range from delivery of special payloads to unique locations that require human locomotion, to exoskeleton human-assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle arises in emulating the stability and balance control naturally present in humans but needed for bipedal locomotion in a robot. A tracked robot is bulky and limited, but a wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, but bipedal motion requires active balance and stability control, for which the analysis is non-trivial. This report contains an extensive literature study on the state of the art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a prototype leg design.
Shape-based human detection for threat assessment
NASA Astrophysics Data System (ADS)
Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.
2004-07-01
Detection of intrusions for early threat assessment requires the capability of distinguishing whether the intrusion is a human, an animal, or another object. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost effective, these systems suffer from high rates of false alarm, especially when monitoring open environments. Any moving object, including an animal, can falsely trigger the security system. Other security systems that utilize video equipment require human interpretation of the scene in order to make a real-time threat assessment. A shape-based human detection technique has been developed for accurate early threat assessment in open and remote environments. Potential threats are isolated from the static background scene using differential motion analysis, and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points connecting short and straight line segments and preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in a database. A power cepstrum technique has been developed to search for the best-matched contour in the database and to distinguish a human from other objects at different viewing angles and distances.
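Two of the steps described above, dropping contour points with little shape significance and re-expressing the contour in a tangent-like space for matching, can be sketched as follows; the tolerance and toy contour are assumptions, and the power-cepstrum matching stage is omitted.

```python
import numpy as np

def simplify_contour(points, angle_tol_deg=5.0):
    """Drop contour points whose neighbouring segments are nearly collinear,
    keeping only points with shape significance."""
    keep = [points[0]]
    for prev, cur, nxt in zip(points[:-2], points[1:-1], points[2:]):
        v1, v2 = cur - prev, nxt - cur
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle > angle_tol_deg:
            keep.append(cur)
    keep.append(points[-1])
    return np.array(keep)

def tangent_space(points):
    """Represent a polygonal contour as (cumulative arc length, turning angle)
    pairs, a translation- and scale-friendly form for shape matching."""
    seg = np.diff(points, axis=0)
    lengths = np.linalg.norm(seg, axis=1)
    arc = np.cumsum(lengths) / lengths.sum()
    headings = np.unwrap(np.arctan2(seg[:, 1], seg[:, 0]))
    return np.column_stack([arc, headings])

# Toy silhouette contour (x, y points); the nearly collinear point is dropped.
contour = np.array([[0, 0], [1, 0.02], [2, 0], [2, 2], [1.9, 3], [0, 3], [0, 0]], float)
print(tangent_space(simplify_contour(contour)))
```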
NASA Astrophysics Data System (ADS)
Telban, Robert J.
While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
Saito, Toshikuni; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Hayashibe, Mitsuhiro; Otake, Yoshito
2006-01-01
We have been developing a DSVC (Dynamic Spatial Video Camera) system to measure and observe human locomotion quantitatively and freely. A 4D (four-dimensional) human model with detailed skeletal structure, joints, muscles, and motor functionality has been built. The purpose of our research was to estimate skeletal movements from body surface shapes using DSVC and the 4D human model. For this purpose, we constructed a body surface model of a subject and resized the standard 4D human model to match the geometrical features of the subject's body surface model. Software that integrates the DSVC system and the 4D human model, and allows dynamic skeletal state analysis from body surface movement data, was also developed. We applied the developed system to dynamic skeletal state analysis of a lower limb in motion and were able to visualize the motion using the geometrically resized standard 4D human model.
1-D blood flow modelling in a running human body.
Szabó, Viktor; Halász, Gábor
2017-07-01
In this paper an attempt was made to simulate blood flow in a mobile human arterial network, specifically, in a running human subject. In order to simulate the effect of motion, a previously published immobile 1-D model was modified by including an inertial force term in the momentum equation. To calculate the inertial force, gait analysis was performed at different levels of speed. Our results show that motion has a significant effect on the amplitudes of the blood pressure and flow rate, but the average values are not affected significantly.
Postural control during quiet bipedal standing in rats
Sato, Yota; Fujiki, Soichiro; Sato, Yamato; Aoi, Shinya; Tsuchiya, Kazuo; Yanagihara, Dai
2017-01-01
The control of bipedal posture in humans is subject to non-ideal conditions such as delayed sensation and heartbeat noise. However, the controller achieves a high level of functionality by utilizing body dynamics dexterously. In order to elucidate the neural mechanism responsible for postural control, the present study made use of an experimental setup involving rats because they have more accessible neural structures. The experimental design requires rats to stand bipedally in order to obtain a water reward placed in a water supplier above them. Their motions can be measured in detail using a motion capture system and a force plate. Rats have the ability to stand bipedally for long durations (over 200 s), allowing for the construction of an experimental environment in which the steady standing motion of rats could be measured. The characteristics of the measured motion were evaluated based on aspects of the rats’ intersegmental coordination and power spectrum density (PSD). These characteristics were compared with those of the human bipedal posture. The intersegmental coordination of the standing rats included two components that were similar to that of standing humans: center of mass and trunk motion. The rats’ PSD showed a peak at approximately 1.8 Hz and the pattern of the PSD under the peak frequency was similar to that of the human PSD. However, the frequencies were five times higher in rats than in humans. Based on the analysis of the rats’ bipedal standing motion, there were some common characteristics between rat and human standing motions. Thus, using standing rats is expected to be a powerful tool to reveal the neural basis of postural control. PMID:29244818
3D surface perception from motion involves a temporal–parietal network
Beer, Anton L.; Watanabe, Takeo; Ni, Rui; Sasaki, Yuka; Andersen, George J.
2010-01-01
Previous research has suggested that three-dimensional (3D) structure-from-motion (SFM) perception in humans involves several motion-sensitive occipital and parietal brain areas. By contrast, SFM perception in nonhuman primates seems to involve the temporal lobe including areas MT, MST and FST. The present functional magnetic resonance imaging study compared several motion-sensitive regions of interest including the superior temporal sulcus (STS) while human observers viewed horizontally moving dots that defined either a 3D corrugated surface or a 3D random volume. Low-level stimulus features such as dot density and velocity vectors as well as attention were tightly controlled. Consistent with previous research we found that 3D corrugated surfaces elicited stronger responses than random motion in occipital and parietal brain areas including area V3A, the ventral and dorsal intraparietal sulcus, the lateral occipital sulcus and the fusiform gyrus. Additionally, 3D corrugated surfaces elicited stronger activity in area MT and the STS but not in area MST. Brain activity in the STS but not in area MT correlated with interindividual differences in 3D surface perception. Our findings suggest that area MT is involved in the analysis of optic flow patterns such as speed gradients and that the STS in humans plays a greater role in the analysis of 3D SFM than previously thought. PMID:19674088
The Relationship Between Pitching Mechanics and Injury: A Review of Current Concepts
Chalmers, Peter N.; Wimmer, Markus A.; Verma, Nikhil N.; Cole, Brian J.; Romeo, Anthony A.; Cvetanovich, Gregory L.; Pearl, Michael L.; Chalmers, Peter N.; Wimmer, Markus A.; Verma, Nikhil N.; Cole, Brian J.; Romeo, Anthony A.; Cvetanovich, Gregory L.; Pearl, Michael L.
2017-01-01
Context: The overhand pitch is one of the fastest known human motions and places enormous forces and torques on the upper extremity. Shoulder and elbow pain and injury are common in high-level pitchers. A large body of research has been conducted to understand the pitching motion. Evidence Acquisition: A comprehensive review of the literature was performed to gain a full understanding of all currently available biomechanical and clinical evidence surrounding pitching motion analysis. These motion analysis studies use video motion analysis, electromyography, electromagnetic sensors, and markered motion analysis. This review includes studies performed between 1983 and 2016. Study Design: Clinical review. Level of Evidence: Level 5. Results: The pitching motion is a kinetic chain, in which the force generated by the large muscles of the lower extremity and trunk during the wind-up and stride phases are transferred to the ball through the shoulder and elbow during the cocking and acceleration phases. Numerous kinematic factors have been identified that increase shoulder and elbow torques, which are linked to increased risk for injury. Conclusion: Altered knee flexion at ball release, early trunk rotation, loss of shoulder rotational range of motion, increased elbow flexion at ball release, high pitch velocity, and increased pitcher fatigue may increase shoulder and elbow torques and risk for injury. PMID:28107113
Numerical simulation of artificial hip joint motion based on human age factor
NASA Astrophysics Data System (ADS)
Ramdhani, Safarudin; Saputra, Eko; Jamari, J.
2018-05-01
An artificial hip joint is a prosthesis (a synthetic body part) that usually consists of two or more components. Hip replacement is typically required because of arthritis, most often in older patients. Numerical simulation models are used to observe the range of motion of the artificial hip joint, with the range of motion of the joint analyzed as a function of human age. Finite-element analysis (FEA) is used to calculate the von Mises stress during motion and to assess the probability of prosthetic impingement. The FEA uses a three-dimensional nonlinear model and considers variations in the position of the acetabular liner cup. The results of the numerical simulation show that the FEA method can be used to analyze the performance of the artificial hip joint more accurately than conventional methods.
Beil, Jonas; Marquardt, Charlotte; Asfour, Tamim
2017-07-01
Kinematic compatibility is of paramount importance in wearable robotic and exoskeleton design. Misalignments between exoskeletons and anatomical joints of the human body result in interaction forces which make wearing the exoskeleton uncomfortable and even dangerous for the human. In this paper we present a kinematically compatible design of an exoskeleton hip to reduce kinematic incompatibilities, so called macro- and micro-misalignments, between the human's and exoskeleton's joint axes, which are caused by inter-subject variability and articulation. The resulting design consists of five revolute, three prismatic and one ball joint. Design parameters such as range of motion and joint velocities are calculated based on the analysis of human motion data acquired by motion capture systems. We show that the resulting design is capable of self-aligning to the human hip joint in all three anatomical planes during operation and can be adapted along the dorsoventral and mediolateral axis prior to operation. Calculation of the forward kinematics and FEM-simulation considering kinematic and musculoskeletal constraints proved sufficient mobility and stiffness of the system regarding the range of motion, angular velocity and torque admissibility needed to provide 50 % assistance for an 80 kg person.
On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.
Shao, Zhanpeng; Li, Youfu
2016-02-01
Motion trajectories tracked from the motions of human, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segment of noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure the trajectory similarity to find similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they can capture the motion cues in trajectory matching and sign recognition satisfactorily.
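As a rough illustration of the idea of a kernel-based integral invariant (not the exact distance and area invariants defined in the paper), the sketch below computes, for every point of a 3-D trajectory, a Gaussian-kernel-weighted integral of distances to the other points; varying the kernel scale gives the multiscale, coarse-to-fine view mentioned in the abstract.

import numpy as np

def distance_integral_invariant(traj, sigma=5.0):
    # traj : (N, 3) array of 3-D trajectory points.
    # sigma: Gaussian kernel scale (in samples); varying it gives the
    #        multiscale, coarse-to-fine behaviour described in the abstract.
    n = len(traj)
    idx = np.arange(n)
    # Pairwise Euclidean distances between trajectory points.
    dists = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    # Normalised Gaussian kernel over index separation along the trajectory.
    kernel = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
    kernel /= kernel.sum(axis=1, keepdims=True)
    return (kernel * dists).sum(axis=1)      # one invariant value per point

# Example: a noisy helical trajectory evaluated at two kernel scales.
t = np.linspace(0, 4 * np.pi, 200)
traj = np.c_[np.cos(t), np.sin(t), 0.1 * t] + 0.01 * np.random.randn(200, 3)
coarse = distance_integral_invariant(traj, sigma=20.0)
fine = distance_integral_invariant(traj, sigma=5.0)
print(coarse[:3], fine[:3])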
Experimental evaluation of a system for human life detection under debris
NASA Astrophysics Data System (ADS)
Joju, Reshma; Konica, Pimplapure Ramya T.; Alex, Zachariah C.
2017-11-01
Finding human beings trapped under debris or behind walls is difficult, for example in disaster rescue and military applications. Several rescue techniques, such as robotic systems, optical devices, and acoustic devices, have therefore been used, but these systems fail if the victim is unconscious. We conducted an experimental analysis of whether microwaves can detect the heartbeat and breathing signals of human beings trapped under collapsed debris. For our analysis we used a radar based on the Doppler shift effect. We calculated the minimum speed that the radar could detect. We checked the frequency variation by placing the radar at a fixed position and setting the object in motion at different distances, and we repeated the measurement with debris of different materials placed between the radar and the moving object. The graphs of the different analyses were plotted.
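The physical relation exploited by such a system is the continuous-wave Doppler shift f_d = 2 v f0 / c. The sketch below evaluates it for chest-wall speeds typical of breathing and inverts it to estimate a minimum detectable speed from a given spectral resolution; the 10.525 GHz operating frequency is an assumption for illustration, not a value reported in the paper.

C = 3.0e8          # speed of light (m/s)
F0 = 10.525e9      # assumed radar operating frequency (Hz); not from the paper

def doppler_shift(radial_speed_mps, f0=F0):
    # Doppler frequency shift produced by a target moving radially at the given speed.
    return 2.0 * radial_speed_mps * f0 / C

def min_detectable_speed(freq_resolution_hz, f0=F0):
    # Smallest radial speed whose Doppler shift exceeds the spectral resolution.
    return freq_resolution_hz * C / (2.0 * f0)

print(doppler_shift(0.005))          # ~0.35 Hz for 5 mm/s chest-wall motion
print(min_detectable_speed(0.1))     # speed resolvable with 0.1 Hz resolution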
NASA Astrophysics Data System (ADS)
Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi
2015-04-01
Interest in the study of human kinematics goes back very far in human history, driven by curiosity or by the need to understand the complexity of human body motion, and advances in computing technology now make it possible to obtain new and accurate information about human movement. Martial arts (silat) were chosen and multiple types of movement were studied. This project used cutting-edge 3D motion capture technology to characterize and measure the motions performed by martial arts (silat) practitioners. The cameras detect the infrared light reflected by markers placed around the performer's body (24 markers in total), which appear as dots in the computer software. The detected markers were analyzed using a kinematic and kinetic approach with time as the reference, and graphs of the position, velocity, and acceleration of each marker at time t (seconds) were plotted. From the information obtained, further parameters such as work done, momentum, and the center of mass of the body were determined using a mathematical approach. These data can be used to develop more effective movements in martial arts, as a contribution to practitioners of the art. Future work arising from this project could include the analysis of a martial arts competition.
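A minimal sketch of the kinematic post-processing described here, assuming a (frames x markers x 3) array of marker positions: velocities and accelerations come from finite differences, and a mass-weighted average stands in for the center-of-mass computation. The capture rate, marker weights and body mass are placeholders, not values from the study.

import numpy as np

fs = 120.0                      # assumed capture rate (Hz)
dt = 1.0 / fs

# positions: (frames, markers, 3) array of 3-D marker coordinates in metres;
# the random array below is a placeholder for real capture data.
positions = np.random.rand(600, 24, 3)

velocity = np.gradient(positions, dt, axis=0)        # m/s, central differences
acceleration = np.gradient(velocity, dt, axis=0)     # m/s^2

# Whole-body center of mass as a mass-weighted average of marker positions.
# Equal weights are placeholders; real analyses use segment mass fractions
# from anthropometric tables.
mass_fraction = np.full(24, 1.0 / 24)
com = np.einsum('m,fmd->fd', mass_fraction, positions)

body_mass = 70.0                                      # assumed performer mass (kg)
momentum = body_mass * np.gradient(com, dt, axis=0)   # whole-body linear momentum
print(velocity.shape, com.shape, momentum.shape)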
Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G
2016-05-01
With the advent of miniaturized inertial sensors many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on the overall performance. It has been demonstrated that the optimal values of these parameters differ considerably for different motion intensities. Therefore, in this work, we show that, by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, therefore providing physicians with reliable objective data they can use in their daily practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
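A toy illustration of the general idea (not the authors' algorithm): a one-dimensional orientation Kalman filter whose process-noise variance is raised or lowered according to a crude motion-intensity index obtained from a short-time frequency analysis of the measurement stream. All thresholds and variances here are assumptions.

import numpy as np

def motion_intensity(window, fs):
    # Fraction of signal energy above 1 Hz: a crude motion-intensity index.
    spec = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(window.size, 1.0 / fs)
    return spec[freqs > 1.0].sum() / (spec.sum() + 1e-12)

def kalman_angle(gyro_rate, accel_angle, fs, q_low=1e-4, q_high=1e-2, r=0.05):
    # 1-D orientation Kalman filter whose process noise follows motion intensity.
    dt = 1.0 / fs
    angle, p = 0.0, 1.0
    out = np.empty_like(accel_angle)
    window = int(fs)                                   # 1 s analysis window
    for k in range(accel_angle.size):
        hist = accel_angle[max(0, k - window):k + 1]
        intensity = motion_intensity(hist, fs) if hist.size > 2 else 0.0
        q = q_low + (q_high - q_low) * intensity       # adapt the formerly fixed parameter
        angle += gyro_rate[k] * dt                     # predict with the gyroscope
        p += q
        gain = p / (p + r)                             # update with the accelerometer angle
        angle += gain * (accel_angle[k] - angle)
        p *= 1.0 - gain
        out[k] = angle
    return out

# Toy usage with a synthetic slow oscillation.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
true_angle = 0.5 * np.sin(2 * np.pi * 0.5 * t)
gyro = np.gradient(true_angle, 1 / fs) + 0.01 * np.random.randn(t.size)
accel = true_angle + 0.05 * np.random.randn(t.size)
estimate = kalman_angle(gyro, accel, fs)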
Chamberlin, Kent; Smith, Wayne; Chirgwin, Christopher; Appasani, Seshank; Rioux, Paul
2014-12-01
The purpose of this study was to investigate "earthing" from an electrical perspective through measurement and analysis of the naturally occurring electron flow between the human body or a control and ground as this relates to the magnitude of the charge exchange, the relationship between the charge exchange and body functions (respiration and heart rate), and the detection of other information that might be contained in the charge exchange. Sensitive, low-noise instrumentation was designed and fabricated to measure low-level current flow at low frequencies. This instrumentation was used to record current flow between human subjects or a control and ground, and these measurements were performed approximately 40 times under varied circumstances. The results of these measurements were analyzed to determine if information was contained in the current exchange. The currents flowing between the human body and ground were small (nanoamperes), and they correlated with subject motion. There did not appear to be any information contained in this exchange except for information about subject motion. This study showed that currents flow between the environment (earth) and a grounded human body; however, these currents are small (nanoamperes) and do not appear to contain information other than information about subject motion.
Chamberlin, Kent; Smith, Wayne; Chirgwin, Christopher; Appasani, Seshank; Rioux, Paul
2014-01-01
Objective The purpose of this study was to investigate “earthing” from an electrical perspective through measurement and analysis of the naturally occurring electron flow between the human body or a control and ground as this relates to the magnitude of the charge exchange, the relationship between the charge exchange and body functions (respiration and heart rate), and the detection of other information that might be contained in the charge exchange. Methods Sensitive, low-noise instrumentation was designed and fabricated to measure low-level current flow at low frequencies. This instrumentation was used to record current flow between human subjects or a control and ground, and these measurements were performed approximately 40 times under varied circumstances. The results of these measurements were analyzed to determine if information was contained in the current exchange. Results The currents flowing between the human body and ground were small (nanoamperes), and they correlated with subject motion. There did not appear to be any information contained in this exchange except for information about subject motion. Conclusions This study showed that currents flow between the environment (earth) and a grounded human body; however, these currents are small (nanoamperes) and do not appear to contain information other than information about subject motion. PMID:25435837
3D Human Motion Editing and Synthesis: A Survey
Wang, Xin; Chen, Qiudi; Wang, Wanliang
2014-01-01
The ways to compute the kinematics and dynamic quantities of human bodies in motion have been studied in many biomedical papers. This paper presents a comprehensive survey of 3D human motion editing and synthesis techniques. Firstly, four types of methods for 3D human motion synthesis are introduced and compared. Secondly, motion capture data representation, motion editing, and motion synthesis are reviewed successively. Finally, future research directions are suggested. PMID:25045395
Takano, Wataru; Kusajima, Ikuo; Nakamura, Yoshihiko
2016-08-01
It is desirable for robots to be able to linguistically understand human actions during human-robot interactions. Previous research has developed frameworks for encoding human full body motion into model parameters and for classifying motion into specific categories. For full understanding, the motion categories need to be connected to the natural language such that the robots can interpret human motions as linguistic expressions. This paper proposes a novel framework for integrating observation of human motion with that of natural language. This framework consists of two models; the first model statistically learns the relations between motions and their relevant words, and the second statistically learns sentence structures as word n-grams. Integration of these two models allows robots to generate sentences from human motions by searching for words relevant to the motion using the first model and then arranging these words in appropriate order using the second model. This allows making sentences that are the most likely to be generated from the motion. The proposed framework was tested on human full body motion measured by an optical motion capture system. In this, descriptive sentences were manually attached to the motions, and the validity of the system was demonstrated. Copyright © 2016 Elsevier Ltd. All rights reserved.
Translation and articulation in biological motion perception.
Masselink, Jana; Lappe, Markus
2015-08-01
Recent models of biological motion processing focus on the articulational aspect of human walking investigated by point-light figures walking in place. However, in real human walking, the change in the position of the limbs relative to each other (referred to as articulation) results in a change of body location in space over time (referred to as translation). In order to examine the role of this translational component on the perception of biological motion we designed three psychophysical experiments of facing (leftward/rightward) and articulation discrimination (forward/backward and leftward/rightward) of a point-light walker viewed from the side, varying translation direction (relative to articulation direction), the amount of local image motion, and trial duration. In a further set of a forward/backward and a leftward/rightward articulation task, we additionally tested the influence of translational speed, including catch trials without articulation. We found a perceptual bias in translation direction in all three discrimination tasks. In the case of facing discrimination the bias was limited to short stimulus presentation. Our results suggest an interaction of articulation analysis with the processing of translational motion leading to best articulation discrimination when translational direction and speed match articulation. Moreover, we conclude that the global motion of the center-of-mass of the dot pattern is more relevant to processing of translation than the local motion of the dots. Our findings highlight that translation is a relevant cue that should be integrated in models of human motion detection.
Visual event-related potentials to biological motion stimuli in autism spectrum disorders
Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan
2014-01-01
Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits of cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological motion specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated with biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher order processing deficits in ASD. PMID:23887808
Event Recognition for Contactless Activity Monitoring Using Phase-Modulated Continuous Wave Radar.
Forouzanfar, Mohamad; Mabrouk, Mohamed; Rajan, Sreeraman; Bolic, Miodrag; Dajani, Hilmi R; Groza, Voicu Z
2017-02-01
The use of remote sensing technologies such as radar is gaining popularity as a technique for contactless detection of physiological signals and analysis of human motion. This paper presents a methodology for classifying different events in a collection of phase-modulated continuous wave radar returns. The primary application of interest is to monitor inmates, where the presence of human vital signs amidst different interferences needs to be identified. A comprehensive set of features is derived through time and frequency domain analyses of the radar returns. The Bhattacharyya distance is used to preselect the features with highest class separability as the possible candidate features for use in the classification process. The uncorrelated linear discriminant analysis is performed to decorrelate, denoise, and reduce the dimension of the candidate feature set. Linear and quadratic Bayesian classifiers are designed to distinguish breathing, different human motions, and nonhuman motions. The performance of these classifiers is evaluated on a pilot dataset of radar returns that contained different events including breathing, stopped breathing, simple human motions, and movement of a fan and water. Our proposed pattern classification system achieved accuracies of up to 93% in stationary subject detection, 90% in stop-breathing detection, and 86% in interference detection. Our proposed radar pattern recognition system was able to accurately distinguish the predefined events amidst interferences. Besides inmate monitoring and suicide attempt detection, this work can be extended to other radar applications such as home-based monitoring of elderly people, apnea detection, and home occupancy detection.
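The feature-preselection step can be sketched as follows, assuming two classes of radar-return features and a univariate Gaussian model for each feature; the Bhattacharyya distance then ranks features by class separability. The toy data and class labels are illustrative only.

import numpy as np

def bhattacharyya_gaussian(x1, x2):
    # Bhattacharyya distance between two samples of one feature, assuming Gaussian classes.
    m1, m2 = x1.mean(), x2.mean()
    v1, v2 = x1.var() + 1e-12, x2.var() + 1e-12
    return (0.25 * np.log(0.25 * (v1 / v2 + v2 / v1 + 2.0))
            + 0.25 * (m1 - m2) ** 2 / (v1 + v2))

def rank_features(features_a, features_b):
    # Rank feature columns by class separability, largest distance first.
    # features_a, features_b : (samples, n_features) arrays for two event classes,
    # e.g. 'breathing' versus 'non-human motion' radar returns.
    scores = np.array([bhattacharyya_gaussian(features_a[:, j], features_b[:, j])
                       for j in range(features_a.shape[1])])
    return np.argsort(scores)[::-1], scores

# Toy example: column 0 is informative, column 1 is pure noise.
rng = np.random.default_rng(0)
a = np.c_[rng.normal(2.0, 1.0, 100), rng.normal(0.0, 1.0, 100)]
b = np.c_[rng.normal(-2.0, 1.0, 100), rng.normal(0.0, 1.0, 100)]
order, scores = rank_features(a, b)
print(order, scores.round(2))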
Accuracy of human motion capture systems for sport applications; state-of-the-art review.
van der Kruk, Eline; Reijne, Marco M
2018-05-09
Sport research often requires human motion capture of an athlete. It can, however, be labour-intensive and difficult to select the right system, while manufacturers report on specifications which are determined in set-ups that largely differ from sport research in terms of volume, environment and motion. The aim of this review is to assist researchers in the selection of a suitable motion capture system for their experimental set-up for sport applications. An open online platform is initiated, to support (sport) researchers in the selection of a system and to enable them to contribute and update the overview. Design: Systematic review. Method: Electronic searches in Scopus, Web of Science and Google Scholar were performed, and the reference lists of the screened articles were scrutinised to determine human motion capture systems used in academically published studies on sport analysis. An overview of 17 human motion capture systems is provided, reporting the general specifications given by the manufacturer (weight and size of the sensors, maximum capture volume, environmental feasibilities), and calibration specifications as determined in peer-reviewed studies. The accuracy of each system is plotted against the measurement range. The overview and chart can assist researchers in the selection of a suitable measurement system. To increase the robustness of the database and to keep up with technological developments, we encourage researchers to perform an accuracy test prior to their experiment and to add to the chart and the system overview (online, open access).
Influence of the model's degree of freedom on human body dynamics identification.
Maita, Daichi; Venture, Gentiane
2013-01-01
In the fields of sports and rehabilitation, opportunities to use motion analysis of the human body have dramatically increased. To analyze the motion dynamics, a number of subject-specific parameters and measurements are required. For example, the contact force measurements and the inertial parameters of each segment of the human body are necessary to compute the joint torques. In this study, in order to perform accurate dynamic analysis we propose to identify the inertial parameters of the human body and to evaluate the influence of the model's number of degrees of freedom (DoF) on the results. We use a method to estimate the inertial parameters without torque sensors, using the generalized coordinates of the base link, joint angles and external force information. We consider a 34-DoF model, a 58-DoF model, as well as the case when the human is manipulating a tool (here a tennis racket). We compare the obtained results in terms of contact force estimation.
Phantom motion after effects--evidence of detectors for the analysis of optic flow.
Snowden, R J; Milne, A B
1997-10-01
Electrophysiological recording from the extrastriate cortex of non-human primates has revealed neurons that have large receptive fields and are sensitive to various components of object or self movement, such as translations, rotations and expansion/contractions. If these mechanisms exist in human vision, they might be susceptible to adaptation that generates motion aftereffects (MAEs). Indeed, it might be possible to adapt the mechanism in one part of the visual field and reveal what we term a 'phantom MAE' in another part. The existence of phantom MAEs was probed by adapting to a pattern that contained motion in only two non-adjacent 'quarter' segments and then testing using patterns that had elements in only the other two segments. We also tested for the more conventional 'concrete' MAE by testing in the same two segments that had adapted. The strength of each MAE was quantified by measuring the percentage of dots that had to be moved in the opposite direction to the MAE in order to nullify it. Four experiments tested rotational motion, expansion/contraction motion, translational motion and a 'rotation' that consisted simply of the two segments that contained only translational motions of opposing direction. Compared to a baseline measurement where no adaptation took place, all subjects in all experiments exhibited both concrete and phantom MAEs, with the size of the latter approximately half that of the former. Adaptation to two segments that contained upward and downward motion induced the perception of leftward and rightward motion in another part of the visual field. This strongly suggests there are mechanisms in human vision that are sensitive to complex motions such as rotations.
Human Classification Based on Gestural Motions by Using Components of PCA
NASA Astrophysics Data System (ADS)
Aziz, Azri A.; Wan, Khairunizam; Za'aba, S. K.; B, Shahriman A.; Adnan, Nazrul H.; H, Asyekin; R, Zuradzman M.
2013-12-01
Lately, the study of human capabilities with the aim of integrating them into machines has become a popular topic of discussion. Humans are blessed with special abilities: they can hear, see, sense, speak, think and understand each other. Giving such abilities to machines in order to improve human life is the researchers' aim for a better quality of life in the future. This research concentrated on human gestures, specifically arm motions, for distinguishing individuality, which led to the development of a hand gesture database. We try to differentiate human physical characteristics based on hand gestures represented by arm trajectories. Subjects were selected with different body sizes, and the acquired data then underwent a resampling process. The results discuss the classification of humans based on arm trajectories by using Principal Component Analysis (PCA).
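A minimal sketch of the kind of pipeline implied by the abstract, assuming resampled, flattened arm trajectories as input: project onto principal components and classify subjects in the reduced space. The array sizes, the nearest-neighbour classifier and the number of components are assumptions, not details from the paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Each sample is one gesture: an arm trajectory resampled to a fixed number of
# frames and flattened (frames x 3 coordinates). The random data are placeholders.
n_subjects, n_reps, n_frames = 5, 20, 100
rng = np.random.default_rng(0)
X = rng.normal(size=(n_subjects * n_reps, n_frames * 3))
y = np.repeat(np.arange(n_subjects), n_reps)

# Project the flattened trajectories onto their principal components and
# classify subjects in the reduced space.
model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print("training accuracy:", model.score(X, y))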
Motion Pattern Encapsulation for Data-Driven Constraint-Based Motion Editing
NASA Astrophysics Data System (ADS)
Carvalho, Schubert R.; Boulic, Ronan; Thalmann, Daniel
The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment and films to sports and medicine. However, the captured motions normally address specific needs. As an effort to adapt and reuse captured human motions in new tasks and environments and to improve the animator's work, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage that it provides faster deformations and more natural-looking motion results compared to goal-directed constraint-based methods found in the literature.
Du, Jiaying; Gerdtman, Christer; Lindén, Maria
2018-04-06
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
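As a small example of the "compensation-based" and "simple filter" families named in the review (and not any specific algorithm it covers), the sketch below removes a constant gyroscope bias estimated from an initial static interval and then applies a zero-phase low-pass filter; the cutoff and static-interval length are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

def compensate_and_filter(gyro, fs, static_seconds=1.0, cutoff_hz=10.0):
    # gyro : 1-D array of angular-rate samples (rad/s).
    bias = gyro[:int(static_seconds * fs)].mean()    # compensation-based step
    b, a = butter(2, cutoff_hz / (fs / 2.0))         # simple low-pass filter step
    return filtfilt(b, a, gyro - bias)

# Toy usage: the sensor is static for the first second, then oscillates.
fs = 200.0
t = np.arange(0, 5, 1 / fs)
true_rate = np.where(t < 1.0, 0.0, np.sin(2 * np.pi * 1.0 * (t - 1.0)))
measured = true_rate + 0.3 + 0.05 * np.random.randn(t.size)   # bias + vibration noise
cleaned = compensate_and_filter(measured, fs)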
Gerdtman, Christer
2018-01-01
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented. PMID:29642412
Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko
2010-03-10
The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion because of instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyoe artist. We found that 'Hokusai Manga' with implied motion, depicting human bodies engaged in a challenging tonic posture, significantly activated the motion-sensitive visual cortex including MT+ in the human extrastriate cortex, while an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex would be a critical region for the perception of implied motion in instability.
Survey of Motion Tracking Methods Based on Inertial Sensors: A Focus on Upper Limb Human Motion
Filippeschi, Alessandro; Schmitz, Norbert; Miezal, Markus; Bleser, Gabriele; Ruffaldi, Emanuele; Stricker, Didier
2017-01-01
Motion tracking based on commercial inertial measurement units (IMUs) has been widely studied in recent years as it is a cost-effective enabling technology for those applications in which motion tracking based on optical technologies is unsuitable. This measurement method has a high impact in human performance assessment and human-robot interaction. IMU motion tracking systems are indeed self-contained and wearable, allowing for long-lasting tracking of the user motion in situated environments. After a survey on IMU-based human tracking, five techniques for motion reconstruction were selected and compared to reconstruct a human arm motion. IMU-based estimation was matched against motion tracking based on the Vicon marker-based motion tracking system, considered as ground truth. Results show that all but one of the selected models perform similarly (about 35 mm average position estimation error). PMID:28587178
Perception of Social Interactions for Spatially Scrambled Biological Motion
Thurman, Steven M.; Lu, Hongjing
2014-01-01
It is vitally important for humans to detect living creatures in the environment and to analyze their behavior to facilitate action understanding and high-level social inference. The current study employed naturalistic point-light animations to examine the ability of human observers to spontaneously identify and discriminate socially interactive behaviors between two human agents. Specifically, we investigated the importance of global body form, intrinsic joint movements, extrinsic whole-body movements, and critically, the congruency between intrinsic and extrinsic motions. Motion congruency is hypothesized to be particularly important because of the constraint it imposes on naturalistic action due to the inherent causal relationship between limb movements and whole body motion. Using a free response paradigm in Experiment 1, we discovered that many naïve observers (55%) spontaneously attributed animate and/or social traits to spatially-scrambled displays of interpersonal interaction. Total stimulus motion energy was strongly correlated with the likelihood that an observer would attribute animate/social traits, as opposed to physical/mechanical traits, to the scrambled dot stimuli. In Experiment 2, we found that participants could identify interactions between spatially-scrambled displays of human dance as long as congruency was maintained between intrinsic/extrinsic movements. Violating the motion congruency constraint resulted in chance discrimination performance for the spatially-scrambled displays. Finally, Experiment 3 showed that scrambled point-light dancing animations violating this constraint were also rated as significantly less interactive than animations with congruent intrinsic/extrinsic motion. These results demonstrate the importance of intrinsic/extrinsic motion congruency for biological motion analysis, and support a theoretical framework in which early visual filters help to detect animate agents in the environment based on several fundamental constraints. Only after satisfying these basic constraints could stimuli be evaluated for high-level social content. In this way, we posit that perceptual animacy may serve as a gateway to higher-level processes that support action understanding and social inference. PMID:25406075
Guzman-Lopez, Jessica; Arshad, Qadeer; Schultz, Simon R; Walsh, Vincent; Yousif, Nada
2013-01-01
Head movement imposes the additional burdens on the visual system of maintaining visual acuity and determining the origin of retinal image motion (i.e., self-motion vs. object-motion). Although maintaining visual acuity during self-motion is effected by minimizing retinal slip via the brainstem vestibular-ocular reflex, higher order visuovestibular mechanisms also contribute. Disambiguating self-motion versus object-motion also invokes higher order mechanisms, and a cortical visuovestibular reciprocal antagonism is propounded. Hence, one prediction is of a vestibular modulation of visual cortical excitability and indirect measures have variously suggested none, focal or global effects of activation or suppression in human visual cortex. Using transcranial magnetic stimulation-induced phosphenes to probe cortical excitability, we observed decreased V5/MT excitability versus increased early visual cortex (EVC) excitability, during vestibular activation. In order to exclude nonspecific effects (e.g., arousal) on cortical excitability, response specificity was assessed using information theory, specifically response entropy. Vestibular activation significantly modulated phosphene response entropy for V5/MT but not EVC, implying a specific vestibular effect on V5/MT responses. This is the first demonstration that vestibular activation modulates human visual cortex excitability. Furthermore, using information theory, not previously used in phosphene response analysis, we could distinguish between a specific vestibular modulation of V5/MT excitability from a nonspecific effect at EVC. PMID:22291031
Kandala, Sridhar; Nolan, Dan; Laumann, Timothy O.; Power, Jonathan D.; Adeyemo, Babatunde; Harms, Michael P.; Petersen, Steven E.; Barch, Deanna M.
2016-01-01
Abstract Like all resting-state functional connectivity data, the data from the Human Connectome Project (HCP) are adversely affected by structured noise artifacts arising from head motion and physiological processes. Functional connectivity estimates (Pearson's correlation coefficients) were inflated for high-motion time points and for high-motion participants. This inflation occurred across the brain, suggesting the presence of globally distributed artifacts. The degree of inflation was further increased for connections between nearby regions compared with distant regions, suggesting the presence of distance-dependent spatially specific artifacts. We evaluated several denoising methods: censoring high-motion time points, motion regression, the FMRIB independent component analysis-based X-noiseifier (FIX), and mean grayordinate time series regression (MGTR; as a proxy for global signal regression). The results suggest that FIX denoising reduced both types of artifacts, but left substantial global artifacts behind. MGTR significantly reduced global artifacts, but left substantial spatially specific artifacts behind. Censoring high-motion time points resulted in a small reduction of distance-dependent and global artifacts, eliminating neither type. All denoising strategies left differences between high- and low-motion participants, but only MGTR substantially reduced those differences. Ultimately, functional connectivity estimates from HCP data showed spatially specific and globally distributed artifacts, and the most effective approach to address both types of motion-correlated artifacts was a combination of FIX and MGTR. PMID:27571276
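The censoring strategy evaluated in this work is commonly driven by framewise displacement computed from the six realignment parameters, with rotations converted to millimetres on an assumed head radius. The sketch below follows that convention; the 0.2 mm threshold and the placeholder parameters are illustrative, not the values used in the study.

import numpy as np

def framewise_displacement(motion_params, head_radius_mm=50.0):
    # motion_params : (timepoints, 6) array of 3 translations (mm) and
    # 3 rotations (radians); rotations are converted to arc length on a sphere.
    params = motion_params.copy()
    params[:, 3:] *= head_radius_mm
    return np.r_[0.0, np.abs(np.diff(params, axis=0)).sum(axis=1)]

def censor_mask(motion_params, fd_threshold_mm=0.2):
    # Boolean mask of time points retained after high-motion censoring.
    return framewise_displacement(motion_params) <= fd_threshold_mm

# Placeholder realignment parameters (mm and radians).
rng = np.random.default_rng(0)
rp = rng.normal(size=(1200, 6)) * np.array([0.02, 0.02, 0.02, 5e-4, 5e-4, 5e-4])
keep = censor_mask(rp)
print(f"retained {keep.mean():.1%} of time points")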
Induction and separation of motion artifacts in EEG data using a mobile phantom head device.
Oliveira, Anderson S; Schlink, Bryan R; Hairston, W David; König, Peter; Ferris, Daniel P
2016-06-01
Electroencephalography (EEG) can assess brain activity during whole-body motion in humans but head motion can induce artifacts that obfuscate electrocortical signals. Definitive solutions for removing motion artifact from EEG have yet to be found, so creating methods to assess signal processing routines for removing motion artifact are needed. We present a novel method for investigating the influence of head motion on EEG recordings as well as for assessing the efficacy of signal processing approaches intended to remove motion artifact. We used a phantom head device to mimic electrical properties of the human head with three controlled dipolar sources of electrical activity embedded in the phantom. We induced sinusoidal vertical motions on the phantom head using a custom-built platform and recorded EEG signals with three different acquisition systems while the head was both stationary and in varied motion conditions. Recordings showed up to 80% reductions in signal-to-noise ratio (SNR) and up to 3600% increases in the power spectrum as a function of motion amplitude and frequency. Independent component analysis (ICA) successfully isolated the three dipolar sources across all conditions and systems. There was a high correlation (r > 0.85) and marginal increase in the independent components' (ICs) power spectrum (∼15%) when comparing stationary and motion parameters. The SNR of the IC activation was 400%-700% higher in comparison to the channel data SNR, attenuating the effects of motion on SNR. Our results suggest that the phantom head and motion platform can be used to assess motion artifact removal algorithms and compare different EEG systems for motion artifact sensitivity. In addition, ICA is effective in isolating target electrocortical events and marginally improving SNR in relation to stationary recordings.
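A toy version of the source-separation step, assuming a synthetic multichannel mixture rather than the phantom recordings: FastICA unmixes three "dipolar" sources and a low-frequency motion artifact, and the artifact component is identified by its correlation with the known platform motion. Channel counts, frequencies and mixing are invented for illustration.

import numpy as np
from sklearn.decomposition import FastICA

fs = 500.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Three "dipolar" source signals plus a sinusoidal motion artifact.
sources = np.c_[np.sin(2 * np.pi * 10 * t),
                np.sign(np.sin(2 * np.pi * 6 * t)),
                rng.standard_normal(t.size)]
artifact = 5.0 * np.sin(2 * np.pi * 1.0 * t)        # vertical platform motion

# Mix everything into a 16-channel "EEG" recording.
mixing = rng.standard_normal((4, 16))
channels = np.c_[sources, artifact] @ mixing + 0.1 * rng.standard_normal((t.size, 16))

# ICA recovers the components; the artifact concentrates in one component,
# identified here by its correlation with the known platform motion.
components = FastICA(n_components=4, random_state=0).fit_transform(channels)
corr = [abs(np.corrcoef(c, artifact)[0, 1]) for c in components.T]
print("artifact component index:", int(np.argmax(corr)))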
Induction and separation of motion artifacts in EEG data using a mobile phantom head device
NASA Astrophysics Data System (ADS)
Oliveira, Anderson S.; Schlink, Bryan R.; Hairston, W. David; König, Peter; Ferris, Daniel P.
2016-06-01
Objective. Electroencephalography (EEG) can assess brain activity during whole-body motion in humans but head motion can induce artifacts that obfuscate electrocortical signals. Definitive solutions for removing motion artifact from EEG have yet to be found, so creating methods to assess signal processing routines for removing motion artifact are needed. We present a novel method for investigating the influence of head motion on EEG recordings as well as for assessing the efficacy of signal processing approaches intended to remove motion artifact. Approach. We used a phantom head device to mimic electrical properties of the human head with three controlled dipolar sources of electrical activity embedded in the phantom. We induced sinusoidal vertical motions on the phantom head using a custom-built platform and recorded EEG signals with three different acquisition systems while the head was both stationary and in varied motion conditions. Main results. Recordings showed up to 80% reductions in signal-to-noise ratio (SNR) and up to 3600% increases in the power spectrum as a function of motion amplitude and frequency. Independent component analysis (ICA) successfully isolated the three dipolar sources across all conditions and systems. There was a high correlation (r > 0.85) and marginal increase in the independent components’ (ICs) power spectrum (˜15%) when comparing stationary and motion parameters. The SNR of the IC activation was 400%-700% higher in comparison to the channel data SNR, attenuating the effects of motion on SNR. Significance. Our results suggest that the phantom head and motion platform can be used to assess motion artifact removal algorithms and compare different EEG systems for motion artifact sensitivity. In addition, ICA is effective in isolating target electrocortical events and marginally improving SNR in relation to stationary recordings.
Two-character motion analysis and synthesis.
Kwon, Taesoo; Cho, Young-Sang; Park, Sang Il; Shin, Sung Yong
2008-01-01
In this paper, we deal with the problem of synthesizing novel motions of standing-up martial arts such as Kickboxing, Karate, and Taekwondo performed by a pair of human-like characters while reflecting their interactions. Adopting an example-based paradigm, we address three non-trivial issues embedded in this problem: motion modeling, interaction modeling, and motion synthesis. For the first issue, we present a semi-automatic motion labeling scheme based on force-based motion segmentation and learning-based action classification. We also construct a pair of motion transition graphs each of which represents an individual motion stream. For the second issue, we propose a scheme for capturing the interactions between two players. A dynamic Bayesian network is adopted to build a motion transition model on top of the coupled motion transition graph that is constructed from an example motion stream. For the last issue, we provide a scheme for synthesizing a novel sequence of coupled motions, guided by the motion transition model. Although the focus of the present work is on martial arts, we believe that the framework of the proposed approach can be conveyed to other two-player motions as well.
Web-based tools for modelling and analysis of multivariate data: California ozone pollution activity
Dinov, Ivo D.; Christou, Nicolas
2014-01-01
This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation and exploratory data analysis, model fitting and statistical inference on these data are presented. All components of this case study (data, tools, activity) are freely available online at: http://wiki.stat.ucla.edu/socr/index.php/SOCR_MotionCharts_CAOzoneData. Several types of exploratory (motion charts, box-and-whisker plots, spider charts) and quantitative (inference, regression, analysis of variance (ANOVA)) data analyses tools are demonstrated. Two specific human health related questions (temporal and geographic effects of ozone pollution) are discussed as motivational challenges. PMID:24465054
Dinov, Ivo D; Christou, Nicolas
2011-09-01
This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation and exploratory data analysis, model fitting and statistical inference on these data are presented. All components of this case study (data, tools, activity) are freely available online at: http://wiki.stat.ucla.edu/socr/index.php/SOCR_MotionCharts_CAOzoneData. Several types of exploratory (motion charts, box-and-whisker plots, spider charts) and quantitative (inference, regression, analysis of variance (ANOVA)) data analyses tools are demonstrated. Two specific human health related questions (temporal and geographic effects of ozone pollution) are discussed as motivational challenges.
Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2016-05-01
Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allow, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms as well as about the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
Peng, Zhen; Genewein, Tim; Braun, Daniel A.
2014-01-01
Complexity is a hallmark of intelligent behavior consisting both of regular patterns and random variation. To quantitatively assess the complexity and randomness of human motion, we designed a motor task in which we translated subjects' motion trajectories into strings of symbol sequences. In the first part of the experiment participants were asked to perform self-paced movements to create repetitive patterns, copy pre-specified letter sequences, and generate random movements. To investigate whether the degree of randomness can be manipulated, in the second part of the experiment participants were asked to perform unpredictable movements in the context of a pursuit game, where they received feedback from an online Bayesian predictor guessing their next move. We analyzed symbol sequences representing subjects' motion trajectories with five common complexity measures: predictability, compressibility, approximate entropy, Lempel-Ziv complexity, as well as effective measure complexity. We found that subjects' self-created patterns were the most complex, followed by drawing movements of letters and self-paced random motion. We also found that participants could change the randomness of their behavior depending on context and feedback. Our results suggest that humans can adjust both complexity and regularity in different movement types and contexts and that this can be assessed with information-theoretic measures of the symbolic sequences generated from movement trajectories. PMID:24744716
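One of the measures named above, Lempel-Ziv complexity, can be computed with the classic Kaspar-Schuster parsing once a trajectory has been turned into a symbol string. The sketch below uses a simple quantile binning of trajectory increments as the symbolization, which is only schematic and not the coding scheme used in the experiment.

import numpy as np

def lempel_ziv_complexity(sequence):
    # Number of phrases in the Lempel-Ziv (1976) parsing (Kaspar-Schuster algorithm).
    s, n = list(sequence), len(sequence)
    i, k, l, c, k_max = 0, 1, 1, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def symbolize(trajectory_1d, n_bins=4):
    # Map a 1-D trajectory to symbols by quantile binning of its increments.
    inc = np.diff(trajectory_1d)
    edges = np.quantile(inc, np.linspace(0, 1, n_bins + 1)[1:-1])
    return ''.join(str(b) for b in np.digitize(inc, edges))

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))     # repetitive pattern
random_walk = np.cumsum(rng.standard_normal(500))     # random movement
print(lempel_ziv_complexity(symbolize(regular)),      # low complexity
      lempel_ziv_complexity(symbolize(random_walk)))  # high complexity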
NASA Astrophysics Data System (ADS)
Lee, Victor R.
2015-04-01
Biomechanics, and specifically the biomechanics associated with human movement, is a potentially rich backdrop against which educators can design innovative science teaching and learning activities. Moreover, the use of technologies associated with biomechanics research, such as high-speed cameras that can produce high-quality slow-motion video, can be deployed in such a way as to support students' participation in practices of scientific modeling. As participants in a classroom design experiment, fifteen fifth-grade students worked with high-speed cameras and stop-motion animation software (SAM Animation) over several days to produce dynamic models of motion and body movement. The designed series of learning activities involved iterative cycles of animation creation and critique and the use of various depictive materials. Subsequent analysis of flipbooks of human jumping movements created by the students at the beginning and end of the unit revealed a significant improvement in the epistemic fidelity of students' representations. Excerpts from classroom observations highlight the role that the teacher plays in supporting students' thoughtful reflection on and attention to slow-motion video. In total, this design and research intervention demonstrates that the combination of technologies, activities, and teacher support can lead to improvements in some of the foundations associated with students' modeling.
Models of subjective response to in-flight motion data
NASA Technical Reports Server (NTRS)
Rudrapatna, A. N.; Jacobson, I. D.
1973-01-01
Mathematical relationships between subjective comfort and environmental variables in an air transportation system are investigated. As a first step in model building, only the motion variables are incorporated and sensitivities are obtained using stepwise multiple regression analysis. The data for these models have been collected from commercial passenger flights. Two models are considered. In the first, subjective comfort is assumed to depend on rms values of the six-degrees-of-freedom accelerations. The second assumes a Rustenburg type human response function in obtaining frequency weighted rms accelerations, which are used in a linear model. The form of the human response function is examined and the results yield a human response weighting function for different degrees of freedom.
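A sketch of the two quantities entering such models: the plain rms of an acceleration record and a frequency-weighted rms obtained by applying a human-response weighting in the frequency domain. The Gaussian weighting curve below is a placeholder, not the Rustenburg-type function used in the report.

import numpy as np

def weighted_rms(accel, fs, weight_fn):
    # Frequency-weighted rms: apply the weighting in the frequency domain, then rms.
    spec = np.fft.rfft(accel)
    freqs = np.fft.rfftfreq(accel.size, 1.0 / fs)
    weighted = np.fft.irfft(spec * weight_fn(freqs), n=accel.size)
    return np.sqrt(np.mean(weighted ** 2))

def example_weight(f):
    # Placeholder human-response weighting emphasising 4-8 Hz, where vertical
    # vibration discomfort is typically greatest; not the Rustenburg function.
    return np.exp(-0.5 * ((f - 6.0) / 3.0) ** 2)

fs = 100.0
t = np.arange(0, 60, 1 / fs)
accel_z = 0.3 * np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)
print("rms:", np.sqrt(np.mean(accel_z ** 2)),
      "weighted rms:", weighted_rms(accel_z, fs, example_weight))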
Spherical Coordinate Systems for Streamlining Suited Mobility Analysis
NASA Technical Reports Server (NTRS)
Benson, Elizabeth; Cowley, Matthew; Harvill, Lauren; Rajulu, Sudhakar
2015-01-01
Introduction: When describing human motion, biomechanists generally report joint angles in terms of Euler angle rotation sequences. However, there are known limitations in using this method to describe complex motions such as the shoulder joint during a baseball pitch. Euler angle notation uses a series of three rotations about an axis where each rotation is dependent upon the preceding rotation. As such, the Euler angles need to be regarded as a set to get accurate angle information. Unfortunately, it is often difficult to visualize and understand these complex motion representations. It has been shown that using a spherical coordinate system allows Anthropometry and Biomechanics Facility (ABF) personnel to increase their ability to transmit important human mobility data to engineers, in a format that is readily understandable and directly translatable to their design efforts. Objectives: The goal of this project was to use innovative analysis and visualization techniques to aid in the examination and comprehension of complex motions. Methods: This project consisted of a series of small sub-projects, meant to validate and verify a new method before it was implemented in the ABF's data analysis practices. A mechanical test rig was built and tracked in 3D using an optical motion capture system. Its position and orientation were reported in both Euler and spherical reference systems. In the second phase of the project, the ABF estimated the error inherent in a spherical coordinate system, and evaluated how this error would vary within the reference frame. This stage also involved expanding a kinematic model of the shoulder to include the rest of the joints of the body. The third stage of the project involved creating visualization methods to assist in interpreting motion in a spherical frame. These visualization methods will be incorporated in a tool to evaluate a database of suited mobility data, which is currently in development. Results: Initial results demonstrated that a spherical coordinate system is helpful in describing and visualizing the motion of a space suit. The system is particularly useful in describing the motion of the shoulder, where multiple degrees of freedom can lead to very complex motion paths.
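The representation change at the heart of this work can be illustrated by converting a limb direction vector to spherical azimuth and elevation angles, which describe "where the arm points" with two independent numbers instead of an order-dependent Euler sequence. The axis conventions below are assumptions, not the ABF's definitions.

import numpy as np

def to_spherical(vec):
    # Azimuth, elevation, and radius of a 3-D vector such as a shoulder-to-wrist
    # direction. Azimuth is measured in the x-y plane from +x; elevation is
    # measured from the x-y plane toward +z.
    x, y, z = vec
    r = np.linalg.norm(vec)
    azimuth = np.degrees(np.arctan2(y, x))
    elevation = np.degrees(np.arcsin(z / r))
    return azimuth, elevation, r

# Upper-arm direction pointing forward, slightly abducted and raised.
print(to_spherical(np.array([0.8, 0.3, 0.5])))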
Spherical Coordinate Systems for Streamlining Suited Mobility Analysis
NASA Technical Reports Server (NTRS)
Benson, Elizabeth; Cowley, Matthew S.; Harvill, Lauren; Rajulu, Sudhakar
2014-01-01
When describing human motion, biomechanists generally report joint angles in terms of Euler angle rotation sequences. However, there are known limitations in using this method to describe complex motions such as the shoulder joint during a baseball pitch. Euler angle notation uses a series of three rotations about an axis where each rotation is dependent upon the preceding rotation. As such, the Euler angles need to be regarded as a set to get accurate angle information. Unfortunately, it is often difficult to visualize and understand these complex motion representations. One of our key functions is to help design engineers understand how a human will perform with new designs and all too often traditional use of Euler rotations becomes as much of a hindrance as a help. It is believed that using a spherical coordinate system will allow ABF personnel to more quickly and easily transmit important mobility data to engineers, in a format that is readily understandable and directly translatable to their design efforts. Objectives: The goal of this project is to establish new analysis and visualization techniques to aid in the examination and comprehension of complex motions. Methods: This project consisted of a series of small sub-projects, meant to validate and verify the method before it was implemented in the ABF's data analysis practices. The first stage was a proof of concept, where a mechanical test rig was built and instrumented with an inclinometer, so that its angle from horizontal was known. The test rig was tracked in 3D using an optical motion capture system, and its position and orientation were reported in both Euler and spherical reference systems. The rig was meant to simulate flexion/extension, transverse rotation and abduction/adduction of the human shoulder, but without the variability inherent in human motion. In the second phase of the project, the ABF estimated the error inherent in a spherical coordinate system, and evaluated how this error would vary within the reference frame. This stage also involved expanding a kinematic model of the shoulder, to include the torso, knees, ankle, elbows, wrists and neck. Part of this update included adding a representation of 'roll' about an axis, for upper arm and lower leg rotations. The third stage of the project involved creating visualization methods to assist in interpreting motion in a spherical frame. This visualization method will be incorporated in a tool to evaluate a database of suited mobility data, which is currently in development.
The 3D Human Motion Control Through Refined Video Gesture Annotation
NASA Astrophysics Data System (ADS)
Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.
In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion sensing technology on the Nintendo Wii [1]. In particular, video-based human computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from intractable game controllers. Moreover, for communication between humans and computers, video-based HCI is crucial since it is intuitive, easy to use, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge, and the level of accuracy is heavily dependent on each subject's characteristics and environmental noise. Recently, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions for specific performances (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which a column corresponds to a sub-body part of the human body and a row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Different from low-level feature values of video human motion, the entries of a 3D human motion-capture data matrix are not pixel values, but are closer to the human level of semantics.
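The column-selection idea can be sketched with a small array whose rows are frames and whose columns are degrees of freedom; the column names below are hypothetical and do not reflect any particular VICON export format.

import numpy as np

# Rows are time frames, columns are captured degrees of freedom. The column
# labels below are hypothetical and do not follow any particular VICON export.
columns = ['hips_tx', 'hips_ty', 'hips_tz',
           'rshoulder_rx', 'rshoulder_ry', 'rshoulder_rz',
           'relbow_rx', 'rwrist_rx']
motion = np.random.randn(300, len(columns))      # placeholder motion clip

def subpart(motion, columns, prefix):
    # Return only the columns belonging to one sub-body part.
    idx = [i for i, name in enumerate(columns) if name.startswith(prefix)]
    return motion[:, idx]

right_arm = np.hstack([subpart(motion, columns, p)
                       for p in ('rshoulder', 'relbow', 'rwrist')])
print(right_arm.shape)       # (300, 5): the right-arm channels only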
Holewijn, Roderick M; de Kleuver, Marinus; van der Veen, Albert J; Emanuel, Kaj S; Bisschop, Arno; Stadhouder, Agnita; van Royen, Barend J; Kingma, Idsart
2017-08-01
Biomechanical study. Recently, a posterior concave periapical distraction device for fusionless scoliosis correction was introduced. The goal of this study was to quantify the effect of the periapical distraction device on spinal range of motion (ROM) in comparison with traditional rigid pedicle screw-rod instrumentation. Using a spinal motion simulator, 6 human spines were loaded with 4 N m and 6 porcine spines with 2 N m to induce flexion-extension (FE), lateral bending (LB), and axial rotation (AR). ROM was measured in 3 conditions: untreated, periapical distraction device, and rigid pedicle screw-rod instrumentation. The periapical distraction device caused a significant ( P < .05) decrease in ROM of FE (human, -40.0% and porcine, -55.9%) and LB (human, -18.2% and porcine, -17.9%) as compared to the untreated spine, while ROM of AR remained unaffected. In comparison, rigid instrumentation caused a significantly ( P < .05) larger decrease in ROM of FE (human, -80.9% and porcine, -94.0%), LB (human, -75.0% and porcine, -92.2%), and AR (human, -71.3% and porcine, -86.9%). Although no destructive forces were applied, no device failures were observed. Spinal ROM was significantly less constrained by the periapical distraction device compared to rigid pedicle screw-rod instrumentation. Therefore, provided that scoliosis correction is achieved, a more physiological spinal motion is expected after scoliosis correction with the posterior concave periapical distraction device.
Example-based human motion denoising.
Lou, Hui; Chai, Jinxiang
2010-01-01
With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with those in state-of-the-art motion capture data processing software such as Vicon Blade.
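The learned filter bases themselves are not reproduced here; as a rough illustration of the optimization view described in the abstract, the sketch below minimizes a robust (Huber) data-fidelity term plus a temporal-smoothness term, the latter standing in for the learned spatial-temporal pattern term. Function names, weights, and data are assumptions.

import numpy as np
from scipy.optimize import minimize

def denoise_motion(noisy, lam=5.0, delta=1.0):
    # Denoise a (frames x channels) motion array: robust data term plus a
    # second-difference smoothness term (a stand-in for learned filter bases).
    shape = noisy.shape

    def objective(x):
        m = x.reshape(shape)
        r = m - noisy
        data = np.where(np.abs(r) <= delta, 0.5 * r**2,
                        delta * (np.abs(r) - 0.5 * delta)).sum()   # Huber loss
        smooth = ((m[2:] - 2 * m[1:-1] + m[:-2]) ** 2).sum()       # temporal smoothness
        return data + lam * smooth

    res = minimize(objective, noisy.ravel(), method="L-BFGS-B")
    return res.x.reshape(shape)

# Toy example: a sine trajectory with noise and a few outliers
t = np.linspace(0, 2 * np.pi, 120)
clean = np.sin(t)[:, None]
noisy = clean + 0.05 * np.random.randn(*clean.shape)
noisy[[30, 60, 90]] += 2.0                                 # outliers
filtered = denoise_motion(noisy)
print("rms error after filtering:", np.sqrt(np.mean((filtered - clean) ** 2)))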
One-degree-of-freedom spherical model for the passive motion of the human ankle joint.
Sancisi, Nicola; Baldisserri, Benedetta; Parenti-Castelli, Vincenzo; Belvedere, Claudio; Leardini, Alberto
2014-04-01
Mathematical modelling of mobility at the human ankle joint is essential for prosthetics and orthotic design. The scope of this study is to show that the ankle joint passive motion can be represented by a one-degree-of-freedom spherical motion. Moreover, this motion is modelled by a one-degree-of-freedom spherical parallel mechanism model, and the optimal pivot-point position is determined. Passive motion and anatomical data were taken from in vitro experiments in nine lower limb specimens. For each of these, a spherical mechanism, including the tibiofibular and talocalcaneal segments connected by a spherical pair and by the calcaneofibular and tibiocalcaneal ligament links, was defined from the corresponding experimental kinematics and geometry. An iterative procedure was used to optimize the geometry of the model so that it could predict the original experimental motion. The results of the simulations showed a good replication of the original natural motion, despite the numerous model assumptions and simplifications, with mean differences between experiments and predictions smaller than 1.3 mm (average 0.33 mm) for the three joint position components and smaller than 0.7° (average 0.32°) for the two out-of-sagittal plane rotations, once plotted versus the full flexion arc. The relevant pivot-point position after model optimization was found within the tibial mortise, but not exactly in a central location. The present combined experimental and modelling analysis of passive motion at the human ankle joint shows that a one-degree-of-freedom spherical mechanism predicts well what is observed in real joints, although its computational complexity is comparable to the standard hinge joint model.
Observation and analysis of high-speed human motion with frequent occlusion in a large area
NASA Astrophysics Data System (ADS)
Wang, Yuru; Liu, Jiafeng; Liu, Guojun; Tang, Xianglong; Liu, Peng
2009-12-01
The use of computer vision technology in collecting and analyzing statistics during sports matches or training sessions is expected to provide valuable information for tactics improvement. However, the measurements published in the literature so far are either too unreliable to be used in training planning, owing to their limitations, or unsuitable for studying high-speed motion in a large area with frequent occlusions. A sports annotation system is introduced in this paper for tracking high-speed non-rigid human motion over a large playing area with the aid of a motion camera, taking short track speed skating competitions as an example. The proposed system is composed of two sub-systems: precise camera motion compensation and accurate motion acquisition. In the video registration step, a distinctive invariant point feature detector (probability density grads detector) and a global-parallax-based matching-point filter are used to provide reliable and robust matching across a large range of affine distortion and illumination change. In the motion acquisition step, a joint color model constrained by the relationship between two key regions and a Markov chain Monte Carlo based joint particle filter are emphasized, obtained by dividing the human body into two related key regions. Several field tests are performed to assess measurement errors, including comparisons with popular algorithms. With the help of the presented system, position data are obtained on a 30 m × 60 m rink with a root-mean-square error better than 0.3975 m, and velocity and acceleration data with absolute errors better than 1.2579 m s-1 and 0.1494 m s-2, respectively.
Effects of Vibrotactile Feedback on Human Learning of Arm Motions
Bark, Karlin; Hyman, Emily; Tan, Frank; Cha, Elizabeth; Jax, Steven A.; Buxbaum, Laurel J.; Kuchenbecker, Katherine J.
2015-01-01
Tactile cues generated from lightweight, wearable actuators can help users learn new motions by providing immediate feedback on when and how to correct their movements. We present a vibrotactile motion guidance system that measures arm motions and provides vibration feedback when the user deviates from a desired trajectory. A study was conducted to test the effects of vibrotactile guidance on a subject’s ability to learn arm motions. Twenty-six subjects learned motions of varying difficulty with both visual (V), and visual and vibrotactile (VVT) feedback over the course of four days of training. After four days of rest, subjects returned to perform the motions from memory with no feedback. We found that augmenting visual feedback with vibrotactile feedback helped subjects reduce the root mean square (rms) angle error of their limb significantly while they were learning the motions, particularly for 1DOF motions. Analysis of the retention data showed no significant difference in rms angle errors between feedback conditions. PMID:25486644
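For readers unfamiliar with the metric, a minimal sketch of the rms angle error computation on synthetic trajectories (not the study's data) follows.

import numpy as np

def rms_angle_error(desired_deg, actual_deg):
    # Root-mean-square error (degrees) between two equal-length joint-angle
    # trajectories sampled at the same instants.
    desired = np.asarray(desired_deg, dtype=float)
    actual = np.asarray(actual_deg, dtype=float)
    return np.sqrt(np.mean((actual - desired) ** 2))

t = np.linspace(0, 2, 200)
desired = 30 * np.sin(2 * np.pi * t)                  # target elbow-flexion profile
actual = desired + np.random.normal(0, 3, t.size)     # simulated tracked performance
print(f"rms angle error: {rms_angle_error(desired, actual):.2f} deg")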
Accurate estimation of human body orientation from RGB-D sensors.
Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao
2013-10-01
Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the wide variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes, and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of the depth data. Since it is hard to discriminate the full 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively employ the complementary nature of both static and motion cues. In order to verify our proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.
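The sketch below is not the paper's DBNS, but it illustrates why fusing complementary cues helps: two ambiguous likelihoods over discretized orientation bins are combined in a single Bayes-filter step with a simple stay-put transition model. The bin count, transition probabilities, and likelihood values are all assumptions.

import numpy as np

BINS = 8                                   # 45-degree orientation bins
angles = np.arange(BINS) * 360.0 / BINS

def normalize(p):
    return p / p.sum()

def fuse_cues(prior, static_lik, motion_lik, stay=0.6):
    # One filtering step: predict with a transition model that favors staying
    # in the same bin, then update with both cue likelihoods.
    trans = np.full((BINS, BINS), (1 - stay) / (BINS - 1))
    np.fill_diagonal(trans, stay)
    predicted = trans.T @ prior
    return normalize(predicted * static_lik * motion_lik)

# Toy likelihoods: the static cue is ambiguous between bins 2 and 6 (front/back),
# the motion cue weakly favors bin 2; together they disambiguate.
static_lik = np.array([0.05, 0.05, 0.35, 0.05, 0.05, 0.05, 0.35, 0.05])
motion_lik = np.array([0.10, 0.10, 0.25, 0.10, 0.10, 0.10, 0.15, 0.10])
belief = np.full(BINS, 1.0 / BINS)
belief = fuse_cues(belief, static_lik, motion_lik)
print("estimated orientation:", angles[np.argmax(belief)], "deg")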
Analyzing the effects of human-aware motion planning on close-proximity human-robot collaboration.
Lasota, Przemyslaw A; Shah, Julie A
2015-02-01
The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human-robot interaction. We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human-robot team fluency and human worker satisfaction. Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human-robot collaboration.
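As a simplified reading of the fluency metrics named above (not the authors' exact definitions), the sketch below computes concurrent motion and idle times from boolean per-timestep activity traces; the traces and sampling rate are synthetic assumptions.

import numpy as np

def fluency_metrics(human_active, robot_active, dt=0.1):
    # Simple team-fluency summaries from boolean activity traces sampled every dt seconds.
    human_active = np.asarray(human_active, dtype=bool)
    robot_active = np.asarray(robot_active, dtype=bool)
    return {
        "task_time_s": human_active.size * dt,
        "concurrent_motion_frac": float(np.mean(human_active & robot_active)),
        "human_idle_s": float(np.sum(~human_active) * dt),
        "robot_idle_s": float(np.sum(~robot_active) * dt),
    }

rng = np.random.default_rng(0)
human = rng.random(600) > 0.3     # 60 s of activity samples at 10 Hz
robot = rng.random(600) > 0.4
print(fluency_metrics(human, robot))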
Analysis of the Accuracy and Robustness of the Leap Motion Controller
Weichert, Frank; Bachmann, Daniel; Rudak, Bartholomäus; Fisseler, Denis
2013-01-01
The Leap Motion Controller is a new device for hand gesture controlled user interfaces with declared sub-millimeter accuracy. However, up to this point its capabilities in real environments have not been analyzed. Therefore, this paper presents a first study of a Leap Motion Controller. The main focus of attention is on the evaluation of the accuracy and repeatability. For an appropriate evaluation, a novel experimental setup was developed making use of an industrial robot with a reference pen allowing a position accuracy of 0.2 mm. A deviation between the desired 3D position and the average measured position of below 0.2 mm was obtained for static setups, and of 1.2 mm for dynamic setups. The conclusions of this analysis can improve the development of applications for the Leap Motion Controller in the field of Human-Computer Interaction. PMID:23673678
Dynamical simulation priors for human motion tracking.
Vondrak, Marek; Sigal, Leonid; Jenkins, Odest Chadwicke
2013-01-01
We propose a simulation-based dynamical motion prior for tracking human motion from video in presence of physical ground-person interactions. Most tracking approaches to date have focused on efficient inference algorithms and/or learning of prior kinematic motion models; however, few can explicitly account for the physical plausibility of recovered motion. Here, we aim to recover physically plausible motion of a single articulated human subject. Toward this end, we propose a full-body 3D physical simulation-based prior that explicitly incorporates a model of human dynamics into the Bayesian filtering framework. We consider the motion of the subject to be generated by a feedback “control loop” in which Newtonian physics approximates the rigid-body motion dynamics of the human and the environment through the application and integration of interaction forces, motor forces, and gravity. Interaction forces prevent physically impossible hypotheses, enable more appropriate reactions to the environment (e.g., ground contacts), and are produced from detected human-environment collisions. Motor forces actuate the body, ensure that proposed pose transitions are physically feasible, and are generated using a motion controller. For efficient inference in the resulting high-dimensional state space, we utilize an exemplar-based control strategy that reduces the effective search space of motor forces. As a result, we are able to recover physically plausible motion of human subjects from monocular and multiview video. We show, both quantitatively and qualitatively, that our approach performs favorably with respect to Bayesian filtering methods with standard motion priors.
Muscular activity and its relationship to biomechanics and human performance
NASA Technical Reports Server (NTRS)
Ariel, Gideon
1994-01-01
The purpose of this manuscript is to address the issue of muscular activity, human motion, fitness, and exercise. Human activity is reviewed from the historical perspective as well as from the basics of muscular contraction, nervous system controls, mechanics, and biomechanical considerations. In addition, attention has been given to some of the principles involved in developing muscular adaptations through strength development. Brief descriptions and findings from a few studies are included. These experiments were conducted in order to investigate muscular adaptation to various exercise regimens. Different theories of strength development were studied and correlated to daily human movements. All measurement tools used represent state-of-the-art exercise equipment and movement analysis. The information presented here is only a small attempt to understand the effects of exercise and conditioning on Earth with the objective of leading to greater knowledge concerning human responses during spaceflight. What distinguishes living organisms from nonliving objects is movement, which is generated and controlled by biochemical substances. In mammals, the controlled activators are skeletal muscles, and this muscular action is an integral process composed of mechanical, chemical, and neurological processes resulting in voluntary and involuntary motions. The scope of this discussion is limited to voluntary motion.
NASA Astrophysics Data System (ADS)
Barki, Anum; Kendricks, Kimberly; Tuttle, Ronald F.; Bunker, David J.; Borel, Christoph C.
2013-05-01
This research highlights the results obtained from applying the method of inverse kinematics, using Groebner basis theory, to the human gait cycle to extract and identify lower extremity gait signatures. The increased threat from suicide bombers and the force protection issues of today have motivated a team at the Air Force Institute of Technology (AFIT) to research pattern recognition in the human gait cycle. The purpose of this research is to identify gait signatures of human subjects and distinguish between subjects carrying a load and those without a load. These signatures were investigated via a model of the lower extremities based on motion capture observations, in particular, foot placement and the joint angles for subjects affected by carrying extra load on the body. The human gait cycle was captured and analyzed using a developed toolkit consisting of an inverse kinematic motion model of the lower extremity and a graphical user interface. Hip, knee, and ankle angles were analyzed to identify gait angle variance and range of motion. Female subjects exhibited the most knee angle variance and produced a proportional correlation between knee flexion and load carriage.
Encodings of implied motion for animate and inanimate object categories in the two visual pathways.
Lu, Zhengang; Li, Xueting; Meng, Ming
2016-01-15
Previous research has proposed two separate pathways for visual processing: the dorsal pathway for "where" information vs. the ventral pathway for "what" information. Interestingly, the middle temporal cortex (MT) in the dorsal pathway is involved in representing implied motion from still pictures, suggesting an interaction between motion and object related processing. However, the relationship between how the brain encodes implied motion and how the brain encodes object/scene categories is unclear. To address this question, fMRI was used to measure activity along the two pathways corresponding to different animate and inanimate categories of still pictures with different levels of implied motion speed. In the visual areas of both pathways, activity induced by pictures of humans and animals was hardly modulated by the implied motion speed. By contrast, activity in these areas correlated with the implied motion speed for pictures of inanimate objects and scenes. The interaction between implied motion speed and stimuli category was significant, suggesting different encoding mechanisms of implied motion for animate-inanimate distinction. Further multivariate pattern analysis of activity in the dorsal pathway revealed significant effects of stimulus category that are comparable to the ventral pathway. Moreover, still pictures of inanimate objects/scenes with higher implied motion speed evoked activation patterns that were difficult to differentiate from those evoked by pictures of humans and animals, indicating a functional role of implied motion in the representation of object categories. These results provide novel evidence to support integrated encoding of motion and object categories, suggesting a rethink of the relationship between the two visual pathways. Copyright © 2015 Elsevier Inc. All rights reserved.
Automated video-based assessment of surgical skills for training and evaluation in medical schools.
Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Ploetz, Thomas; Clements, Mark A; Essa, Irfan
2016-09-01
Routine evaluation of basic surgical skills in medical schools requires considerable time and effort from supervising faculty. For each surgical trainee, a supervisor has to observe the trainees in person. Alternatively, supervisors may use training videos, which reduces some of the logistical overhead. All of these approaches, however, are still very time consuming and involve human bias. In this paper, we present an automated system for surgical skills assessment by analyzing video data of surgical activities. We compare different techniques for video-based surgical skill evaluation. We use techniques that capture the motion information at a coarser granularity using symbols or words, extract motion dynamics using textural patterns in a frame kernel matrix, and analyze fine-grained motion information using frequency analysis. We were able to classify surgeons into different skill levels with high accuracy. Our results indicate that fine-grained analysis of motion dynamics via frequency analysis is most effective in capturing the skill-relevant information in surgical videos. Our evaluations show that frequency features perform better than motion texture features, which in turn perform better than symbol-/word-based features. Put succinctly, skill classification accuracy is positively correlated with motion granularity as demonstrated by our results on two challenging video datasets.
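As an illustration of the frequency-analysis idea (not the paper's actual pipeline), the sketch below summarizes a one-dimensional motion signal by normalized spectral band energies; the band edges, sampling rate, and signals are assumptions.

import numpy as np

def frequency_band_features(signal, fs, bands=((0.0, 1.0), (1.0, 3.0), (3.0, 6.0))):
    # Summarize a 1-D motion signal (e.g., tool-tip speed over time) by the
    # fraction of spectral energy in a few illustrative frequency bands.
    signal = np.asarray(signal, dtype=float)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    feats = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
    return np.array(feats) / max(spectrum.sum(), 1e-12)

fs = 30.0                                      # 30 Hz video-derived motion signal
t = np.arange(0, 20, 1 / fs)
smooth_expert = np.sin(2 * np.pi * 0.5 * t)                         # slow, deliberate
jittery_novice = smooth_expert + 0.5 * np.sin(2 * np.pi * 4.0 * t)  # added tremor
print("expert bands :", frequency_band_features(smooth_expert, fs).round(3))
print("novice bands :", frequency_band_features(jittery_novice, fs).round(3))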
Estimation of bio-signal based on human motion for integrated visualization of daily-life.
Umetani, Tomohiro; Matsukawa, Tsuyoshi; Yokoyama, Kiyoko
2007-01-01
This paper describes a method for the estimation of bio-signals based on human motion in daily life for an integrated visualization system. Recent advances in computing and measurement technology have facilitated the integrated visualization of bio-signals and human motion data. It is desirable to have a method to understand the activities of muscles based on human motion data and to evaluate the change in physiological parameters according to human motion for visualization applications. We assume that human motion is generated by muscle activity driven by the brain, which is reflected in bio-signals such as electromyograms. This paper introduces a method for the estimation of bio-signals based on neural networks; the same procedure can also be used to estimate other physiological parameters. The experimental results show the feasibility of the proposed method.
Jastorff, Jan; Orban, Guy A
2009-06-03
In a series of human functional magnetic resonance imaging experiments, we systematically manipulated point-light stimuli to identify the contributions of the various areas implicated in biological motion processing (for review, see Giese and Poggio, 2003). The first experiment consisted of a 2 x 2 factorial design with global shape and kinematics as factors. In two additional experiments, we investigated the contributions of local opponent motion, the complexity of the portrayed movement, and a one-back task to the activation pattern. Experiment 1 revealed a clear separation between shape and motion processing, resulting in two branches of activation. A ventral region, extending from the lateral occipital sulcus to the posterior inferior temporal gyrus, showed a main effect of shape, and its extension into the fusiform gyrus also showed an interaction. The dorsal region, including the posterior inferior temporal sulcus and the posterior superior temporal sulcus (pSTS), showed a main effect of kinematics together with an interaction. Region of interest analysis identified these interaction sites as the extrastriate and fusiform body areas (EBA and FBA). The local opponent motion cue yielded only weak activation, limited to the ventral region (experiment 3). Our results suggest that the EBA and the FBA correspond to the initial stages in visual action analysis, in which the performed action is linked to the body of the actor. Moreover, experiment 2 indicates that the body areas are activated automatically even in the absence of a task, whereas other cortical areas like pSTS or frontal regions depend on the complexity of movements or task instructions for their activation.
Keller, Sune H; Sibomana, Merence; Olesen, Oline V; Svarer, Claus; Holm, Søren; Andersen, Flemming L; Højgaard, Liselotte
2012-03-01
Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using misaligned transmission for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning using an optical motion tracking system. Two scans with minor motion and 5 with major motion (as reported by the optical motion tracking system) were selected from (18)F-FDG scans acquired on a PET scanner. The motion was measured as the maximum displacement of the markers attached to the subject's head and was considered to be major if larger than 4 mm and minor if less than 2 mm. After allowing a 40- to 60-min uptake time after tracer injection, we acquired a 6-min transmission scan, followed by a 40-min emission list-mode scan. Each emission list-mode dataset was divided into 8 frames of 5 min. The reconstructed time-framed images were aligned to a selected reference frame using either EMT or the AIR (automated image registration) software. The following 3 QC methods were used to evaluate the EMT and AIR MC: a method using the ratio between 2 regions of interest with gray matter voxels (GM) and white matter voxels (WM), called GM/WM; mutual information; and cross correlation. The results of the 3 QC methods were in agreement with one another and with a visual subjective inspection of the image data. Before MC, the QC method measures varied significantly in scans with major motion and displayed limited variations on scans with minor motion. The variation was significantly reduced and measures improved after MC with AIR, whereas EMT MC performed less well. The 3 presented QC methods produced similar results and are useful for evaluating tracer-independent external-tracking motion-correction methods for human brain scans.
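For illustration, the sketch below computes simplified versions of the three QC measures on two aligned volumes; the ROI masks, histogram binning, and synthetic data are assumptions rather than the study's implementation.

import numpy as np

def gm_wm_ratio(volume, gm_mask, wm_mask):
    # Ratio of mean intensity in a gray-matter ROI to a white-matter ROI.
    return volume[gm_mask].mean() / volume[wm_mask].mean()

def mutual_information(a, b, bins=32):
    # Histogram-based mutual information between two aligned volumes.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def cross_correlation(a, b):
    # Pearson correlation of voxel intensities between two aligned volumes.
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

rng = np.random.default_rng(1)
reference = rng.random((32, 32, 16))
realigned = reference + 0.05 * rng.standard_normal(reference.shape)
gm = np.zeros(reference.shape, dtype=bool)
gm[8:24, 8:24, 4:12] = True                  # toy gray-matter ROI
wm = ~gm                                     # everything else as the comparison ROI
print("GM/WM:", gm_wm_ratio(realigned, gm, wm))
print("MI   :", mutual_information(reference, realigned))
print("CC   :", cross_correlation(reference, realigned))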
The relationship between human field motion and preferred visible wavelengths.
Benedict, S C; Burge, J M
1990-01-01
The purpose of this study was to investigate the relationship between human field motion and preferred visible wavelengths. The study was based on the principle of resonancy from Rogers' science of unitary human beings; 201 subjects were tested using a modified version of Ference's human field motion test (HFMT). Two matrices of color were projected to provide an environment for the measurement of preferred visible wavelengths. There was no statistically significant relationship (r = 0.0387, p = 0.293) between scores on the human field motion test and preferred visible wavelengths as measured in nanometers. The Rogerian concept of accelerated human field rhythms being coordinate with higher frequency environment patterns was not supported in this study. Questions concerning the validity of the HFMT were expressed and were based upon the ambiguity of the terminology of the instrument and the lack of understanding of the concepts used to describe human field motion. Recommendations include the use of other methods to study Rogers' framework, and the development of other instrumentation to measure human field motion.
Real-time stylistic prediction for whole-body human motions.
Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun
2012-01-01
The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Beutter, Brent R.; Stone, Leland S.
1997-01-01
Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.
Synchronized movement experience enhances peer cooperation in preschool children.
Rabinowitch, Tal-Chen; Meltzoff, Andrew N
2017-08-01
Cooperating with other people is a key achievement in child development and is essential for human culture. We examined whether we could induce 4-year-old children to increase their cooperation with an unfamiliar peer by providing the peers with synchronized motion experience prior to the tasks. Children were randomly assigned to independent treatment and control groups. The treatment of synchronous motion caused children to enhance their cooperation, as measured by the speed of joint task completion, compared with control groups that underwent asynchronous motion or no motion at all. Further analysis suggested that synchronization experience increased intentional communication between peer partners, resulting in increased coordination and cooperation. Copyright © 2017 Elsevier Inc. All rights reserved.
Macro-motion detection using ultra-wideband impulse radar.
Xin Li; Dengyu Qiao; Ye Li
2014-01-01
Radar has the advantage of being able to detect hidden individuals, which can be used in homeland security, disaster rescue, and healthcare monitoring-related applications. Human macro-motion detection using ultra-wideband impulse radar is studied in this paper. First, a frequency domain analysis is carried out to show that the macro-motion yields a bandpass signal in slow-time. Second, the FTFW (fast-time frequency windowing), which has the advantage of avoiding the measuring range reduction, and the HLF (high-pass linear-phase filter), which can preserve the motion signal effectively, are proposed to preprocess the radar echo. Last, a threshold decision method, based on the energy detector structure, is presented.
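The sketch below is not the paper's FTFW/HLF design; it only illustrates the underlying idea of high-pass filtering the slow-time signal of a range bin to suppress static clutter, followed by an energy-threshold decision. The cutoff, threshold rule, and signals are assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def detect_macro_motion(slow_time, fs, cutoff_hz=0.2, threshold=None):
    # High-pass filter the slow-time signal of one range bin to remove static
    # clutter, then decide motion/no-motion from the residual energy.
    sos = butter(4, cutoff_hz, btype="highpass", fs=fs, output="sos")
    motion_component = sosfiltfilt(sos, slow_time)
    energy = np.mean(motion_component ** 2)
    if threshold is None:
        threshold = 3.0 * np.var(slow_time[: int(fs)])  # calibrate on a motion-free second
    return energy > threshold, energy

fs = 100.0                                  # pulses per second (slow-time rate)
t = np.arange(0, 10, 1 / fs)
clutter = 1.0 + 0.01 * np.random.randn(t.size)           # static background
walking = 0.3 * np.sin(2 * np.pi * 0.8 * t) * (t > 5)    # person moving after t = 5 s
detected, energy = detect_macro_motion(clutter + walking, fs)
print("motion detected:", detected, "residual energy:", round(float(energy), 4))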
Saving and Reproduction of Human Motion Data by Using Haptic Devices with Different Configurations
NASA Astrophysics Data System (ADS)
Tsunashima, Noboru; Yokokura, Yuki; Katsura, Seiichiro
Recently, there has been increased focus on “haptic recording”; the development of a motion-copying system is an efficient method for realizing haptic recording. Haptic recording involves the saving and reproduction of human motion data on the basis of haptic information. To increase the number of applications of the motion-copying system in various fields, it is necessary to reproduce human motion data by using haptic devices with different configurations. In this study, a method for such haptic recording is developed. In this method, human motion data are saved and reproduced on the basis of work-space information, which is obtained by coordinate transformation of motor-space information. The validity of the proposed method is demonstrated by experiments. With the proposed method, the saving and reproduction of human motion data using various devices is achieved. Furthermore, it is also possible to use haptic recording in various fields.
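As a simplified illustration of the motor-space-to-work-space idea (not the authors' controller), the sketch below records a joint-space trajectory on one planar two-link device, converts it to work-space coordinates with forward kinematics, and reproduces it on a second device with different link lengths via inverse kinematics. Link lengths and trajectories are assumptions.

import numpy as np

def forward_kinematics(theta1, theta2, l1, l2):
    # Work-space position of a planar two-link device from its joint angles.
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return np.stack([x, y], axis=-1)

def inverse_kinematics(p, l1, l2, elbow_up=True):
    # Joint angles that place a (possibly different) two-link device at p.
    x, y = p[..., 0], p[..., 1]
    c2 = np.clip((x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2), -1.0, 1.0)
    theta2 = np.arccos(c2) * (1 if elbow_up else -1)
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2), l1 + l2 * np.cos(theta2))
    return theta1, theta2

# Motion recorded in motor (joint) space on device A (links 0.30 m / 0.25 m) ...
t = np.linspace(0, 1, 50)
thetaA1, thetaA2 = 0.2 + 0.5 * t, 0.8 - 0.3 * t
workspace_path = forward_kinematics(thetaA1, thetaA2, 0.30, 0.25)
# ... reproduced on device B, which has different link lengths (0.35 m / 0.20 m)
thetaB1, thetaB2 = inverse_kinematics(workspace_path, 0.35, 0.20)
reproduced = forward_kinematics(thetaB1, thetaB2, 0.35, 0.20)
print("max reproduction error (m):",
      float(np.max(np.linalg.norm(reproduced - workspace_path, axis=-1))))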
Baddoura, Ritta; Venture, Gentiane
2014-01-01
During an unannounced encounter between two humans and a proactive humanoid (NAO, Aldebaran Robotics), we study the dependencies between the human partners' affective experience (measured via the answers to a questionnaire), particularly regarding feeling familiar and feeling frightened, and their arm and head motion [frequency and smoothness using Inertial Measurement Units (IMU)]. NAO starts and ends its interaction with its partners by non-verbally greeting them hello (bowing) and goodbye (moving its arm). The robot is invested with a real and useful task to perform: handing each participant an envelope containing a questionnaire they need to answer. NAO's behavior varies from one partner to the other (Smooth with X vs. Resisting with Y). The results show high positive correlations between feeling familiar while interacting with the robot and the frequency and smoothness of the human arm movement when waving back goodbye, as well as the smoothness of the head during the whole encounter. Results also show a negative dependency between feeling frightened and the frequency of the human arm movement when waving back goodbye. The principal component analysis (PCA) suggests that, with regard to the various motion measures examined in this paper, the head smoothness and the goodbye gesture frequency are the most reliable measures when it comes to the familiarity experienced by the participants. The PCA also points out the irrelevance of the goodbye motion frequency when investigating the participants' experience of fear in its relation to their motion characteristics. The results are discussed in light of the major findings of studies on body movements and postures accompanying specific emotions.
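For illustration only, the sketch below relates a jerk-based movement-smoothness score to a questionnaire score with a Pearson correlation; the smoothness definition, the synthetic data, and the scores are assumptions and may differ from the paper's IMU-derived measures.

import numpy as np
from scipy.stats import pearsonr

def normalized_jerk(position, dt):
    # Dimensionless jerk-based smoothness score for a 1-D trajectory
    # (larger value = less smooth). One common normalization among several.
    velocity = np.gradient(position, dt)
    jerk = np.gradient(np.gradient(velocity, dt), dt)
    duration = dt * (len(position) - 1)
    peak_v = np.max(np.abs(velocity))
    return np.sqrt(np.sum(jerk**2) * dt * duration**3 / peak_v**2)

rng = np.random.default_rng(3)
dt = 0.01
t = np.arange(0, 2, dt)
familiarity_scores, jerkiness = [], []
for noise in np.linspace(0.0, 0.05, 12):          # 12 simulated participants
    wave = np.sin(np.pi * t / 2) + noise * rng.standard_normal(t.size)
    jerkiness.append(normalized_jerk(wave, dt))
    familiarity_scores.append(5.0 - 40.0 * noise + 0.3 * rng.standard_normal())
r, p = pearsonr(familiarity_scores, jerkiness)
print(f"correlation between familiarity and jerkiness: r={r:.2f}, p={p:.3f}")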
Applications of Phase-Based Motion Processing
NASA Technical Reports Server (NTRS)
Branch, Nicholas A.; Stewart, Eric C.
2018-01-01
Image pyramids provide useful information for determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but it requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.
Bidet-Ildei, Christel; Kitromilides, Elenitsa; Orliaguet, Jean-Pierre; Pavlova, Marina; Gentaz, Edouard
2014-01-01
In human newborns, spontaneous visual preference for biological motion is reported to occur at birth, but the factors underpinning this preference are still in debate. Using a standard visual preferential looking paradigm, 4 experiments were carried out in 3-day-old human newborns to assess the influence of translational displacement on perception of human locomotion. Experiment 1 shows that human newborns prefer a point-light walker display representing human locomotion as if on a treadmill over random motion. However, no preference for biological movement is observed in Experiment 2 when both biological and random motion displays are presented with translational displacement. Experiments 3 and 4 show that newborns exhibit preference for translated biological motion (Experiment 3) and random motion (Experiment 4) displays over the same configurations moving without translation. These findings reveal that human newborns have a preference for the translational component of movement independently of the presence of biological kinematics. The outcome suggests that translation constitutes the first step in development of visual preference for biological motion. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Global versus local adaptation in fly motion-sensitive neurons
Neri, Peter; Laughlin, Simon B
2005-01-01
Flies, like humans, experience a well-known consequence of adaptation to visual motion, the waterfall illusion. Direction-selective neurons in the fly lobula plate permit a detailed analysis of the mechanisms responsible for motion adaptation and their function. Most of these neurons are spatially non-opponent, they sum responses to motion in the preferred direction across their entire receptive field, and adaptation depresses responses by subtraction and by reducing contrast gain. When we adapted a small area of the receptive field to motion in its anti-preferred direction, we discovered that directional gain at unadapted regions was enhanced. This novel phenomenon shows that neuronal responses to the direction of stimulation in one area of the receptive field are dynamically adjusted to the history of stimulation both within and outside that area. PMID:16191636
Multidigit movement synergies of the human hand in an unconstrained haptic exploration task.
Thakur, Pramodsingh H; Bastian, Amy J; Hsiao, Steven S
2008-02-06
Although the human hand has a complex structure with many individual degrees of freedom, joint movements are correlated. Studies involving simple tasks (grasping) or skilled tasks (typing or finger spelling) have shown that a small number of combined joint motions (i.e., synergies) can account for most of the variance in observed hand postures. However, those paradigms evoked a limited set of hand postures and as such the reported correlation patterns of joint motions may be task-specific. Here, we used an unconstrained haptic exploration task to evoke a set of hand postures that is representative of most naturalistic postures during object manipulation. Principal component analysis on this set revealed that the first seven principal components capture >90% of the observed variance in hand postures. Further, we identified nine eigenvectors (or synergies) that are remarkably similar across multiple subjects and across manipulations of different sets of objects within a subject. We then determined that these synergies are used broadly by showing that they account for the changes in hand postures during other tasks. These include hand motions such as reach and grasp of objects that vary in width, curvature and angle, and skilled motions such as precision pinch. Our results demonstrate that the synergies reported here generalize across tasks, and suggest that they represent basic building blocks underlying natural human hand motions.
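A minimal sketch of the synergy analysis described above: PCA on a postures-by-joint-angles matrix, reporting how many components are needed to reach 90% explained variance. The data here are synthetic stand-ins for recorded hand postures, not the study's recordings.

import numpy as np

def posture_synergies(joint_angles, variance_target=0.90):
    # PCA via SVD on a (postures x joint-angles) matrix. Returns the principal
    # components (synergies), the explained-variance fractions, and how many
    # components are needed to reach the target fraction.
    centered = joint_angles - joint_angles.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s**2) / np.sum(s**2)
    n_needed = int(np.searchsorted(np.cumsum(explained), variance_target) + 1)
    return vt, explained, n_needed

# Synthetic stand-in: 20 joint angles driven mostly by 3 coordination patterns.
rng = np.random.default_rng(7)
latent = rng.standard_normal((500, 3))
mixing = rng.standard_normal((3, 20))
postures = latent @ mixing + 0.2 * rng.standard_normal((500, 20))
synergies, explained, n = posture_synergies(postures)
print(f"{n} synergies explain {np.cumsum(explained)[n - 1]:.1%} of posture variance")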
Huebsch, Nathaniel; Loskill, Peter; Mandegar, Mohammad A; Marks, Natalie C; Sheehan, Alice S; Ma, Zhen; Mathur, Anurag; Nguyen, Trieu N; Yoo, Jennie C; Judge, Luke M; Spencer, C Ian; Chukka, Anand C; Russell, Caitlin R; So, Po-Lin; Conklin, Bruce R; Healy, Kevin E
2015-05-01
Contractile motion is the simplest metric of cardiomyocyte health in vitro, but unbiased quantification is challenging. We describe a rapid automated method, requiring only standard video microscopy, to analyze the contractility of human-induced pluripotent stem cell-derived cardiomyocytes (iPS-CM). New algorithms for generating and filtering motion vectors combined with a newly developed isogenic iPSC line harboring genetically encoded calcium indicator, GCaMP6f, allow simultaneous user-independent measurement and analysis of the coupling between calcium flux and contractility. The relative performance of these algorithms, in terms of improving signal to noise, was tested. Applying these algorithms allowed analysis of contractility in iPS-CM cultured over multiple spatial scales from single cells to three-dimensional constructs. This open source software was validated with analysis of isoproterenol response in these cells, and can be applied in future studies comparing the drug responsiveness of iPS-CM cultured in different microenvironments in the context of tissue engineering.
Principal components of wrist circumduction from electromagnetic surgical tracking.
Rasquinha, Brian J; Rainbow, Michael J; Zec, Michelle L; Pichora, David R; Ellis, Randy E
2017-02-01
An electromagnetic (EM) surgical tracking system was used for a functionally calibrated kinematic analysis of wrist motion. Circumduction motions were tested for differences in subject gender and for differences in the sense of the circumduction as clockwise or counter-clockwise motion. Twenty subjects were instrumented for EM tracking. Flexion-extension motion was used to identify the functional axis. Subjects performed unconstrained wrist circumduction in a clockwise and counter-clockwise sense. Data were decomposed into orthogonal flexion-extension motions and radial-ulnar deviation motions. PCA was used to concisely represent motions. Nonparametric Wilcoxon tests were used to distinguish the groups. Flexion-extension motions were projected onto a direction axis with a root-mean-square error of [Formula: see text]. Using the first three principal components, there was no statistically significant difference in gender (all [Formula: see text]). For motion sense, radial-ulnar deviation distinguished the sense of circumduction in the first principal component ([Formula: see text]) and in the third principal component ([Formula: see text]); flexion-extension distinguished the sense in the second principal component ([Formula: see text]). The clockwise sense of circumduction could be distinguished by a multifactorial combination of components; there were no gender differences in this small population. These data constitute a baseline for normal wrist circumduction. The multifactorial PCA findings suggest that a higher-dimensional method, such as manifold analysis, may be a more concise way of representing circumduction in human joints.
A human factors analysis of EVA time requirements
NASA Technical Reports Server (NTRS)
Pate, D. W.
1996-01-01
Human Factors Engineering (HFE), also known as Ergonomics, is a discipline whose goal is to engineer a safer, more efficient interface between humans and machines. HFE makes use of a wide range of tools and techniques to fulfill this goal. One of these tools is known as motion and time study, a technique used to develop time standards for given tasks. A human factors motion and time study was initiated with the goal of developing a database of EVA task times and a method of utilizing the database to predict how long an extravehicular activity (EVA) should take. Initial development relied on the EVA activities performed during the STS-61 mission (Hubble repair). The first step of the analysis was to become familiar with EVAs and with the previous studies and documents produced on EVAs. After reviewing these documents, an initial set of task primitives and task time modifiers was developed. Videotaped footage of STS-61 EVAs was analyzed using these primitives and task time modifiers. Data for two entire EVA missions and portions of several others, each with two EVA astronauts, were collected for analysis. Feedback from the analysis of the data will be used to further refine the primitives and task time modifiers used. Analysis of variance techniques for categorical data will be used to determine which factors may, individually or through interactions, affect the primitive times and how much of an effect they have.
Identification of human-generated forces on wheelchairs during total-body extensor thrusts.
Hong, Seong-Wook; Patrangenaru, Vlad; Singhose, William; Sprigle, Stephen
2006-10-01
Involuntary extensor thrust experienced by wheelchair users with neurological disorders may cause injuries via impact with the wheelchair, lead to the occupant sliding out of the seat, and also damage the wheelchair. The concept of a dynamic seat, which allows movement of a seat with respect to the wheelchair frame, has been suggested as a potential solution to provide greater freedom and safety. Knowledge of the human-generated motion and forces during unconstrained extensor thrust events is of great importance in developing more comfortable and effective dynamic seats. The objective of this study was to develop a method to identify human-generated motions and forces during extensor thrust events. This information can be used to design the triggering system for a dynamic seat. An experimental system was developed to automatically track the motions of the wheelchair user using a video camera and also measure the forces at the footrest. An inverse dynamic approach was employed along with a three-link human body model and the experimental data to predict the human-generated forces. Two kinds of experiments were performed: the first experiment validated the proposed model and the second experiment showed the effects of the extensor thrust speed, the footrest angle, and the seatback angle. The proposed method was tested using a sensitivity analysis, from which a performance index was deduced to help indicate the robust region of the force identification. A system to determine human-generated motions and forces during unconstrained extensor thrusts was developed. Through experiments and simulations, the effectiveness and reliability of the developed system was established.
Holewijn, Roderick M.; de Kleuver, Marinus; van der Veen, Albert J.; Emanuel, Kaj S.; Bisschop, Arno; Stadhouder, Agnita; van Royen, Barend J.
2017-01-01
Study Design: Biomechanical study. Objective: Recently, a posterior concave periapical distraction device for fusionless scoliosis correction was introduced. The goal of this study was to quantify the effect of the periapical distraction device on spinal range of motion (ROM) in comparison with traditional rigid pedicle screw-rod instrumentation. Methods: Using a spinal motion simulator, 6 human spines were loaded with 4 N m and 6 porcine spines with 2 N m to induce flexion-extension (FE), lateral bending (LB), and axial rotation (AR). ROM was measured in 3 conditions: untreated, periapical distraction device, and rigid pedicle screw-rod instrumentation. Results: The periapical distraction device caused a significant (P < .05) decrease in ROM of FE (human, −40.0% and porcine, −55.9%) and LB (human, −18.2% and porcine, −17.9%) as compared to the untreated spine, while ROM of AR remained unaffected. In comparison, rigid instrumentation caused a significantly (P < .05) larger decrease in ROM of FE (human, −80.9% and porcine, −94.0%), LB (human, −75.0% and porcine, −92.2%), and AR (human, −71.3% and porcine, −86.9%). Conclusions: Although no destructive forces were applied, no device failures were observed. Spinal ROM was significantly less constrained by the periapical distraction device compared to rigid pedicle screw-rod instrumentation. Therefore, provided that scoliosis correction is achieved, a more physiological spinal motion is expected after scoliosis correction with the posterior concave periapical distraction device. PMID:28811983
Thompson, Nathan E; Holowka, Nicholas B; O'Neill, Matthew C; Larson, Susan G
2014-08-01
During terrestrial locomotion, chimpanzees exhibit dorsiflexion of the midfoot between midstance and toe-off of stance phase, a phenomenon that has been called the "midtarsal break." This motion is generally absent during human bipedalism, and in chimpanzees is associated with more mobile foot joints than in humans. However, the contribution of individual foot joints to overall foot mobility in chimpanzees is poorly understood, particularly on the medial side of the foot. The talonavicular (TN) and calcaneocuboid (CC) joints have both been suggested to contribute significantly to midfoot mobility and to the midtarsal break in chimpanzees. To evaluate the relative magnitude of motion that can occur at these joints, we tracked skeletal motion of the hindfoot and midfoot during passive plantarflexion and dorsiflexion manipulations using cineradiography. The sagittal plane range of motion was 38 ± 10° at the TN joint and 14 ± 8° at the CC joint. This finding indicates that the TN joint is more mobile than the CC joint during ankle plantarflexion-dorsiflexion. We suggest that the larger range of motion at the TN joint during dorsiflexion is associated with a rotation (inversion-eversion) across the transverse tarsal joint, which may occur in addition to sagittal plane motion. © 2014 Wiley Periodicals, Inc.
Verification of RRA and CMC in OpenSim
NASA Astrophysics Data System (ADS)
Ieshiro, Yuma; Itoh, Toshiaki
2013-10-01
OpenSim is free software that can perform various analyses and simulations of musculoskeletal dynamics on a PC. This study examined the RRA (residual reduction algorithm) and CMC (computed muscle control) tools in OpenSim. Notably, these tools make it possible to simulate human motion in terms of the nerve signals of the muscles. However, they still appear to be at a developmental stage. In order to verify the applicability of these tools, we analyzed bending and stretching motion data obtained from a motion capture device. In this study, we checked the consistency between real muscle behavior and the numerical results from these tools.
WiFi-Based Real-Time Calibration-Free Passive Human Motion Detection.
Gong, Liangyi; Yang, Wu; Man, Dapeng; Dong, Guozhong; Yu, Miao; Lv, Jiguang
2015-12-21
With the rapid development of WLAN technology, wireless device-free passive human detection has become a newly developing technique that holds great potential for worldwide and ubiquitous smart applications. Recently, indoor fine-grained device-free passive human motion detection based on PHY layer information has developed rapidly. Previous wireless device-free passive human detection systems rely either on deploying specialized systems with dense transmitter-receiver links or on an elaborate off-line training process, which blocks rapid deployment and weakens system robustness. In this paper, we explore a novel fine-grained, real-time, calibration-free, device-free passive human motion detection approach based on physical layer information, which is independent of indoor scenarios and needs no prior calibration or normal profile. We investigate the sensitivities of amplitude and phase to human motion, and discover that the phase feature is more sensitive to human motion, especially to slow human motion. Aiming at lightweight and robust device-free passive human motion detection, we develop two novel and practical schemes: the short-term averaged variance ratio (SVR) and the long-term averaged variance ratio (LVR). We realize the system design with commercial WiFi devices and evaluate it in typical multipath-rich indoor scenarios. As demonstrated in the experiments, our approach achieves a high detection rate and a low false positive rate.
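The published SVR/LVR formulations are not reproduced here; the sketch below only illustrates the general short-window versus long-window variance-ratio idea on a synthetic CSI phase stream. Window lengths, threshold, and the signal itself are assumptions.

import numpy as np

def variance_ratio_detector(phase_stream, short_win=20, long_win=400, thresh=3.0):
    # Flag human motion when the variance over a short sliding window rises
    # well above the variance over a long background window.
    phase_stream = np.asarray(phase_stream, dtype=float)
    flags = np.zeros(phase_stream.size, dtype=bool)
    for i in range(long_win, phase_stream.size):
        short_var = np.var(phase_stream[i - short_win:i])
        long_var = np.var(phase_stream[i - long_win:i]) + 1e-12
        flags[i] = (short_var / long_var) > thresh
    return flags

rng = np.random.default_rng(5)
quiet = 0.02 * rng.standard_normal(1000)                              # empty room
moving = 0.02 * rng.standard_normal(500) + 0.2 * np.sin(np.linspace(0, 40, 500))
stream = np.concatenate([quiet, moving])
flags = variance_ratio_detector(stream)
print("flagged while quiet :", round(float(flags[:1000].mean()), 3))
print("flagged while moving:", round(float(flags[1000:].mean()), 3))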
Ultrasonic Methods for Human Motion Detection
2006-10-01
The active method utilizes continuous-wave (CW) ultrasonic Doppler sonar. Human motions have unique Doppler signatures, and their combination... The present article reports results of human motion investigations with the help of CW ultrasonic Doppler sonar. Low-cost, low-power ultrasonic motion... have been developed for operation in air [10]. Benefits of using ultrasonic CW Doppler sonar include low cost, low electric noise, and small size.
Collaborative real-time motion video analysis by human observer and image exploitation algorithms
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2015-05-01
Motion video analysis is a challenging task, especially in real-time applications. In most safety and security critical applications, a human observer is an obligatory part of the overall analysis system. Over the last years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into the current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention revealed by the eye gaze direction for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, where we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy to use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.
Review-Research on the physical training model of human body based on HQ.
Junjie, Liu
2016-11-01
Health quotient (HQ) is one of the newest health concepts of the 21st century, and the analysis of human body sports models is not yet mature; the purpose of this paper is therefore to study the integration of these two subjects, the health quotient and the sports model. The paper concludes that physical training and education in colleges and universities can improve the health quotient and give students a healthier body and mind. A new rigid-body sports model is then used to simulate human physical exercise, and an in-depth study of the dynamic model of human body movement is carried out on the basis of the established matrices and equations. Simulations of bicycle riding and pole throwing show that human joint movement can be simulated and that the approach is operable in practice. The simulated motion laws of the ankle, knee and hip joints are essentially the same as the real motions, which further verifies the accuracy of the motion model and lays a foundation for research on other movement models; the study of such movement models is an important method for future research on human health.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutqvist, Jonny; Cappa, Frédéric; Rinaldi, Antonio P.
In this paper, we present model simulations of ground motions caused by CO2-injection-induced fault reactivation and analyze the results in terms of the potential for damage to ground surface structures and nuisance to the local human population. It is an integrated analysis from cause to consequence, including the whole chain of processes starting from earthquake inception in the subsurface, wave propagation toward the ground surface, and assessment of the consequences of ground vibration. For a small magnitude (Mw = 3) event at a hypocenter depth of about 1000 m, we first used the simulated ground-motion wave train in an inverse analysis to estimate source parameters (moment magnitude, rupture dimensions and stress drop), achieving good agreement and thereby verifying the modeling of the chain of processes from earthquake inception to ground vibration. We then analyzed the ground vibration results in terms of peak ground acceleration (PGA), peak ground velocity (PGV) and frequency content, with comparison to the U.S. Geological Survey's instrumental intensity scales for earthquakes and the U.S. Bureau of Mines' vibration criteria for cosmetic damage to buildings, as well as human-perception vibration limits. Our results confirm the appropriateness of using PGV (rather than PGA) and frequency for the evaluation of potential ground-vibration effects on structures and humans from shallow injection-induced seismic events. For the considered synthetic Mw = 3 event, our analysis showed that the short-duration, high-frequency ground motion may not cause any significant damage to surface structures, but would certainly be felt by the local population.
The Unrealised Value of Human Motion--"Moving Back to Movement!"
ERIC Educational Resources Information Center
Dodd, Graham D.
2015-01-01
The unrealised and under-estimated value of human motion in human development, functioning and learning is the central cause for its devaluation in Australian society. This paper provides a greater insight into why human motion has high value and should be utilised more in advocacy and implementation in health and education, particularly school…
Selectivity to Translational Egomotion in Human Brain Motion Areas
Pitzalis, Sabrina; Sdoia, Stefano; Bultrini, Alessandro; Committeri, Giorgia; Di Russo, Francesco; Fattori, Patrizia; Galletti, Claudio; Galati, Gaspare
2013-01-01
The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulations in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an Intra-Parietal Sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response for the translational flow which was maximum in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about the near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object-motion from self-motion, it could provide information about location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment. PMID:23577096
NASA Technical Reports Server (NTRS)
Zaychik, Kirill; Cardullo, Frank; George, Gary; Kelly, Lon C.
2009-01-01
In order to use the Hess Structural Model to predict the need for certain cueing systems, George and Cardullo significantly expanded it by adding motion feedback to the model and incorporating models of the motion system dynamics, the motion cueing algorithm and a vestibular system. This paper proposes a methodology to evaluate the effectiveness of these innovations by performing a comparative analysis of the model performance with and without the expanded motion feedback. The proposed methodology is composed of two stages. The first stage involves fine-tuning parameters of the original Hess structural model in order to match the actual control behavior recorded during the experiments at the NASA Visual Motion Simulator (VMS) facility. The parameter tuning procedure utilizes a new automated parameter identification technique, which was developed at the Man-Machine Systems Lab at SUNY Binghamton. In the second stage of the proposed methodology, the expanded motion feedback is added to the structural model. The resulting performance of the model is then compared to that of the original one. As proposed by Hess, metrics to evaluate the performance of the models include comparison against the crossover-model standards imposed on the crossover frequency and phase margin of the overall man-machine system. Preliminary results indicate the advantage of having the model of the motion system and motion cueing incorporated into the model of the human operator. It is also demonstrated that the crossover frequency and the phase margin of the expanded model are well within the limits imposed by the crossover model.
Applying Simulated In Vivo Motions to Measure Human Knee and ACL Kinetics
Herfat, Safa T.; Boguszewski, Daniel V.; Shearn, Jason T.
2013-01-01
Patients frequently experience anterior cruciate ligament (ACL) injuries but current ACL reconstruction strategies do not restore the native biomechanics of the knee, which can contribute to the early onset of osteoarthritis in the long term. To design more effective treatments, investigators must first understand normal in vivo knee function for multiple activities of daily living (ADLs). While the 3D kinematics of the human knee have been measured for various ADLs, the 3D kinetics cannot be directly measured in vivo. Alternatively, the 3D kinetics of the knee and its structures can be measured in an animal model by simulating and applying subject-specific in vivo joint motions to a joint using robotics. However, a suitable biomechanical surrogate should first be established. This study was designed to apply a simulated human in vivo motion to human knees to measure the kinetics of the human knee and ACL. In pursuit of establishing a viable biomechanical surrogate, a simulated in vivo ovine motion was also applied to human knees to compare the loads produced by the human and ovine motions. The motions from the two species produced similar kinetics in the human knee and ACL. The only significant difference was the intact knee compression force produced by the two input motions. PMID:22227973
A Regulatory Switch Alters Chromosome Motions at the Metaphase to Anaphase Transition
Su, Kuan-Chung; Barry, Zachary; Schweizer, Nina; Maiato, Helder; Bathe, Mark; Cheeseman, Iain McPherson
2016-01-01
Summary To achieve chromosome segregation during mitosis, sister chromatids must undergo a dramatic change in their behavior to switch from balanced oscillations at the metaphase plate to directed poleward motion during anaphase. However, the factors that alter chromosome behavior at the metaphase-to-anaphase transition remain incompletely understood. Here, we perform time-lapse imaging to analyze anaphase chromosome dynamics in human cells. Using multiple directed biochemical, genetic, and physical perturbations, our results demonstrate that differences in the global phosphorylation states between metaphase and anaphase are the major determinant of chromosome motion dynamics. Indeed, causing a mitotic phosphorylation state to persist into anaphase produces dramatic metaphase-like oscillations. These induced oscillations depend on both kinetochore-derived and polar ejection forces that oppose poleward motion. Thus, our analysis of anaphase chromosome motion reveals that dephosphorylation of multiple mitotic substrates is required to suppress metaphase chromosome oscillatory motions and achieve directed poleward motion for successful chromosome segregation. PMID:27829144
The MPI Emotional Body Expressions Database for Narrative Scenarios
Volkova, Ekaterina; de la Rosa, Stephan; Bülthoff, Heinrich H.; Mohler, Betty
2014-01-01
Emotion expression in human-human interaction takes place via various types of information, including body motion. Research on the perceptual-cognitive mechanisms underlying the processing of natural emotional body language can benefit greatly from datasets of natural emotional body expressions that facilitate stimulus manipulation and analysis. The existing databases have so far focused on few emotion categories which display predominantly prototypical, exaggerated emotion expressions. Moreover, many of these databases consist of video recordings which limit the ability to manipulate and analyse the physical properties of these stimuli. We present a new database consisting of a large set (over 1400) of natural emotional body expressions typical of monologues. To achieve close-to-natural emotional body expressions, amateur actors were narrating coherent stories while their body movements were recorded with motion capture technology. The resulting 3-dimensional motion data recorded at a high frame rate (120 frames per second) provides fine-grained information about body movements and allows the manipulation of movement on a body joint basis. For each expression it gives the positions and orientations in space of 23 body joints for every frame. We report the results of physical motion properties analysis and of an emotion categorisation study. The reactions of observers from the emotion categorisation study are included in the database. Moreover, we recorded the intended emotion expression for each motion sequence from the actor to allow for investigations regarding the link between intended and perceived emotions. The motion sequences along with the accompanying information are made available in a searchable MPI Emotional Body Expression Database. We hope that this database will enable researchers to study expression and perception of naturally occurring emotional body expressions in greater depth. PMID:25461382
Sparse Coding of Natural Human Motion Yields Eigenmotions Consistent Across People
NASA Astrophysics Data System (ADS)
Thomik, Andreas; Faisal, A. Aldo
2015-03-01
Providing a precise mathematical description of the structure of natural human movement is a challenging problem. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of spatial and temporal structure of behaviour observed in daily life. In perception, the analysis of natural scenes has shown that sparse codes of such scenes are information theoretic efficient descriptors with direct neuronal correlates. Translating from perception to action, we identify a generative model of movement generation by the human motor system. Using wearable full-hand motion capture, we measure the digit movement of the human hand in daily life. We learn a dictionary of "eigenmotions" which we use for sparse encoding of the movement data. We show that the dictionaries are generally well preserved across subjects with small deviations accounting for individuality of the person and variability in tasks. Further, the dictionary elements represent motions which can naturally describe hand movements. Our findings suggest the motor system can compose complex movement behaviours out of the spatially and temporally sparse activation of "eigenmotion" neurons, and is consistent with data on grasp-type specificity of specialised neurons in the premotor cortex. Andreas is supported by the Luxemburg Research Fund (1229297).
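A minimal sketch of sparse dictionary learning on pose data, assuming a matrix of motion frames (rows) by joint-angle channels (columns); the component count, sparsity penalty, and synthetic data are illustrative, not the authors' settings or recordings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy stand-in for motion-capture data: 5000 frames x 22 joint-angle channels.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 22))

# Learn an overcomplete dictionary of "eigenmotion"-like atoms and sparse codes.
learner = MiniBatchDictionaryLearning(n_components=40, alpha=1.0,
                                      batch_size=256, random_state=0)
codes = learner.fit_transform(X)          # sparse activation of atoms per frame
atoms = learner.components_               # dictionary atoms (40 x 22)

# Sparsity check: average number of active atoms per frame.
print("mean active atoms per frame:", (np.abs(codes) > 1e-8).sum(axis=1).mean())
```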
Spatial Map of Synthesized Criteria for the Redundancy Resolution of Human Arm Movements.
Li, Zhi; Milutinovic, Dejan; Rosen, Jacob
2015-11-01
The kinematic redundancy of the human arm enables the elbow position to rotate about the axis going through the shoulder and wrist, which results in infinite possible arm postures when the arm reaches to a target in a 3-D workspace. To infer the control strategy the human motor system uses to resolve redundancy in reaching movements, this paper compares five redundancy resolution criteria and evaluates their arm posture prediction performance using data on healthy human motion. Two synthesized criteria are developed to provide better real-time arm posture prediction than the five individual criteria. Of these two, the criterion synthesized using an exponential method predicts the arm posture more accurately than that using a least squares approach, and therefore is preferable for inferring the contributions of the individual criteria to motor control during reaching movements. As a methodology contribution, this paper proposes a framework to compare and evaluate redundancy resolution criteria for arm motion control. A cluster analysis which associates criterion contributions with regions of the workspace provides a guideline for designing a real-time motion control system applicable to upper-limb exoskeletons for stroke rehabilitation.
Self-motion facilitates echo-acoustic orientation in humans
Wallmeier, Ludwig; Wiegrebe, Lutz
2014-01-01
The ability of blind humans to navigate complex environments through echolocation has received rapidly increasing scientific interest. However, technical limitations have precluded a formal quantification of the interplay between echolocation and self-motion. Here, we use a novel virtual echo-acoustic space technique to formally quantify the influence of self-motion on echo-acoustic orientation. We show that both the vestibular and proprioceptive components of self-motion contribute significantly to successful echo-acoustic orientation in humans: specifically, our results show that vestibular input induced by whole-body self-motion resolves orientation-dependent biases in echo-acoustic cues. Fast head motions, relative to the body, provide additional proprioceptive cues which allow subjects to effectively assess echo-acoustic space referenced against the body orientation. These psychophysical findings clearly demonstrate that human echolocation is well suited to drive precise locomotor adjustments. Our data shed new light on the sensory–motor interactions, and on possible optimization strategies underlying echolocation in humans. PMID:26064556
Galashan, Daniela; Fehr, Thorsten; Kreiter, Andreas K; Herrmann, Manfred
2014-07-11
Initially, human area MT+ was considered a visual area solely processing motion information but further research has shown that it is also involved in various different cognitive operations, such as working memory tasks requiring motion-related information to be maintained or cognitive tasks with implied or expected motion. In the present fMRI study in humans, we focused on MT+ modulation during working memory maintenance using a dynamic shape-tracking working memory task with no motion-related working memory content. Working memory load was systematically varied using complex and simple stimulus material and parametrically increasing retention periods. Activation patterns for the difference between retention of complex and simple memorized stimuli were examined in order to preclude that the reported effects are caused by differences in retrieval. Conjunction analysis over all delay durations for the maintenance of complex versus simple stimuli demonstrated a wide-spread activation pattern. Percent signal change (PSC) in area MT+ revealed a pattern with higher values for the maintenance of complex shapes compared to the retention of a simple circle and with higher values for increasing delay durations. The present data extend previous knowledge by demonstrating that visual area MT+ presents a brain activity pattern usually found in brain regions that are actively involved in working memory maintenance.
Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka
2014-02-21
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot currently be used as a professional tracking system.
ERIC Educational Resources Information Center
McPherson, Moira N.; Marsh, Pamela K.; Montelpare, William J.; Van Barneveld, Christina; Zerpa, Carlos E.
2009-01-01
Background: Wizards of Motion is a program of curriculum delivery through which experts in Kinesiology introduce grade 7 students to applications of physics for human movement. The program is linked closely to Ministry of Education curriculum requirements but includes human movement applications and data analysis experiences. Purpose: The purpose…
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1990-01-01
Various papers on human and machine strategies in sensor fusion are presented. The general topics addressed include: active vision, measurement and analysis of visual motion, decision models for sensor fusion, implementation of sensor fusion algorithms, applying sensor fusion to image analysis, perceptual modules and their fusion, perceptual organization and object recognition, planning and the integration of high-level knowledge with perception, using prior knowledge and context in sensor fusion.
Micro-Doppler analysis of multiple frequency continuous wave radar signatures
NASA Astrophysics Data System (ADS)
Anderson, Michael G.; Rogers, Robert L.
2007-04-01
Micro-Doppler refers to Doppler scattering returns produced by non-rigid-body motion. Micro-Doppler gives rise to many detailed radar image features in addition to those associated with bulk target motion. Targets of different classes (for example, humans, animals, and vehicles) produce micro-Doppler images that are often distinguishable even by non-expert observers. Micro-Doppler features have great potential for use in automatic target classification algorithms. Although the potential benefit of using micro-Doppler in classification algorithms is high, relatively little experimental (non-synthetic) micro-Doppler data exists. Much of the existing experimental data comes from highly cooperative targets (human or vehicle targets directly approaching the radar). This research involved field data collection and analysis of micro-Doppler radar signatures from non-cooperative targets. The data were collected using a low-cost X-band multiple-frequency continuous-wave (MFCW) radar with three transmit frequencies. The collected MFCW radar signatures contain data from humans, vehicles, and animals. The presented data include micro-Doppler signatures previously unavailable in the literature, such as crawling humans and various animal species. The animal micro-Doppler signatures include deer, dog, and goat datasets. This research focuses on the analysis of micro-Doppler from non-cooperative targets approaching the radar at various angles, maneuvers, and postures.
The validation of a human force model to predict dynamic forces resulting from multi-joint motions
NASA Technical Reports Server (NTRS)
Pandya, Abhilash K.; Maida, James C.; Aldridge, Ann M.; Hasson, Scott M.; Woolford, Barbara J.
1992-01-01
The development and validation of a dynamic strength model for humans, based on empirical data, are examined. The shoulder, elbow, and wrist joints were characterized in terms of maximum isolated torque as a function of position and velocity in all rotational planes. These data were reduced by a least-squares regression technique into a table of single-variable second-degree polynomial equations giving torque as a function of position and velocity. The isolated-joint torque equations were then used to compute forces resulting from a composite motion, in this case a ratchet-wrench push-and-pull operation. A comparison of the model's predictions with the measured values for the composite motion indicates that forces derived from a composite motion of joints (ratcheting) can be predicted from isolated-joint measures. Calculated t values comparing model and measured values for 14 subjects were well within statistically acceptable limits, and regression analysis revealed coefficients of variation between predicted and measured values of 0.72 to 0.80.
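A minimal sketch of the kind of least-squares reduction described above, fitting torque against sampled joint position and velocity; for brevity it collapses the fit into one two-variable quadratic surface rather than the paper's table of single-variable polynomials, and the data are synthetic.

```python
import numpy as np

# Illustrative isolated-joint data: torque (Nm) sampled over joint angle (rad)
# and angular velocity (rad/s).
rng = np.random.default_rng(1)
angle = rng.uniform(0.0, 2.0, 200)
velocity = rng.uniform(-3.0, 3.0, 200)
torque = 40 - 5 * angle**2 - 2 * velocity + rng.normal(0, 1, 200)  # synthetic

# Second-degree polynomial model fit by least squares:
# tau ~ c0 + c1*q + c2*q^2 + c3*qdot + c4*qdot^2
A = np.column_stack([np.ones_like(angle), angle, angle**2, velocity, velocity**2])
coeffs, *_ = np.linalg.lstsq(A, torque, rcond=None)

def predicted_torque(q, qdot):
    """Evaluate the fitted torque surface at a given position and velocity."""
    return coeffs @ np.array([1.0, q, q**2, qdot, qdot**2])

print("predicted torque at q=1.0 rad, qdot=0.5 rad/s:", round(predicted_torque(1.0, 0.5), 2))
```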
Wijenayake, Udaya; Park, Soon-Yong
2017-01-01
Accurate tracking and modeling of internal and external respiratory motion in the thoracic and abdominal regions of a human body is a highly discussed topic in external beam radiotherapy treatment. Errors in target/normal-tissue delineation and dose calculation, and the increased exposure of healthy tissue to high radiation doses, are some of the problems caused by inaccurate tracking of respiratory motion. Many related works have been introduced for respiratory motion modeling, but most depend heavily on radiography/fluoroscopy imaging, wearable markers or surgically implanted nodes. In this article, we propose a new respiratory motion tracking approach that exploits the advantages of an RGB-D camera. First, we create a patient-specific respiratory motion model using principal component analysis (PCA), removing the spatial and temporal noise of the input depth data. This model is then used for real-time external respiratory motion measurement with high accuracy. Additionally, we introduce a marker-based depth-frame registration technique that limits the measurement area to an anatomically consistent region and helps to handle patient movements during treatment. We achieved a correlation of 0.97 against a spirometer and an average error of 0.53 mm against a laser line-scanning result taken as the ground truth. As future work, we will use this accurate measurement of external respiratory motion to generate a correlated motion model describing the movement of internal tumors. PMID:28792468
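A minimal sketch of PCA-based denoising of a depth sequence, assuming each frame is flattened into a row of a data matrix; the number of retained components and the synthetic breathing signal are illustrative, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy depth sequence: 300 frames of a 64x64 chest/abdomen region,
# flattened to rows of a (frames x pixels) matrix.
rng = np.random.default_rng(2)
t = np.linspace(0, 30, 300)
breathing = np.sin(2 * np.pi * 0.25 * t)             # roughly 15 breaths/min
frames = np.outer(breathing, rng.uniform(0.5, 1.0, 64 * 64))
frames += rng.normal(0, 0.05, frames.shape)          # sensor noise

# Keep only the leading components that capture the respiratory mode.
pca = PCA(n_components=3)
scores = pca.fit_transform(frames)        # per-frame respiratory "signal"
denoised = pca.inverse_transform(scores)  # low-rank, denoised depth frames

print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
```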
Biofidelic Human Activity Modeling and Simulation with Large Variability
2014-11-25
A systematic approach was developed for biofidelic human activity modeling and simulation by using body scan data and motion capture data to... replicate a human activity in 3D space. Since technologies for simultaneously capturing human motion and dynamic shapes are not yet ready for practical use, a... that can replicate a human activity in 3D space with the true shape and true motion of a human. Using this approach, a model library was built to
Hybrid Orientation Based Human Limbs Motion Tracking Method
Glonek, Grzegorz; Wojciechowski, Adam
2017-01-01
One of the key technologies behind human–machine interaction and human motion diagnosis is limb motion tracking. To make limb tracking efficient, it must be able to estimate a precise and unambiguous position of each tracked human joint and the resulting body part pose. In recent years, body pose estimation has become very popular and broadly available to home users because of easy access to cheap tracking devices. Its robustness can be improved by fusing data from different tracking modes. The paper defines a novel, orientation-based data fusion approach, instead of the position-based approach that dominates the literature, for two classes of tracking devices: depth sensors (i.e., Microsoft Kinect) and inertial measurement units (IMUs). A detailed analysis of their working characteristics allowed us to elaborate a new method that fuses limb orientation data from both devices more precisely and compensates for their imprecision. The paper presents a series of experiments that verified the method's accuracy. This novel approach outperformed the precision of position-based joint tracking, the methods dominating in the literature, by up to 18%. PMID:29232832
Understanding movement control in infants through the analysis of limb intersegmental dynamics.
Schneider, K; Zernicke, R F; Ulrich, B D; Jensen, J L; Thelen, E
1990-12-01
One important component in the understanding of the control of limb movements is the way in which the central nervous system accounts for joint forces and torques that may be generated not only by muscle actions but by gravity and by passive reactions related to the movements of limb segments. In this study, we asked how the neuromotor system of young infants controls a range of active and passive forces to produce a stereotypic, nonintentional movement. We specifically analyzed limb intersegmental dynamics in spontaneous, cyclic leg movements (kicking) of varying intensity in supine 3-month-old human infants. Using inverse dynamics, we calculated the contributions of active (muscular) and passive (motion-dependent and gravitational) torque components at the hip, knee, and ankle joints from three-dimensional limb kinematics. To calculate joint torques, accurate estimates were needed of the limb's anthropometric parameters, which we determined using a model of the human body. Our analysis of limb intersegmental dynamics explicitly quantified the complex interplay of active and passive forces producing the simple, involuntary kicking movements commonly seen in 3-month-old infants. Our results revealed that in nonvigorous kicks, hip joint reversal was the result of an extensor torque due to gravity, opposed by the combined flexor effect of the muscle torque and the total motion-dependent torque. The total motion-dependent torque increased as a hip flexor torque in more vigorous kicks; an extensor muscle torque was necessary to counteract the flexor influences of the total motion-dependent torque and, in the case of large ranges of motion, a flexor gravity torque as well. Thus, with changing passive torque influences due to motions of the linked segments, the muscle torques were adjusted to produce a net torque to reverse the kicking motion. As a consequence, despite considerable heterogeneity in the intensity, range of motion, coordination, and movement context of each kick, smooth trajectories resulted from the muscle torque, counteracting and complementing not only gravity but also the motion-dependent torques generated by movement of the linked segments.
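A minimal single-joint sketch of the torque partition used in this kind of inverse-dynamics analysis (net torque = muscle + gravitational + motion-dependent), applied to a planar rigid segment; the segment parameters and motion are illustrative, not infant anthropometrics or recorded kinematics.

```python
import numpy as np

# Planar single-segment "leg" rotating about the hip: theta measured from vertical.
m, L = 1.2, 0.20                 # illustrative segment mass (kg) and length (m)
I = m * L**2 / 3.0               # moment of inertia about the proximal joint
g = 9.81
r = L / 2.0                      # distance from joint to segment center of mass

t = np.linspace(0, 1, 500)
theta = 0.6 * np.sin(2 * np.pi * 1.5 * t)    # kicking-like oscillation (rad)
omega = np.gradient(theta, t)                # angular velocity
alpha = np.gradient(omega, t)                # angular acceleration

net_torque = I * alpha                       # Newton-Euler, single joint
gravity_torque = -m * g * r * np.sin(theta)  # passive gravitational torque
# For an isolated single segment there are no interaction torques from other
# segments, so the "motion-dependent" term is zero here; in the multi-segment
# infant-leg model it comes from the linked-segment equations of motion.
muscle_torque = net_torque - gravity_torque

print("peak muscle torque (Nm):", np.abs(muscle_torque).max().round(3))
```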
Analyzing the Effects of Human-Aware Motion Planning on Close-Proximity Human–Robot Collaboration
Shah, Julie A.
2015-01-01
Objective: The objective of this work was to examine human response to motion-level robot adaptation to determine its effect on team fluency, human satisfaction, and perceived safety and comfort. Background: The evaluation of human response to adaptive robotic assistants has been limited, particularly in the realm of motion-level adaptation. The lack of true human-in-the-loop evaluation has made it impossible to determine whether such adaptation would lead to efficient and satisfying human–robot interaction. Method: We conducted an experiment in which participants worked with a robot to perform a collaborative task. Participants worked with an adaptive robot incorporating human-aware motion planning and with a baseline robot using shortest-path motions. Team fluency was evaluated through a set of quantitative metrics, and human satisfaction and perceived safety and comfort were evaluated through questionnaires. Results: When working with the adaptive robot, participants completed the task 5.57% faster, with 19.9% more concurrent motion, 2.96% less human idle time, 17.3% less robot idle time, and a 15.1% greater separation distance. Questionnaire responses indicated that participants felt safer and more comfortable when working with an adaptive robot and were more satisfied with it as a teammate than with the standard robot. Conclusion: People respond well to motion-level robot adaptation, and significant benefits can be achieved from its use in terms of both human–robot team fluency and human worker satisfaction. Application: Our conclusion supports the development of technologies that could be used to implement human-aware motion planning in collaborative robots and the use of this technique for close-proximity human–robot collaboration. PMID:25790568
An octahedral shear strain-based measure of SNR for 3D MR elastography
NASA Astrophysics Data System (ADS)
McGarry, M. D. J.; Van Houten, E. E. W.; Perriñez, P. R.; Pattison, A. J.; Weaver, J. B.; Paulsen, K. D.
2011-07-01
A signal-to-noise ratio (SNR) measure based on the octahedral shear strain (the maximum shear strain in any plane for a 3D state of strain) is presented for magnetic resonance elastography (MRE), where motion-based SNR measures are commonly used. The shear strain, γ, is directly related to the shear modulus, μ, through the definition of shear stress, τ = μγ. Therefore, noise in the strain is the important factor in determining the quality of motion data, rather than the noise in the motion. Motion and strain SNR measures were found to be correlated for MRE of gelatin phantoms and the human breast. Analysis of the stiffness distributions of phantoms reconstructed from the measured motion data revealed a threshold for both strain and motion SNR where MRE stiffness estimates match independent mechanical testing. MRE of the feline brain showed significantly less correlation between the two SNR measures. The strain SNR measure had a threshold above which the reconstructed stiffness values were consistent between cases, whereas the motion SNR measure did not provide a useful threshold, primarily due to rigid body motion effects.
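A minimal sketch of the octahedral shear strain computed from a 3-D strain tensor via its principal strains, the quantity on which the SNR measure above is based; the example tensor is arbitrary.

```python
import numpy as np

def octahedral_shear_strain(strain):
    """Octahedral shear strain of a symmetric 3x3 strain tensor.

    gamma_oct = (2/3) * sqrt((e1-e2)^2 + (e2-e3)^2 + (e3-e1)^2),
    where e1, e2, e3 are the principal strains (eigenvalues).
    """
    e1, e2, e3 = np.linalg.eigvalsh(strain)
    return (2.0 / 3.0) * np.sqrt((e1 - e2)**2 + (e2 - e3)**2 + (e3 - e1)**2)

# Arbitrary small-strain example (symmetric tensor).
eps = np.array([[1.0e-3, 2.0e-4, 0.0],
                [2.0e-4, -5.0e-4, 1.0e-4],
                [0.0,    1.0e-4, 2.0e-4]])
print(octahedral_shear_strain(eps))
```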
Do rhesus monkeys (Macaca mulatta) perceive illusory motion?
Agrillo, Christian; Gori, Simone; Beran, Michael J
2015-07-01
During the last decade, visual illusions have been used repeatedly to understand similarities and differences in visual perception of human and non-human animals. However, nearly all studies have focused only on illusions not related to motion perception, and to date, it is unknown whether non-human primates perceive any kind of motion illusion. In the present study, we investigated whether rhesus monkeys (Macaca mulatta) perceived one of the most popular motion illusions in humans, the Rotating Snake illusion (RSI). To this purpose, we set up four experiments. In Experiment 1, subjects initially were trained to discriminate static versus dynamic arrays. Once reaching the learning criterion, they underwent probe trials in which we presented the RSI and a control stimulus identical in overall configuration with the exception that the order of the luminance sequence was changed in a way that no apparent motion is perceived by humans. The overall performance of monkeys indicated that they spontaneously classified RSI as a dynamic array. Subsequently, we tested adult humans in the same task with the aim of directly comparing the performance of human and non-human primates (Experiment 2). In Experiment 3, we found that monkeys can be successfully trained to discriminate between the RSI and a control stimulus. Experiment 4 showed that a simple change in luminance sequence in the two arrays could not explain the performance reported in Experiment 3. These results suggest that some rhesus monkeys display a human-like perception of this motion illusion, raising the possibility that the neurocognitive systems underlying motion perception may be similar between human and non-human primates.
Stabilizing skateboard speed-wobble with reflex delay.
Varszegi, Balazs; Takacs, Denes; Stepan, Gabor; Hogan, S John
2016-08-01
A simple mechanical model of the skateboard-skater system is analysed, in which the effect of human control is considered by means of a linear proportional-derivative (PD) controller with delay. The equations of motion of this non-holonomic system are neutral delay-differential equations. A linear stability analysis of the rectilinear motion is carried out analytically. It is shown how to vary the control gains with respect to the speed of the skateboard to stabilize the uniform motion. The critical reflex delay of the skater is determined as the function of the speed. Based on this analysis, we present an explanation for the linear instability of the skateboard-skater system at high speed. Moreover, the advantages of standing ahead of the centre of the board are demonstrated from the viewpoint of reflex delay and control gain sensitivity. © 2016 The Author(s).
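A toy sketch of a proportional-derivative controller acting through a reflex delay on a simple unstable second-order plant, to illustrate how delayed feedback is simulated; this is not the paper's non-holonomic skateboard model, and all gains, delays, and the plant itself are illustrative.

```python
import numpy as np

def simulate_delayed_pd(P=6.0, D=1.5, delay=0.1, T=10.0, dt=1e-3):
    """Euler simulation of x'' = x + u with u(t) = -P*x(t-delay) - D*x'(t-delay)."""
    n = int(T / dt)
    lag = int(delay / dt)
    x = np.zeros(n)
    v = np.zeros(n)
    x[0] = 0.01                                  # small initial perturbation
    for k in range(n - 1):
        xd = x[k - lag] if k >= lag else x[0]    # delayed state seen by the controller
        vd = v[k - lag] if k >= lag else 0.0
        u = -P * xd - D * vd
        a = x[k] + u                             # unstable open loop (x'' = x)
        x[k + 1] = x[k] + dt * v[k]
        v[k + 1] = v[k] + dt * a
    return x

# Sweep the reflex delay and observe how the closed-loop response degrades.
for tau in (0.05, 0.2, 0.4, 0.8):
    final = abs(simulate_delayed_pd(delay=tau)[-1])
    print(f"delay {tau:.2f} s -> |x(T)| = {final:.3g}")
```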
Subtle In-Scanner Motion Biases Automated Measurement of Brain Anatomy From In Vivo MRI
Alexander-Bloch, Aaron; Clasen, Liv; Stockman, Michael; Ronan, Lisa; Lalonde, Francois; Giedd, Jay; Raznahan, Armin
2016-01-01
While the potential for small amounts of motion in functional magnetic resonance imaging (fMRI) scans to bias the results of functional neuroimaging studies is well appreciated, the impact of in-scanner motion on morphological analysis of structural MRI is relatively under-studied. Even among “good quality” structural scans, there may be systematic effects of motion on measures of brain morphometry. In the present study, the subjects’ tendency to move during fMRI scans, acquired in the same scanning sessions as their structural scans, yielded a reliable, continuous estimate of in-scanner motion. Using this approach within a sample of 127 children, adolescents, and young adults, significant relationships were found between this measure and estimates of cortical gray matter volume and mean curvature, as well as trend-level relationships with cortical thickness. Specifically, cortical volume and thickness decreased with greater motion, and mean curvature increased. These effects of subtle motion were anatomically heterogeneous, were present across different automated imaging pipelines, showed convergent validity with effects of frank motion assessed in a separate sample of 274 scans, and could be demonstrated in both pediatric and adult populations. Thus, using different motion assays in two large non-overlapping sets of structural MRI scans, convergent evidence showed that in-scanner motion—even at levels which do not manifest in visible motion artifact—can lead to systematic and regionally specific biases in anatomical estimation. These findings have special relevance to structural neuroimaging in developmental and clinical datasets, and inform ongoing efforts to optimize neuroanatomical analysis of existing and future structural MRI datasets in non-sedated humans. PMID:27004471
Motion Planning and Synthesis of Human-Like Characters in Constrained Environments
NASA Astrophysics Data System (ADS)
Zhang, Liangjun; Pan, Jia; Manocha, Dinesh
We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.
Huebsch, Nathaniel; Loskill, Peter; Mandegar, Mohammad A.; Marks, Natalie C.; Sheehan, Alice S.; Ma, Zhen; Mathur, Anurag; Nguyen, Trieu N.; Yoo, Jennie C.; Judge, Luke M.; Spencer, C. Ian; Chukka, Anand C.; Russell, Caitlin R.; So, Po-Lin
2015-01-01
Contractile motion is the simplest metric of cardiomyocyte health in vitro, but unbiased quantification is challenging. We describe a rapid automated method, requiring only standard video microscopy, to analyze the contractility of human-induced pluripotent stem cell-derived cardiomyocytes (iPS-CM). New algorithms for generating and filtering motion vectors combined with a newly developed isogenic iPSC line harboring genetically encoded calcium indicator, GCaMP6f, allow simultaneous user-independent measurement and analysis of the coupling between calcium flux and contractility. The relative performance of these algorithms, in terms of improving signal to noise, was tested. Applying these algorithms allowed analysis of contractility in iPS-CM cultured over multiple spatial scales from single cells to three-dimensional constructs. This open source software was validated with analysis of isoproterenol response in these cells, and can be applied in future studies comparing the drug responsiveness of iPS-CM cultured in different microenvironments in the context of tissue engineering. PMID:25333967
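A minimal sketch of extracting frame-to-frame motion vectors with dense optical flow (OpenCV's Farneback method), as one generic way to quantify contractile motion from video; this is not the authors' algorithm or filtering pipeline, and the two synthetic frames stand in for consecutive video frames.

```python
import cv2
import numpy as np

# Two synthetic 8-bit grayscale frames: a bright blob that shifts 3 px between
# frames, standing in for consecutive frames of a beating cardiomyocyte culture.
frame0 = np.zeros((128, 128), dtype=np.uint8)
frame1 = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(frame0, (60, 64), 15, 255, -1)
cv2.circle(frame1, (63, 64), 15, 255, -1)

# Dense Farneback optical flow: one 2-D motion vector per pixel.
flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

speed = np.linalg.norm(flow, axis=2)     # per-pixel motion magnitude (px/frame)
print("mean motion (px/frame):", speed.mean().round(3))
print("max motion (px/frame):", speed.max().round(2))
```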
Evaluation of the leap motion controller as a new contact-free pointing device.
Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard
2014-12-24
This paper presents a Fitts' law-based analysis of the user's performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC and 2.8% for the mouse device, movement times twice as large as for a mouse device and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC.
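A minimal sketch of the Fitts' law analysis underlying such comparisons: compute the Shannon index of difficulty for each target condition and fit movement time as MT = a + b*ID; the data points below are illustrative, not the study's measurements.

```python
import numpy as np

# Illustrative (distance D, width W, mean movement time MT) per condition.
D = np.array([80.0, 160.0, 320.0, 640.0])      # target distance (px)
W = np.array([20.0, 20.0, 40.0, 40.0])         # target width (px)
MT = np.array([0.45, 0.58, 0.62, 0.78])        # mean movement time (s)

ID = np.log2(D / W + 1.0)                      # Shannon index of difficulty (bits)

# Least-squares fit of MT = a + b * ID.
b, a = np.polyfit(ID, MT, 1)                   # polyfit returns slope first
throughput = (ID / MT).mean()                  # simple mean-of-means throughput (bits/s)

print(f"a = {a:.3f} s, b = {b:.3f} s/bit, throughput ~ {throughput:.2f} bits/s")
```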
Khan, Hassan Aqeel; Gore, Amit; Ashe, Jeff; Chakrabartty, Shantanu
2017-07-01
Physical activities are known to introduce motion artifacts in electrical impedance plethysmographic (EIP) sensors. Existing literature considers motion artifacts as a nuisance and generally discards the artifact containing portion of the sensor output. This paper examines the notion of exploiting motion artifacts for detecting the underlying physical activities which give rise to the artifacts in question. In particular, we investigate whether the artifact pattern associated with a physical activity is unique; and does it vary from one human-subject to another? Data was recorded from 19 adult human-subjects while conducting 5 distinct, artifact inducing, activities. A set of novel features based on the time-frequency signatures of the sensor outputs are then constructed. Our analysis demonstrates that these features enable high accuracy detection of the underlying physical activity. Using an SVM classifier we are able to differentiate between 5 distinct physical activities (coughing, reaching, walking, eating and rolling-on-bed) with an average accuracy of 85.46%. Classification is performed solely using features designed specifically to capture the time-frequency signatures of different physical activities. This enables us to measure both respiratory and motion information using only one type of sensor. This is in contrast to conventional approaches to physical activity monitoring; which rely on additional hardware such as accelerometers to capture activity information.
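A minimal sketch of the general recipe (time-frequency features from a 1-D sensor stream, then an SVM classifier); the feature choice here, mean log spectrogram power per frequency bin, is an illustrative stand-in rather than the authors' feature set, and the two synthetic "activities" replace real EIP recordings.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 100.0  # assumed sampling rate (Hz)

def tf_features(window):
    """Coarse time-frequency features: mean log spectrogram power per frequency bin."""
    f, t, S = spectrogram(window, fs=fs, nperseg=64, noverlap=32)
    return np.log(S + 1e-12).mean(axis=1)

# Synthetic stand-in dataset: two "activities" with different dominant rhythms.
rng = np.random.default_rng(3)
X, y = [], []
for label, freq in enumerate((1.0, 3.0)):        # e.g. walking vs coughing cadence
    for _ in range(60):
        tt = np.arange(0, 10, 1 / fs)
        sig = np.sin(2 * np.pi * freq * tt) + rng.normal(0, 0.5, tt.size)
        X.append(tf_features(sig))
        y.append(label)
X, y = np.array(X), np.array(y)

clf = SVC(kernel="rbf", C=1.0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```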
Cheng, Jeffrey Tao; Hamade, Mohamad; Merchant, Saumil N.; Rosowski, John J.; Harrington, Ellery; Furlong, Cosme
2013-01-01
Sound-induced motions of the surface of the tympanic membrane (TM) were measured using stroboscopic holography in cadaveric human temporal bones at frequencies between 0.2 and 18 kHz. The results are consistent with the combination of standing-wave-like modal motions and traveling-wave-like motions on the TM surface. The holographic techniques also quantified sound-induced displacements of the umbo of the malleus, as well as volume velocity of the TM. These measurements were combined with sound-pressure measurements near the TM to compute middle-ear input impedance and power reflectance at the TM. The results are generally consistent with other published data. A phenomenological model that behaved qualitatively like the data was used to quantify the relative magnitude and spatial frequencies of the modal and traveling-wave-like displacement components on the TM surface. This model suggests the modal magnitudes are generally larger than those of the putative traveling waves, and the computed wave speeds are much slower than wave speeds predicted by estimates of middle-ear delay. While the data are inconsistent with simple modal displacements of the TM, an alternate model based on the combination of modal motions in a lossy membrane can also explain these measurements without invoking traveling waves. PMID:23363110
NASA Astrophysics Data System (ADS)
Fauziah; Wibowo, E. P.; Madenda, S.; Hustinawati
2018-03-01
Capturing and recording human motion is mostly done for sports, health, animated films, crime detection, and robotic applications. This study combines background subtraction with a back-propagation neural network in order to identify similar movements. The acquisition process used an 8 MP camera recording in MP4 format for 48 seconds at 30 frames per second; extraction of the video produced 1444 frames used for the hand-motion identification process. The image-processing stages are segmentation, feature extraction, and identification. Segmentation uses background subtraction; the extracted features are basically used to distinguish one object from another. Feature extraction is performed with motion-based morphological analysis using the seven invariant (Hu) moments, producing four motion classes: no object, hand down, hand to the side, and hands up. The identification process recognizes the hand movement using seven inputs. Testing and training with a variety of parameters showed that an architecture with one hundred hidden neurons provides the highest accuracy. This architecture is used to propagate the input values during system implementation in the user interface. Identification of the type of human movement achieved a highest accuracy of 98.5447%. The training process is carried out to obtain the best results.
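A minimal sketch of the segmentation and feature-extraction steps described above, using OpenCV's MOG2 background subtractor and the seven Hu invariant moments of the foreground mask; the synthetic frames and subtractor settings are illustrative, and the neural-network classifier is omitted.

```python
import cv2
import numpy as np

# MOG2 background subtractor, one common background-subtraction choice.
subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=25)

def hu_features(frame_gray):
    """Foreground mask via background subtraction, then the 7 Hu invariant moments."""
    mask = subtractor.apply(frame_gray)
    mask = cv2.medianBlur(mask, 5)                    # light cleanup of the mask
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    # Log-scale the Hu moments, a common normalization before classification.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Synthetic frames: static background, then a bright "hand" blob that moves.
background = np.full((120, 160), 30, dtype=np.uint8)
for _ in range(20):                                   # let the model learn the background
    subtractor.apply(background)
for x in (40, 60, 80, 100):
    frame = background.copy()
    cv2.rectangle(frame, (x, 50), (x + 20, 90), 200, -1)
    features = hu_features(frame)

print("7 Hu-moment features of last frame:", np.round(features, 2))
```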
NASA Astrophysics Data System (ADS)
Reverey, Julia F.; Jeon, Jae-Hyung; Bao, Han; Leippe, Matthias; Metzler, Ralf; Selhuber-Unkel, Christine
2015-06-01
Acanthamoebae are free-living protists and human pathogens, whose cellular functions and pathogenicity strongly depend on the transport of intracellular vesicles and granules through the cytosol. Using high-speed live cell imaging in combination with single-particle tracking analysis, we show here that the motion of endogenous intracellular particles in the size range from a few hundred nanometers to several micrometers in Acanthamoeba castellanii is strongly superdiffusive and influenced by cell locomotion, cytoskeletal elements, and myosin II. We demonstrate that cell locomotion significantly contributes to intracellular particle motion, but is clearly not the only origin of superdiffusivity. By analyzing the contribution of microtubules, actin, and myosin II motors we show that myosin II is a major driving force of intracellular motion in A. castellanii. The cytoplasm of A. castellanii is supercrowded with intracellular vesicles and granules, such that significant intracellular motion can only be achieved by actively driven motion, while purely thermally driven diffusion is negligible.
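A minimal sketch of the single-particle-tracking analysis alluded to above: compute the time-averaged mean squared displacement (MSD) of a 2-D trajectory and fit its scaling exponent, where an exponent above 1 indicates superdiffusion. The trajectory here is synthetic, not tracked granule data.

```python
import numpy as np

def time_averaged_msd(xy, max_lag):
    """Time-averaged MSD of a trajectory xy (N x 2) for lags 1..max_lag."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = xy[lag:] - xy[:-lag]
        msd[lag - 1] = (d ** 2).sum(axis=1).mean()
    return msd

# Synthetic "actively driven" trajectory: directed drift plus thermal-like noise.
rng = np.random.default_rng(4)
steps = 0.05 * np.ones((2000, 2)) + rng.normal(0, 0.1, (2000, 2))
xy = np.cumsum(steps, axis=0)

lags = np.arange(1, 101)
msd = time_averaged_msd(xy, 100)
alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)   # MSD ~ lag**alpha
print(f"anomalous exponent alpha = {alpha:.2f} (alpha > 1 -> superdiffusive)")
```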
Vestibular nuclei and cerebellum put visual gravitational motion in context.
Miller, William L; Maffei, Vincenzo; Bosco, Gianfranco; Iosa, Marco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco
2008-04-01
Animal survival in the forest, and human success on the sports field, often depend on the ability to seize a target on the fly. All bodies fall at the same rate in the gravitational field, but the corresponding retinal motion varies with apparent viewing distance. How then does the brain predict time-to-collision under gravity? A perspective context from natural or pictorial settings might afford accurate predictions of gravity's effects via the recovery of an environmental reference from the scene structure. We report that embedding motion in a pictorial scene facilitates interception of gravitational acceleration over unnatural acceleration, whereas a blank scene eliminates such bias. Functional magnetic resonance imaging (fMRI) revealed blood-oxygen-level-dependent correlates of these visual context effects on gravitational motion processing in the vestibular nuclei and posterior cerebellar vermis. Our results suggest an early stage of integration of high-level visual analysis with gravity-related motion information, which may represent the substrate for perceptual constancy of ubiquitous gravitational motion.
Chen, Yen-Yin; Chen, Weng-Pin; Chang, Hao-Hueng; Huang, Shih-Hao; Lin, Chun-Pin
2014-02-01
The aim of this study was to develop a novel dental implant abutment with a micro-motion mechanism that imitates the biomechanical behavior of the periodontal ligament, with the goal of increasing the long-term survival rate of dental implants. Computer-aided design software was used to design a novel dental implant abutment with an internal resilient component with a micro-motion capability. The feasibility of the novel system was investigated via finite element analysis. Then, a prototype of the novel dental implant abutment was fabricated, and the mechanical behavior was evaluated. The results of the mechanical tests and finite element analysis confirmed that the novel dental implant abutment possessed the anticipated micro-motion capability. Furthermore, the nonlinear force-displacement behavior apparent in this micro-motion mechanism imitated the movement of a human tooth. The slope of the force-displacement curve of the novel abutment was approximately 38.5 N/mm before the 0.02-mm displacement and approximately 430 N/mm after the 0.03-mm displacement. The novel dental implant abutment with a micro-motion mechanism actually imitated the biomechanical behavior of a natural tooth and provided resilient function, sealing, a non-separation mechanism, and ease-of-use. Copyright © 2013 Academy of Dental Materials. All rights reserved.
A Bayesian model of stereopsis depth and motion direction discrimination.
Read, J C A
2002-02-01
The extraction of stereoscopic depth from retinal disparity, and motion direction from two-frame kinematograms, requires the solution of a correspondence problem. In previous psychophysical work [Read and Eagle (2000) Vision Res 40: 3345-3358], we compared the performance of the human stereopsis and motion systems with correlated and anti-correlated stimuli. We found that, although the two systems performed similarly for narrow-band stimuli, broadband anti-correlated kinematograms produced a strong perception of reversed motion, whereas the stereograms appeared merely rivalrous. I now model these psychophysical data with a computational model of the correspondence problem based on the known properties of visual cortical cells. Noisy retinal images are filtered through a set of Fourier channels tuned to different spatial frequencies and orientations. Within each channel, a Bayesian analysis incorporating a prior preference for small disparities is used to assess the probability of each possible match. Finally, information from the different channels is combined to arrive at a judgement of stimulus disparity. Each model system--stereopsis and motion--has two free parameters: the amount of noise they are subject to, and the strength of their preference for small disparities. By adjusting these parameters independently for each system, qualitative matches are produced to psychophysical data, for both correlated and anti-correlated stimuli, across a range of spatial frequency and orientation bandwidths. The motion model is found to require much higher noise levels and a weaker preference for small disparities. This makes the motion model more tolerant of poor-quality reverse-direction false matches encountered with anti-correlated stimuli, matching the strong perception of reversed motion that humans experience with these stimuli. In contrast, the lower noise level and tighter prior preference used with the stereopsis model means that it performs close to chance with anti-correlated stimuli, in accordance with human psychophysics. Thus, the key features of the experimental data can be reproduced assuming that the motion system experiences more effective noise than the stereoscopy system and imposes a less stringent preference for small disparities.
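A toy sketch of the core Bayesian step described above: combine a per-disparity match likelihood with a Gaussian prior favoring small disparities and read out the posterior peak. The likelihood here is a simple windowed matching error between two 1-D "retinal" signals, not the paper's multi-channel filter model, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
true_disparity = 4
left = rng.normal(0, 1, 200)
right = np.roll(left, true_disparity) + rng.normal(0, 0.3, 200)  # noisy shifted copy

disparities = np.arange(-15, 16)
sigma_noise, sigma_prior = 0.5, 6.0

# Likelihood of each candidate disparity from the windowed matching error,
# combined with a zero-mean Gaussian prior preferring small disparities.
log_post = np.empty(disparities.size)
for i, d in enumerate(disparities):
    err = left[20:-20] - np.roll(right, -d)[20:-20]   # trim edges to ignore wrap-around
    log_like = -0.5 * np.mean(err ** 2) / sigma_noise ** 2
    log_prior = -0.5 * (d / sigma_prior) ** 2
    log_post[i] = log_like + log_prior

posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum()
print("MAP disparity:", disparities[np.argmax(posterior)])
```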
1990-12-01
ears (tinnitus) and/or a reduced auditory acuity resulted from the dosing. These side effects have been shown to occur in some subjects as a result of... examinations. 5. Complete blood count (CBC). 6. Blood biochemistry screen (Chem 18 including liver function tests). 7. Blood cholesterol and lipids. 8. Chest X... blood lipids and cholesterol, chest X-ray, urinalysis, visual acuity test, vestibular evaluation and liver function studies. Subjects will then take
Dynamic motion planning of 3D human locomotion using gradient-based optimization.
Kim, Hyung Joo; Wang, Qian; Rahmatalla, Salam; Swan, Colby C; Arora, Jasbir S; Abdel-Malek, Karim; Assouline, Jose G
2008-06-01
Since humans can walk with an infinite variety of postures and limb movements, there is no unique solution to the modeling problem to predict human gait motions. Accordingly, we test herein the hypothesis that the redundancy of human walking mechanisms makes solving for human joint profiles and force time histories an indeterminate problem best solved by inverse dynamics and optimization methods. A new optimization-based human-modeling framework is thus described for predicting three-dimensional human gait motions on level and inclined planes. The basic unknowns in the framework are the joint motion time histories of a 25-degree-of-freedom human model and its six global degrees of freedom. The joint motion histories are calculated by minimizing an objective function such as deviation of the trunk from upright posture that relates to the human model's performance. A variety of important constraints are imposed on the optimization problem, including (1) satisfaction of dynamic equilibrium equations by requiring the model's zero moment point (ZMP) to lie within the instantaneous geometrical base of support, (2) foot collision avoidance, (3) limits on ground-foot friction, and (4) vanishing yawing moment. Analytical forms of objective and constraint functions are presented and discussed for the proposed human-modeling framework in which the resulting optimization problems are solved using gradient-based mathematical programming techniques. When the framework is applied to the modeling of bipedal locomotion on level and inclined planes, acyclic human walking motions that are smooth and realistic as opposed to less natural robotic motions are obtained. The aspects of the modeling framework requiring further investigation and refinement, as well as potential applications of the framework in biomechanics, are discussed.
Non-actual motion: phenomenological analysis and linguistic evidence.
Blomberg, Johan; Zlatev, Jordan
2015-09-01
Sentences with motion verbs describing static situations have been seen as evidence that language and cognition are geared toward dynamism and change (Talmy in Toward a cognitive semantics, MIT Press, Cambridge, 2000; Langacker in Concept, image, and symbol: the cognitive basis of grammar, Mouton de Gruyter, Berlin and New York, 1990). Different concepts have been used in the literature, e.g., fictive motion, subjective motion and abstract motion to denote this. Based on phenomenological analysis, we reinterpret such concepts as reflecting different motivations for the use of such constructions (Blomberg and Zlatev in Phenom Cogn Sci 13(3):395-418, 2014). To highlight the multifaceted character of the phenomenon, we propose the concept non-actual motion (NAM), which we argue is more compatible with the situated cognition approach than explanations such as "mental simulation" (e.g., Matlock in Studies in linguistic motivation, Mouton de Gruyter, Berlin, 2004). We investigate the expression of NAM by means of a picture-based elicitation task with speakers of Swedish, French and Thai. Pictures represented figures that either afford human motion or not (±afford); crossed with this, the figure extended either across the picture from a third-person perspective (3 pp) or from a first-person perspective (1 pp). All picture types elicited NAM-sentences with the combination [+afford, 1 pp] producing most NAM-sentences in all three languages. NAM-descriptions also conformed to language-specific patterns for the expression of actual motion. We conclude that NAM shows interaction between pre-linguistic motivations and language-specific conventions.
Scavenging energy from the motion of human lower limbs via a piezoelectric energy harvester
NASA Astrophysics Data System (ADS)
Fan, Kangqi; Yu, Bo; Zhu, Yingmin; Liu, Zhaohui; Wang, Liansong
2017-03-01
Scavenging energy from human motion through piezoelectric transduction has been considered a feasible alternative to batteries for powering portable devices and realizing self-sustained devices. To date, most piezoelectric energy harvesters (PEHs) developed can only collect energy from uni-directional mechanical vibration. This deficiency severely limits their applicability to human motion energy harvesting because human motion involves diverse mechanical motions. In this paper, a novel PEH is proposed to harvest energy from the motion of human lower limbs. This PEH is composed of two piezoelectric cantilever beams, a sleeve and a ferromagnetic ball. The two beams are designed to sense the vibration along the tibial axis and conduct piezoelectric conversion. The ball senses the leg swing and actuates the two beams to vibrate via magnetic coupling. Theoretical and experimental studies indicate that the proposed PEH can scavenge energy from both the vibration and the swing. During each stride, the PEH can produce multiple peaks in voltage output, which is attributed to the superposition of different excitations. Moreover, the root-mean-square (RMS) voltage output of the PEH increases as the walking speed increases from 2 to 8 km/h. In addition, the ultra-low frequencies of human motion are also up-converted by the proposed design.
Adaptive Animation of Human Motion for E-Learning Applications
ERIC Educational Resources Information Center
Li, Frederick W. B.; Lau, Rynson W. H.; Komura, Taku; Wang, Meng; Siu, Becky
2007-01-01
Human motion animation has been one of the major research topics in the field of computer graphics for decades. Techniques developed in this area help present human motions in various applications. This is crucial for enhancing the realism as well as promoting the user interest in the applications. To carry this merit to e-learning applications,…
Interactions between motion and form processing in the human visual system.
Mather, George; Pavan, Andrea; Bellacosa Marotti, Rosilari; Campana, Gianluca; Casco, Clara
2013-01-01
The predominant view of motion and form processing in the human visual system assumes that these two attributes are handled by separate and independent modules. Motion processing involves filtering by direction-selective sensors, followed by integration to solve the aperture problem. Form processing involves filtering by orientation-selective and size-selective receptive fields, followed by integration to encode object shape. It has long been known that motion signals can influence form processing in the well-known Gestalt principle of common fate; texture elements which share a common motion property are grouped into a single contour or texture region. However, recent research in psychophysics and neuroscience indicates that the influence of form signals on motion processing is more extensive than previously thought. First, the salience and apparent direction of moving lines depends on how the local orientation and direction of motion combine to match the receptive field properties of motion-selective neurons. Second, orientation signals generated by "motion-streaks" influence motion processing; motion sensitivity, apparent direction and adaptation are affected by simultaneously present orientation signals. Third, form signals generated by human body shape influence biological motion processing, as revealed by studies using point-light motion stimuli. Thus, form-motion integration seems to occur at several different levels of cortical processing, from V1 to STS.
Norman, Joseph; Hock, Howard; Schöner, Gregor
2014-07-01
It has long been thought (e.g., Cavanagh & Mather, 1989) that first-order motion-energy extraction via space-time comparator-type models (e.g., the elaborated Reichardt detector) is sufficient to account for human performance in the short-range motion paradigm (Braddick, 1974), including the perception of reverse-phi motion when the luminance polarity of the visual elements is inverted during successive frames. Human observers' ability to discriminate motion direction and use coherent motion information to segregate a region of a random cinematogram and determine its shape was tested; they performed better in the same-, as compared with the inverted-, polarity condition. Computational analyses of short-range motion perception based on the elaborated Reichardt motion energy detector (van Santen & Sperling, 1985) predict, incorrectly, that symmetrical results will be obtained for the same- and inverted-polarity conditions. In contrast, the counterchange detector (Hock, Schöner, & Gilroy, 2009) predicts an asymmetry quite similar to that of human observers in both motion direction and shape discrimination. The further advantage of counterchange, as compared with motion energy, detection for the perception of spatial shape- and depth-from-motion is discussed.
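For readers unfamiliar with space-time comparator models, the following hedged sketch implements a basic Reichardt-style correlation detector and shows how inverting luminance polarity on alternate frames (reverse phi) flips its output sign; the elaborated detector of van Santen and Sperling adds spatiotemporal filtering that is omitted here, and the stimulus is a toy drifting bar.

```python
# Minimal correlation-type (Reichardt-style) motion detector, for illustration only.
# The elaborated detector of van Santen & Sperling adds spatiotemporal filtering
# that is omitted in this sketch.
import numpy as np

def reichardt(stimulus, dx=1, dt=1):
    """stimulus: 2-D array [time, space]; returns net rightward motion signal."""
    a = stimulus[:-dt, :-dx]          # left input, delayed
    b = stimulus[dt:, dx:]            # right input, undelayed
    c = stimulus[dt:, :-dx]           # left input, undelayed
    d = stimulus[:-dt, dx:]           # right input, delayed
    return np.mean(a * b) - np.mean(d * c)   # opponent subtraction

# Drifting bar moving rightward; inverting polarity on alternate frames
# (reverse phi) flips the sign of the detector output.
T, X = 40, 60
frames = np.zeros((T, X))
for t in range(T):
    frames[t, (5 + t) % X] = 1.0
print("same polarity:", reichardt(frames))
reverse_phi = frames * np.where(np.arange(T)[:, None] % 2 == 0, 1.0, -1.0)
print("alternating polarity:", reichardt(reverse_phi))
```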
A Human Factors Analysis of EVA Time Requirements
NASA Technical Reports Server (NTRS)
Pate, Dennis W.
1997-01-01
Human Factors Engineering (HFE) is a discipline whose goal is to engineer a safer, more efficient interface between humans and machines. HFE makes use of a wide range of tools and techniques to fulfill this goal. One of these tools is known as motion and time study, a technique used to develop time standards for given tasks. During the summer of 1995, a human factors motion and time study was initiated with the goals of developing a database of EVA task times and developing a method of utilizing the database to predict how long an EVA should take. Initial development relied on the EVA activities performed during the STS-61 (Hubble) mission. The first step of the study was to become familiar with EVAs, the previous task-time studies, and documents produced on EVAs. After reviewing these documents, an initial set of task primitives and task-time modifiers was developed. Data were collected from videotaped footage of two entire STS-61 EVAs and portions of several others, each with two EVA astronauts. Feedback from the analysis of the data was used to further refine the primitives and modifiers used. The project was continued during the summer of 1996, during which data on human errors were also collected and analyzed. Additional data from the STS-71 mission were also collected. Analysis of variance techniques for categorical data were used to determine which factors may affect the primitive times and how much of an effect they have. Probability distributions for the various tasks were also generated. Further analysis of the modifiers and interactions is planned.
Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J; Cullen, Kathleen E
2017-04-15
In order to understand how the brain's coding strategies are adapted to the statistics of the sensory stimuli experienced during everyday life, the use of animal models is essential. Mice and non-human primates have become common models for furthering our knowledge of the neuronal coding of natural stimuli, but differences in their natural environments and behavioural repertoire may impact optimal coding strategies. Here we investigated the structure and statistics of the vestibular input experienced by mice versus non-human primates during natural behaviours, and found important differences. Our data establish that the structure and statistics of natural signals in non-human primates more closely resemble those observed previously in humans, suggesting similar coding strategies for incoming vestibular input. These results help us understand how the effects of active sensing and biomechanics will differentially shape the statistics of vestibular stimuli across species, and have important implications for sensory coding in other systems. It is widely believed that sensory systems are adapted to the statistical structure of natural stimuli, thereby optimizing coding. Recent evidence suggests that this is also the case for the vestibular system, which senses self-motion and in turn contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. However, little is known about the statistics of self-motion stimuli actually experienced by freely moving animals in their natural environments. Accordingly, here we examined the natural self-motion signals experienced by mice and monkeys: two species commonly used to study vestibular neural coding. First, we found that probability distributions for all six dimensions of motion (three rotations, three translations) in both species deviated from normality due to long tails. Interestingly, the power spectra of natural rotational stimuli displayed similar structure for both species and were not well fitted by power laws. This result contrasts with reports that the natural spectra of other sensory modalities (i.e. vision, auditory and tactile) instead show a power-law relationship with frequency, which indicates scale invariance. Analysis of natural translational stimuli revealed important species differences as power spectra deviated from scale invariance for monkeys but not for mice. By comparing our results to previously published data for humans, we found the statistical structure of natural self-motion stimuli in monkeys and humans more closely resemble one another. Our results thus predict that, overall, neural coding strategies used by vestibular pathways to encode natural self-motion stimuli are fundamentally different in rodents and primates. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.
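The spectral analysis described above (estimating power spectra and checking for power-law, i.e. scale-invariant, behaviour) can be sketched as follows. The signal here is synthetic Brownian noise standing in for recorded head motion, and the fitting band is arbitrary.

```python
# Hedged sketch: estimate the power spectrum of a self-motion signal and test how
# well it is described by a power law (a straight line in log-log coordinates).
# The signal below is synthetic; the paper used recorded head motion.
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
white = rng.standard_normal(60 * int(fs))
signal = np.cumsum(white)                    # Brownian noise as a stand-in

f, pxx = welch(signal, fs=fs, nperseg=1024)
keep = (f > 0.1) & (f < 20.0)                # fit over an arbitrary band
slope, intercept = np.polyfit(np.log10(f[keep]), np.log10(pxx[keep]), 1)

# Residuals of the log-log fit indicate how far the spectrum deviates from
# scale invariance (a perfect power law would leave ~zero residual structure).
resid = np.log10(pxx[keep]) - (slope * np.log10(f[keep]) + intercept)
print(f"fitted exponent: {slope:.2f}, RMS log-residual: {np.sqrt(np.mean(resid**2)):.3f}")
```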
[Comparative analysis of light sensitivity, depth and motion perception in animals and humans].
Schaeffel, F
2017-11-01
This study examined how humans perform regarding light sensitivity, depth perception and motion vision in comparison to various animals. The parameters that limit the performance of the visual system for these different functions were examined. This study was based on literature searches (in PubMed) and the author's own results. Light sensitivity is limited by the brightness of the retinal image, which in turn is determined by the f-number of the eye. Furthermore, it is limited by photon noise, thermal decay of rhodopsin, noise in the phototransduction cascade and neuronal processing. In invertebrates, impressive optical tricks have been developed to increase the number of photons reaching the photoreceptors. Furthermore, the spontaneous decay of the photopigment is lower in invertebrates at the cost of higher energy consumption. For depth perception at close range, stereopsis is the most precise but is available only to a few vertebrates. In contrast, motion parallax is used by many species, including vertebrates as well as invertebrates. In a few cases accommodation or chromatic aberration is used for depth measurements. In motion vision the temporal resolution of the eye is most important. The flicker fusion frequency correlates in vertebrates with metabolic turnover and body temperature but also has very high values in insects. Apart from that, the flicker fusion frequency generally declines with increasing body weight. Compared to animals, the performance of the visual system in humans is among the best regarding light sensitivity, the best regarding depth resolution and in the middle range regarding motion resolution.
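The dependence of retinal image brightness on the eye's f-number mentioned above can be summarized by the standard image-illuminance relation below. This is a textbook approximation added for illustration, not a formula taken from the paper; E is the retinal (image-plane) illuminance, L the scene luminance, tau the ocular transmittance, N the f-number, f the focal length and d the pupil diameter.

```latex
% Textbook approximation for image-plane illuminance as a function of f-number,
% included here only to illustrate the relation described in the abstract above.
E \approx \frac{\pi\, L\, \tau}{4\, N^{2}},
\qquad N = \frac{f}{d_{\mathrm{pupil}}}
```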
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
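The two ingredients named above, a finite spline parameterization of joint motion and quasi-Newton minimization of a torque-based cost, can be illustrated with the hedged toy sketch below. The single-link inverse dynamics and all constants are invented; the paper derives analytical gradients, whereas this sketch relies on numerical ones.

```python
# Hedged toy sketch: cubic-spline parameterization of a joint trajectory (finite
# search space) and quasi-Newton minimization of a torque-based cost.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

t_knots = np.linspace(0.0, 1.0, 6)      # spline knots: the finite set of unknowns
t_dense = np.linspace(0.0, 1.0, 200)
I, m, g, l = 1.0, 1.0, 9.81, 0.5        # toy single-link parameters

def torque_cost(q_knots):
    q = CubicSpline(t_knots, q_knots, bc_type="clamped")
    qdd = q(t_dense, 2)                               # second derivative
    tau = I * qdd + m * g * l * np.sin(q(t_dense))    # inverse dynamics (1 link)
    return float(np.sum(tau**2) * (t_dense[1] - t_dense[0]))   # integrated torque^2

# boundary poses are enforced by fixing the first/last knot inside the wrapper
q0, qT = 0.0, np.pi / 4
def cost_free(inner):
    return torque_cost(np.concatenate(([q0], inner, [qT])))

res = minimize(cost_free, np.linspace(q0, qT, 6)[1:-1], method="BFGS")
print("minimum-torque cost:", res.fun)
```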
Motion data classification on the basis of dynamic time warping with a cloud point distance measure
NASA Astrophysics Data System (ADS)
Switonski, Adam; Josinski, Henryk; Zghidi, Hafedh; Wojciechowski, Konrad
2016-06-01
The paper deals with the problem of classifying model-free motion data. A nearest neighbors classifier is proposed, based on comparisons performed by the Dynamic Time Warping transform with a cloud point distance measure. The classification utilizes both specific gait features, reflected by the movements of successive skeleton joints, and anthropometric data. To validate the proposed approach, the human gait identification challenge problem is considered. A motion capture database containing data from 30 different humans, collected in the Human Motion Laboratory of the Polish-Japanese Academy of Information Technology, is used. The achieved results are satisfactory: the obtained accuracy of human recognition exceeds 90%. What is more, the applied cloud point distance measure does not depend on the calibration process of the motion capture system, which results in reliable validation.
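One plausible reading of the comparison described above is sketched below: Dynamic Time Warping over sequences of per-frame joint point clouds, with a symmetric nearest-neighbour (Chamfer-style) distance as the frame-to-frame cost and a 1-nearest-neighbour decision rule. The exact cloud point distance used by the authors may differ, and the data here are random stand-ins.

```python
# Hedged sketch of nearest-neighbour gait classification with Dynamic Time Warping,
# where each frame is a cloud of 3-D joint positions and the frame-to-frame cost is
# a symmetric nearest-neighbour (Chamfer-style) distance between the two clouds.
import numpy as np

def cloud_distance(p, q):
    """p, q: (n_joints, 3) and (m_joints, 3) point clouds for one frame each."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def dtw(seq_a, seq_b):
    """seq_a, seq_b: sequences of per-frame point clouds; returns DTW cost."""
    na, nb = len(seq_a), len(seq_b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            c = cloud_distance(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb]

def classify(query, gallery):
    """1-nearest-neighbour over a gallery of (sequence, subject_id) pairs."""
    return min(gallery, key=lambda item: dtw(query, item[0]))[1]

# toy usage: two random "gaits" of 20 frames x 15 joints each
rng = np.random.default_rng(1)
gallery = [(rng.normal(size=(20, 15, 3)), "subject_A"),
           (rng.normal(size=(20, 15, 3)) + 0.5, "subject_B")]
query = gallery[0][0] + 0.01 * rng.normal(size=(20, 15, 3))
print(classify(query, gallery))
```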
Rotation is the primary motion of paired human epidermal keratinocytes.
Tate, Sota; Imai, Matome; Matsushita, Natsuki; Nishimura, Emi K; Higashiyama, Shigeki; Nanba, Daisuke
2015-09-01
Collective motion of keratinocytes is involved in morphogenesis, homeostasis, and wound healing of the epidermis. Yet how the collective motion of keratinocytes emerges from the behavior of individual cells is still largely unknown. The aim of this study was to find the cellular behavior that links single and collective motion of keratinocytes. We investigated the behavior of two-cell colonies of HaCaT keratinocytes by a combination of time-lapse imaging and image processing. The two-cell colonies of HaCaT cells were formed as a contacted pair of keratinocyte clones. Image analysis and cell culture experiments revealed that the rotational speed of two-cell colonies was positively associated with their proliferative capacity. α6 integrin was required for the rotational motion of two-cell keratinocyte colonies. We also confirmed that two-cell colonies of keratinocytes predominantly exhibited the rotational, but not translational, motion, two modes of motion in a contact pair of rotating objects. The rotational motion is the primary motion of two-cell keratinocyte colonies and its speed is positively associated with their proliferative capacity. This study suggests that the assembly of rotating keratinocytes generates the collective motion of proliferative keratinocytes during morphogenesis and wound healing of the epidermis. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Phase-based motion magnification video for monitoring of vital signals using the Hermite transform
NASA Astrophysics Data System (ADS)
Brieva, Jorge; Moya-Albor, Ernesto
2017-11-01
In this paper we present a new Eulerian phase-based motion magnification technique using the Hermite Transform (HT) decomposition, which is inspired by the Human Visual System (HVS). We test our method on one video sequence of a newborn baby breathing and on another showing the heartbeat at the wrist. We detect and magnify the heart pulse by applying our technique. Our motion magnification approach is compared to the Laplacian phase-based approach by means of quantitative metrics (based on the RMS error and the Fourier transform) that measure the quality of both reconstruction and magnification. In addition, a noise robustness analysis is performed for the two methods.
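As a rough illustration of the Eulerian idea, the sketch below applies a deliberately simplified linear magnification (temporal band-pass on pixel intensities, amplified and added back). This is not the Hermite-transform phase-based method of the paper, which magnifies local phase rather than intensity; the band limits and amplification factor are arbitrary.

```python
# Deliberately simplified linear Eulerian magnification: band-pass each pixel's
# intensity over time around the expected pulse band, amplify, and add back.
# The paper's method instead magnifies local Hermite-transform phase; this sketch
# only illustrates the general Eulerian idea.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(video, fs, low=0.8, high=3.0, alpha=20.0):
    """video: (frames, height, width) grayscale array; fs: frame rate in Hz."""
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    pulsation = filtfilt(b, a, video, axis=0)     # temporal band-pass per pixel
    return video + alpha * pulsation              # amplified pulse signal

# toy usage: 10 s of 30 fps "video" with a faint 1.2 Hz fluctuation buried in noise
fs, t = 30.0, np.arange(0, 10, 1 / 30.0)
rng = np.random.default_rng(0)
video = (0.02 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
         + rng.normal(scale=0.05, size=(t.size, 16, 16)) + 0.5)
out = magnify(video, fs)
print(out.shape)
```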
The KIT Motion-Language Dataset.
Plappert, Matthias; Mandery, Christian; Asfour, Tamim
2016-12-01
Linking human motion and natural language is of great interest for the generation of semantic representations of human activities as well as for the generation of robot activities based on natural language input. However, although there have been years of research in this area, no standardized and openly available data set exists to support the development and evaluation of such systems. We, therefore, propose the Karlsruhe Institute of Technology (KIT) Motion-Language Dataset, which is large, open, and extensible. We aggregate data from multiple motion capture databases and include them in our data set using a unified representation that is independent of the capture system or marker set, making it easy to work with the data regardless of its origin. To obtain motion annotations in natural language, we apply a crowd-sourcing approach and a web-based tool that was specifically built for this purpose, the Motion Annotation Tool. We thoroughly document the annotation process itself and discuss gamification methods that we used to keep annotators motivated. We further propose a novel method, perplexity-based selection, which systematically selects motions for further annotation that are either under-represented in our data set or that have erroneous annotations. We show that our method mitigates the two aforementioned problems and ensures a systematic annotation process. We provide an in-depth analysis of the structure and contents of our resulting data set, which, as of October 10, 2016, contains 3911 motions with a total duration of 11.23 hours and 6278 annotations in natural language that contain 52,903 words. We believe this makes our data set an excellent choice that enables more transparent and comparable research in this important area.
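A minimal sketch of the perplexity-based selection idea is given below, using a unigram language model with add-one smoothing as a stand-in for whatever model the dataset maintainers actually use; the annotations shown are invented examples.

```python
# Minimal sketch of perplexity-based selection: score each motion's annotation
# under a simple language model and flag the motions whose annotations are most
# "surprising" (highest perplexity) for further annotation or review.
import math
from collections import Counter

def unigram_model(corpus):
    counts = Counter(w for sentence in corpus for w in sentence.lower().split())
    total, vocab = sum(counts.values()), len(counts) + 1
    return lambda w: (counts[w] + 1) / (total + vocab)     # add-one smoothing

def perplexity(sentence, prob):
    words = sentence.lower().split()
    logp = sum(math.log(prob(w)) for w in words)
    return math.exp(-logp / max(len(words), 1))

annotations = {
    "motion_001": "a person walks forward slowly",
    "motion_002": "a person walks forward and turns left",
    "motion_003": "someone performs a cartwheel onto a balance beam",
}
prob = unigram_model(annotations.values())
ranked = sorted(annotations, key=lambda k: perplexity(annotations[k], prob), reverse=True)
print("most in need of extra annotation:", ranked[0])
```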
Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo
2014-05-01
Quantitative analysis of cardiac motion is important for evaluation of heart function. Three dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field instead of the displacement field was optimized. The velocity field is chosen to optimize a novel similarity function, which the authors call the intensity consistency error, defined over multiple consecutive frames evolved to each time point. The optimization problem is solved by using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests with simulated and real cardiac sequences showed that the authors' method is more accurate than other competing temporally diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions have good agreement with ground truth and the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. The authors proposed a diffeomorphic motion estimation method with temporal smoothness by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors' method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods.
A Mobile Motion Analysis System Using Intertial Sensors for Analysis of Lower Limb Prosthetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, John Kyle P; Ericson, Milton Nance; Farquhar, Ethan
Soldiers returning from the global war on terror requiring lower leg prosthetics generally have different concerns and requirements than the typical lower leg amputee. These subjects are usually young, wish to remain active and often desire to return to active military duty. As such, they demand higher performance from their prosthetics, but are at risk for chronic injury and joint conditions in their unaffected limb. Motion analysis is a valuable tool in assessing the performance of new and existing prosthetic technologies as well as the methods of fitting these devices to both maximize performance and minimize risk of injury for the individual soldier. We are developing a mobile, low-cost motion analysis system using inertial measurement units (IMUs) and two custom force sensors that detect ground reaction forces and moments on both the unaffected limb and prosthesis. IMUs were tested on a robot programmed to simulate human gait motion. An algorithm which uses a kinematic model of the robot and an extended Kalman filter (EKF) was used to convert the rates and accelerations from the gyro and accelerometer into joint angles. Compared to encoder data from the robot, which was considered the ground truth in this experiment, the inertial measurement system had an RMSE of <1.0 degree. Collecting kinematic and kinetic data without the restrictions and expense of a motion analysis lab could help researchers, designers and prosthetists advance prosthesis technology and customize devices for individuals. Ultimately, these improvements will result in better prosthetic performance for the military population.
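As a hedged stand-in for the EKF-based joint-angle estimation described above, the sketch below fuses gyroscope rate and accelerometer inclination with a simple complementary filter; the EKF's state covariance and kinematic model are deliberately omitted, and the sensor data are synthetic.

```python
# Hedged stand-in for the IMU joint-angle estimation described above: a simple
# complementary filter fusing gyroscope rate with accelerometer inclination.
# The paper uses an extended Kalman filter with a kinematic model; that machinery
# is deliberately omitted from this sketch.
import numpy as np

def fuse(gyro_rate, accel, dt=0.01, k=0.98):
    """gyro_rate: angular rate (rad/s); accel: (ax, az) pairs; returns angle (rad)."""
    angle = np.zeros(len(gyro_rate))
    for i in range(1, len(gyro_rate)):
        accel_angle = np.arctan2(accel[i, 0], accel[i, 1])      # gravity-referenced
        gyro_angle = angle[i - 1] + gyro_rate[i] * dt            # integrated rate
        angle[i] = k * gyro_angle + (1 - k) * accel_angle        # blend
    return angle

# toy data: a limb segment oscillating at 1 Hz, with noisy sensors
t = np.arange(0, 5, 0.01)
true = 0.4 * np.sin(2 * np.pi * 1.0 * t)
rng = np.random.default_rng(0)
gyro = np.gradient(true, 0.01) + rng.normal(scale=0.05, size=t.size)
accel = np.column_stack([np.sin(true), np.cos(true)]) + rng.normal(scale=0.02, size=(t.size, 2))
est = fuse(gyro, accel)
print("RMSE (rad):", np.sqrt(np.mean((est - true) ** 2)))
```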
A computer analysis of reflex eyelid motion in normal subjects and in facial neuropathy.
Somia, N N; Rash, G S; Epstein, E E; Wachowiak, M; Sundine, M J; Stremel, R W; Barker, J H; Gossman, D
2000-12-01
To demonstrate how computerized eyelid motion analysis can quantify the human reflex blink. Seventeen normal subjects and 10 patients with unilateral facial nerve paralysis were analyzed. Eyelid closure is currently evaluated by systems primarily designed to assess lower/midfacial movements. The methods are subjective, difficult to reproduce, and measure only volitional closure. Reflex closure is responsible for eye hydration, and its evaluation demands dynamic analysis. A 60 Hz video camera incorporated into a helmet was used to analyze blinking. Reflective markers on the forehead and eyelids allowed for the dynamic measurement of the reflex blink. Eyelid displacement, velocity and acceleration were calculated. The degree of synchrony between bilateral blinks was also determined. This study demonstrates that video motion analysis can describe normal and altered eyelid motions in a quantifiable manner. To our knowledge, this is the first study to measure dynamic reflex blinks. Eyelid closure may now be evaluated in kinematic terms. This technique could increase understanding of eyelid motion and permit more accurate evaluation of eyelid function. Dynamic eyelid evaluation has immediate applications in the treatment of facial palsy affecting the reflex blink. Relevance: No method has been developed that objectively quantifies dynamic eyelid closure. Methods currently in use evaluate only volitional eyelid closure, and are based on direct and indirect observer assessments. These methods are subjective and are incapable of analyzing dynamic eyelid movements, which are critical to maintenance of corneal hydration and comfort. A system that quantifies eyelid kinematics can provide a functional analysis of blink disorders and an objective evaluation of their treatment(s).
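The reported kinematic quantities (displacement, velocity, acceleration from 60 Hz marker positions) can be computed by finite differences, as in the short sketch below; the eyelid trajectory used here is synthetic.

```python
# Sketch of the kinematic quantities reported above, computed from 60 Hz marker
# positions by finite differences. The eyelid trajectory here is synthetic.
import numpy as np

fs = 60.0                                  # camera frame rate (Hz)
t = np.arange(0, 0.3, 1 / fs)              # ~300 ms around a blink
eyelid_y = 5.0 * (1 - np.cos(2 * np.pi * t / 0.3)) / 2   # mm, synthetic blink profile

displacement = eyelid_y - eyelid_y[0]
velocity = np.gradient(eyelid_y, 1 / fs)                 # mm/s
acceleration = np.gradient(velocity, 1 / fs)             # mm/s^2

print(f"peak displacement: {displacement.max():.1f} mm, "
      f"peak velocity: {np.abs(velocity).max():.0f} mm/s")
```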
Pysz, Marybeth A; Guracar, Ismayil; Foygel, Kira; Tian, Lu; Willmann, Jürgen K
2012-09-01
To develop and test a real-time motion compensation algorithm for contrast-enhanced ultrasound imaging of tumor angiogenesis on a clinical ultrasound system. The Administrative Institutional Panel on Laboratory Animal Care approved all experiments. A new motion correction algorithm measuring the sum of absolute differences in pixel displacements within a designated tracking box was implemented in a clinical ultrasound machine. In vivo angiogenesis measurements (expressed as percent contrast area) with and without motion compensated maximum intensity persistence (MIP) ultrasound imaging were analyzed in human colon cancer xenografts (n = 64) in mice. Differences in MIP ultrasound imaging signal with and without motion compensation were compared and correlated with displacements in x- and y-directions. The algorithm was tested in an additional twelve colon cancer xenograft-bearing mice with (n = 6) and without (n = 6) anti-vascular therapy (ASA-404). In vivo MIP percent contrast area measurements were quantitatively correlated with ex vivo microvessel density (MVD) analysis. MIP percent contrast area was significantly different (P < 0.001) with and without motion compensation. Differences in percent contrast area correlated significantly (P < 0.001) with x- and y-displacements. MIP percent contrast area measurements were more reproducible with motion compensation (ICC = 0.69) than without (ICC = 0.51) on two consecutive ultrasound scans. Following anti-vascular therapy, motion-compensated MIP percent contrast area significantly (P = 0.03) decreased by 39.4 ± 14.6 % compared to non-treated mice and correlated well with ex vivo MVD analysis (Rho = 0.70; P = 0.05). Real-time motion-compensated MIP ultrasound imaging allows reliable and accurate quantification and monitoring of angiogenesis in tumors exposed to breathing-induced motion artifacts.
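The sum-of-absolute-differences tracking described above can be sketched as a block-matching search over displacements of the designated tracking box, as below; the window size and search radius are arbitrary choices and the frames are synthetic.

```python
# Sketch of the motion-compensation step described above: find the displacement of
# a designated tracking box between two frames by minimizing the sum of absolute
# differences (SAD). Window size and search radius are arbitrary choices.
import numpy as np

def sad_track(prev, curr, box, radius=8):
    """box = (row, col, height, width) in `prev`; returns (drow, dcol) minimizing SAD."""
    r, c, h, w = box
    template = prev[r:r + h, c:c + w]
    best, best_dr, best_dc = np.inf, 0, 0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if rr < 0 or cc < 0 or rr + h > curr.shape[0] or cc + w > curr.shape[1]:
                continue
            sad = np.abs(curr[rr:rr + h, cc:cc + w] - template).sum()
            if sad < best:
                best, best_dr, best_dc = sad, dr, dc
    return best_dr, best_dc

# toy usage: a bright blob shifted by (3, -2) pixels between frames
rng = np.random.default_rng(0)
prev = rng.normal(size=(128, 128)); prev[40:60, 40:60] += 5.0
curr = np.roll(np.roll(prev, 3, axis=0), -2, axis=1)
print(sad_track(prev, curr, box=(35, 35, 30, 30)))   # expected: (3, -2)
```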
Co-development of manner and path concepts in language, action, and eye-gaze behavior.
Lohan, Katrin S; Griffiths, Sascha S; Sciutti, Alessandra; Partmann, Tim C; Rohlfing, Katharina J
2014-07-01
In order for artificially intelligent systems to interact naturally with human users, they need to be able to learn from human instructions when actions should be imitated. Human tutoring will typically consist of action demonstrations accompanied by speech. In the following, the characteristics of human tutoring during action demonstration will be examined. A special focus will be put on the distinction between two kinds of motion events: path-oriented actions and manner-oriented actions. Such a distinction is inspired by the literature pertaining to cognitive linguistics, which indicates that the human conceptual system can distinguish these two distinct types of motion. These two kinds of actions are described in language by more path-oriented or more manner-oriented utterances. In path-oriented utterances, the source, trajectory, or goal is emphasized, whereas in manner-oriented utterances the medium, velocity, or means of motion are highlighted. We examined a video corpus of adult-child interactions comprising three age groups of children (pre-lexical, early lexical, and lexical) and two different tasks, one emphasizing manner more strongly and one emphasizing path more strongly. We analyzed the language and motion of the caregiver and the gazing behavior of the child to highlight the differences between the tutoring and the acquisition of the manner and path concepts. The results suggest that age is an important factor in the development of these action categories. The analysis of this corpus has also been exploited to develop an intelligent robotic behavior, the tutoring spotter system, able to emulate children's behaviors in a tutoring situation, with the aim of evoking in human subjects a natural and effective behavior when teaching a robot. The findings related to the development of manner and path concepts have been used to implement new effective feedback strategies in the tutoring spotter system, which should provide improvements in human-robot interaction. Copyright © 2014 Cognitive Science Society, Inc.
Elasticity of the living abdominal wall in laparoscopic surgery.
Song, Chengli; Alijani, Afshin; Frank, Tim; Hanna, George; Cuschieri, Alfred
2006-01-01
Laparoscopic surgery requires inflation of the abdominal cavity, and this offers a unique opportunity to measure the mechanical properties of the living abdominal wall. We used a motion analysis system to study the abdominal wall motion of 18 patients undergoing laparoscopic surgery, and found that the mean Young's modulus was 27.7 ± 4.5 kPa for males and 21.0 ± 3.7 kPa for females. During inflation, the abdominal wall changed from a cylinder to a dome shape. The average expansion in the abdominal wall surface was 20%, and a working space of 1.27 × 10^-3 m^3 was created by expansion, reshaping of the abdominal wall and diaphragmatic movement. For the first time, the elasticity of the human abdominal wall was obtained from patients undergoing laparoscopic surgery, and a 3D simulation model of the human abdominal wall has been developed to analyse the motion pattern in laparoscopic surgery. Based on this study, a mechanical abdominal wall lift and a surgical simulator for safe/ergonomic port placements are under development.
Implementation of a Smart Phone for Motion Analysis.
Yodpijit, Nantakrit; Songwongamarit, Chalida; Tavichaiyuth, Nicha
2015-01-01
In today's information-rich environment, one of the most popular devices is the smartphone. Research has shown significant growth in the use of smartphones and apps all over the world. The accelerometer within a smartphone is a motion sensor that can be used to detect human movements. Like other major vital signs, gait characteristics represent general health status and can be determined using smartphones. The objective of the current study is to design and develop an alternative technology that can potentially predict health status and reduce healthcare costs. This study uses a smartphone as a wireless accelerometer for quantifying human motion characteristics through four steps of system design and development (data acquisition, feature extraction, classifier design, and decision-making strategy). Findings indicate that it is possible to extract features from a smartphone's accelerometer using a peak detection algorithm. Gait characteristics obtained from the peak detection algorithm include stride time, stance time, swing time and cadence. Applications and limitations of this study are also discussed.
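A minimal sketch of the peak-detection step is shown below: heel-strike-like peaks are detected in a (synthetic) vertical acceleration trace and converted into stride time and cadence. The sampling rate, signal model and thresholds are assumptions, not values from the study.

```python
# Sketch of the peak-detection step described above: detect step-like peaks in the
# vertical acceleration recorded by a smartphone and derive stride time and cadence.
# The signal and thresholds are synthetic/arbitrary.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
# synthetic vertical acceleration: ~1.8 steps per second plus noise
accel_z = np.clip(np.sin(2 * np.pi * 1.8 * t), 0, None) ** 2 \
          + 0.1 * rng.standard_normal(t.size)

peaks, _ = find_peaks(accel_z, height=0.5, distance=0.4 * fs)  # one peak per step
step_times = np.diff(peaks) / fs                               # seconds per step
stride_time = 2 * step_times.mean()                            # two steps per stride
cadence = 60.0 / step_times.mean()                             # steps per minute
print(f"stride time: {stride_time:.2f} s, cadence: {cadence:.0f} steps/min")
```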
Seismic waveform classification using deep learning
NASA Astrophysics Data System (ADS)
Kong, Q.; Allen, R. M.
2017-12-01
MyShake is a global smartphone seismic network that harnesses the power of crowdsourcing. It has an Artificial Neural Network (ANN) algorithm running on the phone to distinguish earthquake motion from human activities recorded by the accelerometer on board. Once the ANN detects earthquake-like motion, it sends a 5-min chunk of acceleration data back to the server for further analysis. The time-series data collected contain both earthquake data and human activity data that the ANN confused. In this presentation, we will show the Convolutional Neural Network (CNN) we built under the umbrella of supervised learning to identify the earthquake waveforms. The waveforms of the recorded motion can easily be treated as images, and by taking advantage of the power of CNNs for processing images, we achieved a very high success rate in selecting the earthquake waveforms. Since there are many more non-earthquake waveforms than earthquake waveforms, we also built an anomaly detection algorithm using the CNN. Both methods can easily be extended to other waveform classification problems.
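The following hedged sketch shows the general shape of such a waveform classifier as a small 1-D convolutional network in PyTorch; the architecture, window length and channel counts are invented for illustration and are not the MyShake model.

```python
# Hedged sketch of a small 1-D CNN for classifying accelerometer waveforms as
# earthquake vs. human activity. The architecture and sizes are invented.
import torch
import torch.nn as nn

class WaveformCNN(nn.Module):
    def __init__(self, n_channels=3, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):             # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

model = WaveformCNN()
batch = torch.randn(8, 3, 3000)       # 8 windows of 3-axis acceleration, 3000 samples
logits = model(batch)
print(logits.shape)                   # torch.Size([8, 2])
```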
A Miniature Electromechanical Generator Design Utilizing Human Motion
2010-09-01
Inductance Operating Range: In the previous chapter, it was mentioned that the EMF induced from the generator was related to a time-changing magnetic... Master's thesis by Nicholas G. Hoffman, September 2010; Thesis Co-Advisors: Alexander L. Julian...
NASA Technical Reports Server (NTRS)
Lee, A. T.; Bussolari, S. R.
1986-01-01
The effect of motion platform systems on pilot behavior is considered with emphasis placed on civil aviation applications. A dynamic model for human spatial orientation based on the physiological structure and function of the human vestibular system is presented. Motion platform alternatives were evaluated on the basis of the following motion platform conditions: motion with six degrees-of-freedom required for Phase II simulators and two limited motion conditions. Consideration was given to engine flameout, airwork, and approach and landing scenarios.
NASA Astrophysics Data System (ADS)
Deglint, Jason; Chung, Audrey G.; Chwyl, Brendan; Amelard, Robert; Kazemzadeh, Farnoud; Wang, Xiao Yu; Clausi, David A.; Wong, Alexander
2016-03-01
Traditional photoplethysmographic imaging (PPGI) systems use the red, green, and blue (RGB) broadband measurements of a consumer digital camera to remotely estimate a patient's heart rate; however, these broadband RGB signals are often corrupted by ambient noise, making the extraction of subtle fluctuations indicative of heart rate difficult. Therefore, the use of narrow-band spectral measurements can significantly improve the accuracy. We propose a novel digital spectral demultiplexing (DSD) method to infer narrow-band spectral information from acquired broadband RGB measurements in order to estimate heart rate via the computation of motion-compensated skin erythema fluctuation. Using high-resolution video recordings of human participants, multiple measurement locations are automatically identified on the cheeks of an individual, and motion-compensated broadband reflectance measurements are acquired at each measurement location over time via measurement location tracking. The motion-compensated broadband reflectance measurements are spectrally demultiplexed using a non-linear inverse model based on the spectral sensitivity of the camera's detector. A PPG signal is then computed from the demultiplexed narrow-band spectral information via skin erythema fluctuation analysis, with improved signal-to-noise ratio allowing for reliable remote heart rate measurements. To assess the effectiveness of the proposed system, a set of experiments involving human motion in a front-facing position were performed under ambient lighting conditions. Experimental results indicate that the proposed system achieves robust and accurate heart rate measurements and can provide additional information about the participant beyond the capabilities of traditional PPGI methods.
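The demultiplexing idea can be sketched as inverting a camera spectral-sensitivity matrix, as below. The sensitivities and narrow bands are invented, and the sketch uses plain regularized least squares rather than the non-linear inverse model described above.

```python
# Hedged sketch: recover a few narrow-band reflectance values from broadband RGB
# measurements by inverting the camera's spectral sensitivity matrix. The
# sensitivities and bands are invented; the paper's inverse model is non-linear
# rather than this plain least-squares version.
import numpy as np

# rows: R, G, B channel sensitivity integrated over 4 assumed narrow bands
S = np.array([[0.05, 0.10, 0.45, 0.40],
              [0.10, 0.55, 0.30, 0.05],
              [0.60, 0.30, 0.08, 0.02]])

true_bands = np.array([0.30, 0.42, 0.55, 0.50])       # "true" narrow-band reflectance
rgb = S @ true_bands + 1e-3 * np.random.default_rng(0).standard_normal(3)

# regularized least-squares inversion (ridge), since S is wide (3 x 4)
lam = 1e-2
est = np.linalg.solve(S.T @ S + lam * np.eye(4), S.T @ rgb)
print("estimated narrow-band values:", np.round(est, 3))
```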
Optimal Configuration of Human Motion Tracking Systems: A Systems Engineering Approach
NASA Technical Reports Server (NTRS)
Henderson, Steve
2005-01-01
Human motion tracking systems represent a crucial technology in the area of modeling and simulation. These systems, which allow engineers to capture human motion for study or replication in virtual environments, have broad applications in several research disciplines including human engineering, robotics, and psychology. These systems are based on several sensing paradigms, including electro-magnetic, infrared, and visual recognition. Each of these paradigms requires specialized environments and hardware configurations to optimize performance of the human motion tracking system. Ideally, these systems are used in a laboratory or other facility that was designed to accommodate the particular sensing technology. For example, electromagnetic systems are highly vulnerable to interference from metallic objects, and should be used in a specialized lab free of metal components.
Decoding Reveals Plasticity in V3A as a Result of Motion Perceptual Learning
Shibata, Kazuhisa; Chang, Li-Hung; Kim, Dongho; Náñez, José E.; Kamitani, Yukiyasu; Watanabe, Takeo; Sasaki, Yuka
2012-01-01
Visual perceptual learning (VPL) is defined as visual performance improvement after visual experiences. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes with the highest improvement centered on the trained feature and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here using human subjects we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were extremely highly correlated to decoded tuning function changes only in V3A, which is known to be highly responsive to global motion with human subjects. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area. PMID:22952849
Methodological aspects of EEG and body dynamics measurements during motion
Reis, Pedro M. R.; Hebenstreit, Felix; Gabsteiger, Florian; von Tscharner, Vinzenz; Lochmann, Matthias
2014-01-01
EEG involves the recording, analysis, and interpretation of voltages recorded on the human scalp which originate from brain gray matter. EEG is one of the most popular methods of studying and understanding the processes that underlie behavior. This is so because EEG is relatively cheap, easy to wear, lightweight and has high temporal resolution. In terms of behavior, this encompasses actions, such as movements that are performed in response to the environment. However, there are methodological difficulties which can occur when recording EEG during movement, such as movement artifacts. Thus, most studies about the human brain have examined activations during static conditions. This article attempts to compile and describe relevant methodological solutions that emerged in order to measure body and brain dynamics during motion. These descriptions cover suggestions on how to avoid and reduce motion artifacts, hardware, software and techniques for synchronously recording EEG, EMG, kinematics, kinetics, and eye movements during motion. Additionally, we present various recording systems, EEG electrodes, caps and methods for determining real/custom electrode positions. In the end we conclude that it is possible to record and analyze synchronized brain and body dynamics related to movement or exercise tasks. PMID:24715858
Human silhouette matching based on moment invariants
NASA Astrophysics Data System (ADS)
Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi
2005-07-01
This paper aims to apply the method of silhouette matching based on moment invariants to infer human motion parameters from video sequences of a single monocular uncalibrated camera. Currently, there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the input video contents. A standard 3D motion database is built up in advance using the marker technique. Given a video sequence, human silhouettes are extracted along with the viewpoint information of the camera, which is used to project the standard 3D motion database onto a 2D one. The video recovery problem is therefore formulated as a matching issue of finding the body pose in the standard 2D library that is most similar to the one in the video image. The framework is applied to the trampoline sport, where the complicated human motion parameters can be obtained from single-camera video sequences, and numerous experiments demonstrate that this approach is feasible for monocular video-based 3D motion reconstruction.
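A hedged sketch of silhouette matching with moment invariants is given below: each binary silhouette is described by its seven Hu moments and the closest library pose is returned. OpenCV is assumed to be available, and the synthetic ellipse silhouettes merely stand in for real poses.

```python
# Sketch of silhouette matching with Hu moment invariants: describe each binary
# silhouette by its seven Hu moments and pick the library pose with the closest
# (log-scaled) moment vector. The silhouettes here are crude synthetic shapes.
import cv2
import numpy as np

def hu_signature(mask):
    hu = cv2.HuMoments(cv2.moments(mask.astype(np.uint8))).ravel()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)     # common log scaling

def best_match(query_mask, library):
    q = hu_signature(query_mask)
    return min(library, key=lambda item: np.linalg.norm(hu_signature(item[0]) - q))[1]

# toy "library": an elongated ellipse and a rounder ellipse standing in for 2-D poses
lib = []
for name, axes in [("upright", (20, 60)), ("crouched", (45, 40))]:
    img = np.zeros((160, 160), np.uint8)
    cv2.ellipse(img, (80, 80), axes, 0, 0, 360, 1, -1)
    lib.append((img, name))

query = np.zeros((160, 160), np.uint8)
cv2.ellipse(query, (70, 90), (21, 62), 5, 0, 360, 1, -1)   # similar to "upright"
print(best_match(query, lib))
```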
Correspondences between What Infants See and Know about Causal and Self-Propelled Motion
ERIC Educational Resources Information Center
Cicchino, Jessica B.; Aslin, Richard N.; Rakison, David H.
2011-01-01
The associative learning account of how infants identify human motion rests on the assumption that this knowledge is derived from statistical regularities seen in the world. Yet, no catalog exists of what visual input infants receive of human motion, and of causal and self-propelled motion in particular. In this manuscript, we demonstrate that the…
Motion-oriented 3D analysis of body measurements
NASA Astrophysics Data System (ADS)
Loercher, C.; Morlock, S.; Schenk, A.
2017-10-01
The aim of this project is to develop an ergonomically based and motion-oriented size system. New concepts are required in order to deal competently with the complex requirements of function-oriented workwear and personal protective equipment (PPE). Body dimensions change through movement, and these changes are the basis for motion-optimized clothing development. This affects fit and ergonomic comfort. The situation has to be fundamentally researched in order to derive well-founded anthropometric body data, taking into account the kinematic requirements of humans, and to define functional dimensions for the clothing industry. The research focus is on the ergonomic design of workwear and PPE. There are huge differences in body forms, proportions and muscle manifestations between genders. As a result, an improved knowledge base can be provided, supporting the development as well as the sales of motion-oriented clothing with perfect fit for garment manufacturers.
Annual solar motion and spy satellites
NASA Astrophysics Data System (ADS)
Jensen, Margaret; Larson, S. L.
2014-01-01
A topic often taught in introductory astronomy courses is the changing position of the Sun in the sky as a function of time of day and season. The relevance and importance of this motion are explained in the context of seasons and the impact it has on human activities such as agriculture. The geometry of the observed motion in the sky is usually reduced to graphical representations and visualizations that can be difficult to render and grasp. Sometimes students are asked to observe the Sun's changing motion and record their data, but this is a long-term project requiring several months to complete. This poster outlines an activity for introductory astronomy students that takes a modern approach to this topic, namely determining the Sun's location in the sky on a given date through the analysis of satellite photography of the Earth.
A low cost real-time motion tracking approach using webcam technology.
Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh
2015-02-05
Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. Copyright © 2014 Elsevier Ltd. All rights reserved.
Hybrid testing of lumbar CHARITE discs versus fusions.
Panjabi, Manohar; Malcolmson, George; Teng, Edward; Tominaga, Yasuhiro; Henderson, Gweneth; Serhan, Hassan
2007-04-20
An in vitro human cadaveric biomechanical study. To quantify effects on operated and other levels, including adjacent levels, due to CHARITE disc implantations versus simulated fusions, using follower load and the new hybrid test method in flexion-extension and bilateral torsion. Spinal fusion has been associated with long-term accelerated degeneration at adjacent levels. As opposed to the fusion, artificial discs are designed to preserve motion and diminish the adjacent-level effects. Five fresh human cadaveric lumbar specimens (T12-S1) underwent multidirectional testing in flexion-extension and bilateral torsion with 400 N follower load. Intact specimen total ranges of motion were determined with +/-10 Nm unconstrained pure moments. The intact range of motion was used as input for the hybrid tests of 5 constructs: 1) CHARITE disc at L5-S1; 2) fusion at L5-S1; 3) CHARITE discs at L4-L5 and L5-S1; 4) CHARITE disc at L4-L5 and fusion at L5-S1; and 5) 2-level fusion at L4-L5-S1. Using repeated-measures single factor analysis of variance and Bonferroni statistical tests (P < 0.05), intervertebral motion redistribution of each construct was compared with the intact. In flexion-extension, 1-level CHARITE disc preserved motion at the operated and other levels, while 2-level CHARITE showed some amount of other-level effects. In contrast, 1- and 2-level fusions increased other-level motions (average, 21.0% and 61.9%, respectively). In torsion, both 1- and 2-level discs preserved motions at all levels. The 2-level simulated fusion increased motions at proximal levels (22.9%), while the 1-level fusion produced no significant changes. In general, CHARITE discs preserved operated- and other-level motions. Fusion simulations affected motion redistribution at other levels, including adjacent levels.
Full-wave and half-wave rectification in second-order motion perception
NASA Technical Reports Server (NTRS)
Solomon, J. A.; Sperling, G.
1994-01-01
Microbalanced stimuli are dynamic displays which do not stimulate motion mechanisms that apply standard (Fourier-energy or autocorrelational) motion analysis directly to the visual signal. In order to extract motion information from microbalanced stimuli, Chubb and Sperling [(1988) Journal of the Optical Society of America, 5, 1986-2006] proposed that the human visual system performs a rectifying transformation on the visual signal prior to standard motion analysis. The current research employs two novel types of microbalanced stimuli: half-wave stimuli preserve motion information following half-wave rectification (with a threshold) but lose motion information following full-wave rectification; full-wave stimuli preserve motion information following full-wave rectification but lose motion information following half-wave rectification. Additionally, Fourier stimuli, ordinary square-wave gratings, were used to stimulate standard motion mechanisms. Psychometric functions (direction discrimination vs stimulus contrast) were obtained for each type of stimulus when presented alone, and when masked by each of the other stimuli (presented as moving masks and also as nonmoving, counterphase-flickering masks). RESULTS: given sufficient contrast, all three types of stimulus convey motion. However, only one-third of the population can perceive the motion of the half-wave stimulus. Observers are able to process the motion information contained in the Fourier stimulus slightly more efficiently than the information in the full-wave stimulus but are much less efficient in processing half-wave motion information. Moving masks are more effective than counterphase masks at hampering direction discrimination, indicating that some of the masking effect is interference between motion mechanisms, and some occurs at earlier stages. When either full-wave and Fourier or half-wave and Fourier gratings are presented simultaneously, there is a wide range of relative contrasts within which the motion directions of both gratings are easily determinable. Conversely, when half-wave and full-wave gratings are combined, the direction of only one of these gratings can be determined with high accuracy. CONCLUSIONS: the results indicate that three motion computations are carried out, any two in parallel: one standard ("first order") and two non-Fourier ("second-order") computations that employ full-wave and half-wave rectification.
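The two rectifying transforms discussed above can be illustrated directly; the sketch below applies full-wave and (thresholded) half-wave rectification to a zero-mean contrast signal and shows that they preserve different aspects of the input. The threshold value is arbitrary.

```python
# Illustration of the two rectifying transforms discussed above, applied to a
# contrast signal before any standard motion analysis. The threshold is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
contrast = rng.normal(size=1000)              # zero-mean contrast signal

full_wave = np.abs(contrast)                              # |c|
threshold = 0.5
half_wave = np.maximum(contrast - threshold, 0.0)         # one-sided, thresholded

# Full-wave output discards polarity entirely; half-wave output keeps only
# super-threshold excursions of one sign, so the two transforms preserve
# different subsets of the stimulus information.
print("correlation with signed input:",
      round(np.corrcoef(contrast, full_wave)[0, 1], 3),
      round(np.corrcoef(contrast, half_wave)[0, 1], 3))
```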
EMG and EPP-integrated human-machine interface between the paralyzed and rehabilitation exoskeleton.
Yin, Yue H; Fan, Yuan J; Xu, Li D
2012-07-01
Although a lower extremity exoskeleton shows great promise for rehabilitation of the lower limb, it has not yet been widely applied to the clinical rehabilitation of the paralyzed. This is partly because the information exchange between paralyzed users and existing exoskeletons is insufficient to meet the requirements of harmonious control. In this research, a bidirectional human-machine interface including a neurofuzzy controller and an extended physiological proprioception (EPP) feedback system is developed by imitating the biological closed-loop control system of the human body. The neurofuzzy controller is built to decode human motion in advance by fusing the fuzzy electromyographic signals reflecting human motion intention with the precise proprioception providing joint angular feedback information. It transmits control information from the human to the exoskeleton, while the EPP feedback system based on haptic stimuli transmits motion information of the exoskeleton back to the human. Joint angle and torque information are transmitted in the form of air pressure to the human body. The real-time bidirectional human-machine interface can help a patient with lower limb paralysis to control the exoskeleton with his/her healthy side and simultaneously perceive motion on the paralyzed side by EPP. The interface rebuilds a closed-loop motion control system for paralyzed patients and realizes harmonious control of the human-machine system.
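As a rough illustration of how EMG-decoded intention and proprioceptive feedback could be fused into an assist command, here is a minimal sketch; the membership bounds, gains, signal shapes, and function names are assumptions for illustration, not the authors' controller.

```python
import numpy as np

def emg_envelope(emg, fs=1000, win_s=0.1):
    """Rectify and smooth a raw EMG trace into an activation envelope."""
    n = max(int(fs * win_s), 1)
    return np.convolve(np.abs(emg), np.ones(n) / n, mode="same")

def fuzzy_intention(activation, low=0.05, high=0.40):
    """Toy fuzzy decoding: membership in 'intends to move' rises linearly
    between a low and a high activation level (bounds are assumptions)."""
    return np.clip((activation - low) / (high - low), 0.0, 1.0)

def assist_torque(activation, joint_angle, target_angle, k_assist=20.0):
    """Blend decoded intention with proprioceptive (joint-angle) feedback
    into an exoskeleton torque command (Nm); the gain is illustrative."""
    return k_assist * fuzzy_intention(activation) * (target_angle - joint_angle)

# Synthetic one-second EMG burst from the healthy side, then a single command.
rng = np.random.default_rng(0)
emg = 0.3 * rng.standard_normal(1000) * np.hanning(1000)
activation = emg_envelope(emg).max()
print(assist_torque(activation, joint_angle=0.4, target_angle=0.9))
```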
Gesture Recognition Based on the Probability Distribution of Arm Trajectories
NASA Astrophysics Data System (ADS)
Wan, Khairunizam; Sawada, Hideyuki
The use of human motions for interaction between humans and computers is becoming an attractive alternative to verbal media, especially through visual interpretation of human body motion. In particular, hand gestures serve as a non-verbal medium through which humans communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition, based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories produced by a signer, motion features are selected and classified using a fuzzy technique. Experimental results show that the features extracted from arm trajectories work effectively for the recognition of dynamic human gestures and give good performance in classifying various gesture patterns.
Sensing human hand motions for controlling dexterous robots
NASA Technical Reports Server (NTRS)
Marcus, Beth A.; Churchill, Philip J.; Little, Arthur D.
1988-01-01
The Dexterous Hand Master (DHM) system is designed to control dexterous robot hands such as the UTAH/MIT and Stanford/JPL hands. It is the first commercially available device which makes it possible to accurately and comfortably track the complex motion of the human finger joints. The DHM is adaptable to a wide variety of human hand sizes and shapes, throughout their full range of motion.
Microgravity Investigation of Crew Reactions in 0-G (MICRO-G)
NASA Technical Reports Server (NTRS)
Newman, Dava; Coleman, Charles; Metaxas, Dimitri
2004-01-01
There is a need for a human factors, technology-based bioastronautics research effort to develop an integrated system that reduces risk and provides scientific knowledge of astronaut-induced loads and motions during long-duration missions on the International Space Station (ISS), which will lead to appropriate countermeasures. The primary objectives of the Microgravity Investigation of Crew Reactions in 0-G (MICRO-G) research effort are to quantify astronaut adaptation and movement as well as to model motor strategies for differing gravity environments. The overall goal of this research program is to improve astronaut performance and efficiency through the use of rigorous quantitative dynamic analysis, simulation and experimentation. The MICRO-G research effort provides a modular, kinetic and kinematic capability for the ISS. The collection and evaluation of kinematics (whole-body motion) and dynamics (reacting forces and torques) of astronauts within the ISS will allow for quantification of human motion and performance in weightlessness, gathering fundamental human factors information for design, scientific investigation in the field of dynamics and motor control, technological assessment of microgravity disturbances, and the design of miniaturized, real-time space systems. The proposed research effort builds on a strong foundation of successful microgravity experiments, namely the EDLS (Enhanced Dynamic Load Sensors) flown aboard the Russian Mir space station (1996-1998) and the DLS (Dynamic Load Sensors) flown on Space Shuttle Mission STS-62. In addition, previously funded NASA ground-based research into sensor technology development and development of algorithms to produce three-dimensional (3-D) kinematics from video images has come to fruition, and these efforts culminate in the proposed collaborative MICRO-G flight experiment. The required technology and hardware capitalize on previous sensor design, fabrication, and testing and can be flight qualified for a fraction of the cost of an initial spaceflight experiment. Four dynamic load sensors/restraints are envisioned for measurement of astronaut forces and torques. Two standard ISS video cameras record typical astronaut operations and prescribed IVA motions for 3-D kinematics. Forces and kinematics are combined for dynamic analysis of astronaut motion, exploiting the results of the detailed dynamic modeling effort for the quantitative verification of astronaut IVA performance, induced loads, and adaptive control strategies for crewmember whole-body motion in microgravity. This comprehensive effort provides an enhanced human factors approach based on physics-based modeling to identify adaptive performance during long-duration spaceflight, which is critically important for astronaut training as well as providing a spaceflight database to drive countermeasure design.
Design and analysis of an underactuated anthropomorphic finger for upper limb prosthetics.
Omarkulov, Nurdos; Telegenov, Kuat; Zeinullin, Maralbek; Begalinova, Ainur; Shintemirov, Almas
2015-01-01
This paper presents the design of a linkage-based finger mechanism ensuring an extended range of anthropomorphic gripping motions. The finger design uses a path-point generation method based on the geometrical dimensions and motion of a typical human index finger. Following the design description and its kinematics analysis, the experimental evaluation of the finger gripping performance is presented using a 3D printed prototype of the finger. The finger underactuation is achieved by utilizing a mechanical linkage system consisting of two crossed four-bar linkage mechanisms. It is shown that the proposed finger design can be used to design a five-fingered anthropomorphic hand and has potential for upper limb prostheses development.
A Subject-Specific Kinematic Model to Predict Human Motion in Exoskeleton-Assisted Gait.
Torricelli, Diego; Cortés, Camilo; Lete, Nerea; Bertelsen, Álvaro; Gonzalez-Vargas, Jose E; Del-Ama, Antonio J; Dimbwadyo, Iris; Moreno, Juan C; Florez, Julian; Pons, Jose L
2018-01-01
The relative motion between human and exoskeleton is a crucial factor that has remarkable consequences on the efficiency, reliability and safety of human-robot interaction. Unfortunately, its quantitative assessment has been largely overlooked in the literature. Here, we present a methodology that allows predicting the motion of the human joints from the knowledge of the angular motion of the exoskeleton frame. Our method combines a subject-specific skeletal model with a kinematic model of a lower limb exoskeleton (H2, Technaid), imposing specific kinematic constraints between them. To calibrate the model and validate its ability to predict the relative motion in a subject-specific way, we performed experiments on seven healthy subjects during treadmill walking tasks. We demonstrate a prediction accuracy lower than 3.5° globally, and around 1.5° at the hip level, which represent an improvement up to 66% compared to the traditional approach assuming no relative motion between the user and the exoskeleton.
Thaler, Lore; Todd, James T; Spering, Miriam; Gegenfurtner, Karl R
2007-04-20
Four experiments in which observers judged the apparent "rubberiness" of a line segment undergoing different types of rigid motion are reported. The results reveal that observers perceive illusory bending when the motion involves certain combinations of translational and rotational components and that the illusion is maximized when these components are presented at a frequency of approximately 3 Hz with a relative phase angle of approximately 120 degrees. Smooth pursuit eye movements can amplify or attenuate the illusion, which is consistent with other results reported in the literature that show effects of eye movements on perceived image motion. The illusion is unaffected by background motion that is in counterphase with the motion of the line segment but is significantly attenuated by background motion that is in-phase. This is consistent with the idea that human observers integrate motion signals within a local frame of reference, and it provides strong evidence that visual persistency cannot be the sole cause of the illusion as was suggested by J. R. Pomerantz (1983). An analysis of the motion patterns suggests that the illusory bending motion may be due to an inability of observers to accurately track the motions of features whose image displacements undergo rapid simultaneous changes in both space and time. A measure of these changes is presented, which is highly correlated with observers' numerical ratings of rubberiness.
Burnecki, Krzysztof; Kepten, Eldad; Janczura, Joanna; Bronshtein, Irena; Garini, Yuval; Weron, Aleksander
2012-01-01
We present a systematic statistical analysis of the recently measured individual trajectories of fluorescently labeled telomeres in the nucleus of living human cells. The experiments were performed in the U2OS cancer cell line. We propose an algorithm for identification of the telomere motion. By expanding the previously published data set, we are able to explore the dynamics in six time orders, a task not possible earlier. As a result, we establish a rigorous mathematical characterization of the stochastic process and identify the basic mathematical mechanisms behind the telomere motion. We find that the increments of the motion are stationary, Gaussian, ergodic, and even more chaotic—mixing. Moreover, the obtained memory parameter estimates, as well as the ensemble average mean square displacement reveal subdiffusive behavior at all time spans. All these findings statistically prove a fractional Brownian motion for the telomere trajectories, which is confirmed by a generalized p-variation test. Taking into account the biophysical nature of telomeres as monomers in the chromatin chain, we suggest polymer dynamics as a sufficient framework for their motion with no influence of other models. In addition, these results shed light on other studies of telomere motion and the alternative telomere lengthening mechanism. We hope that identification of these mechanisms will allow the development of a proper physical and biological model for telomere subdynamics. This array of tests can be easily implemented to other data sets to enable quick and accurate analysis of their statistical characteristics. PMID:23199912
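The subdiffusion test described here rests on the ensemble-averaged mean square displacement and its scaling exponent. Below is a minimal sketch of that step on synthetic stand-in data; the published p-variation and memory-parameter tests are not reproduced, and all parameters are illustrative.

```python
import numpy as np

def ensemble_msd(trajectories, max_lag):
    """Ensemble-averaged mean square displacement for a set of 2-D
    trajectories, shape (n_tracks, n_frames, 2)."""
    lags = np.arange(1, max_lag + 1)
    msd = np.empty(max_lag)
    for i, lag in enumerate(lags):
        disp = trajectories[:, lag:, :] - trajectories[:, :-lag, :]
        msd[i] = np.mean(np.sum(disp ** 2, axis=-1))
    return lags, msd

def anomalous_exponent(lags, msd):
    """Slope of log MSD vs log lag; alpha < 1 indicates subdiffusion,
    consistent with fractional Brownian motion with Hurst index alpha / 2."""
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

# Synthetic stand-in data (real input would be telomere tracks from microscopy).
rng = np.random.default_rng(0)
tracks = np.cumsum(rng.normal(size=(50, 500, 2)), axis=1)  # ordinary BM -> alpha ~ 1
lags, msd = ensemble_msd(tracks, max_lag=50)
print("alpha =", anomalous_exponent(lags, msd))
```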
Llamas, César; González, Manuel A; Hernández, Carmen; Vegas, Jesús
2016-10-01
Nearly every practical improvement in modeling human motion is founded on a properly designed collection of data or datasets. These datasets must be made publicly available so that the community can validate and accept them. It is reasonable to concede that a collective, guided enterprise could serve to devise solid and substantial datasets as a result of a collaborative effort, in the same sense as the open software community does. In this way datasets could be complemented, extended and expanded in size with, for example, more individuals, samples and human actions. For this to be possible, some commitments must be made by the collaborators, one of them being to share the same data acquisition platform. In this paper, we offer an affordable open source hardware and software platform based on inertial wearable sensors so that several groups can cooperate in the construction of datasets through common software suitable for collaboration. Some experimental results about the throughput of the overall system are reported, showing the feasibility of acquiring data from up to 6 sensors with a sampling frequency of no less than 118 Hz. Also, a proof-of-concept dataset is provided, comprising sampled data from 12 subjects suitable for gait analysis.
Window of visibility - A psychophysical theory of fidelity in time-sampled visual motion displays
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.; Farrell, J. E.
1986-01-01
A film of an object in motion presents on the screen a sequence of static views, while the human observer sees the object moving smoothly across the screen. Questions related to the perceptual identity of continuous and stroboscopic displays are examined. Time-sampled moving images are considered along with the contrast distribution of continuous motion, the contrast distribution of stroboscopic motion, the frequency spectrum of continuous motion, the frequency spectrum of stroboscopic motion, the approximation of the limits of human visual sensitivity to spatial and temporal frequencies by a window of visibility, the critical sampling frequency, the contrast distribution of staircase motion and the frequency spectrum of this motion, and the spatial dependence of the critical sampling frequency. Attention is given to apparent motion, models of motion, image recording, and computer-generated imagery.
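Assuming a rectangular window of visibility bounded by a spatial corner frequency u_l and a temporal corner frequency w_l, a commonly quoted approximation is that sampling becomes invisible once the sample rate exceeds f_c = w_l + r*u_l for a target moving at speed r. The relation and the numerical limits below are assumptions stated for illustration, not quoted from the paper.

```python
def critical_sampling_rate(speed_deg_per_s, u_limit_cpd=30.0, w_limit_hz=30.0):
    """Sampling rate (frames/s) above which the lowest temporal alias of a
    target moving at the given speed falls outside an assumed rectangular
    window of visibility bounded by u_limit (cycles/deg) and w_limit (Hz)."""
    return w_limit_hz + speed_deg_per_s * u_limit_cpd

# e.g. a target drifting at 2 deg/s under the assumed 30 cpd / 30 Hz limits:
print(critical_sampling_rate(2.0))   # 90 frames per second
```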
NASA Technical Reports Server (NTRS)
Beutter, B. R.; Mulligan, J. B.; Stone, L. S.; Hargens, Alan R. (Technical Monitor)
1995-01-01
We have shown that moving a plaid in an asymmetric window biases the perceived direction of motion (Beutter, Mulligan & Stone, ARVO 1994). We now explore whether these biased motion signals might also drive the smooth eye-movement response by comparing the perceived and tracked directions. The human smooth oculomotor response to moving plaids appears to be driven by the perceived rather than the veridical direction of motion. This suggests that human motion perception and smooth eye movements share underlying neural motion-processing substrates as has already been shown to be true for monkeys.
Hayakawa, Tomohiro; Kunihiro, Takeshi; Ando, Tomoko; Kobayashi, Seiji; Matsui, Eriko; Yada, Hiroaki; Kanda, Yasunari; Kurokawa, Junko; Furukawa, Tetsushi
2014-12-01
In this study, we used high-speed video microscopy with motion vector analysis to investigate the contractile characteristics of hiPS-CM monolayers, and to further characterize the motion with the extracellular field potential (FP), traction force, and the Ca²⁺ transient. Results of our traction force microscopy demonstrated that the force development of hiPS-CMs correlated well with the cellular deformation detected by video microscopy with motion vector analysis. In the presence of verapamil and isoproterenol, the contractile motion of hiPS-CMs changed in accordance with the changes in the fluorescence peak of the Ca²⁺ transient, i.e., upstroke, decay, amplitude and full-width at half-maximum. Simultaneously recorded hiPS-CM motion and FP showed that there was a linear correlation between changes in the motion and the field potential duration in response to verapamil (30-150 nM), isoproterenol (0.1-10 μM) and E-4031 (10-50 nM). In addition, the delay of the sodium current induced by tetrodotoxin (3-30 μM) corresponded with the delay of contraction onset of the hiPS-CMs. These results indicate that the electrophysiological and functional behaviors of hiPS-CMs are quantitatively reflected in the contractile motion detected by this image-based technique. In the presence of 100 nM E-4031, the occurrence of early afterdepolarization-like negative deflections in the FP was also detected in the hiPS-CM motion as a characteristic two-step relaxation pattern. These findings offer insights into the interpretation of the motion kinetics of hiPS-CMs, and are relevant for understanding the electrical and mechanical relationship in hiPS-CMs.
Motion synthesis and force distribution analysis for a biped robot.
Trojnacki, Maciej T; Zielińska, Teresa
2011-01-01
In this paper, a method of generating biped robot motion using recorded human gait is presented. The recorded data were modified taking into account the velocities available for the robot drives. The data include only selected joint angles; therefore, the missing values were obtained considering the dynamic postural stability of the robot, which means obtaining an adequate motion trajectory of the so-called Zero Moment Point (ZMP). Also, a method of determining the distribution of ground reaction forces during the biped robot's dynamically stable walk is described. The method was developed by the authors. Following the description of the equations characterizing the dynamics of the robot's motion, the values of the components of the ground reaction forces were determined symbolically, as well as the coordinates of the points of contact between the robot's feet and the ground. The theoretical considerations have been supported by computer simulation and animation of the robot's motion. This was done using the Matlab/Simulink package and the Simulink 3D Animation Toolbox, and it validated the proposed method.
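For reference, the point-mass form of the ZMP used in such stability checks can be written down in a few lines. The sketch below is a generic textbook formulation with made-up link data, not the authors' implementation; link rotational inertia terms are neglected.

```python
import numpy as np

def zmp_xy(masses, pos, acc, g=9.81):
    """Zero Moment Point of a multi-mass model on flat ground (point-mass
    approximation; link angular momentum terms neglected).
    pos, acc: arrays of shape (n_links, 3) with columns (x, y, z)."""
    masses = np.asarray(masses, dtype=float)
    x, y, z = pos[:, 0], pos[:, 1], pos[:, 2]
    ax, ay, az = acc[:, 0], acc[:, 1], acc[:, 2]
    denom = np.sum(masses * (az + g))
    x_zmp = np.sum(masses * ((az + g) * x - ax * z)) / denom
    y_zmp = np.sum(masses * ((az + g) * y - ay * z)) / denom
    return x_zmp, y_zmp

# Two-link toy example: the ZMP must stay inside the support polygon of the feet
# for the recorded (and rescaled) human gait to be dynamically stable on the robot.
m = [3.0, 7.0]
pos = np.array([[0.05, 0.0, 0.3], [0.02, 0.0, 0.8]])
acc = np.array([[0.5, 0.0, -0.1], [1.0, 0.0, 0.2]])
print(zmp_xy(m, pos, acc))
```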
Vestibular models for design and evaluation of flight simulator motion
NASA Technical Reports Server (NTRS)
Bussolari, S. R.; Sullivan, R. B.; Young, L. R.
1986-01-01
The use of spatial orientation models in the design and evaluation of control systems for motion-base flight simulators is investigated experimentally. The development of a high-fidelity motion drive controller using an optimal control approach based on human vestibular models is described. The formulation and implementation of the optimal washout system are discussed. The effectiveness of the motion washout system was evaluated by studying the response of six motion washout systems to the NASA/AMES Vertical Motion Simulator for a single dash-quick-stop maneuver. The effects of the motion washout system on pilot performance and simulator acceptability are examined. The data reveal that human spatial orientation models are useful for the design and evaluation of flight simulator motion fidelity.
Rogers' pattern manifestations and health in adolescents.
Yarcheski, A; Mahon, N E
1995-08-01
The purpose of this exploratory study was to examine four manifestations of human-environmental field patterning--human field motion, human field rhythms, creativity, and sentience--in relation to perceived health status in 106 early, 111 middle, and 113 late adolescents. Participants responded to the Perceived Field Motion Instrument (a measure of human field motion), the Human Field Rhythms Scale, the Sentience Scale, the General Health Rating Index (a measure of perceived health status), and a brief demographic data sheet in classroom settings. Data were analyzed using Pearson correlations. Statistically significant positive correlations were found between perceived field motion and perceived health status in early, middle, and late adolescents, between human field rhythms and perceived health status in late adolescents only, and between creativity and perceived health status in late adolescents only. The inverse relationship found between sentience and perceived health status in early, middle, and late adolescents was not statistically significant. The findings are interpreted within a Rogerian framework.
Kinematics and Dynamics of Motion Control Based on Acceleration Control
NASA Astrophysics Data System (ADS)
Ohishi, Kiyoshi; Ohba, Yuzuru; Katsura, Seiichiro
The first IEEE International Workshop on Advanced Motion Control, held in 1990, pointed out the importance of physical interpretation of motion control. Software servoing technology is now common in machine tools, robotics, and mechatronics, and has been intensively developed for numerical control (NC) machines. Recently, motion control in unknown environments has become more and more important. Conventional motion control is not always suitable due to its lack of adaptive capability to the environment. A more sophisticated ability in motion control is necessary for compliant contact with the environment. Acceleration control is the key technology of motion control in unknown environments. Acceleration control can make a motion system a zero-control-stiffness system without losing robustness. Furthermore, a realization of multi-degree-of-freedom motion is necessary for future human assistance. A human-assistance motion will require various control stiffnesses corresponding to the task. This review paper focuses on the modal coordinate system used to integrate the various control stiffnesses in the virtual axes. Bilateral teleoperation is a good candidate for considering future human-assistance motion and the integration of decentralized systems. Thus the paper reviews and discusses bilateral teleoperation from the viewpoints of control stiffness and modal control design.
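As a hedged sketch of the modal idea, the block below assigns different control stiffnesses to the common and differential modes of a bilateral master-slave pair: the differential mode is position-controlled (high stiffness), the common mode is force-controlled (zero stiffness). The transform is the standard quarry/quad form; all gains and the function interface are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Modal transform for a 1-DOF bilateral pair: common mode = force servoing,
# differential mode = position tracking.
T = 0.5 * np.array([[1.0,  1.0],
                    [1.0, -1.0]])

def modal_acceleration_refs(x_m, x_s, v_m, v_s, f_m, f_s,
                            Kp=900.0, Kd=60.0, Cf=1.0):
    """Acceleration references in modal coordinates (gains are illustrative)."""
    x_com, x_dif = T @ np.array([x_m, x_s])
    v_com, v_dif = T @ np.array([v_m, v_s])
    f_com, _     = T @ np.array([f_m, f_s])
    a_dif = -Kp * x_dif - Kd * v_dif      # position tracking: x_m -> x_s (stiff axis)
    a_com = -Cf * f_com                   # action-reaction: f_m + f_s -> 0 (zero stiffness)
    a_m, a_s = np.linalg.inv(T) @ np.array([a_com, a_dif])
    return a_m, a_s   # each side then tracks its reference with acceleration control

print(modal_acceleration_refs(x_m=0.10, x_s=0.08, v_m=0.0, v_s=0.0, f_m=2.0, f_s=-1.5))
```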
Human-like object tracking and gaze estimation with PKD android
NASA Astrophysics Data System (ADS)
Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.
2016-05-01
As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.
Motion cue effects on human pilot dynamics in manual control
NASA Technical Reports Server (NTRS)
Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.
1977-01-01
Two experiments were conducted to study the effects of motion cues on human pilots during tracking tasks. The moving-base simulator of the National Aerospace Laboratory was employed as the motion cue device, and the attitude director indicator or a projected visual field was employed as the visual cue device. The chosen controlled elements were second-order unstable systems. It was confirmed that with the aid of motion cues the pilot workload was lessened and consequently the human controllability limits were enlarged. In order to clarify the mechanism of these effects, the describing functions of the human pilots were identified by making use of spectral and time-domain analyses. The results of these analyses suggest that the sensory system for motion cues can effectively yield differential information about the signal, which coincides with existing knowledge in the physiological area.
LaPrè, A K; Price, M A; Wedge, R D; Umberger, B R; Sup, Frank C
2018-04-01
Musculoskeletal modeling and marker-based motion capture techniques are commonly used to quantify the motions of body segments, and the forces acting on them during human gait. However, when these techniques are applied to analyze the gait of people with lower limb loss, the clinically relevant interaction between the residual limb and prosthesis socket is typically overlooked. It is known that there is considerable motion and loading at the residuum-socket interface, yet traditional gait analysis techniques do not account for these factors due to the inability to place tracking markers on the residual limb inside of the socket. In the present work, we used a global optimization technique and anatomical constraints to estimate the motion and loading at the residuum-socket interface as part of standard gait analysis procedures. We systematically evaluated a range of parameters related to the residuum-socket interface, such as the number of degrees of freedom, and determined the configuration that yields the best compromise between faithfully tracking experimental marker positions while yielding anatomically realistic residuum-socket kinematics and loads that agree with data from the literature. Application of the present model to gait analysis for people with lower limb loss will deepen our understanding of the biomechanics of walking with a prosthesis, which should facilitate the development of enhanced rehabilitation protocols and improved assistive devices. Copyright © 2017 John Wiley & Sons, Ltd.
An examination of the degrees of freedom of human jaw motion in speech and mastication.
Ostry, D J; Vatikiotis-Bateson, E; Gribble, P L
1997-12-01
The kinematics of human jaw movements were assessed in terms of the three orientation angles and three positions that characterize the motion of the jaw as a rigid body. The analysis focused on the identification of the jaw's independent movement dimensions, and was based on an examination of jaw motion paths that were plotted in various combinations of linear and angular coordinate frames. Overall, both behaviors were characterized by independent motion in four degrees of freedom. In general, when jaw movements were plotted to show orientation in the sagittal plane as a function of horizontal position, relatively straight paths were observed. In speech, the slopes and intercepts of these paths varied depending on the phonetic material. The vertical position of the jaw was observed to shift up or down so as to displace the overall form of the sagittal plane motion path of the jaw. Yaw movements were small but independent of pitch, and vertical and horizontal position. In mastication, the slope and intercept of the relationship between pitch and horizontal position were affected by the type of food and its size. However, the range of variation was less than that observed in speech. When vertical jaw position was plotted as a function of horizontal position, the basic form of the path of the jaw was maintained but could be shifted vertically. In general, larger bolus diameters were associated with lower jaw positions throughout the movement. The timing of pitch and yaw motion differed. The most common pattern involved changes in pitch angle during jaw opening followed by a phase predominated by lateral motion (yaw). Thus, in both behaviors there was evidence of independent motion in pitch, yaw, horizontal position, and vertical position. This is consistent with the idea that motions in these degrees of freedom are independently controlled.
Exhibition of stochastic resonance in vestibular tilt motion perception.
Galvan-Garza, R C; Clark, T K; Mulavara, A P; Oman, C M
2018-04-03
Stochastic Resonance (SR) is a phenomenon broadly described as "noise benefit". The application of subsensory electrical Stochastic Vestibular Stimulation (SVS) via electrodes behind each ear has been used to improve human balance and gait, but its effect on motion perception thresholds has not been examined. This study investigated the capability of subsensory SVS to reduce vestibular motion perception thresholds in a manner consistent with a characteristic bell-shaped SR curve. We measured upright, head-centered, roll tilt Direction Recognition (DR) thresholds in the dark in 12 human subjects with the application of wideband 0-30 Hz SVS ranging from ±0-700 μA. To conservatively assess if SR was exhibited, we compared the proportions of both subjective and statistical SR exhibition in our experimental data to proportions of SR exhibition in multiple simulation cases with varying underlying SR behavior. Analysis included individual and group statistics. As there is not an established mathematical definition, three humans subjectively judged that SR was exhibited in 78% of subjects. "Statistically significant SR exhibition", which additionally required that a subject's DR threshold with SVS be significantly lower than baseline (no SVS), was present in 50% of subjects. Both percentages were higher than simulations suggested could occur simply by chance. For SR exhibitors, defined by subjective or statistically significant criteria, the mean DR threshold improved by -30% and -39%, respectively. The largest individual improvement was -47%. At least half of the subjects were better able to perceive passive body motion with the application of subsensory SVS. This study presents the first conclusive demonstration of SR in vestibular motion perception. Copyright © 2018 Elsevier Inc. All rights reserved.
Double-Windows-Based Motion Recognition in Multi-Floor Buildings Assisted by a Built-In Barometer.
Liu, Maolin; Li, Huaiyu; Wang, Yuan; Li, Fei; Chen, Xiuwan
2018-04-01
Accelerometers, gyroscopes and magnetometers in smartphones are often used to recognize human motions. Since it is difficult to distinguish between vertical motions and horizontal motions in the data provided by these built-in sensors, the vertical motion recognition accuracy is relatively low. The emergence of a built-in barometer in smartphones improves the accuracy of motion recognition in the vertical direction. However, there is a lack of quantitative analysis and modelling of the barometer signals, which is the basis of barometer's application to motion recognition, and a problem of imbalanced data also exists. This work focuses on using the barometers inside smartphones for vertical motion recognition in multi-floor buildings through modelling and feature extraction of pressure signals. A novel double-windows pressure feature extraction method, which adopts two sliding time windows of different length, is proposed to balance recognition accuracy and response time. Then, a random forest classifier correlation rule is further designed to weaken the impact of imbalanced data on recognition accuracy. The results demonstrate that the recognition accuracy can reach 95.05% when pressure features and the improved random forest classifier are adopted. Specifically, the recognition accuracy of the stair and elevator motions is significantly improved with enhanced response time. The proposed approach proves effective and accurate, providing a robust strategy for increasing accuracy of vertical motions.
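Below is a minimal sketch of the double-window idea, with assumed window lengths, features, class labels, and classifier settings rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def double_window_features(pressure, fs=10, short_s=2.0, long_s=10.0):
    """Extract features from two sliding windows of different length over a
    barometric pressure stream (hPa): the short window reacts quickly, the
    long window captures sustained floor changes. Window lengths and the
    feature set are illustrative choices."""
    short_n, long_n = int(short_s * fs), int(long_s * fs)
    feats = []
    for end in range(long_n, len(pressure)):
        w_short = pressure[end - short_n:end]
        w_long = pressure[end - long_n:end]
        feats.append([
            w_short[-1] - w_short[0],      # short-term pressure change
            w_long[-1] - w_long[0],        # long-term pressure change
            np.std(w_short),
            np.std(w_long),
        ])
    return np.asarray(feats)

# Hypothetical training call: X built from labelled recordings, y in
# {"still", "walk", "stairs", "elevator"}; class_weight tempers the imbalance
# between rare vertical motions and abundant horizontal ones.
# clf = RandomForestClassifier(n_estimators=200, class_weight="balanced").fit(X, y)
```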
NASA Technical Reports Server (NTRS)
Lee, Mun Wai
2015-01-01
Crew exercise is important during long-duration space flight not only for maintaining health and fitness but also for preventing adverse health problems, such as losses in muscle strength and bone density. Monitoring crew exercise via motion capture and kinematic analysis aids understanding of the effects of microgravity on exercise and helps ensure that exercise prescriptions are effective. Intelligent Automation, Inc., has developed ESPRIT to monitor exercise activities, detect body markers, extract image features, and recover three-dimensional (3D) kinematic body poses. The system relies on prior knowledge and modeling of the human body and on advanced statistical inference techniques to achieve robust and accurate motion capture. In Phase I, the company demonstrated motion capture of several exercises, including walking, curling, and dead lifting. Phase II efforts focused on enhancing algorithms and delivering an ESPRIT prototype for testing and demonstration.
Human torso phantom for imaging of heart with realistic modes of cardiac and respiratory motion
Boutchko, Rostyslav; Balakrishnan, Karthikayan; Gullberg, Grant T; O'Neil, James P
2013-09-17
A human torso phantom and its construction, wherein the phantom mimics respiratory and cardiac cycles in a human allowing acquisition of medical imaging data under conditions simulating patient cardiac and respiratory motion.
Human Activity Modeling and Simulation with High Biofidelity
2013-01-01
Human activity Modeling and Simulation (M&S) plays an important role in simulation-based training and Virtual Reality (VR). However, human activity M...kinematics and motion mapping/creation; and (e) creation and replication of human activity in 3-D space with true shape and motion. A brief review is
Triboelectrification based motion sensor for human-machine interfacing.
Yang, Weiqing; Chen, Jun; Wen, Xiaonan; Jing, Qingshen; Yang, Jin; Su, Yuanjie; Zhu, Guang; Wu, Wenzuo; Wang, Zhong Lin
2014-05-28
We present triboelectrification based, flexible, reusable, and skin-friendly dry biopotential electrode arrays as motion sensors for tracking muscle motion and human-machine interfacing (HMI). The independently addressable, self-powered sensor arrays have been utilized to record the electric output signals as a mapping figure to accurately identify the degrees of freedom as well as directions and magnitude of muscle motions. A fast Fourier transform (FFT) technique was employed to analyse the frequency spectra of the obtained electric signals and thus to determine the motion angular velocities. Moreover, the motion sensor arrays produced a short-circuit current density up to 10.71 mA/m², and an open-circuit voltage as high as 42.6 V with a remarkable signal-to-noise ratio up to 1000, which enables the devices as sensors to accurately record and transform the motions of the human joints, such as elbow, knee, heel, and even fingers, and thus renders it a superior and unique invention in the field of HMI.
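A minimal sketch of the FFT step, assuming a fixed sampling rate and a hypothetical cycles-per-revolution mapping between electrode geometry and joint rotation (both are assumptions, not values from the paper):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Peak of the one-sided amplitude spectrum of an electrode signal
    sampled at fs Hz (DC bin skipped)."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]

def angular_velocity(signal, fs, cycles_per_rev):
    """If one output cycle is generated per electrode pair passed, the joint's
    angular velocity (rad/s) follows from the dominant signal frequency."""
    return 2.0 * np.pi * dominant_frequency(signal, fs) / cycles_per_rev

# e.g. 2 s of data at 1 kHz from a knee-mounted array with an assumed
# 36 cycles per full revolution:
# omega = angular_velocity(signal, fs=1000, cycles_per_rev=36)
```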
Tracking and imaging humans on heterogeneous infrared sensor arrays for law enforcement applications
NASA Astrophysics Data System (ADS)
Feller, Steven D.; Zheng, Y.; Cull, Evan; Brady, David J.
2002-08-01
We present a plan for the integration of geometric constraints in the source, sensor and analysis levels of sensor networks. The goal of geometric analysis is to reduce the dimensionality and complexity of distributed sensor data analysis so as to achieve real-time recognition and response to significant events. Application scenarios include biometric tracking of individuals, counting and analysis of individuals in groups of humans and distributed sentient environments. We are particularly interested in using this approach to provide networks of low cost point detectors, such as infrared motion detectors, with complex imaging capabilities. By extending the capabilities of simple sensors, we expect to reduce the cost of perimeter and site security applications.
How long did it last? You would better ask a human.
Lacquaniti, Francesco; Carrozzo, Mauro; d'Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka
2014-01-01
In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions.
Relative effects of posture and activity on human height estimation from surveillance footage.
Ramstrand, Nerrolyn; Ramstrand, Simon; Brolund, Per; Norell, Kristin; Bergström, Peter
2011-10-10
Height estimations based on security camera footage are often requested by law enforcement authorities. While valid and reliable techniques have been established to determine vertical distances from video frames, there is a discrepancy between a person's true static height and their height as measured when assuming different postures or when in motion (e.g., walking). The aim of the research presented in this report was to accurately record the height of subjects as they performed a variety of activities typically observed in security camera footage and compare results to height recorded using a standard height measuring device. Forty-six able bodied adults participated in this study and were recorded using a 3D motion analysis system while performing eight different tasks. Height measurements captured using the 3D motion analysis system were compared to static height measurements in order to determine relative differences. It is anticipated that results presented in this report can be used by forensic image analysis experts as a basis for correcting height estimations of people captured on surveillance footage. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
IMU-Based Joint Angle Measurement for Gait Analysis
Seel, Thomas; Raisch, Jorg; Schauer, Thomas
2014-01-01
This contribution is concerned with joint angle calculation based on inertial measurement data in the context of human motion analysis. Unlike most robotic devices, the human body lacks even surfaces and right angles. Therefore, we focus on methods that avoid assuming certain orientations in which the sensors are mounted with respect to the body segments. After a review of available methods that may cope with this challenge, we present a set of new methods for: (1) joint axis and position identification; and (2) flexion/extension joint angle measurement. In particular, we propose methods that use only gyroscopes and accelerometers and, therefore, do not rely on a homogeneous magnetic field. We provide results from gait trials of a transfemoral amputee in which we compare the inertial measurement unit (IMU)-based methods to an optical 3D motion capture system. Unlike most authors, we place the optical markers on anatomical landmarks instead of attaching them to the IMUs. Root mean square errors of the knee flexion/extension angles are found to be less than 1° on the prosthesis and about 3° on the human leg. For the plantar/dorsiflexion of the ankle, both deviations are about 1°. PMID:24743160
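A simplified sketch of the gyroscope-only part of such a flexion/extension calculation, assuming the joint axis coordinates in each sensor frame have already been identified; the accelerometer-based corrections needed to suppress integration drift in long recordings are omitted here.

```python
import numpy as np

def flexion_extension_angle(gyr_prox, gyr_dist, j_prox, j_dist, fs):
    """Gyroscope-based flexion/extension angle: project each segment's angular
    rate onto its joint-axis coordinates (j_prox, j_dist, unit vectors in the
    respective sensor frames), take the difference, and integrate.
    gyr_* are (n_samples, 3) arrays in rad/s; the result is in rad,
    relative to the starting pose."""
    rate = gyr_dist @ j_dist - gyr_prox @ j_prox   # relative rate about the hinge axis
    return np.cumsum(rate) / fs

# The joint axes themselves would be identified beforehand from arbitrary
# motions, e.g. by exploiting the hinge constraint between the two gyroscopes.
```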
Centralized Networks to Generate Human Body Motions.
Vakulenko, Sergei; Radulescu, Ovidiu; Morozov, Ivan; Weber, Andres
2017-12-14
We consider continuous-time recurrent neural networks as dynamical models for the simulation of human body motions. These networks consist of a few centers and many satellites connected to them. The centers evolve in time as periodical oscillators with different frequencies. The center states define the satellite neurons' states by a radial basis function (RBF) network. To simulate different motions, we adjust the parameters of the RBF networks. Our network includes a switching module that allows for turning from one motion to another. Simulations show that this model allows us to simulate complicated motions consisting of many different dynamical primitives. We also use the model for learning human body motion from markers' trajectories. We find that center frequencies can be learned from a small number of markers and can be transferred to other markers, such that our technique seems to be capable of correcting for missing information resulting from sparse control marker settings.
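A minimal sketch of the center/satellite architecture with arbitrary frequencies, prototypes, and weights; in practice these quantities would be fitted to recorded marker trajectories, so everything below is illustrative.

```python
import numpy as np

def simulate_centers(freqs, t):
    """Center neurons as periodic oscillators with individual frequencies;
    returns an array of shape (len(t), len(freqs))."""
    return np.sin(2.0 * np.pi * np.outer(t, freqs))

def rbf_satellites(centers, prototypes, weights, sigma=0.5):
    """Satellite (marker) trajectories as an RBF readout of the center state:
    y(t) = sum_k w_k * exp(-||c(t) - p_k||^2 / (2 sigma^2))."""
    d2 = ((centers[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    return phi @ weights            # shape (n_time, n_outputs)

# Illustrative parameters (not fitted to any motion-capture data):
t = np.linspace(0.0, 4.0, 400)
centers = simulate_centers([1.0, 1.5, 0.25], t)             # three centers
prototypes = np.random.default_rng(1).normal(size=(8, 3))   # 8 RBF prototypes
weights = np.random.default_rng(2).normal(size=(8, 2))      # 2 output coordinates
trajectory = rbf_satellites(centers, prototypes, weights)
print(trajectory.shape)
```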
Whole-body patterns of the range of joint motion in young adults: masculine type and feminine type.
Moromizato, Keiichi; Kimura, Ryosuke; Fukase, Hitoshi; Yamaguchi, Kyoko; Ishida, Hajime
2016-10-01
Understanding the whole-body patterns of joint flexibility and their related biological and physical factors contributes not only to clinical assessments but also to the fields of human factors and ergonomics. In this study, ranges of motion (ROMs) at limb and trunk joints of young adults were analysed to understand covariation patterns of different joint motions and to identify factors associated with the variation in ROM. Seventy-eight healthy volunteers (42 males and 36 females) living on Okinawa Island, Japan, were recruited. Passive ROM was measured at multiple joints through the whole body (31 measurements) including the left and right side limbs and trunk. Comparisons between males and females, dominant and non-dominant sides, and antagonistic motions indicated that body structures influence ROMs. In principal component analysis (PCA) on the ROM data, the first principal component (PC1) represented the sex difference and a similar covariation pattern appeared in the analysis within each sex. Multiple regression analysis showed that this component was associated with sex, age, body fat %, iliospinale height, and leg extension strength. The present study identified that there is a spectrum of "masculine" and "feminine" types in the whole-body patterns of joint flexibility. This study also suggested that body proportion and composition, muscle mass and strength, and possibly skeletal structures partly explain such patterns. These results would be important to understand individual variation in susceptibility to joint injuries and diseases and in one's suitable and effective postures and motions.
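A minimal sketch of the standardize-then-PCA step on a hypothetical (subjects x 31 ROM measurements) matrix; variable names and the number of components are assumptions, not the study's code.

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def rom_pca(rom, n_components=3):
    """Standardize the 31 ROM variables and extract the leading principal
    components; the PC1 score can then be examined against sex, age, body
    composition, and strength measures (e.g. via multiple regression)."""
    z = StandardScaler().fit_transform(rom)
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(z)
    return pca.explained_variance_ratio_, scores

# variance_ratio, scores = rom_pca(rom)   # rom: (n_subjects, 31) array in degrees
```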
NASA Astrophysics Data System (ADS)
Vatansever, Sezen; Gümüş, Zeynep H.; Erman, Burak
2016-11-01
K-Ras is the most frequently mutated oncogene in human cancers, but there are still no drugs that directly target it in the clinic. Recent studies utilizing dynamics information show promising results for selectively targeting mutant K-Ras. However, despite extensive characterization, the mechanisms by which K-Ras residue fluctuations transfer allosteric regulatory information remain unknown. Understanding the direction of information flow can provide new mechanistic insights for K-Ras targeting. Here, we present a novel approach, conditional time-delayed correlations (CTC), using the motions of all residue pairs of a protein to predict directionality in the allosteric regulation of the protein fluctuations. Analyzing nucleotide-dependent intrinsic K-Ras motions with the new approach yields predictions that agree with the literature, showing that GTP-binding stabilizes K-Ras motions and leads to residue correlations with relatively long characteristic decay times. Furthermore, our study is the first to identify driver-follower relationships in correlated motions of K-Ras residue pairs, revealing the direction of information flow during allosteric modulation of its nucleotide-dependent intrinsic activity: active K-Ras Switch-II region motions drive Switch-I region motions, while α-helix-3L7 motions control both. Our results provide novel insights for strategies that directly target mutant K-Ras.
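As a hedged sketch of the time-delayed correlation underlying such directionality analyses: the block below computes the plain lagged correlation between two residue-fluctuation series; the conditioning step that defines CTC proper is not reproduced, and the inputs are assumed to be displacement series extracted from a molecular dynamics trajectory.

```python
import numpy as np

def delayed_correlation(x_i, x_j, max_lag):
    """Normalized time-delayed correlation <x_i(t) * x_j(t + tau)> for
    tau = 0..max_lag. An asymmetry between C_ij and C_ji (e.g. slower decay
    or a peak at positive lag) suggests that residue i drives residue j."""
    x_i = (np.asarray(x_i) - np.mean(x_i)) / np.std(x_i)
    x_j = (np.asarray(x_j) - np.mean(x_j)) / np.std(x_j)
    n = len(x_i)
    return np.array([np.mean(x_i[: n - tau] * x_j[tau:])
                     for tau in range(max_lag + 1)])

# x_i, x_j would be fluctuation time series of two residues (e.g. Switch-II
# and Switch-I backbone displacements); compare delayed_correlation(x_i, x_j, L)
# against delayed_correlation(x_j, x_i, L) to judge the direction of flow.
```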
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
The neurophysiology of biological motion perception in schizophrenia
Jahshan, Carol; Wynn, Jonathan K; Mathis, Kristopher I; Green, Michael F
2015-01-01
Introduction: The ability to recognize human biological motion is a fundamental aspect of social cognition that is impaired in people with schizophrenia. However, little is known about the neural substrates of impaired biological motion perception in schizophrenia. In the current study, we assessed event-related potentials (ERPs) to human and nonhuman movement in schizophrenia. Methods: Twenty-four subjects with schizophrenia and 18 healthy controls completed a biological motion task while their electroencephalography (EEG) was simultaneously recorded. Subjects watched clips of point-light animations containing 100%, 85%, or 70% biological motion, and were asked to decide whether the clip resembled human or nonhuman movement. Three ERPs were examined: P1, N1, and the late positive potential (LPP). Results: Behaviorally, schizophrenia subjects identified significantly fewer stimuli as human movement compared to healthy controls in the 100% and 85% conditions. At the neural level, P1 was reduced in the schizophrenia group but did not differ among conditions in either group. There were no group differences in N1 but both groups had the largest N1 in the 70% condition. There was a condition × group interaction for the LPP: healthy controls had a larger LPP to 100% versus 85% and 70% biological motion; there was no difference among conditions in schizophrenia subjects. Conclusions: Consistent with previous findings, schizophrenia subjects were impaired in their ability to recognize biological motion. The EEG results showed that biological motion did not influence the earliest stage of visual processing (P1). Although schizophrenia subjects showed the same pattern of N1 results relative to healthy controls, they were impaired at a later stage (LPP), reflecting a dysfunction in the identification of human form in biological versus nonbiological motion stimuli. PMID:25722951
Future of Mechatronics and Human
NASA Astrophysics Data System (ADS)
Harashima, Fumio; Suzuki, Satoshi
This paper describes the circumstances of mechatronics that sustain human society and introduces the HAM (Human Adaptive Mechatronics) project as one of the research projects aimed at creating new human-machine systems. The key point of HAM is skill: the analysis of skill and the establishment of assist methods that enhance the total performance of the human-machine system are the main research concerns. Because the study of skill is in essence an elucidation of the human itself, analyses of higher human functions are significant. In this paper, after surveying research on human brain functions, an experimental analysis of human characteristics in machine operation is shown as one example of our research activities. We used a hovercraft simulator as a verification system involving the observation, voluntary motion control, and machine operation required in general machine operation. The process of, and the factors involved in, becoming skilled were investigated by identifying human control characteristics together with measurement of the operator's line of sight. It was confirmed that early switching of sub-controllers and reference signals within the human, and enhanced spatial perception, are significant.
NASA Astrophysics Data System (ADS)
Kaida, Yukiko; Murakami, Toshiyuki
A wheelchair is an important mobility aid for people with disabilities. Power-assist motion in an electric wheelchair expands the operator's field of activities. This paper describes force-sensorless detection of human input torque. A reaction torque estimation observer first calculates the total disturbance torque; the human input torque is then extracted from the estimated disturbance. In power-assist motion, the assist torque is synthesized as the product of an assist gain and the average of the right and left input torques. Finally, the proposed method is verified through power-assist motion experiments.
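A minimal sketch of the assist law described above, assuming a simple first-order low-pass filter as a stand-in for the reaction torque estimation observer; the filter constant, the assist gain, and the simulated disturbance torques are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the power-assist law: human input torque on each wheel is taken from a
# disturbance-torque estimate, and the assist torque is the assist gain times the
# average of the right and left inputs. The low-pass filter standing in for the
# reaction torque observer and all numeric constants are illustrative only.
import numpy as np

def lowpass(x, alpha):
    """First-order low-pass filter used as a crude disturbance-estimate smoother."""
    y = np.zeros_like(x)
    for k in range(1, len(x)):
        y[k] = alpha * x[k] + (1.0 - alpha) * y[k - 1]
    return y

dt = 0.001
t = np.arange(0.0, 5.0, dt)
# Hypothetical raw disturbance-torque estimates (human push plus noise) per wheel [Nm]
tau_dist_R = 2.0 * np.sin(2 * np.pi * 0.5 * t).clip(min=0) + 0.1 * np.random.randn(len(t))
tau_dist_L = 1.8 * np.sin(2 * np.pi * 0.5 * t).clip(min=0) + 0.1 * np.random.randn(len(t))

tau_human_R = lowpass(tau_dist_R, alpha=0.02)   # extracted human input torque, right wheel
tau_human_L = lowpass(tau_dist_L, alpha=0.02)   # extracted human input torque, left wheel

K_assist = 1.5                                  # assist gain (assumed)
tau_assist = K_assist * 0.5 * (tau_human_R + tau_human_L)

print(f"peak assist torque: {tau_assist.max():.2f} Nm")
```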
A Feasibility Study of View-independent Gait Identification
2012-03-01
For walking, the footprint records for single pixels form clusters that are well separated in space and time. Cited works include Cheng M-H, Ho M-F & Huang C-L (2008), "Gait Analysis for Human Identification Through Manifold Learning and HMM", and Moeslund T B & Granum E (2001), "A Survey of Computer Vision-Based Human Motion Capture".
NASA Astrophysics Data System (ADS)
Jeon, S. M.; Jang, G. H.; Choi, H. C.; Park, S. H.; Park, J. O.
2012-04-01
Different magnetic navigation systems (MNSs) have been investigated for the wireless manipulation of microrobots in human blood vessels. Here we propose an MNS and a methodology for generating both precise helical and translational motions of a microrobot to improve its maneuverability in complex human blood vessels. We then present experiments demonstrating the helical and translational motions of a spiral-type microrobot to verify the proposed MNS.
Methodology for estimating human perception to tremors in high-rise buildings
NASA Astrophysics Data System (ADS)
Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien
2017-07-01
Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.
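A sketch of how the proposed intensity parameter and a predictive model of the stated form might be evaluated; the response spectrum shape and the logistic-model coefficients (b0, b1) are placeholders, not the fitted models of the paper.

```python
# Sketch of the intensity measure (average response spectrum intensity over the
# 0.1-2.0 s period band) and a placeholder predictive model for the percentage of
# occupants perceiving tremors. Spectrum values and coefficients are assumptions.
import numpy as np

def average_spectrum_intensity(periods, sa, t_min=0.1, t_max=2.0):
    """Average spectral acceleration over the period band t_min..t_max [s]."""
    band = (periods >= t_min) & (periods <= t_max)
    return np.trapz(sa[band], periods[band]) / (periods[band][-1] - periods[band][0])

# Hypothetical 5%-damped response spectrum of a recorded ground motion
periods = np.linspace(0.02, 4.0, 200)               # s
sa = 0.02 * np.exp(-(np.log(periods / 0.5)) ** 2)   # g, placeholder spectral shape

im = average_spectrum_intensity(periods, sa)

# Placeholder logistic model: fraction of high-rise occupants perceiving the tremor
b0, b1 = -4.0, 600.0                                # hypothetical regression coefficients
perceived_fraction = 1.0 / (1.0 + np.exp(-(b0 + b1 * im)))
print(f"intensity measure = {im:.4f} g, predicted perception = {100*perceived_fraction:.1f}%")
```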
Development of a Mandibular Motion Simulator for Total Joint Replacement
Celebi, Nukhet; Rohner, E. Carlos; Gateno, Jaime; Noble, Philip C.; Ismaily, Sabir K.; Teichgraeber, John F.; Xia, James J.
2015-01-01
Purpose: The purpose of this study was to develop a motion simulator capable of recreating and recording the full range of mandibular motions in a cadaveric preparation for an intact temporomandibular joint (TMJ) and after total joint replacement. Material and Methods: A human cadaver head was used. Two sets of tracking balls were attached to the forehead and mandible, respectively. Computed tomographic (CT) scan was performed and 3-dimensional CT models of the skull were generated. The cadaver head was then dissected to attach the muscle activation cables and mounted onto the TMJ simulator. Realistic jaw motions were generated through the application of the following muscle forces: lateral pterygoid muscle, suprahyoid depressors (geniohyoid, mylohyoid, and digastric muscles), and elevator muscles. To simulate muscle contraction, cables were inserted into the mandible at the center area of each muscle's attachment. To provide a minimum mouth closing force at the initial position, the elevator muscles were combined at the anterior mandible. During mandibular movement, each motion was recorded using a high-resolution laser scanner. The right TMJ of the same head was reconstructed with a total TMJ prosthesis. The same forces were applied and the jaw motions were recorded again. CT scan was performed and 3-dimensional CT models of the skull with TMJ prosthesis were generated. Results: Mandibular motions, before and after TMJ replacement, with and without lateral pterygoid muscle reattachment, were re-created in a cadaveric preparation. The laser-scanned data during the mandibular motion were used to drive 3-dimensional CT models. A movie for each mandibular motion was subsequently created for motion path analysis. Compared with mandibular motion before TMJ replacement, mandibular lateral and protrusive motions after TMJ replacement, with and without lateral pterygoid muscle reattachment, were greatly limited. The jaw motion recorded before total joint replacement was applied to the mandibular and prostheses models after total TMJ replacement. The condylar component was observed sinking into the fossa during jaw motion. Conclusion: A motion simulator capable of re-creating and recording the full range of mandibular motions in a cadaveric preparation has been developed. It can be used to simulate mandibular motions for the intact TMJ and total joint prosthesis, and to re-create and record their full range of mandibular motions. In addition, the full range of the recorded motion can be re-created as motion images in a computer. These images can be used for motion path analysis and to study the causation of limited range of motion after total joint replacement and strategies for improvement. PMID:21050636
A New Approach for Human Forearm Motion Assist by Actuated Artificial Joint-An Inner Skeleton Robot
NASA Astrophysics Data System (ADS)
Kundu, Subrata Kumar; Kiguchi, Kazuo; Teramoto, Kenbu
In order to help the physical activities of elderly or physically disabled persons, we propose a new concept of a power-assist inner skeleton robot (i.e., an actuated artificial joint) intended to assist human daily-life motion from inside the human body. This paper presents an implantable 2-degree-of-freedom (DOF) inner skeleton robot designed to assist human elbow flexion-extension motion and forearm supination-pronation motion for daily-life activities. We have developed a prototype of the inner skeleton robot that assists motion from inside the body and acts as an actuated artificial joint. The proposed system is controlled based on the activation patterns of the electromyogram (EMG) signals of the user's muscles by applying a fuzzy-neuro control method. A joint actuator with an angular position sensor is designed for the inner skeleton robot, and a T-Mechanism is proposed to keep the bone arrangement similar to normal human articulation after elbow arthroplasty. The effectiveness of the proposed system has been evaluated by experiment.
Decreased reward value of biological motion among individuals with autistic traits.
Williams, Elin H; Cross, Emily S
2018-02-01
The Social Motivation Theory posits that a reduced sensitivity to the value of social stimuli, specifically faces, can account for social impairments in Autism Spectrum Disorders (ASD). Research has demonstrated that typically developing (TD) individuals preferentially orient towards another type of salient social stimulus, namely biological motion. Individuals with ASD, however, do not show this preference. While the reward value of faces to both TD and ASD individuals has been well-established, the extent to which individuals from these populations also find human motion to be rewarding remains poorly understood. The present study investigated the value assigned to biological motion by TD participants in an effort task, and further examined whether these values differed among individuals with more autistic traits. The results suggest that TD participants value natural human motion more than rigid, machine-like motion or non-human control motion, but this preference is attenuated among individuals reporting more autistic traits. This study provides the first evidence to suggest that individuals with more autistic traits find a broader conceptualisation of social stimuli less rewarding compared to individuals with fewer autistic traits. By quantifying the social reward value of human motion, the present findings contribute an important piece to our understanding of social motivation in individuals with and without social impairments. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
The Responsiveness of Biological Motion Processing Areas to Selective Attention Towards Goals
Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert
2012-01-01
A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas, particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. PMID:22796987
Burnecki, Krzysztof; Kepten, Eldad; Janczura, Joanna; Bronshtein, Irena; Garini, Yuval; Weron, Aleksander
2012-11-07
We present a systematic statistical analysis of the recently measured individual trajectories of fluorescently labeled telomeres in the nucleus of living human cells. The experiments were performed in the U2OS cancer cell line. We propose an algorithm for identification of the telomere motion. By expanding the previously published data set, we are able to explore the dynamics over six orders of magnitude in time, a task not possible earlier. As a result, we establish a rigorous mathematical characterization of the stochastic process and identify the basic mathematical mechanisms behind the telomere motion. We find that the increments of the motion are stationary, Gaussian, ergodic, and, more strongly, mixing. Moreover, the obtained memory parameter estimates, as well as the ensemble average mean square displacement, reveal subdiffusive behavior at all time spans. All these findings statistically prove a fractional Brownian motion for the telomere trajectories, which is confirmed by a generalized p-variation test. Taking into account the biophysical nature of telomeres as monomers in the chromatin chain, we suggest polymer dynamics as a sufficient framework for their motion, without the need for other models. In addition, these results shed light on other studies of telomere motion and the alternative telomere lengthening mechanism. We hope that identification of these mechanisms will allow the development of a proper physical and biological model for telomere subdynamics. This array of tests can be easily applied to other data sets to enable quick and accurate analysis of their statistical characteristics. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
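The subdiffusion test via the ensemble-averaged mean square displacement can be sketched as follows; the trajectories here are ordinary random walks (so the fitted exponent comes out near 1), standing in for the measured telomere tracks, for which the same fit would yield an exponent below 1.

```python
# Sketch of the ensemble-averaged MSD analysis used to detect subdiffusion.
# For simplicity the tracks below are ordinary Brownian walks; subdiffusive
# (fBm-like) telomere tracks would give a fitted exponent below 1.
import numpy as np

def ensemble_msd(tracks, max_lag):
    """Ensemble-averaged mean square displacement for lags 1..max_lag."""
    msd = []
    for lag in range(1, max_lag + 1):
        sq = [np.mean((x[lag:] - x[:-lag]) ** 2) for x in tracks]
        msd.append(np.mean(sq))
    return np.array(msd)

rng = np.random.default_rng(0)
tracks = [np.cumsum(rng.standard_normal(2000)) for _ in range(50)]  # 1-D random walks

lags = np.arange(1, 101)
msd = ensemble_msd(tracks, max_lag=100)

# MSD ~ lag**alpha: alpha ~ 1 for normal diffusion, alpha < 1 for subdiffusion.
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(f"estimated MSD exponent alpha = {alpha:.2f}")
```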
Dutta, Debaditya; Mahmoud, Ahmed M.; Leers, Steven A.; Kim, Kang
2013-01-01
Large lipid pools in vulnerable plaques can, in principle, be detected using ultrasound-based thermal strain imaging (US-TSI). One practical challenge for in vivo cardiovascular application of US-TSI is that the thermal strain is masked by the mechanical strain caused by cardiac pulsation. ECG gating is a widely adopted method for cardiac motion compensation, but it is often susceptible to electrical and physiological noise. In this paper, we present an alternative time-series analysis approach to separate thermal strain from mechanical strain without using ECG. The performance and feasibility of the time-series analysis technique were tested via numerical simulation as well as in vitro water tank experiments using a vessel-mimicking phantom and an excised human atherosclerotic artery, with cardiac pulsation simulated by a pulsatile pump. PMID:24808628
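The abstract does not spell out the specific time-series technique, so the following is only a generic illustration of the separation problem: a slow thermal strain drift is recovered from a strain time series dominated by periodic cardiac strain using a low-pass filter. The signals, sampling rate, and cutoff are assumptions.

```python
# Generic illustration of separating a slow (thermal) strain drift from a periodic
# (cardiac) strain component by low-pass filtering. This is not the specific
# time-series technique of the paper; signals and cutoff frequency are assumed.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                      # strain frames per second (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)

thermal = 0.002 * t                             # slow thermal strain ramp during heating
cardiac = 0.01 * np.sin(2 * np.pi * 1.2 * t)    # pulsation-induced strain (~72 bpm)
measured = thermal + cardiac + 0.001 * np.random.randn(len(t))

b, a = butter(2, 0.2 / (fs / 2), btype="low")   # 0.2 Hz cutoff, well below heart rate
thermal_est = filtfilt(b, a, measured)

err = np.max(np.abs(thermal_est[200:-200] - thermal[200:-200]))  # ignore filter edges
print(f"max error of recovered thermal strain (interior samples): {err:.4f}")
```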
Orion Multi-Purpose Crew Vehicle Solving and Mitigating the Two Main Cluster Pendulum Problem
NASA Technical Reports Server (NTRS)
Ali, Yasmin; Sommer, Bruce; Troung, Tuan; Anderson, Brian; Madsen, Christopher
2017-01-01
The Orion Multi-Purpose Crew Vehicle (MPCV) spacecraft will return humans from beyond Earth orbit, including from Mars, and will be required to land 20,000 pounds of mass safely in the ocean. The parachute system nominally lands the vehicle under 3 main parachutes, but the system is designed to be fault tolerant and land under 2 main parachutes. During several of the parachute development tests, it was observed that a pendulum, or swinging, motion could develop while the Crew Module (CM) was descending under two parachutes. This pendulum effect had not been previously predicted by modeling. Landing impact analysis showed that the landing loads would double in some places across the spacecraft. The CM structural design limits would be exceeded upon landing if this pendulum motion were to occur. The Orion descent and landing team was faced with potentially millions of dollars in structural modifications and a severe mass increase. A multidisciplinary team was formed to determine root cause, model the pendulum motion, study alternative canopy planforms, and assess alternative vehicle controls and operations, providing mitigation options that result in a reliability level deemed safe for human spaceflight. The problem and its solution balance the risk of a known approach against the opportunity to improve landing performance for the next human-rated spacecraft.
21 CFR 892.1540 - Nonfetal ultrasonic monitor.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Nonfetal ultrasonic monitor. 892.1540 Section 892.1540 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... tissues in motion. This generic type of device may include signal analysis and display equipment, patient...
21 CFR 892.1540 - Nonfetal ultrasonic monitor.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Nonfetal ultrasonic monitor. 892.1540 Section 892.1540 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... tissues in motion. This generic type of device may include signal analysis and display equipment, patient...
21 CFR 892.1540 - Nonfetal ultrasonic monitor.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Nonfetal ultrasonic monitor. 892.1540 Section 892.1540 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... tissues in motion. This generic type of device may include signal analysis and display equipment, patient...
Development of real-time motion capture system for 3D on-line games linked with virtual character
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck
2004-10-01
Motion tracking has become an essential part of entertainment, medicine, sports, education, and industry with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interface devices such as mice, joysticks, and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end commercial human motion capture systems are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data to a 3-D game character in real time. The prototype setup is successfully applied to a boxing game, which requires very fast movement of the human character.
Manipulating the content of dynamic natural scenes to characterize response in human MT/MST.
Durant, Szonya; Wall, Matthew B; Zanker, Johannes M
2011-09-09
Optic flow is one of the most important sources of information for enabling human navigation through the world. A striking finding from single-cell studies in monkeys is the rapid saturation of response of MT/MST areas with the density of optic flow type motion information. These results are reflected psychophysically in human perception in the saturation of motion aftereffects. We began by comparing responses to natural optic flow scenes in human visual brain areas to responses to the same scenes with inverted contrast (photo negative). This changes scene familiarity while preserving local motion signals. This manipulation had no effect; however, the response was only correlated with the density of local motion (calculated by a motion correlation model) in V1, not in MT/MST. To further investigate this, we manipulated the visible proportion of natural dynamic scenes and found that areas MT and MST did not increase in response over a 16-fold increase in the amount of information presented, i.e., response had saturated. This makes sense in light of the sparseness of motion information in natural scenes, suggesting that the human brain is well adapted to exploit a small amount of dynamic signal and extract information important for survival.
Human Centered Hardware Modeling and Collaboration
NASA Technical Reports Server (NTRS)
Stambolian, Damon; Lawrence, Brad; Stelges, Katrine; Henderson, Gena
2013-01-01
In order to collaborate on engineering designs among NASA Centers and customers, including hardware and human activities from multiple remote locations, live human-centered modeling and collaboration across several sites has been successfully facilitated by Kennedy Space Center. The focus of this paper includes innovative approaches to engineering design analyses and training, along with research being conducted to apply new technologies for tracking, immersing, and evaluating humans as well as rocket, vehicle, component, or facility hardware, utilizing high-resolution cameras, motion tracking, ergonomic analysis, biomedical monitoring, work instruction integration, head-mounted displays, and other innovative human-system integration modeling, simulation, and collaboration applications.
Pendulum Motion in Main Parachute Clusters
NASA Technical Reports Server (NTRS)
Ray, Eric S.; Machin, Ricardo A.
2015-01-01
The coupled dynamics of a cluster of parachutes and its payload are notoriously difficult to predict. Often the payload is designed to be insensitive to the range of attitudes and rates that might occur, but spacecraft generally do not have the mass and volume budgeted for so robust a design. The National Aeronautics and Space Administration (NASA) Orion Capsule Parachute Assembly System (CPAS) implements a cluster of three mains for landing. During testing of the Engineering Development Unit (EDU) design, it was discovered that with a cluster of two mains (a fault tolerance required for human rating) the capsule coupled to the parachute cluster could get into a limit-cycle pendulum motion that would exceed the spacecraft landing capability. This pendulum phenomenon could not be predicted with the existing models and simulations. A three-phased effort has been undertaken to understand the consequence of the pendulum motion observed, and to explore potential design changes that would mitigate this phenomenon. This paper will review the early analysis that was performed of the pendulum motion observed during EDU testing, summarize the ongoing analysis to understand the root cause of the pendulum phenomenon, and discuss the modeling and testing being pursued to identify design changes that would mitigate the risk.
The motion commotion: Human factors in transportation
NASA Technical Reports Server (NTRS)
Millar, A. E., Jr. (Editor); Rosen, R. L. (Editor); Gibson, J. D. (Editor); Crum, R. G. (Editor)
1972-01-01
The program for a systems approach to the problem of incorporating human factors in designing transportation systems is summarized. The importance of the human side of transportation is discussed along with the three major factors related to maintaining a mobile and quality life. These factors are (1) people, as individuals and groups, (2) society as a whole, and (3) the natural environment and man-made environs. The problems and bottlenecks are presented along with approaches to their solutions through systems analysis. Specific recommendations essential to achieving improved mobility within environmental constraints are presented.
Hand kinematics of piano playing
Flanders, Martha; Soechting, John F.
2011-01-01
Dexterous use of the hand represents a sophisticated sensorimotor function. In behaviors such as playing the piano, it can involve strong temporal and spatial constraints. The purpose of this study was to determine fundamental patterns of covariation of motion across joints and digits of the human hand. Joint motion was recorded while 5 expert pianists played 30 excerpts from musical pieces, which featured ∼50 different tone sequences and fingering. Principal component analysis and cluster analysis using an expectation-maximization algorithm revealed that joint velocities could be categorized into several patterns, which help to simplify the description of the movements of the multiple degrees of freedom of the hand. For the thumb keystroke, two distinct patterns of joint movement covariation emerged and they depended on the spatiotemporal patterns of the task. For example, the thumb-under maneuver was clearly separated into two clusters based on the direction of hand translation along the keyboard. While the pattern of the thumb joint velocities differed between these clusters, the motions at the metacarpo-phalangeal and proximal-phalangeal joints of the four fingers were more consistent. For a keystroke executed with one of the fingers, there were three distinct patterns of joint rotations, across which motion at the striking finger was fairly consistent, but motion of the other fingers was more variable. Furthermore, the amount of movement spillover of the striking finger to the adjacent fingers was small irrespective of the finger used for the keystroke. These findings describe an unparalleled amount of independent motion of the fingers. PMID:21880938
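A sketch of the analysis pipeline named above, principal component analysis followed by expectation-maximization clustering (here a Gaussian mixture from scikit-learn), applied to placeholder joint-velocity vectors rather than the recorded piano data.

```python
# Sketch of the named pipeline: PCA on joint-velocity vectors followed by
# expectation-maximization clustering (Gaussian mixture). The data are random
# placeholders standing in for recorded hand-joint velocities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_keystrokes, n_joints = 300, 20            # e.g. 20 joint angular velocities per keystroke
X = np.vstack([
    rng.normal(loc=+1.0, scale=0.5, size=(n_keystrokes // 2, n_joints)),
    rng.normal(loc=-1.0, scale=0.5, size=(n_keystrokes // 2, n_joints)),
])                                          # two synthetic covariation patterns

pcs = PCA(n_components=5).fit_transform(X)  # reduce to dominant covariation modes
gmm = GaussianMixture(n_components=2, random_state=0).fit(pcs)  # EM clustering
labels = gmm.predict(pcs)

print("cluster sizes:", np.bincount(labels))
```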
Human visual system-based smoking event detection
NASA Astrophysics Data System (ADS)
Odetallah, Amjad D.; Agaian, Sos S.
2012-06-01
Human action (e.g., smoking, eating, and phoning) analysis is an important task in various application domains such as video surveillance, video retrieval, and human-computer interaction systems. Smoke detection is a crucial task in many video surveillance applications and could greatly raise the level of safety of urban areas, public parks, airplanes, hospitals, schools, and other spaces. The detection task is challenging since there is no prior knowledge about the object's shape, texture, and color. In addition, its visual features will change under different lighting and weather conditions. This paper presents a new scheme for a system that detects human smoking events, or small smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. The developed method is also capable of detecting small smoking events involving uncertain actions and cigarettes of various sizes, colors, and shapes.
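A sketch of the kind of combination described, background subtraction plus skin-based segmentation, using OpenCV; the video file name, the YCrCb skin bounds, and the area threshold are assumptions for illustration, not the paper's parameters.

```python
# Sketch of combining motion/background subtraction with skin-based segmentation
# to flag candidate regions for further smoke analysis. Thresholds, the YCrCb skin
# range, and the video source are assumptions for illustration.
import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")              # hypothetical input clip
bg_sub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

# Commonly used (approximate) YCrCb bounds for skin pixels.
SKIN_LO = np.array([0, 135, 85], dtype=np.uint8)
SKIN_HI = np.array([255, 180, 135], dtype=np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    motion_mask = bg_sub.apply(frame)            # moving pixels
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, SKIN_LO, SKIN_HI)

    # Candidate regions: moving pixels near skin areas (hand-to-mouth motion).
    candidate = cv2.bitwise_and(motion_mask, cv2.dilate(skin_mask, None, iterations=3))
    if cv2.countNonZero(candidate) > 500:        # arbitrary area threshold
        print("potential smoking-related motion region detected")

cap.release()
```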
Evidence for auditory-visual processing specific to biological motion.
Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F
2012-01-01
Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensorial processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modality is specific for biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modality yield longer reaction times in comparison to consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1 this suggests that conflicting auditory-visual motion information of an intact human walker leads to interference and thereby delaying the response.
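Statistical facilitation is commonly assessed against the race-model (Miller) inequality, comparing the redundant-condition reaction-time CDF with the sum of the unisensory CDFs; the sketch below uses placeholder reaction-time samples rather than the study's data.

```python
# Sketch of a race-model (Miller inequality) check for statistical facilitation:
# the CDF of redundant audio-visual reaction times is compared against the sum of
# the unisensory CDFs. The RT samples are placeholders.
import numpy as np

rng = np.random.default_rng(1)
rt_visual = rng.normal(450, 60, 200)        # ms, placeholder unisensory RTs
rt_audio = rng.normal(470, 70, 200)
rt_av = rng.normal(420, 55, 200)            # redundant audio-visual condition

def ecdf(samples, t):
    """Empirical CDF of the samples evaluated on the grid t."""
    return np.mean(samples[:, None] <= t[None, :], axis=0)

t_grid = np.linspace(250, 700, 100)
bound = np.clip(ecdf(rt_visual, t_grid) + ecdf(rt_audio, t_grid), 0, 1)  # race-model bound
violation = ecdf(rt_av, t_grid) - bound

# Positive values would indicate facilitation beyond what a race model allows.
print(f"max race-model violation: {violation.max():.3f}")
```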
Simon, Sheldon R
2004-12-01
The technology supporting the analysis of human motion has advanced dramatically. Past decades of locomotion research have provided us with significant knowledge about the accuracy of tests performed, the understanding of the process of human locomotion, and how clinical testing can be used to evaluate medical disorders and affect their treatment. Gait analysis is now recognized as clinically useful and financially reimbursable for some medical conditions. Yet, the routine clinical use of gait analysis has seen very limited growth. The issue of its clinical value is related to many factors, including the applicability of existing technology to addressing clinical problems; the limited use of such tests to address a wide variety of medical disorders; the manner in which gait laboratories are organized, tests are performed, and reports generated; and the clinical understanding and expectations of laboratory results. Clinical use is most hampered by the length of time and costs required for performing a study and interpreting it. A "gait" report is lengthy, its data are not well understood, and it includes a clinical interpretation, all of which do not occur with other clinical tests. Current biotechnology research is seeking to address these problems by creating techniques to capture data rapidly, accurately, and efficiently, and to interpret such data by an assortment of modeling, statistical, wave interpretation, and artificial intelligence methodologies. The success of such efforts rests on both our technical abilities and communication between engineers and clinicians.
Human Sensibility Ergonomics Approach to Vehicle Simulator Based on Dynamics
NASA Astrophysics Data System (ADS)
Son, Kwon; Choi, Kyung-Hyun; Yoon, Ji-Sup
Simulators have been used to evaluate drivers' reactions to various transportation products; most research, however, has concentrated on their technical performance. This paper considers the driver's motion perception on a vehicle simulator through the analysis of human sensibility ergonomics. A sensibility ergonomic method is proposed in order to improve the reliability of vehicle simulators. A passenger-vehicle simulator consists of three main modules: vehicle dynamics, virtual environment, and motion representation. To evaluate drivers' feedback, human perceptions are categorized into a set of verbal expressions, which are collected and investigated to find the most appropriate ones for the translational and angular accelerations of the simulator. The cut-off frequency of the washout filter in the representation module is selected as one sensibility factor. Sensibility experiments were carried out to find a correlation between the expressions and the cut-off frequency of the filter. This study suggests a methodology for building an ergonomic database that can be applied to the sensibility evaluation of dynamic simulators.
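The role of the washout-filter cut-off frequency can be sketched with a classical high-pass washout applied to a simulated vehicle acceleration; the filter order, candidate cut-offs, and input signal are assumptions, not the simulator's actual motion-cueing algorithm.

```python
# Sketch of the washout-filter cut-off frequency's effect: a classical washout
# high-pass filters the vehicle acceleration before it is sent to the motion
# platform, passing onsets while washing out sustained cues.
import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
vehicle_acc = 2.0 * (t > 2.0) * (t < 6.0)           # sustained 2 m/s^2 acceleration pulse

def washout(acc, cutoff_hz, fs):
    """Second-order high-pass washout filter with the given cut-off frequency."""
    b, a = butter(2, cutoff_hz / (fs / 2), btype="high")
    return lfilter(b, a, acc)

for fc in (0.2, 0.5, 1.0):                           # candidate cut-off frequencies [Hz]
    platform_acc = washout(vehicle_acc, fc, fs)
    print(f"cutoff {fc:.1f} Hz -> peak platform cue {platform_acc.max():.2f} m/s^2, "
          f"residual at t=5 s {platform_acc[int(5 * fs)]:+.3f} m/s^2")
```

Raising the cut-off washes out sustained cues faster, which changes how the motion "feels"; this is the knob whose perceptual effect the study relates to the collected verbal expressions.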
Analysis and prediction of meal motion by EMG signals
NASA Astrophysics Data System (ADS)
Horihata, S.; Iwahara, H.; Yano, K.
2007-12-01
The lack of carers for senior citizens and physically handicapped persons in our country has become a major issue and has created a great need for carer robots. The usual carer robots (many of which have switches or joysticks for their interfaces), however, are neither easy to use nor very popular. Therefore, haptic devices have been adopted as a human-machine interface that enables intuitive operation. In addition, a method is being tested that seeks to prevent erroneous operation based on the user's signals; this method matches motions with EMG signals.
On the stiffness analysis of a cable driven leg exoskeleton.
Sanjeevi, N S S; Vashista, Vineet
2017-07-01
Robotic systems are being used for gait rehabilitation of patients with neurological disorders. These devices are externally powered to apply forces on human limbs to assist leg motion. Patients walking with these devices adapt their walking pattern in response to the applied forces. The efficacy of a rehabilitation paradigm thus depends on the human-robot interaction. A cable-driven leg exoskeleton (CDLE) uses actuated cables to apply external joint torques to the human leg. Cables are lightweight and flexible but can only pull; a CDLE therefore requires redundant cables. Redundancy in a CDLE can be utilized to appropriately tune the robot's performance. In this work, we present the stiffness analysis of a CDLE. Different stiffness performance indices are established to study the role of system parameters in improving the human-robot interaction.
Neural representations of kinematic laws of motion: evidence for action-perception coupling.
Dayan, Eran; Casile, Antonino; Levit-Binnun, Nava; Giese, Martin A; Hendler, Talma; Flash, Tamar
2007-12-18
Behavioral and modeling studies have established that curved, drawing-like human hand movements obey the 2/3 power law, which dictates a strong coupling between movement curvature and velocity. Human motion perception seems to reflect this constraint. The functional MRI study reported here demonstrates that the brain's response to this law of motion is much stronger and more widespread than to other types of motion. Compliance with this law is reflected in the activation of a large network of brain areas subserving motor production, visual motion processing, and action observation functions. Hence, these results strongly support the notion of similar neural coding for motion perception and production. These findings suggest that cortical motion representations are optimally tuned to the kinematic and geometrical invariants characterizing biological actions.
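The 2/3 power law states that tangential velocity scales as curvature to the power -1/3 (v = gamma * kappa^(-1/3)). The sketch below uses the classic case of an ellipse traced with harmonic coordinates, which satisfies the law exactly, and recovers the exponent numerically the same way one would for recorded hand paths.

```python
# Sketch of the 2/3 power law (v = gamma * curvature**(-1/3)) using an ellipse
# traced with harmonic coordinates, which obeys the law exactly. The same
# velocity/curvature fit would be applied to recorded hand trajectories.
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
a, b = 2.0, 1.0
x, y = a * np.cos(t), b * np.sin(t)           # elliptic drawing movement

dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)

speed = np.hypot(dx, dy)                       # tangential velocity
curvature = np.abs(dx * ddy - dy * ddx) / speed**3

# Fit log(speed) = log(gamma) + beta * log(curvature); the 2/3 power law gives beta = -1/3.
beta = np.polyfit(np.log(curvature), np.log(speed), 1)[0]
print(f"fitted exponent beta = {beta:.3f} (2/3 power law predicts -1/3)")
```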
Indexing and retrieving motions of characters in close contact.
Ho, Edmond S L; Komura, Taku
2009-01-01
Human motion indexing and retrieval are important for animators due to the need to search for motions in a database that can be blended and concatenated. Most previous research on human motion indexing and retrieval computes the Euclidean distance of joint angles or joint positions. Such approaches are difficult to apply to cases in which multiple characters are closely interacting with each other, as the relationships between the characters are not encoded in the representation. In this research, we propose a topology-based approach to index the motions of two human characters in close contact. We compute and encode how the two bodies are tangled based on the concept of rational tangles. The encoded relationships, which we define as a TangleList, are used to determine the similarity of pairs of postures. Using our method, we can index and retrieve motions such as one person piggy-backing another, one person assisting another in walking, and two persons dancing or wrestling. Our method is useful for managing a motion database of multiple characters. We can also produce motion graph structures of two characters closely interacting with each other by interpolating and concatenating topologically similar postures and motion clips, which are applicable to 3D computer games and computer animation.
NASA Astrophysics Data System (ADS)
Kiso, Atsushi; Seki, Hirokazu
This paper describes a method for discriminating human forearm motions from myoelectric signals using an adaptive fuzzy inference system. In conventional studies, neural networks are often used to estimate motion intention from the myoelectric signals and achieve high discrimination precision. In contrast, this study uses fuzzy inference for human forearm motion discrimination based on the myoelectric signals. The membership functions and fuzzy rules are designed using the mean and the standard deviation of the root mean square (RMS) of the myoelectric potential for every channel of each motion. In addition, the characteristics of the myoelectric potential gradually change as a result of muscle fatigue, so motion discrimination should take muscle fatigue into consideration. This study therefore proposes a method to redesign the fuzzy inference system so that dynamic changes in the myoelectric potential caused by muscle fatigue are taken into account. Experiments carried out using a myoelectric hand simulator show the effectiveness of the proposed motion discrimination method.
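A minimal sketch of the ingredients described above: per-channel RMS features and Gaussian membership functions built from each motion's training mean and standard deviation, combined with a simple product rule. The data, channel count, and rule base are illustrative, not the paper's adaptive system.

```python
# Minimal sketch: per-channel RMS features, Gaussian membership functions built
# from each motion's training mean/std of RMS, and a product rule as the fuzzy
# "AND" over channels. Training statistics and the test window are synthetic.
import numpy as np

def rms(window):
    """Root mean square per channel (window shape: channels x samples)."""
    return np.sqrt(np.mean(window**2, axis=-1))

rng = np.random.default_rng(0)
n_channels, win = 4, 200
motions = ["flexion", "extension", "pronation"]

# Hypothetical training RMS values per motion (30 trials x 4 channels each).
train = {m: rng.normal(loc=0.2 + 0.3 * i, scale=0.05, size=(30, n_channels))
         for i, m in enumerate(motions)}
stats = {m: (v.mean(axis=0), v.std(axis=0)) for m, v in train.items()}

def membership(x, mean, std):
    """Gaussian membership value of an RMS feature for one channel."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2)

def classify(emg_window):
    feat = rms(emg_window)                          # one RMS value per channel
    scores = {m: np.prod(membership(feat, mu, sd))  # product rule over channels
              for m, (mu, sd) in stats.items()}
    return max(scores, key=scores.get), scores

test = rng.normal(0.0, 0.5, size=(n_channels, win))  # raw EMG-like window
label, scores = classify(test)
print("predicted motion:", label)
```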
A reduced-dimensionality approach to uncovering dyadic modes of body motion in conversations.
Gaziv, Guy; Noy, Lior; Liron, Yuvalal; Alon, Uri
2017-01-01
Face-to-face conversations are central to human communication and a fascinating example of joint action. Beyond verbal content, one of the primary ways in which information is conveyed in conversations is body language. Body motion in natural conversations has been difficult to study precisely due to the large number of coordinates at play. There is need for fresh approaches to analyze and understand the data, in order to ask whether dyads show basic building blocks of coupled motion. Here we present a method for analyzing body motion during joint action using depth-sensing cameras, and use it to analyze a sample of scientific conversations. Our method consists of three steps: defining modes of body motion of individual participants, defining dyadic modes made of combinations of these individual modes, and lastly defining motion motifs as dyadic modes that occur significantly more often than expected given the single-person motion statistics. As a proof-of-concept, we analyze the motion of 12 dyads of scientists measured using two Microsoft Kinect cameras. In our sample, we find that out of many possible modes, only two were motion motifs: synchronized parallel torso motion in which the participants swayed from side to side in sync, and still segments where neither person moved. We find evidence of dyad individuality in the use of motion modes. For a randomly selected subset of 5 dyads, this individuality was maintained for at least 6 months. The present approach to simplify complex motion data and to define motion motifs may be used to understand other joint tasks and interactions. The analysis tools developed here and the motion dataset are publicly available.
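The motif logic can be sketched as follows, assuming k-means as the mode-clustering step and a crude z-score against the independence expectation in place of the paper's significance test; the motion features are synthetic.

```python
# Sketch of the dyadic-motif logic: cluster each person's motion into modes, pair
# the modes across the dyad, and flag pairs occurring more often than expected from
# the product of single-person frequencies. Data are synthetic and the z-score
# threshold is a placeholder for a proper significance test.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_frames = 2000
feat_a = rng.normal(size=(n_frames, 6))                       # per-frame features, person A
feat_b = feat_a + rng.normal(scale=0.8, size=(n_frames, 6))   # person B, partly coupled

modes_a = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feat_a)
modes_b = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feat_b)

counts = np.zeros((3, 3))
for i, j in zip(modes_a, modes_b):
    counts[i, j] += 1

p_a = np.bincount(modes_a, minlength=3) / n_frames
p_b = np.bincount(modes_b, minlength=3) / n_frames
expected = np.outer(p_a, p_b) * n_frames                      # expectation under independence

z = (counts - expected) / np.sqrt(expected + 1e-9)
motifs = np.argwhere(z > 3.0)                                 # dyadic modes far above chance
print("candidate motion motifs (mode_A, mode_B):", motifs.tolist())
```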
Motion Analysis System for Instruction of Nihon Buyo using Motion Capture
NASA Astrophysics Data System (ADS)
Shinoda, Yukitaka; Murakami, Shingo; Watanabe, Yuta; Mito, Yuki; Watanuma, Reishi; Marumo, Mieko
The passing on and preserving of advanced technical skills has become an important issue in a variety of fields, and motion analysis using motion capture has recently become popular in the research of advanced physical skills. This research aims to construct a system having a high on-site instructional effect on dancers learning Nihon Buyo, a traditional dance in Japan, and to classify Nihon Buyo dancing according to style, school, and dancer's proficiency by motion analysis. We have been able to study motion analysis systems for teaching Nihon Buyo now that body-motion data can be digitized and stored by motion capture systems using high-performance computers. Thus, with the aim of developing a user-friendly instruction-support system, we have constructed a motion analysis system that displays a dancer's time series of body motions and center of gravity for instructional purposes. In this paper, we outline this instructional motion analysis system based on three-dimensional position data obtained by motion capture. We also describe motion analysis that we performed based on center-of-gravity data obtained by this system and motion analysis focusing on school and age group using this system.
The Vestibular System and Human Dynamic Space Orientation
NASA Technical Reports Server (NTRS)
Meiry, J. L.
1966-01-01
The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion and combined. Motion cues sensed by the vestibular system through tactile sensation enable the operator to generate more lead compensation than in fixed base simulation with only visual input. The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.
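The kind of sensor models referred to can be sketched as transfer functions: an overdamped torsion-pendulum form for the semicircular canals and a lag-lead form for the otoliths. The time constants below are rough textbook-range assumptions, not the parameters identified in this work.

```python
# Sketch of vestibular sensor models as transfer functions. Time constants and
# gains are assumed illustrative values, not the identified parameters.
import numpy as np
from scipy import signal

# Semicircular canal: H(s) = tau1*s / ((tau1*s + 1)(tau2*s + 1)), input = head angular velocity
tau1, tau2 = 10.0, 0.1          # s (assumed long and short time constants)
canal = signal.TransferFunction([tau1, 0.0], np.polymul([tau1, 1.0], [tau2, 1.0]))

# Otolith (linear acceleration input): simple lag-lead form K*(tau_a*s + 1)/(tau_b*s + 1)
K, tau_a, tau_b = 1.0, 1.0, 10.0
otolith = signal.TransferFunction([K * tau_a, K], [tau_b, 1.0])

w = np.logspace(-2, 2, 200)     # rad/s
_, mag_canal, _ = signal.bode(canal, w)
_, mag_oto, _ = signal.bode(otolith, w)
print("canal gain at 1 rad/s   [dB]:", round(float(np.interp(1.0, w, mag_canal)), 1))
print("otolith gain at 1 rad/s [dB]:", round(float(np.interp(1.0, w, mag_oto)), 1))
```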
Victim Simulator for Victim Detection Radar
NASA Technical Reports Server (NTRS)
Lux, James P.; Haque, Salman
2013-01-01
Testing of victim detection radars has traditionally used human subjects who volunteer to be buried in, or climb into a space within, a rubble pile. This is not only uncomfortable, but can be hazardous or impractical when typical disaster scenarios are considered, including fire, mud, or liquid waste. Human subjects are also inconsistent from day to day (i.e., they do not have the same radar properties), so quantitative performance testing is difficult. Finally, testing a multiple-victim scenario is difficult and expensive because of the need for multiple human subjects who must all be coordinated. The solution is an anthropomorphic dummy with dielectric properties that replicate those of a human, and that has motions comparable to human motions for breathing and heartbeat. Two air-filled bladders filled and drained by solenoid valves provide the underlying motion for vinyl bags filled with a dielectric gel with realistic properties. The entire assembly is contained within a neoprene wetsuit serving as a "skin." The solenoids are controlled by a microcontroller, which can generate a variety of heart and breathing patterns, as well as being reprogrammable for more complex activities. Previous electromagnetic simulators or RF phantoms have been oriented towards assessing RF safety, e.g., the measurement of specific absorption rate (SAR) from a cell phone signal, or to provide a calibration target for diagnostic techniques (e.g., MRI). They are optimized for precise dielectric performance, and are typically rigid and immovable. This device is movable and "positionable," and has motion that replicates the small-scale motion of humans. It is soft (much as human tissue is) and has programmable motions.
Modal analysis of the human neck in vivo as a criterion for crash test dummy evaluation
NASA Astrophysics Data System (ADS)
Willinger, R.; Bourdet, N.; Fischer, R.; Le Gall, F.
2005-10-01
Low-speed rear impact remains an acute automotive safety problem because of a lack of knowledge of the mechanical behaviour of the human neck early after impact. Poorly validated mathematical models of the human neck or crash test dummy necks make it difficult to optimize automotive seats and head rests. In this study we have conducted an experimental and theoretical modal analysis of the human head-neck system in the sagittal plane. The method has allowed us to identify the mechanical properties of the neck and to validate a mathematical model in the frequency domain. The extracted modal characteristics consist of a first natural frequency at 1.3±0.1 Hz associated with head flexion-extension motion and a second mode at 8±0.7 Hz associated with antero-posterior translation of the head, also called retraction motion. Based on these new validation parameters we have been able to compare the human and crash test dummy frequency response functions and to evaluate their biofidelity. Three head-neck systems of current test dummies intended for use in rear-end car crash investigations have been evaluated in the frequency domain. We did not consider any to be acceptable, either because of excessive rigidity of their flexion-extension mode or because they poorly reproduce the head translation mode. In addition to dummy evaluation, this study provides new insight into injury mechanisms when a given natural frequency can be linked to a specific neck deformation.
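Modal extraction from a lumped-parameter model can be sketched with a two-degree-of-freedom system: natural frequencies and mode shapes follow from the mass and stiffness matrices. The matrices below are placeholders and are not claimed to reproduce the identified 1.3 Hz and 8 Hz head-neck modes.

```python
# Minimal two-degree-of-freedom modal extraction: natural frequencies and mode
# shapes from assumed mass and stiffness matrices (placeholders only).
import numpy as np
from scipy.linalg import eigh

M = np.diag([4.5, 1.2])                 # kg (assumed effective masses)
K = np.array([[ 800.0, -300.0],
              [-300.0,  300.0]])        # N/m (assumed coupling stiffnesses)

# Generalized eigenvalue problem K v = w^2 M v
w2, modes = eigh(K, M)
freqs_hz = np.sqrt(w2) / (2.0 * np.pi)

for i, f in enumerate(freqs_hz):
    print(f"mode {i + 1}: {f:.2f} Hz, shape {np.round(modes[:, i], 2)}")
```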
A triboelectric motion sensor in wearable body sensor network for human activity recognition.
Hui Huang; Xian Li; Ye Sun
2016-08-01
The goal of this study is to design a novel triboelectric motion sensor in a wearable body sensor network for human activity recognition. Physical activity recognition is widely used in well-being management, medical diagnosis, and rehabilitation. In contrast to traditional accelerometers, we design a novel wearable sensor system based on triboelectrification. The triboelectric motion sensor can be easily attached to the human body and collects motion signals caused by physical activities. Experiments were conducted to collect data for five common activities: sitting and standing, walking, climbing upstairs, climbing downstairs, and running. The k-Nearest Neighbor (kNN) algorithm is adopted to recognize these activities and validate the feasibility of this new approach. The results show that our system can perform physical activity recognition with a success rate over 80% for walking, sitting, and standing. The triboelectric structure can also be used as an energy harvester for motion harvesting due to its high output voltage in random low-frequency motion.
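A sketch of the kNN recognition step on placeholder window features; the features, class separation, and train/test split are synthetic stand-ins for the triboelectric sensor data.

```python
# Sketch of kNN activity recognition on placeholder features (e.g. summary
# statistics of each windowed sensor signal). Data are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
activities = ["sit_stand", "walk", "upstairs", "downstairs", "run"]

X, y = [], []
for label, act in enumerate(activities):
    # Three simple features per window, shifted per activity for illustration.
    X.append(rng.normal(loc=label, scale=0.6, size=(100, 3)))
    y.append(np.full(100, label))
X, y = np.vstack(X), np.concatenate(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"recognition accuracy on held-out windows: {clf.score(X_te, y_te):.2f}")
```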
“What Women Like”: Influence of Motion and Form on Esthetic Body Perception
Cazzato, Valentina; Siega, Serena; Urgesi, Cosimo
2012-01-01
Several studies have shown the distinct contribution of motion and form to the esthetic evaluation of female bodies. Here, we investigated how variations of implied motion and body size interact in the esthetic evaluation of female and male bodies in a sample of young healthy women. Participants provided attractiveness, beauty, and liking ratings for the shape and posture of virtual renderings of human bodies with variable body size and implied motion. The esthetic judgments for both shape and posture of human models were influenced by body size and implied motion, with a preference for thinner and more dynamic stimuli. Implied motion, however, attenuated the impact of extreme body size on the esthetic evaluation of body postures, while body size variations did not affect the preference for more dynamic stimuli. Results show that body form and action cues interact in esthetic perception, but the final esthetic appreciation of human bodies is predicted by a mixture of perceptual and affective evaluative components. PMID:22866044
Auto-tracking system for human lumbar motion analysis.
Sui, Fuge; Zhang, Da; Lam, Shing Chun Benny; Zhao, Lifeng; Wang, Dongjun; Bi, Zhenggang; Hu, Yong
2011-01-01
Previous lumbar motion analyses suggest the usefulness of quantitatively characterizing spine motion. However, the application of such measurements is still limited by the lack of user-friendly automatic spine motion analysis systems. This paper describes an automatic analysis system for assessing lumbar spine disorders that consists of a spine motion guidance device, an X-ray imaging modality to acquire digitized video fluoroscopy (DVF) sequences, and an automated tracking module with a graphical user interface (GUI). DVF sequences of the lumbar spine are recorded during flexion-extension under a guidance device. The automatic tracking software, which utilizes a particle filter, locates the vertebra of interest in every frame of the sequence, and the tracking result is displayed on the GUI. Kinematic parameters are also extracted from the tracking results for motion analysis. We observed that, in a bone model test, the maximum fiducial error was 3.7%, and the maximum repeatability error in translation and rotation was 1.2% and 2.6%, respectively. In our simulated DVF sequence study, the automatic tracking was not successful when the noise intensity was greater than 0.50. In a noisy situation, the maximal difference was 1.3 mm in translation and 1° in the rotation angle. The errors were calculated in translation (fiducial error: 2.4%, repeatability error: 0.5%) and in the rotation angle (fiducial error: 1.0%, repeatability error: 0.7%). However, the automatic tracking software could successfully track simulated sequences contaminated by noise at a density ≤ 0.5 with very high accuracy, providing good reliability and robustness. A clinical trial with 10 healthy subjects and 2 lumbar spondylolisthesis patients was conducted in this study. The measurement with auto-tracking of DVF provided information not seen in conventional X-ray images. The results suggest the potential of the proposed system for clinical applications.
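A minimal particle-filter tracker in the spirit described: particles over the vertebra position, a random-walk prediction step, weights from template similarity, and resampling. The frames, template, and noise levels are synthetic placeholders, not the paper's implementation.

```python
# Minimal particle-filter tracking sketch: particles over the vertebra position,
# random-walk prediction, weights from template similarity (SSD), and resampling.
# Frames and the template are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 128, 128, 30
template = np.zeros((15, 15))
template[5:10, 3:12] = 1.0                                   # crude vertebra-like blob

def render_frame(cx, cy):
    """Synthetic noisy frame with the template placed at (cx, cy)."""
    frame = 0.1 * rng.standard_normal((H, W))
    frame[cy:cy + 15, cx:cx + 15] += template
    return frame

def patch_ssd(frame, cx, cy):
    """Sum of squared differences between the template and the patch at (cx, cy)."""
    cx, cy = int(np.clip(cx, 0, W - 15)), int(np.clip(cy, 0, H - 15))
    return np.sum((frame[cy:cy + 15, cx:cx + 15] - template) ** 2)

n_particles = 300
particles = np.tile([20.0, 60.0], (n_particles, 1))          # initial (x, y) guess
true_x = 20

for t in range(T):
    true_x += 2                                              # vertebra drifts right
    frame = render_frame(true_x, 60)

    particles += rng.normal(scale=3.0, size=particles.shape) # random-walk prediction
    ssd = np.array([patch_ssd(frame, x, y) for x, y in particles])
    weights = np.exp(-ssd / (2 * 2.0**2))
    weights /= weights.sum()

    idx = rng.choice(n_particles, size=n_particles, p=weights)  # resampling
    particles = particles[idx]

est = particles.mean(axis=0)
print(f"true x = {true_x}, estimated x = {est[0]:.1f}")
```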
Quantifying Astronaut Tasks: Robotic Technology and Future Space Suit Design
NASA Technical Reports Server (NTRS)
Newman, Dava
2003-01-01
The primary aim of this research effort was to advance the current understanding of astronauts' capabilities and limitations in space-suited EVA by developing models of the constitutive and compatibility relations of a space suit, based on experimental data gained from human test subjects as well as a 12 degree-of-freedom human-sized robot, and utilizing these fundamental relations to estimate a human factors performance metric for space-suited EVA work. The three specific objectives are to: 1) Compile a detailed database of torques required to bend the joints of a space suit, using realistic, multi-joint human motions. 2) Develop a mathematical model of the constitutive relations between space suit joint torques and joint angular positions, based on experimental data, and compare other investigators' physics-based models to experimental data. 3) Estimate the work envelope of a space-suited astronaut, using the constitutive and compatibility relations of the space suit. The body of work that makes up this report includes experimentation, empirical and physics-based modeling, and model applications. A detailed space suit joint torque-angle database was compiled with a novel experimental approach that used space-suited human test subjects to generate realistic, multi-joint motions and an instrumented robot to measure the torques required to accomplish these motions in a space suit. Based on the experimental data, a mathematical model is developed to predict joint torque from the joint angle history. Two physics-based models of pressurized fabric cylinder bending are compared to experimental data, yielding design insights. The mathematical model is applied to EVA operations in an inverse kinematic analysis coupled to the space suit model to calculate the volume in which space-suited astronauts can work with their hands, demonstrating that operational human factors metrics can be predicted from fundamental space suit information.
The responsiveness of biological motion processing areas to selective attention towards goals.
Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert
2012-10-15
A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas-particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement-consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. Copyright © 2012 Elsevier Inc. All rights reserved.
Holowka, Nicholas B; O'Neill, Matthew C; Thompson, Nathan E; Demes, Brigitte
2017-03-01
The longitudinal arch of the human foot is commonly thought to reduce midfoot joint motion to convert the foot into a rigid lever during push off in bipedal walking. In contrast, African apes have been observed to exhibit midfoot dorsiflexion following heel lift during terrestrial locomotion, presumably due to their possession of highly mobile midfoot joints. This assumed dichotomy between human and African ape midfoot mobility has recently been questioned based on indirect assessments of in vivo midfoot motion, such as plantar pressure and cadaver studies; however, direct quantitative analyses of African ape midfoot kinematics during locomotion remain scarce. Here, we used high-speed motion capture to measure three-dimensional foot kinematics in two male chimpanzees and five male humans walking bipedally at similar dimensionless speeds. We analyzed 10 steps per chimpanzee subject and five steps per human subject, and compared ranges of midfoot motion between species over stance phase, as well as within double- and single-limb support periods. Contrary to expectations, humans used a greater average range of midfoot motion than chimpanzees over the full duration of stance. This difference was driven by humans' dramatic plantarflexion and adduction of the midfoot joints during the second double-limb support period, which likely helps the foot generate power during push off. However, chimpanzees did use slightly but significantly more midfoot dorsiflexion than humans in the single-limb support period, during which heel lift begins. These results indicate that both stiffness and mobility are important to longitudinal arch function, and that the human foot evolved to utilize both during push off in bipedal walking. Thus, the presence of human-like midfoot joint morphology in fossil hominins should not be taken as indicating foot rigidity, but may signify the evolution of pedal anatomy conferring enhanced push off mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yu, Xiao-Guang; Li, Yuan-Qing; Zhu, Wei-Bin; Huang, Pei; Wang, Tong-Tong; Hu, Ning; Fu, Shao-Yun
2017-05-25
Melamine sponge, also known as nano-sponge, is widely used as an abrasive cleaner in our daily life. In this work, the fabrication of a wearable strain sensor for human motion detection is first demonstrated with a commercially available nano-sponge as a starting material. The key resistance sensitive material in the wearable strain sensor is obtained by the encapsulation of a carbonized nano-sponge (CNS) with silicone resin. The as-fabricated CNS/silicone sensor is highly sensitive to strain with a maximum gauge factor of 18.42. In addition, the CNS/silicone sensor exhibits a fast and reliable response to various cyclic loading within a strain range of 0-15% and a loading frequency range of 0.01-1 Hz. Finally, the CNS/silicone sensor as a wearable device for human motion detection including joint motion, eye blinking, blood pulse and breathing is demonstrated by attaching the sensor to the corresponding parts of the human body. In consideration of the simple fabrication technique, low material cost and excellent strain sensing performance, the CNS/silicone sensor is believed to have great potential in the next-generation of wearable devices for human motion detection.
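As a quick illustration of how such a gauge factor is typically computed, the minimal sketch below applies GF = (ΔR/R0)/ε to a resistance-versus-strain sweep; the baseline resistance and readings are hypothetical, not measurements from the paper.

    # Gauge factor of a resistive strain sensor: GF = (dR/R0) / strain.
    # The abstract reports a maximum GF of 18.42 for the CNS/silicone sensor;
    # the resistance readings below are assumed values chosen for illustration.
    def gauge_factor(r0_ohm, r_ohm, strain):
        return ((r_ohm - r0_ohm) / r0_ohm) / strain

    r0 = 1.0e3  # ohms at rest (assumed)
    for strain, r in [(0.05, 1.92e3), (0.10, 2.80e3), (0.15, 3.70e3)]:
        print(f"strain = {strain:.0%}, GF = {gauge_factor(r0, r, strain):.1f}")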
Sociability modifies dogs' sensitivity to biological motion of different social relevance.
Ishikawa, Yuko; Mills, Daniel; Willmott, Alexander; Mullineaux, David; Guo, Kun
2018-03-01
Preferential attention to living creatures is believed to be an intrinsic capacity of the visual system of several species; perception of biological motion is often studied in this context and, in humans, correlates with social cognitive performance. Although domestic dogs are exceptionally attentive to human social cues, it is unknown whether their sociability is associated with sensitivity to conspecific and heterospecific biological motion cues of different social relevance. We recorded video clips of point-light displays depicting a human or dog walking in either frontal or lateral view. In a preferential looking paradigm, dogs spontaneously viewed 16 paired point-light displays showing combinations of normal/inverted (control condition), human/dog and frontal/lateral views. Overall, dogs looked significantly longer at the frontal human point-light display versus the inverted control, probably due to its clearer social/biological relevance. Dogs' sociability, assessed through owner-completed questionnaires, further revealed that low-sociability dogs preferred the lateral point-light display view, whereas high-sociability dogs preferred the frontal view. Clearly, dogs can recognize biological motion, but their preference is influenced by their sociability and the stimulus salience, implying biological motion perception may reflect aspects of dogs' social cognition.
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
Parvocellular Pathway Impairment in Autism Spectrum Disorder: Evidence from Visual Evoked Potentials
ERIC Educational Resources Information Center
Fujita, Takako; Yamasaki, Takao; Kamio, Yoko; Hirose, Shinichi; Tobimatsu, Shozo
2011-01-01
In humans, visual information is processed via parallel channels: the parvocellular (P) pathway analyzes color and form information, whereas the magnocellular (M) stream plays an important role in motion analysis. Individuals with autism spectrum disorder (ASD) often show superior performance in processing fine detail, but impaired performance in…
NASA Technical Reports Server (NTRS)
Baron, S.; Lancraft, R.; Zacharias, G.
1980-01-01
The optimal control model (OCM) of the human operator is used to predict the effect of simulator characteristics on pilot performance and workload. The piloting task studied is helicopter hover. Among the simulator characteristics considered were (computer generated) visual display resolution, field of view and time delay.
NASA Technical Reports Server (NTRS)
1997-01-01
Session TP3 includes short reports on: (1) Modification of Goal-Directed Arm Movements During Inflight Adaptation to Microgravity; (2) Quantitative Analysis of Motion control in Long Term Microgravity; (3) Does the Centre of Gravity Remain the Stabilised Reference during Complex Human Postural Equilibrium Tasks in Weightlessness?; and (4) Arm End-Point Trajectories Under Normal and Microgravity Environments.
Postures and Motions Library Development for Verification of Ground Crew Human Factors Requirements
NASA Technical Reports Server (NTRS)
Stambolian, Damon; Henderson, Gena; Jackson, Mariea Dunn; Dischinger, Charles
2013-01-01
Spacecraft and launch vehicle ground processing activities require a variety of unique human activities. These activities are being documented in a primitive motion capture library. The library will be used by human factors engineering analysts to infuse true-to-life human activities into the CAD models to verify ground systems human factors requirements. As the primitive models are being developed for the library, the project has selected several current human factors issues to be addressed for the Space Launch System (SLS) and Orion launch systems. This paper explains how the motion capture of unique ground systems activities is being used to verify the human factors engineering requirements for ground systems used to process the SLS and Orion vehicles, and how the primitive models will be applied to future spacecraft and launch vehicle processing.
Robotics-based synthesis of human motion.
Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S
2009-01-01
The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criterion of muscular effort minimization was introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.
Sensitive and Flexible Polymeric Strain Sensor for Accurate Human Motion Monitoring
Khan, Hassan; Kottapalli, Ajay; Asadnia, Mohsen
2018-01-01
Flexible electronic devices offer the capability to integrate and adapt with the human body. These devices are mountable on surfaces with various shapes, which allows us to attach them to clothes or directly onto the body. This paper presents a facile fabrication strategy via electrospinning to develop a stretchable and sensitive poly(vinylidene fluoride) (PVDF) nanofibrous strain sensor for human motion monitoring. A complete characterization of the single PVDF nanofiber has been performed. The change in charge generated by the electrospun PVDF strain sensor was employed as a parameter to control the finger motion of a robotic arm. As a proof of concept, we developed a smart glove with five sensors integrated into it to detect finger motion and transfer it to a robotic hand. Our results show that the proposed strain sensors are able to detect tiny finger motions and successfully drive the robotic hand. PMID:29389851
Control of joint motion simulators for biomechanical research
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.
1992-01-01
The authors present a hierarchical adaptive algorithm for controlling upper extremity human joint motion simulators. A joint motion simulator is a computer-controlled, electromechanical system which permits the application of forces to the tendons of a human cadaver specimen in such a way that the cadaver joint under study achieves a desired motion in a physiologic manner. The proposed control scheme does not require knowledge of the cadaver specimen dynamic model, and solves on-line the indeterminate problem which arises because human joints typically possess more actuators than degrees of freedom. Computer simulation results are given for an elbow/forearm system and a wrist/hand system under hierarchical control. The results demonstrate that any desired normal joint motion can be accurately tracked with the proposed algorithm. These simulation results indicate that the controller resolved the redundancy of the indeterminate problem in a physiologic manner, and show that the control scheme was robust to parameter uncertainty and to sensor noise.
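To make the indeterminacy concrete, the sketch below distributes a desired joint-torque vector over redundant tendon tensions with a minimum-norm (pseudoinverse) solution; the moment-arm matrix and numerical values are hypothetical, and this is a generic stand-in rather than the authors' hierarchical adaptive controller.

    import numpy as np

    # Redundant actuation: tau = R @ t, with more tendons (columns) than joints (rows).
    # The minimum-norm tensions come from the Moore-Penrose pseudoinverse; tensions are
    # then clipped to be non-negative because tendons can only pull (a crude fix; a
    # constrained least-squares would be used in practice).
    R = np.array([[0.02, -0.015,  0.010, -0.012],   # elbow moment arms (m), hypothetical
                  [0.00,  0.010, -0.008,  0.011]])  # forearm moment arms (m), hypothetical
    tau_desired = np.array([1.5, -0.4])             # desired joint torques (N*m), hypothetical

    t = np.linalg.pinv(R) @ tau_desired             # minimum-norm tendon tensions
    t = np.clip(t, 0.0, None)                       # enforce pulling-only tendons
    print("tendon tensions (N):", np.round(t, 2))
    print("achieved torque (N*m):", np.round(R @ t, 3))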
Observation and imitation of actions performed by humans, androids, and robots: an EMG study
Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.
2015-01-01
Understanding others’ actions is essential for functioning in the physical and social world. In the past two decades, research has shown that action perception involves the motor system, supporting theories that we understand others’ behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by using well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion along with electromyography (EMG) to measure muscle activity in participants’ arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than both mechanical agents. There was also a relationship between the dynamics of the muscle activity and motion dynamics in stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations. PMID:26150782
Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.
Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus
2013-12-01
Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity to examine perceptual and cognitive processing of facial expressions. Higher order processing of emotional faces was addressed by varying the task and virtual face models systematically. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while viewing and evaluating either emotion or gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding for the motion-based intensity of facial expressions. The comparison of emotion with gender discrimination task revealed increased activation of inferior parietal lobule, which highlights the involvement of parietal areas in processing of high level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.
Statistical data mining of streaming motion data for fall detection in assistive environments.
Tasoulis, S K; Doukas, C N; Maglogiannis, I; Plagianakos, V P
2011-01-01
The analysis of human motion data is interesting for the purpose of activity recognition or emergency event detection, especially in the case of elderly or disabled people living independently in their homes. Several techniques have been proposed for identifying such distress situations using either motion, audio or video sensors on the monitored subject (wearable sensors) or the surrounding environment. The output of such sensors is data streams that require real-time recognition, especially in emergency situations, thus traditional classification approaches may not be applicable for immediate alarm triggering or fall prevention. This paper presents a statistical mining methodology that may be used for the specific problem of real-time fall detection. Visual data captured from the user's environment using overhead cameras, along with motion data collected from accelerometers on the subject's body, are fed to the fall detection system. The paper includes the details of the stream data mining methodology incorporated in the system along with an initial evaluation of the achieved accuracy in detecting falls.
ERIC Educational Resources Information Center
Wollner, Clemens; Deconinck, Frederik J. A.; Parkinson, Jim; Hove, Michael J.; Keller, Peter E.
2012-01-01
Aesthetic theories have long suggested perceptual advantages for prototypical exemplars of a given class of objects or events. Empirical evidence confirmed that morphed (quantitatively averaged) human faces, musical interpretations, and human voices are preferred over most individual ones. In this study, biological human motion was morphed and…
Time-lapse imaging of human heart motion with switched array UWB radar.
Brovoll, Sverre; Berger, Tor; Paichard, Yoann; Aardal, Øyvind; Lande, Tor Sverre; Hamran, Svein-Erik
2014-10-01
Radar systems for detection of human heartbeats have mostly been single-channel systems with limited spatial resolution. In this paper, a radar system for ultra-wideband (UWB) imaging of the human heart is presented. To make the radar waves penetrate the human tissue, the antenna is placed very close to the body. The antenna is an array with eight elements, and an antenna switch system connects the radar to the individual elements in sequence to form an image. Successive images are used to build up time-lapse movies of the beating heart. Measurements on a human test subject are presented and the heart motion is estimated at different locations inside the body. The movies show rhythmic motion consistent with the beating heart, and the location and shape of the reflections correspond well with the expected response from the heart wall. The spatially dependent heart motion is compared to ECG recordings, and it is confirmed that heartbeat modulations are seen in the radar data. This work shows that radar imaging of the human heart may provide valuable information on the mechanical movement of the heart.
Model of human visual-motion sensing
NASA Technical Reports Server (NTRS)
Watson, A. B.; Ahumada, A. J., Jr.
1985-01-01
A model of how humans sense the velocity of moving images is proposed. The model exploits constraints provided by human psychophysics, notably that motion-sensing elements appear tuned for two-dimensional spatial frequency, and by the frequency spectrum of a moving image, namely, that its support lies in the plane in which the temporal frequency equals the dot product of the spatial frequency and the image velocity. The first stage of the model is a set of spatial-frequency-tuned, direction-selective linear sensors. The temporal frequency of the response of each sensor is shown to encode the component of the image velocity in the sensor direction. At the second stage, these components are resolved in order to measure the velocity of image motion at each of a number of spatial locations and spatial frequencies. The model has been applied to several illustrative examples, including apparent motion, coherent gratings, and natural image sequences. The model agrees qualitatively with human perception.
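The constraint-plane relation can be checked numerically; the short sketch below uses arbitrary example values and follows the sign convention as stated in the abstract (temporal frequency equals the dot product of spatial frequency and image velocity).

    import numpy as np

    # For a translating image, each spatial-frequency component f (cycles/deg) drifting
    # with image velocity v (deg/s) has temporal frequency w_t = f . v, so the spectrum
    # lies on a plane through the origin of spatiotemporal frequency space.
    f = np.array([4.0, 1.0])    # spatial frequency of one component (assumed values)
    v = np.array([2.0, -3.0])   # image velocity (assumed values)
    w_t = f @ v                 # temporal frequency of that component, in Hz
    print(f"temporal frequency = {w_t:.1f} Hz")
    # A sensor tuned to f therefore encodes the velocity component along its direction:
    print(f"speed along f: {w_t / np.linalg.norm(f):.2f} deg/s")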
Exploitation of Ubiquitous Wi-Fi Devices as Building Blocks for Improvised Motion Detection Systems.
Soldovieri, Francesco; Gennarelli, Gianluca
2016-02-27
This article deals with a feasibility study on the detection of human movements in indoor scenarios based on radio signal strength variations. The sensing principle exploits the fact that the human body interacts with wireless signals, introducing variations of the radiowave fields due to shadowing and multipath phenomena. As a result, human motion can be inferred from fluctuations of radiowave power collected by a receiving terminal. In this paper, we investigate the potential of widely available wireless communication devices in order to develop an improvised motion detection system (IMDS). Experimental tests are performed in an indoor environment by using a smartphone as a Wi-Fi access point and a laptop with dedicated software as a receiver. Simple detection strategies tailored for real-time operation are implemented to process the received signal strength measurements. The results confirm the potential of the simple system proposed here to reliably detect human motion in operational conditions.
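A minimal sketch of the kind of detection strategy described: flag motion when short-window received-signal-strength (RSS) fluctuations exceed a threshold calibrated in an empty room. The window length, threshold factor, and readings below are assumptions, not values from the paper.

    import numpy as np

    # Flag human motion when the standard deviation of a short RSS window exceeds
    # k times the standard deviation measured with the room empty.
    def motion_detected(rss_window_dbm, baseline_std, k=3.0):
        return np.std(rss_window_dbm) > k * baseline_std

    baseline_std = 0.4                                # dB, quiet-room calibration (assumed)
    quiet  = [-51.2, -51.0, -51.3, -50.9, -51.1]      # hypothetical RSS samples, dBm
    moving = [-50.8, -53.5, -49.7, -54.2, -51.9]      # hypothetical RSS samples, dBm
    print(motion_detected(quiet, baseline_std))       # False
    print(motion_detected(moving, baseline_std))      # True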
Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
NASA Astrophysics Data System (ADS)
Lee, JaeBeom; Eleftheriadis, Alexandros
1997-01-01
We introduce the concept of motion adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye in terms of space and time in moving images with the consideration of object motion. Previous STMAC was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas including the eyes and lips need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.
Mathematical modelling of animate and intentional motion.
Rittscher, Jens; Blake, Andrew; Hoogs, Anthony; Stein, Gees
2003-01-01
Our aim is to enable a machine to observe and interpret the behaviour of others. Mathematical models are employed to describe certain biological motions. The main challenge is to design models that are both tractable and meaningful. In the first part we will describe how computer vision techniques, in particular visual tracking, can be applied to recognize a small vocabulary of human actions in a constrained scenario. Mainly the problems of viewpoint and scale invariance need to be overcome to formalize a general framework. Hence the second part of the article is devoted to the question whether a particular human action should be captured in a single complex model or whether it is more promising to make extensive use of semantic knowledge and a collection of low-level models that encode certain motion primitives. Scene context plays a crucial role if we intend to give a higher-level interpretation rather than a low-level physical description of the observed motion. A semantic knowledge base is used to establish the scene context. This approach consists of three main components: visual analysis, the mapping from vision to language and the search of the semantic database. A small number of robust visual detectors is used to generate a higher-level description of the scene. The approach together with a number of results is presented in the third part of this article. PMID:12689374
Video quality assessment method motivated by human visual perception
NASA Astrophysics Data System (ADS)
He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng
2016-11-01
Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in a middle temporal area by simulating the receptive field of neurons in V1 for the motion perception of the human visual system. Motivated by the biological evidence for the visual motion perception, a VQA method is proposed in this paper, which comprises the motion perception quality index and the spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from the difference of Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained by the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
Motion capture for human motion measuring by using single camera with triangle markers
NASA Astrophysics Data System (ADS)
Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi
2005-12-01
This study aims to realize motion capture for measuring 3D human motions with a single camera. Although motion capture using multiple cameras is widely used in the sports, medical, and engineering fields, an optical motion capture method with one camera has not been established. In this paper, the authors achieve 3D motion capture with one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration methods provide a 3D coordinate transformation parameter and a lens distortion parameter using the modified DLT method. The triangle markers enable calculation of the depth coordinate in the camera coordinate system. Experiments of 3D position measurement using the MMC in a measurement space of a 2 m cube showed that the average error in measuring the center of gravity of a triangle marker was less than 2 mm. Compared with conventional motion capture methods using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by putting a triangle marker on each human joint, the MMC was able to capture a walking motion, a standing-up motion and a bending and stretching motion. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate the position of a marker by measuring its velocity was proposed in order to improve the accuracy of the MMC.
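A simplified illustration of why a marker of known physical size constrains depth for a single calibrated camera; this is a pinhole approximation with hypothetical numbers, not the authors' exact formulation.

    # For a pinhole camera with focal length f (in pixels), a triangle side of known
    # physical length L that appears l pixels long and is roughly fronto-parallel lies
    # at depth Z ~= f * L / l. Knowing all three side lengths is what lets a single
    # camera recover the depth coordinate of the marker.
    def depth_from_known_length(f_px, known_len_m, image_len_px):
        return f_px * known_len_m / image_len_px

    f_px = 1200.0        # focal length in pixels (hypothetical calibration)
    side_m = 0.10        # known triangle side length, 10 cm (hypothetical)
    observed_px = 60.0   # measured side length in the image (hypothetical)
    print(f"estimated depth: {depth_from_known_length(f_px, side_m, observed_px):.2f} m")  # 2.00 m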
Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator
NASA Astrophysics Data System (ADS)
Rehmatullah, Faizan
In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to formulate a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform, and by using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of simulator workspace for every maneuver while simulating the vehicle perception. With the ability to handle nonlinear constraints, the model predictive control allows us to incorporate workspace limitations.
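A toy receding-horizon sketch of the idea: a 1-D platform modeled as a double integrator plans accelerations that track a requested specific-force cue while penalizing displacement and velocity, so the platform washes out toward its center. This is an unconstrained least-squares stand-in under assumed weights and dynamics, not the thesis algorithm, which also handles workspace constraints explicitly.

    import numpy as np

    dt, N = 0.05, 20
    A = np.array([[1.0, dt], [0.0, 1.0]])      # state: [platform position, velocity]
    B = np.array([[0.5 * dt**2], [dt]])        # input: platform acceleration

    def plan(x0, a_des, w_track=1.0, w_pos=5.0, w_vel=1.0):
        # Stack the horizon into one least-squares problem:
        # minimize w_track*||a - a_des||^2 + w_pos*||p||^2 + w_vel*||v||^2,
        # using the prediction x_k = A^k x0 + sum_j A^(k-1-j) B a_j.
        rows, rhs = [], []
        for k in range(1, N + 1):
            Phi = np.hstack([np.linalg.matrix_power(A, k - 1 - j) @ B if j < k
                             else np.zeros((2, 1)) for j in range(N)])
            free = np.linalg.matrix_power(A, k) @ x0
            rows.append(np.sqrt(w_pos) * Phi[0:1]); rhs.append(-np.sqrt(w_pos) * free[0:1])
            rows.append(np.sqrt(w_vel) * Phi[1:2]); rhs.append(-np.sqrt(w_vel) * free[1:2])
        rows.append(np.sqrt(w_track) * np.eye(N)); rhs.append(np.sqrt(w_track) * a_des)
        a, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
        return a[0]   # receding horizon: apply only the first commanded acceleration

    x0 = np.zeros(2)
    a_des = np.full(N, 2.0)   # driver requests a sustained 2 m/s^2 acceleration cue (assumed)
    print(f"first platform acceleration command: {plan(x0, a_des):.2f} m/s^2")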
Development of a CPM Machine for Injured Fingers.
Fu, Yili; Zhang, Fuxiang; Ma, Xin; Meng, Qinggang
2005-01-01
Human fingers are easily injured. A CPM machine is a mechanism based on the rehabilitation theory of continuous passive motion (CPM). Developing a CPM machine for clinical application in the rehabilitation of injured fingers is a significant task. Therefore, based on the theories of evidence-based medicine (EBM) and CPM, we have developed a biomimetic mechanism after modeling the motions of fingers and analyzing their kinematics and dynamics. We also designed an embedded operating system based on ARM (a kind of 32-bit RISC microprocessor). The equipment can achieve precise control of the fingers' range of motion, force and speed. It can serve as a rational checking method and a way of assessing the functional rehabilitation of human hands. Now, the first prototype has been finished and will start clinical testing at Harbin Medical University shortly.
Microsoft Kinect Sensor Evaluation
NASA Technical Reports Server (NTRS)
Billie, Glennoah
2011-01-01
My summer project evaluates the Kinect game sensor input/output and its suitability to perform as part of a human interface for a spacecraft application. The primary objective is to evaluate, understand, and communicate the Kinect system's ability to sense and track fine (human) position and motion. The project will analyze the performance characteristics and capabilities of this game system hardware and its applicability for gross and fine motion tracking. The software development kit for the Kinect was also investigated and some experimentation has begun to understand its development environment. To better understand software development for the Kinect game sensor, research into hacking communities has provided insight into the potential for a wide range of personal computer (PC) application development. The project also entails the disassembly of the Kinect game sensor. This analysis would involve disassembling a sensor, photographing it, identifying its components, and describing its operation.
Jammed Humans in High-Density Crowd Disasters
NASA Astrophysics Data System (ADS)
Bottinelli, Arianna; Sumpter, David; Silverberg, Jesse
When people gather in large groups like those found at Black Friday sales events, pilgrimages, heavy metal concerts, and parades, crowd density often becomes exceptionally high. As a consequence, these events can produce tragic outcomes such as stampedes and "crowd crushes". While human collective motion has been studied with active particle simulations, the underlying mechanisms for emergent behavior are less well understood. Here, we use techniques developed to study jammed granular materials to analyze an active matter model inspired by large groups of people gathering at a point of common interest. In the model, a single behavioral rule combined with body-contact interactions is sufficient for the emergence of a self-confined steady state, where particles fluctuate around a stable position. Applying mode analysis to this system, we find evidence for Goldstone modes, soft spots, and stochastic resonance, which may be the preferential mechanisms for dangerous emergent collective motions in crowds.
NASA Astrophysics Data System (ADS)
Wang, Siqi; Li, Decai
2015-09-01
This paper describes the design and characterization of a plane vibration-based electromagnetic generator that is capable of converting low-frequency vibration energy into electrical energy. A magnetic spring is formed by a magnetic attractive force between fixed and movable permanent magnets. The ferrofluid is employed on the bottom of the movable permanent magnet to suspend it and reduce the mechanical damping as a fluid lubricant. When the electromagnetic generator with a ferrofluid of 0.3 g was operated under a resonance condition, the output power reached 0.27 mW, and the power density of the electromagnetic generator was 5.68 µW/cm2. The electromagnetic generator was also used to harvest energy from human motion. The measured average load powers of the electromagnetic generator from human waist motion were 0.835 mW and 1.3 mW during walking and jogging, respectively.
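As a quick consistency check of the figures quoted (a power density is just output power divided by the device's effective area), the reported numbers imply an effective generator area of roughly 48 cm²; this area is an inference, not a dimension stated in the abstract.

    # Back-of-the-envelope check of the reported resonance figures.
    power_w = 0.27e-3              # 0.27 mW output power at resonance
    density_w_per_cm2 = 5.68e-6    # 5.68 uW/cm^2 reported power density
    area_cm2 = power_w / density_w_per_cm2
    print(f"implied effective area: {area_cm2:.1f} cm^2")   # ~47.5 cm^2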
Maclachlan, Liam; White, Steven G; Reid, Duncan
2015-08-01
Functional assessments are conducted in both clinical and athletic settings in an attempt to identify those individuals who exhibit movement patterns that may increase their risk of non-contact injury. In place of highly sophisticated three-dimensional motion analysis, functional testing can be completed through observation. To evaluate the validity of movement observation assessments by summarizing the results of articles comparing human observation in real-time or video play-back and three-dimensional motion analysis of lower extremity kinematics during functional screening tests. Systematic review. A computerized systematic search was conducted through Medline, SPORTSdiscus, Scopus, Cinhal, and Cochrane health databases between February and April of 2014. Validity studies comparing human observation (real-time or video play-back) to three-dimensional motion analysis of functional tasks were selected. Only studies comprising uninjured, healthy subjects conducting lower extremity functional assessments were appropriate for review. Eligible observers were certified health practitioners or qualified members of sports and athletic training teams that conduct athlete screening. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) was used to appraise the literature. Results are presented in terms of functional tasks. Six studies met the inclusion criteria. Across these studies, two-legged squats, single-leg squats, drop-jumps, and running and cutting manoeuvres were the functional tasks analysed. When compared to three-dimensional motion analysis, observer ratings of lower extremity kinematics, such as knee position in relation to the foot, demonstrated mixed results. Single-leg squats achieved target sensitivity values (≥ 80%) but not specificity values (≥ 50%). Drop-jump task agreement ranged from poor (< 50%) to excellent (> 80%). Two-legged squats achieved 88% sensitivity and 85% specificity. Mean underestimations as large as 19.8° (peak knee flexion) were found in the results of those assessing running and side-step cutting manoeuvres. Variables such as the speed of movement, the methods of rating, the profiles of participants and the experience levels of observers may have influenced the outcomes of functional testing. The small number of studies used limits generalizability. Furthermore, this review used two-dimensional video play-back for the majority of observations. If the movements had been rated in real-time three-dimensional video, the results may have been different. Slower, speed-controlled movements using dichotomous ratings reach target sensitivity and demonstrate higher overall levels of agreement. As a result, their utilization in functional screening is advocated. 1A.
Human motion behavior while interacting with an industrial robot.
Bortot, Dino; Ding, Hao; Antonopolous, Alexandros; Bengler, Klaus
2012-01-01
Human workers and industrial robots both have specific strengths within industrial production. Advantageously, they complement each other perfectly, which leads to the development of human-robot interaction (HRI) applications. Bringing humans and robots together in the same workspace may lead to potential collisions. Avoiding such collisions is a central safety requirement. It can be realized with sundry sensor systems, all of them decelerating the robot when the distance to the human decreases alarmingly and applying an emergency stop when the distance becomes too small. As a consequence, the efficiency of the overall system suffers, because the robot has high idle times. Optimized path planning algorithms have to be developed to avoid that. The following study investigates human motion behavior in the proximity of an industrial robot. Three different kinds of encounters between the two entities under three robot speed levels are prompted. A motion tracking system is used to capture the motions. Results show that humans keep an average distance of about 0.5 m from the robot when the encounter occurs. Approach toward the workbenches was influenced by the robot in ten of 15 cases. Furthermore, an increase of participants' walking velocity with higher robot velocities is observed.
Dual-body magnetic helical robot for drilling and cargo delivery in human blood vessels
NASA Astrophysics Data System (ADS)
Lee, Wonseo; Jeon, Seungmun; Nam, Jaekwang; Jang, Gunhee
2015-05-01
We propose a novel dual-body magnetic helical robot (DMHR) manipulated by a magnetic navigation system. The proposed DMHR can generate helical motions to navigate in human blood vessels and to drill blood clots by an external rotating magnetic field. It can also generate release motions which are relative rotational motions between dual-bodies to release the carrying cargos to a target region by controlling the magnitude of an external magnetic field. Constraint equations were derived to selectively manipulate helical and release motions by controlling external magnetic fields. The DMHR was prototyped and various experiments were conducted to demonstrate its motions and verify its manipulation methods.
Dynamic Stimuli And Active Processing In Human Visual Perception
NASA Astrophysics Data System (ADS)
Haber, Ralph N.
1990-03-01
Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted to newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.
Adaptive control of center of mass (global) motion and its joint (local) origin in gait.
Yang, Feng; Pai, Yi-Chung
2014-08-22
Dynamic gait stability can be quantified by the relationship of the motion state (i.e. the position and velocity) between the body center of mass (COM) and its base of support (BOS). Humans learn how to adaptively control stability by regulating the absolute COM motion state (i.e. its position and velocity) and/or by controlling the BOS (through stepping) in a predictable manner, or by doing both simultaneously following an external perturbation that disrupts their regular relationship. After repeated-slip perturbation training, for instance, older adults learned to shift their COM position forward while walking with a reduced step length, hence reducing their likelihood of slip-induced falls. How and to what extent each individual joint influences such adaptive alterations is mostly unknown. A three-dimensional individualized human kinematic model was established. Based on the human model, sensitivity analysis was used to systematically quantify the influence of each lower limb joint on the COM position relative to the BOS and on the step length during gait. It was found that the leading foot had the greatest effect on regulating the COM position relative to the BOS; and both hips bear the most influence on the step length. These findings could guide cost-effective yet efficient fall-reduction training paradigms for the older population. Copyright © 2014 Elsevier Ltd. All rights reserved.
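A minimal planar sketch of the kind of sensitivity analysis described: numerically differentiate the horizontal COM position of an assumed three-segment chain with respect to each joint angle. The segment lengths, masses, and posture are placeholders, not the paper's three-dimensional individualized model.

    import numpy as np

    L = np.array([0.45, 0.45, 0.80])   # segment lengths: shank, thigh, trunk (m, assumed)
    m = np.array([ 4.0,  8.0, 45.0])   # segment masses (kg, assumed)

    def com_x(q):
        """Horizontal COM of a planar chain; q are absolute segment angles (rad from vertical)."""
        joint_x, x_coms = 0.0, []
        for li, qi in zip(L, q):
            x_coms.append(joint_x + 0.5 * li * np.sin(qi))  # segment COM at mid-length
            joint_x += li * np.sin(qi)
        return float(np.average(x_coms, weights=m))

    q0 = np.radians([5.0, -10.0, 3.0])                       # assumed posture
    eps = 1e-6
    sens = [(com_x(q0 + eps * np.eye(3)[i]) - com_x(q0)) / eps for i in range(3)]
    print("dCOMx/dq (m/rad):", np.round(sens, 3))            # largest entry = most influential joint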
Cereatti, Andrea; Bonci, Tecla; Akbarshahi, Massoud; Aminian, Kamiar; Barré, Arnaud; Begon, Mickael; Benoit, Daniel L; Charbonnier, Caecilia; Dal Maso, Fabien; Fantozzi, Silvia; Lin, Cheng-Chung; Lu, Tung-Wu; Pandy, Marcus G; Stagni, Rita; van den Bogert, Antonie J; Camomilla, Valentina
2017-09-06
Soft tissue artefact (STA) represents one of the main obstacles for obtaining accurate and reliable skeletal kinematics from motion capture. Many studies have addressed this issue, yet there is no consensus on the best available bone pose estimator and the expected errors associated with relevant results. Furthermore, results obtained by different authors are difficult to compare due to the high variability and specificity of the phenomenon and the different metrics used to represent these data. Therefore, the aim of this study was twofold: firstly, to propose standards for description of STA; and secondly, to provide illustrative STA data samples for body segments in the upper and lower extremities and for a range of motor tasks, specifically: level walking, stair ascent, sit-to-stand, hip- and knee-joint functional movements, cutting motion, running, hopping, arm elevation and functional upper-limb movements. The STA dataset includes motion of the skin markers measured in vivo and ex vivo using stereophotogrammetry as well as motion of the underlying bones measured using invasive or bio-imaging techniques (i.e., X-ray fluoroscopy or MRI). The data are accompanied by a detailed description of the methods used for their acquisition, with information given about their quality as well as characterization of the STA using the proposed standards. The availability of open-access and standard-format STA data will be useful for the evaluation and development of bone pose estimators, thus contributing to the advancement of three-dimensional human movement analysis and its translation into the clinical practice and other applications. Copyright © 2017 Elsevier Ltd. All rights reserved.
Head motion during MRI acquisition reduces gray matter volume and thickness estimates.
Reuter, Martin; Tisdall, M Dylan; Qureshi, Abid; Buckner, Randy L; van der Kouwe, André J W; Fischl, Bruce
2015-02-15
Imaging biomarkers derived from magnetic resonance imaging (MRI) data are used to quantify normal development, disease, and the effects of disease-modifying therapies. However, motion during image acquisition introduces image artifacts that, in turn, affect derived markers. A systematic effect can be problematic since factors of interest like age, disease, and treatment are often correlated with both a structural change and the amount of head motion in the scanner, confounding the ability to distinguish biology from artifact. Here we evaluate the effect of head motion during image acquisition on morphometric estimates of structures in the human brain using several popular image analysis software packages (FreeSurfer 5.3, VBM8 SPM, and FSL Siena 5.0.7). Within-session repeated T1-weighted MRIs were collected on 12 healthy volunteers while performing different motion tasks, including two still scans. We show that volume and thickness estimates of the cortical gray matter are biased by head motion with an average apparent volume loss of roughly 0.7%/mm/min of subject motion. Effects vary across regions and remain significant after excluding scans that fail a rigorous quality check. In view of these results, the interpretation of reported morphometric effects of movement disorders or other conditions with increased motion tendency may need to be revisited: effects may be overestimated when not controlling for head motion. Furthermore, drug studies with hypnotic, sedative, tranquilizing, or neuromuscular-blocking substances may contain spurious "effects" of reduced atrophy or brain growth simply because they affect motion distinct from true effects of the disease or therapeutic process. Copyright © 2014 Elsevier Inc. All rights reserved.
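A back-of-the-envelope use of the reported bias (about 0.7% apparent cortical volume loss per mm/min of head motion); the group difference in motion assumed below is hypothetical.

    # If one group moves 1.5 mm/min more than its comparison group, the artifactual
    # "atrophy" would be on the order of one percent, comparable to many reported effects.
    bias_pct_per_mm_min = 0.7          # reported average apparent volume loss
    extra_motion_mm_min = 1.5          # hypothetical group difference in motion
    apparent_loss_pct = bias_pct_per_mm_min * extra_motion_mm_min
    print(f"apparent volume loss: {apparent_loss_pct:.2f}%")   # 1.05%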
On the Visual Input Driving Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean
1996-01-01
Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.
ERIC Educational Resources Information Center
Bidet-Ildei, Christel; Kitromilides, Elenitsa; Orliaguet, Jean-Pierre; Pavlova, Marina; Gentaz, Edouard
2014-01-01
In human newborns, spontaneous visual preference for biological motion is reported to occur at birth, but the factors underpinning this preference are still in debate. Using a standard visual preferential looking paradigm, 4 experiments were carried out in 3-day-old human newborns to assess the influence of translational displacement on perception…
Trunk posture monitoring with inertial sensors
Wong, Wai Yin; Wong, Man Sang
2008-01-01
Measurement of human posture and movement is an important area of research in the bioengineering and rehabilitation fields. Various attempts have been initiated for different clinical application goals, such as diagnosis of pathological posture and movements, assessment of pre- and post-treatment efficacy and comparison of different treatment protocols. Image-based methods for measurements of human posture and movements have been developed, such as the radiography, photogrammetry, optoelectric technique and video analysis. However, it is found that these methods are complicated to set up, time-consuming to operate and could only be applied in laboratory environments. This study introduced a method of using a posture monitoring system in estimating the spinal curvature changes during trunk movements on the sagittal and coronal planes and providing trunk posture monitoring during daily activities. The system consisted of three sensor modules, each with one tri-axial accelerometer and three uni-axial gyroscopes orthogonally aligned, and a digital data acquisition and feedback system. The accuracy of this system was tested with a motion analysis system (Vicon 370) in calibration with experimental setup and in trunk posture measurement with nine human subjects, and the performance of the posture monitoring system during daily activities with two human subjects was reported. The averaged root mean squared differences between the measurements of the system and motion analysis system were found to be <1.5° in dynamic calibration, and <3.1° for the sagittal plane and ≤2.1° for the coronal plane in estimation of the trunk posture change during trunk movements. The measurements of the system and the motion analysis system were highly correlated (>0.999 for dynamic calibration and >0.829 for estimation of spinal curvature change in domain planes of movement during flexion and lateral bending). With the sensing modules located on the upper trunk, mid-trunk and the pelvic levels, the inclination of trunk segment and the change of spinal curvature in trunk movements could be estimated. The posture information of five subjects was recorded at 30 s intervals during daily activity over a period of 3 days and 2 h a day. The preliminary results demonstrated that the subjects could improve their posture when feedback signals were provided. The posture monitoring system could be used for the purpose of posture monitoring during daily activity. PMID:18196296
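A generic complementary-filter sketch for the accelerometer/gyroscope fusion that such a sensing module requires; the paper does not state its exact algorithm, so the filter structure, gain, and samples below are assumptions.

    import numpy as np

    # Blend the gyro-integrated angle (good short-term) with the accelerometer tilt
    # angle (gravity-referenced, good long-term) to estimate sagittal-plane inclination.
    def fuse(acc_y, acc_z, gyro_rate, theta_prev, dt, alpha=0.98):
        theta_acc = np.arctan2(acc_y, acc_z)       # tilt from the gravity vector
        theta_gyro = theta_prev + gyro_rate * dt   # short-term integration of angular rate
        return alpha * theta_gyro + (1 - alpha) * theta_acc

    theta, dt = 0.0, 0.01
    # Hypothetical samples: slowly tilting forward at about 0.1 rad/s.
    for k in range(5):
        theta = fuse(acc_y=np.sin(0.001 * k), acc_z=np.cos(0.001 * k),
                     gyro_rate=0.1, theta_prev=theta, dt=dt)
    print(f"estimated inclination after 5 samples: {np.degrees(theta):.2f} deg")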
Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?
Wouda, Frank J.; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H.
2016-01-01
Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible with an average joint position error of approximately 7 cm, and average joint angle error of 7°. Additionally, the effects of magnetic disturbances typical in orientation tracking on the estimation of full-body poses were also investigated, where nearest neighbor search showed better performance for such disturbances. PMID:27983676
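A minimal lazy-learning sketch in the spirit of the nearest-neighbor variant: look up and average the stored full-body poses whose sparse five-sensor features are closest to the current measurement. The feature and pose dimensions and the random "database" are placeholders, not the study's data.

    import numpy as np

    rng = np.random.default_rng(0)
    n_frames, sparse_dim, pose_dim = 5000, 5 * 4, 23 * 3   # 5 IMU quaternions -> 23 joint angles (assumed)
    db_features = rng.normal(size=(n_frames, sparse_dim))  # stands in for recorded sparse IMU features
    db_poses = rng.normal(size=(n_frames, pose_dim))       # corresponding full-body poses

    def estimate_pose(sparse_feature, k=5):
        """Average the poses of the k nearest database frames (Euclidean metric)."""
        d = np.linalg.norm(db_features - sparse_feature, axis=1)
        nearest = np.argsort(d)[:k]
        return db_poses[nearest].mean(axis=0)

    query = rng.normal(size=sparse_dim)
    print("estimated pose vector shape:", estimate_pose(query).shape)   # (69,)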
A method of depth image based human action recognition
NASA Astrophysics Data System (ADS)
Li, Pei; Cheng, Wanli
2017-05-01
In this paper, we propose an action recognition algorithm framework based on human skeleton joint information. In order to extract features of human motion, we use information on body posture and the speed and acceleration of movement to construct spatial motion features that describe and reflect the joints. On the other hand, we use the classical temporal pyramid matching algorithm to construct temporal features and describe the variation of the motion sequence at different time scales. Then, we use a bag-of-words model to represent these actions, presenting every action as a histogram by clustering the extracted features. Finally, we employ a Hidden Markov Model to train and test the extracted motion features. In the experimental part, the correctness and effectiveness of the proposed model are comprehensively verified on two well-known datasets.
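A small sketch of the bag-of-words step described: cluster per-frame skeleton features into a codebook and represent each clip as a normalized codeword histogram. The features here are synthetic and the codebook size is assumed; the temporal-pyramid and HMM stages are omitted.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    frames = rng.normal(size=(2000, 60))          # per-frame joint position/velocity features (fake)
    codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(frames)

    def clip_histogram(clip_frames):
        words = codebook.predict(clip_frames)     # assign each frame to its nearest codeword
        hist = np.bincount(words, minlength=32).astype(float)
        return hist / hist.sum()                  # normalized codeword histogram

    clip = rng.normal(size=(90, 60))              # one 3-second clip at 30 fps (fake)
    print(clip_histogram(clip)[:8])               # such histograms feed the downstream classifier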
Real-time animation software for customized training to use motor prosthetic systems.
Davoodi, Rahman; Loeb, Gerald E
2012-03-01
Research on control of human movement and development of tools for restoration and rehabilitation of movement after spinal cord injury and amputation can benefit greatly from software tools for creating precisely timed animation sequences of human movement. Despite their ability to create sophisticated animation and high quality rendering, existing animation software are not adapted for application to neural prostheses and rehabilitation of human movement. We have developed a software tool known as MSMS (MusculoSkeletal Modeling Software) that can be used to develop models of human or prosthetic limbs and the objects with which they interact and to animate their movement using motion data from a variety of offline and online sources. The motion data can be read from a motion file containing synthesized motion data or recordings from a motion capture system. Alternatively, motion data can be streamed online from a real-time motion capture system, a physics-based simulation program, or any program that can produce real-time motion data. Further, animation sequences of daily life activities can be constructed using the intuitive user interface of Microsoft's PowerPoint software. The latter allows expert and nonexpert users alike to assemble primitive movements into a complex motion sequence with precise timing by simply arranging the order of the slides and editing their properties in PowerPoint. The resulting motion sequence can be played back in an open-loop manner for demonstration and training or in closed-loop virtual reality environments where the timing and speed of animation depends on user inputs. These versatile animation utilities can be used in any application that requires precisely timed animations but they are particularly suited for research and rehabilitation of movement disorders. MSMS's modeling and animation tools are routinely used in a number of research laboratories around the country to study the control of movement and to develop and test neural prostheses for patients with paralysis or amputations.
Time Average Holography Study of Human Tympanic Membrane with Altered Middle Ear Ossicular Chain
NASA Astrophysics Data System (ADS)
Cheng, Jeffrey T.; Ravicz, Michael E.; Rosowski, John J.; Hulli, Nesim; Hernandez-Montes, Maria S.; Furlong, Cosme
2009-02-01
Computer-assisted time average holographic interferometry was used to study the vibration of the human tympanic membrane (TM) in cadaveric temporal bones before and after alterations of the ossicular chain. Simultaneous laser Doppler vibrometer measurements of stapes velocity were performed to estimate the conductive hearing loss caused by ossicular alterations. The quantified TM motion described from holographic images was correlated with stapes velocity to define relations between TM motion and stapes velocity in various ossicular disorders. The results suggest that motions of the TM are relatively uncoupled from stapes motion at frequencies above 1000 Hz.
Walking through the Impulse-Momentum Theorem
ERIC Educational Resources Information Center
Haugland, Ole Anton
2013-01-01
Modern force platforms are handy tools for investigating forces during human motion. Earlier they were very expensive and were mostly used in research laboratories. But now even platforms that can measure in two directions are quite affordable. In this work we used the PASCO 2-Axis Force Platform. The analysis of the data can serve as a nice…
ERIC Educational Resources Information Center
SEIBERT, WARREN F.; AND OTHERS
Preliminary analyses were undertaken to determine the potential contribution of motion picture films to factor analytic studies of human intellect. Of primary concern were the operations of cognition and memory, forming two of the five operation columns of Guilford's "Structure of Intellect." The core reference for the study was defined…
Recognizing human activities using appearance metric feature and kinematics feature
NASA Astrophysics Data System (ADS)
Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye
2017-05-01
The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, an appearance metric feature and a kinematics feature, is considered, and a system of two-dimensional (2-D) Poisson equations is introduced to extract a more discriminative appearance metric feature. Specifically, moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image and produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through a dimension reduction technique, bidirectional 2-D principal component analysis, chosen to balance classification accuracy and computation time. Finally, a cascaded classifier based on a nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.
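To make the appearance-feature pipeline above concrete, the following sketch accumulates binary motion silhouettes over a clip and then solves a discrete 2-D Poisson equation (here, the Laplacian of U equals -1 inside the silhouette with U = 0 outside) by Jacobi iteration to obtain a smoother silhouette image. The boundary condition and iteration count are assumptions, not the paper's exact formulation.

```python
import numpy as np

def motion_accumulation(binary_masks):
    """Sum a sequence of binary foreground masks into one accumulation image."""
    acc = np.sum(binary_masks.astype(float), axis=0)
    return acc / acc.max() if acc.max() > 0 else acc

def poisson_silhouette(mask, n_iter=500):
    """Solve  laplacian(U) = -1  on the foreground (U = 0 on background)
    with simple Jacobi iterations."""
    U = np.zeros_like(mask, dtype=float)
    inside = mask > 0
    for _ in range(n_iter):
        nb = (np.roll(U, 1, 0) + np.roll(U, -1, 0) +
              np.roll(U, 1, 1) + np.roll(U, -1, 1))
        U = np.where(inside, (nb + 1.0) / 4.0, 0.0)
    return U

# Toy example: a moving square over 10 frames.
masks = np.zeros((10, 64, 64), dtype=np.uint8)
for t in range(10):
    masks[t, 20:40, 10 + 2 * t:30 + 2 * t] = 1
acc = motion_accumulation(masks)
psi = poisson_silhouette(acc > 0)
print(psi.max())
```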
Lahnakoski, Juha M; Glerean, Enrico; Salmi, Juha; Jääskeläinen, Iiro P; Sams, Mikko; Hari, Riitta; Nummenmaa, Lauri
2012-01-01
Despite the abundant data on brain networks processing static social signals, such as pictures of faces, the neural systems supporting social perception in naturalistic conditions are still poorly understood. Here we delineated brain networks subserving social perception under naturalistic conditions in 19 healthy humans who watched, during 3-T functional magnetic resonance imaging (fMRI), a set of 137 short (approximately 16 s each, total 27 min) audiovisual movie clips depicting pre-selected social signals. Two independent raters estimated how well each clip represented eight social features (faces, human bodies, biological motion, goal-oriented actions, emotion, social interaction, pain, and speech) and six filler features (places, objects, rigid motion, people not in social interaction, non-goal-oriented action, and non-human sounds) lacking social content. These ratings were used as predictors in the fMRI analysis. The posterior superior temporal sulcus (STS) responded to all social features but not to any non-social features, and the anterior STS responded to all social features except bodies and biological motion. We also found four partially segregated, extended networks for processing of specific social signals: (1) a fronto-temporal network responding to multiple social categories, (2) a fronto-parietal network preferentially activated to bodies, motion, and pain, (3) a temporo-amygdalar network responding to faces, social interaction, and speech, and (4) a fronto-insular network responding to pain, emotions, social interactions, and speech. Our results highlight the role of the pSTS in processing multiple aspects of social information, as well as the feasibility and efficiency of fMRI mapping under conditions that resemble the complexity of real life.
Ito, Norie; Barnes, Graham R; Fukushima, Junko; Fukushima, Kikuro; Warabi, Tateo
2013-08-01
Using a cue-dependent memory-based smooth-pursuit task previously applied to monkeys, we examined the effects of visual motion-memory on smooth-pursuit eye movements in normal human subjects and compared the results with those of the trained monkeys. These results were also compared with those during simple ramp-pursuit that did not require visual motion-memory. During memory-based pursuit, all subjects exhibited virtually no errors in either pursuit-direction or go/no-go selection. Tracking eye movements of humans and monkeys were similar within each task, but differed between the two tasks: latencies of the pursuit and corrective saccades were prolonged, initial pursuit eye velocity and acceleration were lower, peak velocities were lower, and time to reach peak velocities lengthened during memory-based pursuit. These characteristics were similar to anticipatory pursuit initiated by extra-retinal components during the initial extinction task of Barnes and Collins (J Neurophysiol 100:1135-1146, 2008b). We suggest that the differences between the two tasks reflect differences in the contributions of extra-retinal and retinal components. This interpretation is supported by two further studies: (1) during popping out of the correct spot to enhance retinal image-motion inputs during memory-based pursuit, pursuit eye velocities approached those during simple ramp-pursuit, and (2) during initial blanking of spot motion during memory-based pursuit, pursuit components appeared in the correct direction. Our results showed the importance of extra-retinal mechanisms for initial pursuit during memory-based pursuit, which include priming effects and extra-retinal drive components. Comparison with monkey studies on neuronal responses and model analysis suggested possible pathways for the extra-retinal mechanisms.
Kaneoke, Y; Urakawa, T; Kakigi, R
2009-05-19
We investigated whether direction information is represented in the population-level neural response evoked by the visual motion stimulus, as measured by magnetoencephalography. Coherent motions with varied speed, varied direction, and different coherence level were presented using random dot kinematography. Peak latency of responses to motion onset was inversely related to speed in all directions, as previously reported, but no significant effect of direction on latency changes was identified. Mutual information entropy (IE) calculated using four-direction response data increased significantly (>2.14) after motion onset in 41.3% of response data and maximum IE was distributed at approximately 20 ms after peak response latency. When response waveforms showing significant differences (by multivariate discriminant analysis) in distribution of the three waveform parameters (peak amplitude, peak latency, and 75% waveform width) with stimulus directions were analyzed, 87 waveform stimulus directions (80.6%) were correctly estimated using these parameters. Correct estimation rate was unaffected by stimulus speed, but was affected by coherence level, even though both speed and coherence affected response amplitude similarly. Our results indicate that speed and direction of stimulus motion are represented in the distinct properties of a response waveform, suggesting that the human brain processes speed and direction separately, at least in part.
On-Line Detection and Segmentation of Sports Motions Using a Wearable Sensor.
Kim, Woosuk; Kim, Myunggyu
2018-03-19
In sports motion analysis, observation is a prerequisite for understanding the quality of motions. This paper introduces a novel approach to detect and segment sports motions using a wearable sensor for supporting systematic observation. The main goal is, for convenient analysis, to automatically provide motion data, which are temporally classified according to the phase definition. For explicit segmentation, a motion model is defined as a sequence of sub-motions with boundary states. A sequence classifier based on deep neural networks is designed to detect sports motions from continuous sensor inputs. The evaluation on two types of motions (soccer kicking and two-handed ball throwing) verifies that the proposed method is successful for the accurate detection and segmentation of sports motions. By developing a sports motion analysis system using the motion model and the sequence classifier, we show that the proposed method is useful for observation of sports motions by automatically providing relevant motion data for analysis.
Zhang, Dongwen; Zhu, Qingsong; Xiong, Jing; Wang, Lei
2014-04-27
In a deforming anatomic environment, the motion of an instrument is subject to complex geometrical and dynamic constraints; robot-assisted minimally invasive surgery therefore requires more sophisticated skills from surgeons. This paper proposes a novel dynamic virtual fixture (DVF) to enhance the surgical operation accuracy of admittance-type medical robotics in a deforming environment. A framework for the DVF on the Euclidean group SE(3) is presented, which unites rotation and translation in a compact form. First, we constructed the holonomic/non-holonomic constraints and then searched for the corresponding reference to distinguish between preferred and non-preferred directions. Second, different control strategies are employed to handle the tasks along the distinguished directions. The desired spatial compliance matrix is synthesized from an allowable motion screw set to filter out task-unrelated components from the manual input, so the operator has complete control over the preferred directions; meanwhile, the relative motion between the surgical instrument and the anatomical structures is actively tracked and cancelled, and the deviation relative to the reference is compensated jointly by the operator and the DVF controllers. The operator, haptic device, admittance-type proxy, and virtual deforming environment are involved in a hardware-in-the-loop experiment: human-robot cooperation with the assistance of the DVF controller is carried out on a deforming sphere to simulate beating-heart surgery, the performance of the proposed DVF on the admittance-type proxy is evaluated, and both human factors and control parameters are analyzed. The DVF can improve the dynamic properties of human-robot cooperation in a low-frequency (0 ~ 40 rad/sec) deforming environment and maintain synergy of orientation and translation during the operation. Statistical analysis reveals that the operator has intuitive control over the preferred directions, the human and the DVF controller jointly control the motion along the non-preferred directions, and the target deformation is tracked actively. The proposed DVF for an admittance-type manipulator is capable of assisting the operator with skilled operations in a deforming environment.
Impaired visual recognition of biological motion in schizophrenia.
Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee
2005-09-15
Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.
Human comfort response to random motions with a dominant vertical motion
NASA Technical Reports Server (NTRS)
Stone, R. W., Jr.
1975-01-01
Subjective ride comfort response ratings were measured on the Langley Visual Motion Simulator with vertical acceleration inputs with various power spectra shapes and magnitudes. The data obtained are presented.
Day, B L; Steiger, M J; Thompson, P D; Marsden, C D
1993-09-01
1. Measurements of human upright body movements in three dimensions have been made on thirty-five male subjects attempting to stand still with various stance widths and with eyes closed or open. Body motion was inferred from movements of eight markers fixed to specific sites on the body from the shoulders to the ankles. Motion of these markers was recorded together with motion of the point of application of the resultant of the ground reaction forces (centre of pressure). 2. The speed of the body (average from eight sites) was increased by closing the eyes or narrowing the stance width and there was an interaction between these two factors such that vision reduced body speed more effectively when the feet were closer together. Similar relationships were found for components of velocity both in the frontal and sagittal planes although stance width exerted a much greater influence on the lateral velocity component. 3. Fluctuations in position of the body were also increased by eye closure or narrowing of stance width. Again, the effect of stance width was more potent for lateral than for anteroposterior movements. In contrast to the velocity measurements, there was no interaction between vision and stance width. 4. There was a progressive increase in the amplitude of position and velocity fluctuations from markers placed higher on the body. The fluctuations in the position of the centre of pressure were similar in magnitude to those of the markers placed near the hip. The fluctuations in velocity of centre of pressure, however, were greater than of any site on the body. 5. Analysis of the amplitude of angular motion between adjacent straight line segments joining the markers suggests that the inverted pendulum model of body sway is incomplete. Motion about the ankle joint was dominant only for lateral movement in the frontal plane with narrow stance widths (< 8 cm). For all other conditions most angular motion occurred between the trunk and leg. 6. The large reduction in lateral body motion with increasing stance width was mainly due to a disproportionate reduction in the angular motion about the ankles and feet. A mathematical model of the skeletal structure has been constructed which offers some explanation for this specific reduction in joint motion.(ABSTRACT TRUNCATED AT 400 WORDS)
Jia, Rui; Monk, Paul; Murray, David; Noble, J Alison; Mellon, Stephen
2017-09-06
Optoelectronic motion capture systems are widely employed to measure the movement of human joints. However, there can be a significant discrepancy between the data obtained by a motion capture system (MCS) and the actual movement of underlying bony structures, which is attributed to soft tissue artefact. In this paper, a computer-aided tracking and motion analysis with ultrasound (CAT & MAUS) system with an augmented globally optimal registration algorithm is presented to dynamically track the underlying bony structure during movement. The augmented registration part of CAT & MAUS was validated with a high system accuracy of 80%. The Euclidean distance between the marker-based bony landmark and the bony landmark tracked by CAT & MAUS was calculated to quantify the measurement error of an MCS caused by soft tissue artefact during movement. The average Euclidean distance between the target bony landmark measured by the CAT & MAUS system and by the MCS alone varied from 8.32 mm to 16.87 mm in gait. This indicates the discrepancy between the MCS-measured bony landmark and the actual underlying bony landmark. Moreover, Procrustes analysis was applied to demonstrate that CAT & MAUS reduces the deformation of the body segment shape modeled by markers during motion. The augmented CAT & MAUS system shows its potential to dynamically detect and locate actual underlying bony landmarks, which reduces the MCS measurement error caused by soft tissue artefact during movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
U.S. Marine Corps Training Modeling and Simulation Master Plan
2007-01-18
is needed that is not restricted by line of sight (LOS) and is transportable/deployable. • The LVC-TE must have the ability to have Human Anatomy Motion... Human Anatomy Motion-Tracking and Display.
The effect of eccentricity and spatiotemporal energy on motion silencing.
Choi, Lark Kwon; Bovik, Alan C; Cormack, Lawrence K
2016-01-01
The now well-known motion-silencing illusion has shown that salient changes among a group of objects' luminances, colors, shapes, or sizes may appear to cease when objects move rapidly (Suchow & Alvarez, 2011). It has been proposed that silencing derives from dot spacing that causes crowding, coherent changes in object color or size, and flicker frequencies combined with dot spacing (Choi, Bovik, & Cormack, 2014; Peirce, 2013; Turi & Burr, 2013). Motion silencing is a peripheral effect that does not occur near the point of fixation. To better understand the effect of eccentricity on motion silencing, we measured the amount of motion silencing as a function of eccentricity in human observers using traditional psychophysics. Fifteen observers reported whether dots in any of four concentric rings changed in luminance over a series of rotational velocities. The results in the human experiments showed that the threshold velocity for motion silencing almost linearly decreases as a function of log eccentricity. Further, we modeled the response of a population of simulated V1 neurons to our stimuli. We found strong matches between the threshold velocities on motion silencing observed in the human experiment and those seen in the energy model of Adelson and Bergen (1985). We suggest the plausible explanation that as eccentricity increases, the combined motion-flicker signal falls outside the narrow spatiotemporal frequency response regions of the modeled receptive fields, thereby reducing flicker visibility.
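The energy-model comparison mentioned above follows Adelson and Bergen's (1985) spatiotemporal energy idea: motion energy is the squared output of quadrature pairs of filters oriented in space-time. The sketch below builds such pairs for one spatial dimension and computes opponent energy for drifting sinusoids; the filter parameters are illustrative rather than those fitted to the experiment.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_pair(n, sigma, freq):
    """Even/odd (quadrature) Gabor pair on a 1-D axis."""
    u = np.arange(n) - n // 2
    env = np.exp(-u**2 / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * u), env * np.sin(2 * np.pi * freq * u)

def opponent_motion_energy(stimulus):
    """Adelson-Bergen style opponent energy for a (time, space) stimulus."""
    se, so = gabor_pair(21, 4.0, 0.1)   # spatial even/odd
    te, to = gabor_pair(21, 4.0, 0.1)   # temporal even/odd
    # Direction-selective quadrature pairs from separable combinations.
    right_a = np.outer(te, se) + np.outer(to, so)
    right_b = np.outer(te, so) - np.outer(to, se)
    left_a  = np.outer(te, se) - np.outer(to, so)
    left_b  = np.outer(te, so) + np.outer(to, se)
    def energy(fa, fb):
        ra = fftconvolve(stimulus, fa, mode='valid')
        rb = fftconvolve(stimulus, fb, mode='valid')
        return np.sum(ra**2 + rb**2)
    return energy(right_a, right_b) - energy(left_a, left_b)

# Drifting sinusoids: rightward drift gives positive opponent energy, leftward negative.
t, x = np.meshgrid(np.arange(128), np.arange(128), indexing='ij')
right = np.cos(2 * np.pi * 0.1 * (x - t))
left  = np.cos(2 * np.pi * 0.1 * (x + t))
print(opponent_motion_energy(right) > 0, opponent_motion_energy(left) < 0)
```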
de Souza Baptista, Roberto; Bo, Antonio P L; Hayashibe, Mitsuhiro
2017-06-01
Performance assessment of human movement is critical in diagnosis and motor-control rehabilitation. Recent developments in portable sensor technology enable clinicians to measure spatiotemporal aspects of movement to aid in neurological assessment. However, the extraction of quantitative information from such measurements is usually done manually through visual inspection. This paper presents a novel framework for automatic human movement assessment that executes segmentation and motor performance parameter extraction in time series of measurements from a sequence of human movements. We use the elements of a Switching Linear Dynamic System model as building blocks to translate formal definitions and procedures from human movement analysis. Our approach provides a method for users with no expertise in signal processing to create models for movements using a labeled dataset and later use them for automatic assessment. We validated our framework in preliminary tests involving six healthy adult subjects, who executed common movements from functional tests and rehabilitation exercise sessions such as sit-to-stand and lateral elevation of the arms, and five elderly subjects, two of whom had limited mobility, who executed the sit-to-stand movement. The proposed method worked on random motion sequences for the dual purpose of movement segmentation (accuracy of 72%-100%) and motor performance assessment (mean error of 0%-12%).
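A heavily simplified stand-in for the switching-model idea described above is sketched here: each candidate movement phase is represented by a linear dynamics matrix, and a Viterbi-style pass assigns every transition in the time series to one phase while penalizing switches. The dynamics matrices, penalty, and noise scale are illustrative assumptions, not the paper's learned model.

```python
import numpy as np

def segment_slds(x, A_list, switch_penalty=5.0, noise_var=0.001):
    """Assign each transition x[t] -> x[t+1] to one of several linear
    dynamic modes with a Viterbi-style pass (a simplified stand-in for
    full switching-LDS inference)."""
    T = len(x) - 1
    K = len(A_list)
    # Negative log-likelihood of each transition under each mode.
    cost = np.array([[np.sum((x[t + 1] - A @ x[t])**2) / (2 * noise_var)
                      for A in A_list] for t in range(T)])
    dp = cost[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        trans = dp[None, :] + switch_penalty * (1 - np.eye(K))
        back[t] = np.argmin(trans, axis=1)
        dp = cost[t] + np.min(trans, axis=1)
    modes = np.empty(T, dtype=int)
    modes[-1] = int(np.argmin(dp))
    for t in range(T - 1, 0, -1):
        modes[t - 1] = back[t, modes[t]]
    return modes

# Toy 1-D signal: a "hold" phase followed by a "rise" phase.
rng = np.random.default_rng(2)
x = np.concatenate([np.ones(50), np.linspace(1, 3, 50)]) + 0.01 * rng.standard_normal(100)
x = x[:, None]                                   # state dimension 1
A_hold, A_rise = np.array([[1.0]]), np.array([[1.02]])
labels = segment_slds(x, [A_hold, A_rise])
print(labels[:5], labels[-5:])                   # mostly 0 then mostly 1
```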
Event-by-Event Continuous Respiratory Motion Correction for Dynamic PET Imaging.
Yu, Yunhan; Chan, Chung; Ma, Tianyu; Liu, Yaqiang; Gallezot, Jean-Dominique; Naganawa, Mika; Kelada, Olivia J; Germino, Mary; Sinusas, Albert J; Carson, Richard E; Liu, Chi
2016-07-01
Existing respiratory motion-correction methods are applied only to static PET imaging. We have previously developed an event-by-event respiratory motion-correction method with correlations between internal organ motion and external respiratory signals (INTEX). This method is uniquely appropriate for dynamic imaging because it corrects motion for each time point. In this study, we applied INTEX to human dynamic PET studies with various tracers and investigated the impact on kinetic parameter estimation. Three tracers were investigated in a study of 12 human subjects: a myocardial perfusion tracer, (82)Rb (n = 7); a pancreatic β-cell tracer, (18)F-FP(+)DTBZ (n = 4); and a tumor hypoxia tracer, (18)F-fluoromisonidazole ((18)F-FMISO) (n = 1). Both rest and stress studies were performed for (82)Rb. The Anzai belt system was used to record respiratory motion. Three-dimensional internal organ motion in high temporal resolution was calculated by INTEX to guide event-by-event respiratory motion correction of target organs in each dynamic frame. Time-activity curves of regions of interest drawn based on end-expiration PET images were obtained. For (82)Rb studies, K1 was obtained with a 1-tissue model using a left-ventricle input function. Rest-stress myocardial blood flow (MBF) and coronary flow reserve (CFR) were determined. For (18)F-FP(+)DTBZ studies, the total volume of distribution was estimated with arterial input functions using the multilinear analysis 1 method. For the (18)F-FMISO study, the net uptake rate Ki was obtained with a 2-tissue irreversible model using a left-ventricle input function. All parameters were compared with the values derived without motion correction. With INTEX, K1 and MBF increased by 10% ± 12% and 15% ± 19%, respectively, for (82)Rb stress studies. CFR increased by 19% ± 21%. For studies with motion amplitudes greater than 8 mm (n = 3), K1, MBF, and CFR increased by 20% ± 12%, 30% ± 20%, and 34% ± 23%, respectively. For (82)Rb rest studies, INTEX had minimal effect on parameter estimation. The total volume of distribution of (18)F-FP(+)DTBZ and Ki of (18)F-FMISO increased by 17% ± 6% and 20%, respectively. Respiratory motion can have a substantial impact on dynamic PET in the thorax and abdomen. The INTEX method using continuous external motion data substantially changed parameters in kinetic modeling. More accurate estimation is expected with INTEX. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
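For readers unfamiliar with the kinetic-modeling step mentioned above, the 1-tissue model has the form C_T(t) = K1 · C_p(t) convolved with exp(-k2·t); the sketch below fits K1 and k2 to a synthetic noisy time-activity curve with SciPy. The input function and noise level are invented for illustration and have no relation to the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 10, 200)                       # minutes
dt = t[1] - t[0]
Cp = (t / 0.5) * np.exp(1 - t / 0.5)              # synthetic arterial input function

def one_tissue(t, K1, k2):
    """C_T(t) = K1 * Cp(t) convolved with exp(-k2*t) (discrete approximation)."""
    return K1 * np.convolve(Cp, np.exp(-k2 * t))[:len(t)] * dt

# Simulate a "measured" tissue curve and fit it.
rng = np.random.default_rng(3)
Ct_meas = one_tissue(t, 0.8, 0.3) + 0.005 * rng.standard_normal(len(t))
(K1_hat, k2_hat), _ = curve_fit(one_tissue, t, Ct_meas, p0=[0.5, 0.1],
                                bounds=([0.0, 0.0], [5.0, 5.0]))
print(round(K1_hat, 2), round(k2_hat, 2))         # close to 0.8 and 0.3
```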
A comparison of form processing involved in the perception of biological and nonbiological movements
Thurman, Steven M.; Lu, Hongjing
2016-01-01
Although there is evidence for specialization in the human brain for processing biological motion per se, few studies have directly examined the specialization of form processing in biological motion perception. The current study was designed to systematically compare form processing in perception of biological (human walkers) to nonbiological (rotating squares) stimuli. Dynamic form-based stimuli were constructed with conflicting form cues (position and orientation), such that the objects were perceived to be moving ambiguously in two directions at once. In Experiment 1, we used the classification image technique to examine how local form cues are integrated across space and time in a bottom-up manner. By comparing with a Bayesian observer model that embodies generic principles of form analysis (e.g., template matching) and integrates form information according to cue reliability, we found that human observers employ domain-general processes to recognize both human actions and nonbiological object movements. Experiments 2 and 3 found differential top-down effects of spatial context on perception of biological and nonbiological forms. When a background does not involve social information, observers are biased to perceive foreground object movements in the direction opposite to surrounding motion. However, when a background involves social cues, such as a crowd of similar objects, perception is biased toward the same direction as the crowd for biological walking stimuli, but not for rotating nonbiological stimuli. The model provided an accurate account of top-down modulations by adjusting the prior probabilities associated with the internal templates, demonstrating the power and flexibility of the Bayesian approach for visual form perception. PMID:26746875
Biomechanical Evaluation of an Electric Power-Assisted Bicycle by a Musculoskeletal Model
NASA Astrophysics Data System (ADS)
Takehara, Shoichiro; Murakami, Musashi; Hase, Kazunori
In this study, we construct an evaluation system for the muscular activity of the lower limbs when a human pedals an electric power-assisted bicycle. The evaluation system is composed of an electric power-assisted bicycle, a numerical simulator, and a motion capture system. The electric power-assisted bicycle in this study has a pedal with an attached force sensor. The numerical simulator for pedaling motion is a musculoskeletal model of a human. The motion capture system measures the joint angles of the lower limb. We examine the influence of the electric power-assist force on each muscle of the human trunk and legs. First, a pedaling experiment is performed. Then, the musculoskeletal model is computed using the experimental data. We discuss the influence of the electric power assist on each muscle. Muscular activity is found to decrease with the electric power-assisted bicycle, and the reduction in muscular force required for pedaling is quantified for every muscle.
Human Age Estimation Method Robust to Camera Sensor and/or Face Movement
Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung
2015-01-01
Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of various factors such as camera motion, optical blurring, facial expressions, gender, etc. Motion blurring is usually introduced into face images by movement of the camera sensor and/or movement of the face during image acquisition. As a result, facial features in captured images can be distorted according to the amount of motion, which causes performance degradation of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method enhances age estimation performance compared with systems that do not employ it. PMID:26334282
Potential role of motion for enhancing maximum output energy of triboelectric nanogenerator
NASA Astrophysics Data System (ADS)
Byun, Kyung-Eun; Lee, Min-Hyun; Cho, Yeonchoo; Nam, Seung-Geol; Shin, Hyeon-Jin; Park, Seongjun
2017-07-01
Although the triboelectric nanogenerator (TENG) has been explored as one of the possible candidates for the auxiliary power source of portable and wearable devices, the output energy of a TENG is still insufficient to charge such devices with daily motion. Moreover, the fundamental aspects of the maximum possible energy of a TENG related to human motion are not understood systematically. Here, we confirmed the possibility of charging commercial portable and wearable devices such as smart phones and smart watches by utilizing the mechanical energy generated by human motion. We showed theoretically that the maximum possible energy is related to specific form factors of a TENG. Furthermore, we experimentally demonstrated the effect of human motion in terms of kinetic energy and impulse by varying velocity and elasticity, and clarified how to improve the maximum possible energy of a TENG. This study gives insight into the design of a TENG to obtain a large amount of energy in a limited space.
Gait recognition based on Gabor wavelets and modified gait energy image for human identification
NASA Astrophysics Data System (ADS)
Huang, Deng-Yuan; Lin, Ta-Wei; Hu, Wu-Chih; Cheng, Chih-Hsiang
2013-10-01
This paper proposes a method for recognizing human identity using gait features based on Gabor wavelets and modified gait energy images (GEIs). Identity recognition by gait generally involves gait representation, extraction, and classification. In this work, a modified GEI convolved with an ensemble of Gabor wavelets is proposed as a gait feature. Principal component analysis is then used to project the Gabor-wavelet-based gait features into a lower-dimension feature space for subsequent classification. Finally, support vector machine classifiers based on a radial basis function kernel are trained and utilized to recognize human identity. The major contributions of this paper are as follows: (1) the consideration of the shadow effect to yield a more complete segmentation of gait silhouettes; (2) the utilization of motion estimation to track people when walkers overlap; and (3) the derivation of modified GEIs to extract more useful gait information. Extensive performance evaluation shows a great improvement of recognition accuracy due to the use of shadow removal, motion estimation, and gait representation using the modified GEIs and Gabor wavelets.
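As a concrete illustration of the gait-energy-image (GEI) representation used above, the sketch below averages size-normalized binary silhouettes over one gait cycle and flattens the result into a feature vector; silhouette alignment, shadow removal, Gabor filtering, and the 'modified' GEI details are simplified away.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """GEI: the pixel-wise mean of aligned binary silhouettes over a gait cycle.

    silhouettes : (T, H, W) array of 0/1 masks, already centred and scaled.
    Returns an (H, W) grey-level image in [0, 1].
    """
    return silhouettes.astype(float).mean(axis=0)

# Toy gait cycle: a 'body' whose legs open and close over 30 frames.
T, H, W = 30, 64, 44
sil = np.zeros((T, H, W), dtype=np.uint8)
for t_idx in range(T):
    spread = int(6 * abs(np.sin(np.pi * t_idx / T)))   # leg opening
    sil[t_idx, 10:40, 18:26] = 1                       # torso
    sil[t_idx, 40:60, 16 - spread:20 - spread] = 1     # left leg
    sil[t_idx, 40:60, 24 + spread:28 + spread] = 1     # right leg

gei = gait_energy_image(sil)
feature_vector = gei.ravel()          # input to PCA / SVM in the full pipeline
print(gei.shape, feature_vector.shape)
```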
Rowe, P J; Crosbie, J; Fowler, V; Durward, B; Baer, G
1999-05-01
This paper reports the development, construction and use of a new system for the measurement of linear kinematics in one, two or three dimensions. The system uses a series of rotary shaft encoders and inelastic tensioned strings to measure the linear displacement of key anatomical points in space. The system is simple, inexpensive, portable, accurate and flexible. It is therefore suitable for inclusion in a variety of motion analysis studies. Details of the construction, calibration and interfacing of the device to an IBM PC computer are given as is a full mathematical description of the appropriate measurement theory for one, two and three dimensions. Examples of the results obtained from the device during gait, running, rising to stand, sitting down and pointing with the upper limb are given. Finally it is proposed that, provided the constraints of the system are considered, this method has the potential to measure a variety of functional human movements simply and inexpensively and may therefore be a valuable addition to the methods available to the motion scientist.
Human Motion Capture Data Tailored Transform Coding.
Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He
2015-07-01
Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented as 2D matrices. It then computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
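One minimal way to see the 'data-dependent orthogonal basis' idea described above is to treat a mocap clip as a frames-by-channels matrix, compute its SVD, keep a few basis vectors, and quantize the coefficients. The sketch below does exactly that with NumPy; it is a simplified stand-in (no entropy coding) rather than the authors' codec.

```python
import numpy as np

def encode_clip(clip, n_basis=8, q_step=0.01):
    """Transform-code one mocap clip (frames x channels) using a
    data-dependent orthogonal basis from the clip's own SVD."""
    U, s, Vt = np.linalg.svd(clip, full_matrices=False)
    coeffs = U[:, :n_basis] * s[:n_basis]            # transform coefficients
    q = np.round(coeffs / q_step).astype(np.int32)   # uniform quantization
    return q, Vt[:n_basis]                           # these would be entropy-coded

def decode_clip(q, basis, q_step=0.01):
    return (q * q_step) @ basis

# Toy clip: 120 frames of 60 channels driven by 5 latent trajectories.
rng = np.random.default_rng(4)
tgrid = np.linspace(0, 2 * np.pi, 120)
latent = np.stack([np.sin(k * tgrid + k) for k in range(1, 6)], axis=1)
clip = latent @ rng.standard_normal((5, 60)) + 0.01 * rng.standard_normal((120, 60))
q, basis = encode_clip(clip)
rec = decode_clip(q, basis)
print(round(float(np.sqrt(np.mean((clip - rec) ** 2))), 4))   # ~0.01 RMS error
```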
Physiological and subjective evaluation of a human-robot object hand-over task.
Dehais, Frédéric; Sisbot, Emrah Akin; Alami, Rachid; Causse, Mickaël
2011-11-01
In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough. It is necessary to guarantee ergonomic robot motions. Therefore, we have developed Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human-robot object transfer by explicitly taking into account the legibility, the safety and the physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration, the "relative ease" of the object hand-over task and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable for comparing our gestures. From this perspective, we selected three measurements based on the galvanic skin conductance response, the deltoid muscle activity and the ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was proposed. For the purpose of the experiment, we defined three motions that combine different levels of legibility, safety and physical comfort values. After each robot gesture, the participants were asked to rate it on a three-dimensional subjective scale. The subjective data were in favor of our reference motion. Finally, the three motions elicited different physiological and ocular responses that could be used to partially discriminate them. Copyright © 2011 Elsevier Ltd and the Ergonomics Society. All rights reserved.
Improving Attachments of Non-Invasive (Type III) Electronic Data Loggers to Cetaceans
2015-09-30
animals in human care will be performed to test and validate this approach. The cadaver trials will enable controlled testing to failure or with both... quantitative metrics and analysis tools to assess the impact of a tag on the animal. Here we will present: 1) the characterization of the mechanical... fine scale motion analysis for swimming animals. Our approach is divided into four subtasks: Task 1: Forces and failure modes
NASA Astrophysics Data System (ADS)
Seregni, M.; Cerveri, P.; Riboldi, M.; Pella, A.; Baroni, G.
2012-11-01
In radiotherapy, organ motion mitigation by means of dynamic tumor tracking requires continuous information about the internal tumor position, which can be estimated relying on external/internal correlation models as a function of external surface surrogates. In this work, we propose a validation of a time-independent artificial neural networks-based tumor tracking method in the presence of changes in the breathing pattern, evaluating the performance on two datasets. First, simulated breathing motion traces were specifically generated to include gradually increasing respiratory irregularities. Then, seven publically available human liver motion traces were analyzed for the assessment of tracking accuracy, whose sensitivity with respect to the structural parameters of the model was also investigated. Results on simulated data showed that the proposed method was not affected by hysteretic target trajectories and it was able to cope with different respiratory irregularities, such as baseline drift and internal/external phase shift. The analysis of the liver motion traces reported an average RMS error equal to 1.10 mm, with five out of seven cases below 1 mm. In conclusion, this validation study proved that the proposed method is able to deal with respiratory irregularities both in controlled and real conditions.
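The external/internal correlation idea above can be approximated, in spirit, by a small feed-forward regressor that maps a short history of an external surrogate signal to internal target position. The sketch below uses scikit-learn's MLPRegressor on synthetic, phase-shifted breathing traces; it illustrates the concept only and is not the authors' time-independent network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
t = np.arange(0, 120, 0.1)                       # seconds, 10 Hz sampling
external = np.sin(2 * np.pi * t / 4.0)           # chest surface surrogate
internal = 8.0 * np.sin(2 * np.pi * t / 4.0 - 0.6) + 0.2 * rng.standard_normal(len(t))
# internal tumour motion [mm], phase-shifted with respect to the surrogate

def windows(sig, n=5):
    """Stack the last n surrogate samples as the model input (adds history
    so the phase shift can be resolved)."""
    return np.stack([sig[i - n:i] for i in range(n, len(sig))])

X, y = windows(external), internal[5:]
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(round(float(rmse), 2), "mm RMS tracking error")
```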
Physics of active jamming during collective cellular motion in a monolayer.
Garcia, Simon; Hannezo, Edouard; Elgeti, Jens; Joanny, Jean-François; Silberzan, Pascal; Gov, Nir S
2015-12-15
Although collective cell motion plays an important role, for example during wound healing, embryogenesis, or cancer progression, the fundamental rules governing this motion are still not well understood, in particular at high cell density. We study here the motion of human bronchial epithelial cells within a monolayer, over long times. We observe that, as the monolayer ages, the cells slow down monotonously, while the velocity correlation length first increases as the cells slow down but eventually decreases at the slowest motions. By comparing experiments, analytic model, and detailed particle-based simulations, we shed light on this biological amorphous solidification process, demonstrating that the observed dynamics can be explained as a consequence of the combined maturation and strengthening of cell-cell and cell-substrate adhesions. Surprisingly, the increase of cell surface density due to proliferation is only secondary in this process. This analysis is confirmed with two other cell types. The very general relations between the mean cell velocity and velocity correlation lengths, which apply for aggregates of self-propelled particles, as well as motile cells, can possibly be used to discriminate between various parameter changes in vivo, from noninvasive microscopy data.
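The velocity correlation length discussed above is commonly extracted from the spatial velocity correlation function C(r) = <v(x)·v(x+r)> / <|v|^2>. A minimal computation on tracked cell velocities is sketched below; the radial binning and the 1/e criterion for the correlation length are common conventions assumed here, not necessarily the authors' exact definitions.

```python
import numpy as np

def velocity_correlation(positions, velocities, r_max=200.0, n_bins=40):
    """Spatial velocity correlation C(r) for one frame of a cell monolayer.

    positions  : (N, 2) cell centroids [um]
    velocities : (N, 2) cell velocities [um/h]
    """
    dr = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(dr, axis=-1)
    vdot = velocities @ velocities.T
    norm = np.mean(np.sum(velocities**2, axis=1))
    bins = np.linspace(0, r_max, n_bins + 1)
    c = np.zeros(n_bins)
    for k in range(n_bins):
        mask = (dist >= bins[k]) & (dist < bins[k + 1]) & ~np.eye(len(positions), dtype=bool)
        c[k] = vdot[mask].mean() / norm if mask.any() else np.nan
    return 0.5 * (bins[1:] + bins[:-1]), c

def correlation_length(r, c):
    """Distance at which C(r) first drops below 1/e (a common definition)."""
    below = np.where(c < 1 / np.e)[0]
    return r[below[0]] if below.size else r[-1]

# Toy data: velocities made spatially correlated by a shared smooth field.
rng = np.random.default_rng(6)
pos = rng.uniform(0, 400, (600, 2))
field = lambda p: np.stack([np.sin(p[:, 0] / 50.0), np.cos(p[:, 1] / 50.0)], axis=1)
vel = field(pos) + 0.3 * rng.standard_normal((600, 2))
r, c = velocity_correlation(pos, vel)
print(round(float(correlation_length(r, c)), 1), "um")
```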
NASA Astrophysics Data System (ADS)
Hong, S. Lee; Bodfish, James W.; Newell, Karl M.
2006-03-01
We investigated the relationship between macroscopic entropy and microscopic complexity of the dynamics of body rocking and sitting still across adults with stereotyped movement disorder and mental retardation (profound and severe) against controls matched for age, height, and weight. This analysis was performed through the examination of center of pressure (COP) motion on the mediolateral (side-to-side) and anteroposterior (fore-aft) dimensions and the entropy of the relative phase between the two dimensions of motion. Intentional body rocking and stereotypical body rocking possessed similar slopes for their respective frequency spectra, but differences were revealed during maintenance of sitting postures. The dynamics of sitting in the control group produced lower spectral slopes and higher complexity (approximate entropy). In the controls, the higher complexity found on each dimension of motion was related to a weaker coupling between dimensions. Information entropy of the relative phase between the two dimensions of COP motion and irregularity (complexity) of their respective motions fitted a power-law function, revealing a relationship between macroscopic entropy and microscopic complexity across both groups and behaviors. This power-law relation affords the postulation that the organization of movement and posture dynamics occurs as a fractal process.
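Approximate entropy, the complexity measure used above, can be computed directly from a COP time series. The sketch below implements the standard Pincus (1991) formulation with the common parameter choices m = 2 and r = 0.2 times the signal's standard deviation; these settings are conventions, not necessarily those of the study.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (Pincus, 1991) of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def phi(m):
        # All length-m templates of the series.
        templates = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        # Fraction of templates within tolerance r (self-matches included).
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

# A regular signal has lower ApEn than an irregular one.
rng = np.random.default_rng(7)
t = np.arange(1000)
regular = np.sin(2 * np.pi * t / 50)
irregular = rng.standard_normal(1000)
print(round(approximate_entropy(regular), 3), round(approximate_entropy(irregular), 3))
```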
Deducing the reachable space from fingertip positions.
Hai-Trieu Pham; Pathirana, Pubudu N
2015-01-01
The reachable space of the hand has received significant interest from medical researchers and health professionals. The reachable space has often been computed from joint angles acquired with a motion capture system such as gloves or markers attached to each bone of the finger. However, contact between the hand and the device can cause difficulties, particularly for hands with injuries, burns, or certain dermatological conditions. This paper introduces an approach to find the reachable space of the hand using non-contact measurements from the Leap Motion Controller. The approach is based on the analysis of each position in the motion path of the fingertip acquired by the Leap Motion Controller. For each position of the fingertip, the inverse kinematics problem is solved under the multiple physiological constraints of the human hand to find a set of all possible configurations of the three finger joints. Subsequently, all the sets are unified to form a set of all possible configurations specific to that motion. Finally, the reachable space is computed from the configurations corresponding to complete extension and complete flexion of the finger joint angles in this set.
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. Here we circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions, we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.
Song, Zhibin; Zhang, Songyuan
2016-01-01
Surface electromyography (sEMG) signals are closely related to the activation of human muscles and the motion of the human body, which can be used to estimate the dynamics of human limbs in the rehabilitation field. They also have the potential to be used in the application of bilateral rehabilitation, where hemiplegic patients can train their affected limbs following the motion of unaffected limbs via some rehabilitation devices. Traditional methods to process the sEMG focused on motion pattern recognition, namely, discrete patterns, which are not satisfactory for use in bilateral rehabilitation. In order to overcome this problem, in this paper, we built a relationship between sEMG signals and human motion in elbow flexion and extension on the sagittal plane. During the conducted experiments, four participants were required to perform elbow flexion and extension on the sagittal plane smoothly with only an inertia sensor in their hands, where forearm dynamics were not considered. In these circumstances, sEMG signals were weak compared to those with heavy loads or high acceleration. The contrastive experimental results show that continuous motion can also be obtained within an acceptable precision range. PMID:27775573
Multilayer Joint Gait-Pose Manifolds for Human Gait Motion Modeling.
Ding, Meng; Fan, Guolian
2015-11-01
We present new multilayer joint gait-pose manifolds (multilayer JGPMs) for complex human gait motion modeling, where three latent variables are defined jointly in a low-dimensional manifold to represent a variety of body configurations. Specifically, the pose variable (along the pose manifold) denotes a specific stage in a walking cycle; the gait variable (along the gait manifold) represents different walking styles; and the linear scale variable characterizes the maximum stride in a walking cycle. We discuss two kinds of topological priors for coupling the pose and gait manifolds, i.e., cylindrical and toroidal, to examine their effectiveness and suitability for motion modeling. We resort to a topologically-constrained Gaussian process (GP) latent variable model to learn the multilayer JGPMs where two new techniques are introduced to facilitate model learning under limited training data. First is training data diversification that creates a set of simulated motion data with different strides. Second is the topology-aware local learning to speed up model learning by taking advantage of the local topological structure. The experimental results on the Carnegie Mellon University motion capture data demonstrate the advantages of our proposed multilayer models over several existing GP-based motion models in terms of the overall performance of human gait motion modeling.
Decoding facial expressions based on face-selective and motion-sensitive areas.
Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin
2017-06-01
Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
Conceptualization of an exoskeleton Continuous Passive Motion (CPM) device using a link structure.
Kim, Kyu-Jung; Kang, Min-Sung; Choi, Youn-Sung; Han, Jungsoo; Han, Changsoo
2011-01-01
This study concerns the development of an exoskeleton Continuous Passive Motion (CPM) device with the same Range of Motion (ROM) and instant center of rotation as the human knee. The key requirement in constructing a CPM device is accurate alignment with the human knee joint, enabling it to deliver the same movements as the actual limb. In this research, we proposed an exoskeleton knee joint through kinematic interpretation, measured the knee joint torque generated while using a CPM device, and applied it to the device. Thus, this new exoskeleton-type CPM device will allow precise alignment with the human knee joint and follow the same ROM as the human knee in any position. © 2011 IEEE
Motion capture based identification of the human body inertial parameters.
Venture, Gentiane; Ayusawa, Ko; Nakamura, Yoshihiko
2008-01-01
Identification of body inertias, masses, and centers of mass provides important data for simulating, monitoring, and understanding the dynamics of motion, and for personalizing rehabilitation programs. This paper proposes an original method to identify the inertial parameters of the human body, making use of motion capture data and contact force measurements. It allows painless in-vivo estimation and monitoring of the inertial parameters. The method is described, and the experimental results obtained are presented and discussed.
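The identification above exploits the fact that rigid-body equations of motion are linear in the inertial parameters, so they can be estimated by least squares from motion capture kinematics and measured contact forces. The deliberately reduced sketch below recovers only segment masses from the vertical ground reaction force, F(t) = sum_i m_i (a_i(t) + g); it illustrates the linear-identification idea, not the full method (which also recovers centres of mass and inertia tensors).

```python
import numpy as np

G = 9.81  # m/s^2

def identify_masses(seg_acc, grf):
    """Least-squares segment masses from vertical CoM accelerations (mocap)
    and total vertical ground reaction force.

    seg_acc : (T, S) vertical acceleration of each segment's CoM [m/s^2]
    grf     : (T,)   measured vertical ground reaction force [N]
    Model: grf[t] = sum_i m_i * (seg_acc[t, i] + g)  ->  linear in m_i.
    """
    A = seg_acc + G                     # regressor matrix, (T, S)
    m, *_ = np.linalg.lstsq(A, grf, rcond=None)
    return m

# Synthetic check: 3 segments with known masses during a bouncing motion.
rng = np.random.default_rng(8)
true_m = np.array([45.0, 15.0, 10.0])               # kg (toy segment masses)
tgrid = np.linspace(0, 5, 500)
seg_acc = np.stack([a * np.sin(2 * np.pi * f * tgrid)
                    for a, f in [(1.5, 1.0), (2.5, 1.3), (3.0, 1.7)]], axis=1)
grf = seg_acc @ true_m + G * true_m.sum() + 2.0 * rng.standard_normal(len(tgrid))
print(np.round(identify_masses(seg_acc, grf), 1))   # ~[45. 15. 10.]
```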
Biomechanical analysis of the circular friction hand massage.
Ryu, Jeseong; Son, Jongsang; Ahn, Soonjae; Shin, Isu; Kim, Youngho
2015-01-01
A massage can be beneficial to relieve muscle tension on the neck and shoulder area. Various massage systems have been developed, but their motions are not uniform throughout different body parts nor specifically targeted to the neck and shoulder areas. Pressure pattern and finger movement trajectories of the circular friction hand massage on trapezius, levator scapulae, and deltoid muscles were determined to develop a massage system that can mimic the motion and the pressure of the circular friction massage. During the massage, finger movement trajectories were measured using a 3D motion capture system, and finger pressures were simultaneously obtained using a grip pressure sensor. Results showed that each muscle had different finger movement trajectory and pressure pattern. The trapezius muscle experienced a higher pressure, longer massage time (duration of pressurization), and larger pressure-time integral than the other muscles. These results could be useful to design a better massage system simulating human finger movements.
Syal, Karan; Shen, Simon; Yang, Yunze; Wang, Shaopeng; Haydel, Shelley E; Tao, Nongjian
2017-08-25
To combat antibiotic resistance, a rapid antibiotic susceptibility testing (AST) technology that can identify resistant infections at disease onset is required. Current clinical AST technologies take 1-3 days, which is often too slow for accurate treatment. Here we demonstrate a rapid AST method by tracking sub-μm-scale bacterial motion with an optical imaging and tracking technique. We apply the method to clinically relevant bacterial pathogens, Escherichia coli O157:H7 and uropathogenic E. coli (UPEC), loosely tethered to a glass surface. By analyzing dose-dependent sub-μm motion changes in a population of bacterial cells, we obtain the minimum bactericidal concentration within 2 h using human urine samples spiked with UPEC. We validate the AST method against standard culture-based AST methods. In addition to population studies, the method allows single-cell analysis, which can identify subpopulations of resistant strains within a sample.
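The dose-dependent sub-μm motion change above is typically quantified from tracked cell positions, for example as a mean-squared displacement (MSD) that shrinks when an antibiotic kills or immobilizes the cells. A minimal MSD computation is sketched below; the trajectories, step sizes, and units are invented for illustration.

```python
import numpy as np

def mean_squared_displacement(xy, max_lag=50):
    """MSD(lag) of one tracked cell; xy is a (T, 2) trajectory in um."""
    lags = np.arange(1, max_lag + 1)
    return lags, np.array([np.mean(np.sum((xy[lag:] - xy[:-lag])**2, axis=1))
                           for lag in lags])

# Toy trajectories: an untreated (motile) cell vs. an antibiotic-inhibited one.
rng = np.random.default_rng(9)
untreated = np.cumsum(0.05 * rng.standard_normal((600, 2)), axis=0)   # ~50 nm steps
treated   = np.cumsum(0.01 * rng.standard_normal((600, 2)), axis=0)
for label, track in [("untreated", untreated), ("treated", treated)]:
    lags, msd = mean_squared_displacement(track)
    print(label, round(float(msd[-1]), 4), "um^2 at lag 50")
```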
Al-Nawashi, Malek; Al-Hazaimeh, Obaida M; Saraee, Mohamad
2017-01-01
Abnormal activity detection plays a crucial role in surveillance applications, and a surveillance system that can perform robustly in an academic environment has become an urgent need. In this paper, we propose a novel framework for an automatic real-time video-based surveillance system which can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment. To develop our system, we have divided the work into three phases: a preprocessing phase, an abnormal human activity detection phase, and a content-based image retrieval phase. For moving object detection, we used the temporal-differencing algorithm and then located the motion regions using a Gaussian function. Furthermore, a shape model based on the OMEGA equation was used as a filter for the detected objects (i.e., human and non-human). For object activity analysis, we evaluated and analyzed the human activities of the detected objects. We classified the human activities into two groups, normal activities and abnormal activities, based on a support vector machine. The system then provides an automatic warning in case of abnormal human activities. It also embeds a method to retrieve the detected object from the database for object recognition and identification using content-based image retrieval. Finally, a software-based simulation using MATLAB was performed, and the results of the conducted experiments showed an excellent surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment with no human intervention.
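As a rough illustration of the temporal-differencing step described above, the sketch below thresholds the smoothed absolute difference of consecutive grayscale frames to obtain a binary motion mask; the smoothing width and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def motion_mask(prev_frame, curr_frame, sigma=2.0, thresh=25.0):
    """Temporal differencing: absolute difference of consecutive grayscale
    frames, Gaussian-smoothed and thresholded to a binary motion mask."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    smoothed = gaussian_filter(diff, sigma=sigma)  # suppress pixel noise
    return smoothed > thresh                       # True where motion is detected
```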
Chaminade, Thierry; Rosset, Delphine; Da Fonseca, David; Hodgins, Jessica K; Deruelle, Christine
2015-02-01
The anthropomorphic bias describes the finding that the perceived naturalness of a biological motion decreases as the human-likeness of a computer-animated agent increases. To investigate the anthropomorphic bias in autistic children, human or cartoon characters were presented with biological and artificial motions side by side on a touchscreen. Children were required to touch one character, which would then grow while the other disappeared, implicitly rewarding their choice. Only typically developing controls exhibited the expected preference for biological motion when rendered with human, but not cartoon, characters. Despite performing the task to report a preference, children with autism exhibited neither a normal nor a reversed anthropomorphic bias, suggesting that they are not sensitive to the congruence of form and motion information when observing computer-animated agents' actions. © The Author(s) 2014.
Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Fan, Rukun; Geng, Weidong
We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music to motion mapping relationship embodied in example dance motions along with those motions' accompanying background music. A key step in our method is to train a music to motion matching quality rating function through learning the music to motion mapping relationship exhibited in synchronized music and dance motion data, which were captured from professional human dance performance. To generate an optimal sequence of dance motion segments to match with a piece of music, we introduce a constraint-based dynamic programming procedure. This procedure considers both music to motion matching quality and visual smoothness of a resultant dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, through which we can execute the dynamic programming process in parallel, resulting in significant speedup. To evaluate the effectiveness of our method, we quantitatively compare the dance motions synthesized by our method with motion synthesis results by several peer methods using the motions captured from professional human dancers' performance as the gold standard. We also conducted several medium-scale user studies to explore how perceptually our dance motion synthesis method can outperform existing methods in synthesizing dance motions to match with a piece of music. These user studies produced very positive results on our music-driven dance motion synthesis experiments for several Asian dance genres, confirming the advantages of our method.
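The constraint-based dynamic programming step can be pictured as a Viterbi-style search that, for each music segment, picks a motion segment by trading off the learned matching score against a transition-smoothness penalty. The following sketch assumes precomputed match_score and transition_cost tables and is a generic formulation, not the authors' code.

```python
import numpy as np

def select_motion_sequence(match_score, transition_cost):
    """match_score: (T, K) array, quality of motion segment k for music segment t.
    transition_cost: (K, K) array, visual-smoothness penalty between segments.
    Returns the index of the chosen motion segment for each music segment."""
    T, K = match_score.shape
    best = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    best[0] = match_score[0]
    for t in range(1, T):
        # score of reaching segment k at step t from any predecessor j
        cand = best[t - 1][:, None] + match_score[t][None, :] - transition_cost
        back[t] = np.argmax(cand, axis=0)
        best[t] = cand[back[t], np.arange(K)]
    # backtrack the optimal path
    path = [int(np.argmax(best[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```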
Modeling human behaviors and reactions under dangerous environment.
Kang, J; Wright, D K; Qin, S F; Zhao, Y
2005-01-01
This paper describes the framework of a real-time simulation system to model human behavior and reactions in dangerous environments. The system utilizes the latest 3D computer animation techniques, combined with artificial intelligence, robotics and psychology, to model human behavior, reactions and decision making under expected/unexpected dangers in real-time in virtual environments. The development of the system includes: classification of the conscious/subconscious behaviors and reactions of different people; capturing different motion postures with the Eagle Digital System; establishing 3D character animation models; establishing 3D models for the scene; planning the scenario and the contents; and programming within Virtools Dev. Programming within Virtools Dev is subdivided into modeling dangerous events, modeling the characters' perceptions, modeling the characters' decision making, modeling the characters' movements, modeling the characters' interaction with the environment, and setting up the virtual cameras. The real-time simulation of human reactions in hazardous environments is invaluable in military defense, fire escape, rescue operation planning, traffic safety studies, and safety planning in chemical factories and in the design of buildings, airplanes, ships and trains. Currently, human motion modeling can be realized through established technology, whereas integrating perception and intelligence into a virtual human's motion is still a huge undertaking. The challenges here are the synchronization of motion and intelligence, the accurate modeling of human vision, smell, touch and hearing, and the diversity and effects of emotion and personality in decision making. There are three types of software platforms which could be employed to realize the motion and intelligence within one system, and their advantages and disadvantages are discussed.
Digital evaluation of sitting posture comfort in human-vehicle system under Industry 4.0 framework
NASA Astrophysics Data System (ADS)
Tao, Qing; Kang, Jinsheng; Sun, Wenlei; Li, Zhaobo; Huo, Xiao
2016-09-01
Most previous studies on the vibration ride comfort of the human-vehicle system have focused on only one or two aspects of the investigation. A hybrid approach is described which integrates several investigation methods in both a real environment and a virtual environment. The real experimental environment includes the WBV (whole-body vibration) test, questionnaires for human subjective sensation, and motion capture. The virtual experimental environment includes the theoretical calculation on a simplified 5-DOF human body vibration model, the vibration simulation and analysis within the ADAMS/Vibration module, and the digital human biomechanics and occupational health analysis in Jack software. While the real experimental environment provides realistic and accurate test results, it also serves as the core of, and validation for, the virtual experimental environment. The virtual experimental environment takes full advantage of currently available vibration simulation and digital human modelling software, and makes it possible to evaluate sitting posture comfort in a human-vehicle system with various human anthropometric parameters. How this digital evaluation system for car seat comfort design fits into the Industry 4.0 framework is also proposed.
A neural model of motion processing and visual navigation by cortical area MST.
Grossberg, S; Mingolla, E; Pack, C
1999-12-01
Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
Motion video analysis using planar parallax
NASA Astrophysics Data System (ADS)
Sawhney, Harpreet S.
1994-04-01
Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis--for instance independent object motion when the camera itself is moving, figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene which can simplify motion based segmentation. This work is a part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.
Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.
1997-01-01
The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are outlined. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
Flies and humans share a motion estimation strategy that exploits natural scene statistics
Clark, Damon A.; Fitzgerald, James E.; Ales, Justin M.; Gohl, Daryl M.; Silies, Marion A.; Norcia, Anthony M.; Clandinin, Thomas R.
2014-01-01
Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. Here we show that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extract triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations is retained even as light and dark edge motion signals are combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This striking convergence argues that statistical structures in natural scenes have profoundly affected visual processing, driving a common computational strategy over 500 million years of evolution. PMID:24390225
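A triple correlation of the kind referred to above is simply an average of products of stimulus intensity at three space-time points. The sketch below estimates one such correlator on a discretized space-time intensity array; the particular offsets are illustrative, not the specific correlators analyzed in the paper.

```python
import numpy as np

def triple_correlation(stimulus, dx1, dt1, dx2, dt2):
    """Estimate the average of I(x, t) * I(x + dx1, t + dt1) * I(x + dx2, t + dt2)
    over a 2D space-time intensity array stimulus[x, t].
    Offsets are assumed nonnegative for simplicity."""
    X, T = stimulus.shape
    xmax = X - max(dx1, dx2, 0)
    tmax = T - max(dt1, dt2, 0)
    a = stimulus[:xmax, :tmax]
    b = stimulus[dx1:dx1 + xmax, dt1:dt1 + tmax]
    c = stimulus[dx2:dx2 + xmax, dt2:dt2 + tmax]
    return float(np.mean(a * b * c))

# Usage idea: compare the correlator's value for a light edge moving rightward
# versus a dark edge moving leftward to see how triple correlations carry
# combined direction and contrast-polarity information.
```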
High-resolution motion-compensated imaging photoplethysmography for remote heart rate monitoring
NASA Astrophysics Data System (ADS)
Chung, Audrey; Wang, Xiao Yu; Amelard, Robert; Scharfenberger, Christian; Leong, Joanne; Kulinski, Jan; Wong, Alexander; Clausi, David A.
2015-03-01
We present a novel non-contact photoplethysmographic (PPG) imaging system based on high-resolution video recordings of the ambient reflectance of human bodies that compensates for body motion and takes advantage of skin erythema fluctuations to improve measurement reliability for the purpose of remote heart rate monitoring. A single measurement location for recording the ambient reflectance is automatically identified on an individual, and the motion of that location is determined over time via measurement location tracking. Based on the determined motion information, motion-compensated reflectance measurements at different wavelengths can be acquired for the measurement location, thus providing more reliable measurements for the same location on the human over time. The reflectance measurement is used to determine skin erythema fluctuations over time, resulting in the capture of a PPG signal with a high signal-to-noise ratio. To test the efficacy of the proposed system, a set of experiments involving human motion in a front-facing position was performed under natural ambient light. The experimental results demonstrated that using skin erythema fluctuations can achieve noticeably improved average accuracy in heart rate measurement when compared to previously proposed non-contact PPG imaging systems.
The Default Mode Network Differentiates Biological From Non-Biological Motion
Dayan, Eran; Sella, Irit; Mukovskiy, Albert; Douek, Yehonatan; Giese, Martin A.; Malach, Rafael; Flash, Tamar
2016-01-01
The default mode network (DMN) has been implicated in an array of social-cognitive functions, including self-referential processing, theory of mind, and mentalizing. Yet, the properties of the external stimuli that elicit DMN activity in relation to these domains remain unknown. Previous studies suggested that motion kinematics is utilized by the brain for social-cognitive processing. Here, we used functional MRI to examine whether the DMN is sensitive to parametric manipulations of observed motion kinematics. Preferential responses within core DMN structures differentiating non-biological from biological kinematics were observed for the motion of a realistic-looking, human-like avatar, but not for an abstract object devoid of human form. Differences in connectivity patterns during the observation of biological versus non-biological kinematics were additionally observed. Finally, the results suggest that the DMN is coupled more strongly with key nodes in the action observation network, namely the STS and the SMA, when the observed motion depicts human rather than abstract form. These findings are the first to implicate the DMN in the perception of biological motion. They may reflect the type of information used by the DMN in social-cognitive processing. PMID:25217472
Code of Federal Regulations, 2010 CFR
2010-04-01
21 CFR § 12.99 (2010), Motions. Food and Drug Administration, Department of Health and Human Services; General; Formal Evidentiary Public Hearing; Hearing Procedures. (a) A motion on any matter relating to the proceeding is to be...
Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus.
Furl, Nicholas; Henson, Richard N; Friston, Karl J; Calder, Andrew J
2015-09-01
The superior temporal sulcus (STS) in the human and monkey is sensitive to the motion of complex forms such as facial and bodily actions. We used functional magnetic resonance imaging (fMRI) to explore network-level explanations for how the form and motion information in dynamic facial expressions might be combined in the human STS. Ventral occipitotemporal areas selective for facial form were localized in occipital and fusiform face areas (OFA and FFA), and motion sensitivity was localized in the more dorsal temporal area V5. We then tested various connectivity models that modeled communication between the ventral form and dorsal motion pathways. We show that facial form information modulated transmission of motion information from V5 to the STS, and that this face-selective modulation likely originated in OFA. This finding shows that form-selective motion sensitivity in the STS can be explained in terms of modulation of gain control on information flow in the motion pathway, and provides a substantial constraint for theories of the perception of faces and biological motion. © The Author 2014. Published by Oxford University Press.
Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi
2016-01-01
Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to its percept. PMID:26973588
Gandhi, Anup A; Kode, Swathi; DeVries, Nicole A; Grosland, Nicole M; Smucker, Joseph D; Fredericks, Douglas C
2015-10-15
A biomechanical study comparing arthroplasty with fusion using human cadaveric C2-T1 spines. To compare the kinematics of the cervical spine after arthroplasty and fusion using single-level, 2-level, and hybrid constructs. Previous studies have shown that spinal levels adjacent to a fusion experience increased motion and higher stress, which may lead to adjacent segment disc degeneration. Cervical arthroplasty achieves similar decompression but preserves motion at the operated level, potentially decreasing the occurrence of adjacent segment disc degeneration. Eleven specimens (C2-T1) were divided into 2 groups (BRYAN and PRESTIGE LP). The specimens were tested in the following order: intact, single-level total disc replacement (TDR) at C5-C6, 2-level TDR at C5-C6-C7, fusion at C5-C6 and TDR at C6-C7 (hybrid construct), and lastly a 2-level fusion. The intact specimens were tested up to a moment of 2.0 Nm. After each surgical intervention, the specimens were loaded until the primary motion (C2-T1) matched the motion of the respective intact state (hybrid control). Arthroplasty preserved motion at the implanted level and maintained normal motion at the nonoperative levels. Arthrodesis resulted in a significant decrease in motion at the fused level and an increase in motion at the unfused levels. In the hybrid construct, the TDR adjacent to the fusion preserved motion at the arthroplasty level, thereby reducing the demand on the other levels. Cervical disc arthroplasty with both the BRYAN and PRESTIGE LP discs not only preserved motion at the operated level, but also maintained normal motion at the adjacent levels. Under simulated physiologic loading, the motion patterns of the spine with the BRYAN or PRESTIGE LP disc were very similar and were closer than fusion to the intact motion pattern. An adjacent-segment disc replacement is biomechanically favorable to a fusion in the presence of a pre-existing fusion.
How many fish in a tank? Constructing an automated fish counting system by using PTV analysis
NASA Astrophysics Data System (ADS)
Abe, S.; Takagi, T.; Takehara, K.; Kimura, N.; Hiraishi, T.; Komeyama, K.; Torisawa, S.; Asaumi, S.
2017-02-01
Because escape from a net cage and mortality are constant problems in fish farming, health control and management of facilities are important in aquaculture. In particular, the development of an accurate fish counting system has been strongly desired for the Pacific Bluefin tuna farming industry owing to the high market value of these fish. The current fish counting method, which involves human counting, results in poor accuracy; moreover, the method is cumbersome because the aquaculture net cage is so large that fish can only be counted when they move to another net cage. Therefore, we have developed an automated fish counting system by applying particle tracking velocimetry (PTV) analysis to a shoal of swimming fish inside a net cage. In essence, we treated the swimming fish as tracer particles and estimated the number of fish by analyzing the corresponding motion vectors. The proposed fish counting system comprises two main components: image processing and motion analysis, where the image-processing component abstracts the foreground and the motion analysis component traces the individual's motion. In this study, we developed a Region Extraction and Centroid Computation (RECC) method and a Kalman filter and Chi-square (KC) test for the two main components. To evaluate the efficiency of our method, we constructed a closed system, placed an underwater video camera with a spherical curved lens at the bottom of the tank, and recorded a 360° view of a swimming school of Japanese rice fish (Oryzias latipes). Our study showed that almost all fish could be abstracted by the RECC method and the motion vectors could be calculated by the KC test. The recognition rate was approximately 90% when more than 180 individuals were observed within the frame of the video camera. These results suggest that the presented method has potential application as a fish counting system for industrial aquaculture.
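The Kalman filter and Chi-square (KC) test for motion-vector tracing can be sketched as a constant-velocity Kalman filter per fish, with a chi-square gate on the innovation deciding whether a newly detected centroid belongs to an existing track. The noise covariances and gate probability below are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.stats import chi2

class Track:
    """Constant-velocity Kalman filter for one fish centroid (x, y, vx, vy)."""
    def __init__(self, xy, dt=1.0, q=1.0, r=4.0):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4) * 100.0
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q   # process noise (illustrative)
        self.R = np.eye(2) * r   # measurement noise (illustrative)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def gate(self, z, prob=0.99):
        """Chi-square test on the innovation: True if centroid z is
        statistically consistent with this track."""
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        d2 = float(y @ np.linalg.solve(S, y))
        return d2 < chi2.ppf(prob, df=2)

    def update(self, z):
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```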
Werner, René; Ehrhardt, Jan; Schmidt-Richberg, Alexander; Heiss, Anabell; Handels, Heinz
2010-11-01
Motivated by radiotherapy of lung cancer, non-linear registration is applied to estimate 3D motion fields for local lung motion analysis in thoracic 4D CT images. The reliability of the analysis results depends on the registration accuracy. Therefore, our study consists of two parts: optimization and evaluation of a non-linear registration scheme for motion field estimation, followed by a registration-based analysis of lung motion patterns. The study is based on 4D CT data of 17 patients. Different distance measures and force terms for thoracic CT registration are implemented and compared: sum of squared differences versus a force term related to Thirion's demons registration; masked versus unmasked force computation. The most accurate approach is applied to local lung motion analysis. Masked Thirion forces outperform the other force terms. The mean target registration error is 1.3 ± 0.2 mm, which is on the order of the voxel size. Based on the resulting motion fields and inter-patient normalization of inner-lung coordinates and breathing depths, a non-linear dependency between inner-lung position and the corresponding strength of motion is identified. The dependency is observed for all patients without or with only small tumors. Quantitative evaluation of the estimated motion fields indicates high spatial registration accuracy. It allows for reliable registration-based local lung motion analysis. The large amount of information encoded in the motion fields makes it possible to draw detailed conclusions, e.g., to identify the dependency between inner-lung localization and motion. Our examinations illustrate the potential of registration-based motion analysis.
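Thirion's demons force mentioned above pushes the moving image toward the fixed image using the intensity difference and the fixed-image gradient. A minimal per-voxel sketch follows; the masked variant used in the study would simply zero the force outside a lung mask.

```python
import numpy as np

def demons_force(fixed, moving, eps=1e-9):
    """Per-voxel Thirion demons force:
    u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2)."""
    diff = moving - fixed
    grads = np.gradient(fixed)                 # one gradient array per image axis
    grad_sq = sum(g * g for g in grads)
    denom = grad_sq + diff * diff + eps        # eps avoids division by zero
    return [diff * g / denom for g in grads]   # one displacement component per axis

# Masked variant (as in the study): multiply each returned component by a
# binary lung mask so forces act only inside the lungs.
```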
Evaluating Suit Fit Using Performance Degradation
NASA Technical Reports Server (NTRS)
Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar
2011-01-01
The Mark III suit has multiple sizes of suit components (arm, leg, and gloves) as well as sizing inserts to tailor the fit of the suit to an individual. This study sought to determine a way to identify the point at which an ideal suit fit transforms into a bad fit, and how to quantify this breakdown using mobility-based physical performance data. This study examined the changes in human physical performance via degradation of the elbow and wrist range of motion of the planetary suit prototype (Mark III) with respect to changes in sizing, as well as how to apply that knowledge to suit sizing options and improvements in suit fit. The methods implemented in this study focused on changes in elbow and wrist mobility due to incremental suit sizing modifications. This incremental sizing was within a range that included both optimum and poor fit. Suited range-of-motion data were collected using a motion analysis system for nine isolated and functional tasks encompassing the elbow and wrist joints. A total of four subjects were tested, with motions involving both arms simultaneously as well as the right arm only. The results were then compared across sizing configurations. The results of this study indicate that range of motion may be used as a viable parameter to quantify at what stage suit sizing causes a detriment in performance; however, the human performance decrement appeared to be based on the interaction of multiple joints along a limb, not a single joint angle. The study identified a preliminary method to quantify the impact of size on performance and developed a means to gauge tolerances around the optimal size. More work is needed to improve the assessment of optimal fit and to compensate for multiple joint interactions.
Machine learning methods for classifying human physical activity from on-body accelerometers.
Mannini, Andrea; Sabatini, Angelo Maria
2010-01-01
The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest for classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series.
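A common realization of the HMM-based classification discussed above is to train one Gaussian HMM per activity class on accelerometer feature sequences and label a new sequence with the best-scoring model. The sketch assumes the third-party hmmlearn package and illustrative hyperparameters (number of hidden states, covariance type); it is a generic illustration, not the authors' pipeline.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # third-party package, assumed available

def train_activity_models(sequences_by_class, n_states=4):
    """Fit one Gaussian HMM per activity class.
    sequences_by_class: dict mapping class label -> list of (T_i, n_features) arrays."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, sequence):
    """Assign the label of the model with the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```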
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.
1994-01-01
Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
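The error measure described above reduces to the distance between known and digitized grid coordinates, expressed relative to the calibrated field of view; a minimal sketch under that reading follows.

```python
import numpy as np

def distortion_error_percent(known_xy, digitized_xy, field_size):
    """Point-wise error as a percentage of the field-of-view dimension.
    known_xy, digitized_xy: (N, 2) arrays of grid-point coordinates;
    field_size: characteristic dimension of the calibrated field (same units)."""
    err = np.linalg.norm(digitized_xy - known_xy, axis=1)
    return 100.0 * err / field_size

# A maximum of this measure approaching 8 would correspond to the worst-case
# figure reported for the outer regions of the wide-angle lens.
```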
Brain-machine interfacing control of whole-body humanoid motion
Bouyarmane, Karim; Vaillant, Joris; Sugimoto, Norikazu; Keith, François; Furukawa, Jun-ichiro; Morimoto, Jun
2014-01-01
We propose to tackle in this paper the problem of controlling whole-body humanoid robot behavior through non-invasive brain-machine interfacing (BMI), motivated by the perspective of mapping human motor control strategies to a human-like mechanical avatar. Our solution is based on the adequate reduction of the controllable dimensionality of a high-DOF humanoid motion, in line with the state-of-the-art possibilities of non-invasive BMI technologies, leaving the complementary subspace of the motion to be planned and executed by an autonomous humanoid whole-body motion planning and control framework. The results are shown in a full physics-based simulation of a 36-degree-of-freedom humanoid motion controlled by a user through EEG-extracted brain signals generated with a motor imagery task. PMID:25140134
Walking pattern analysis and SVM classification based on simulated gaits.
Mao, Yuxiang; Saito, Masaru; Kanno, Takehiro; Wei, Daming; Muroi, Hiroyasu
2008-01-01
Three classes of walking patterns, normal, caution, and danger, were simulated by tying elastic bands to joints of the lower body. In order to distinguish one class from another, four local motions suggested by doctors were investigated stepwise, and differences between levels were evaluated using t-tests. The human adaptability in the tests was also evaluated. We improved the average classification accuracy to 84.50% using a multiclass support vector machine classifier and concluded that human adaptability is a factor that can cause obvious bias in contiguous data collections.
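The multiclass SVM step can be reproduced with a standard library classifier; the sketch below uses scikit-learn's SVC (which handles multiclass problems via one-vs-one) with an RBF kernel and cross-validation, all of which are illustrative choices rather than the settings used in the study.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def walking_pattern_accuracy(features, labels):
    """features: (n_trials, n_gait_features) array of local-motion measures;
    labels: array of class labels (normal / caution / danger).
    Returns the mean cross-validated accuracy of a multiclass RBF-kernel SVM."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    return cross_val_score(clf, features, labels, cv=5).mean()
```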
Processing Motion Signals in Complex Environments
NASA Technical Reports Server (NTRS)
Verghese, Preeti
2000-01-01
Motion information is critical for human locomotion and scene segmentation. Currently we have excellent neurophysiological models that are able to predict human detection and discrimination of local signals. Local motion signals are insufficient by themselves to guide human locomotion and to provide information about depth, object boundaries and surface structure. My research is aimed at understanding the mechanisms underlying the combination of motion signals across space and time. A target moving on an extended trajectory amidst noise dots in Brownian motion is much more detectable than the sum of signals generated by independent motion energy units responding to the trajectory segments. This result suggests that facilitation occurs between motion units tuned to similar directions, lying along the trajectory path. We investigated whether the interaction between local motion units along the motion direction is mediated by contrast. One possibility is that contrast-driven signals from motion units early in the trajectory sequence are added to signals in subsequent units. If this were the case, then units later in the sequence would have a larger signal than those earlier in the sequence. To test this possibility, we compared contrast discrimination thresholds for the first and third patches of a triplet of sequentially presented Gabor patches, aligned along the motion direction. According to this simple additive model, contrast increment thresholds for the third patch should be higher than thresholds for the first patch. The lack of a measurable effect on contrast thresholds for these various manipulations suggests that the pooling of signals along a trajectory is not mediated by contrast-driven signals. Instead, these results are consistent with models that propose that the facilitation of trajectory signals is achieved by a second-level network that chooses the strongest local motion signals and combines them if they occur in a spatio-temporal sequence consistent with a trajectory. These results parallel the lack of increased apparent contrast along a static contour made up of similarly oriented elements.
Micron-scale coherence in interphase chromatin dynamics
Zidovska, Alexandra; Weitz, David A.; Mitchison, Timothy J.
2013-01-01
Chromatin structure and dynamics control all aspects of DNA biology yet are poorly understood, especially at large length scales. We developed an approach, displacement correlation spectroscopy based on time-resolved image correlation analysis, to map chromatin dynamics simultaneously across the whole nucleus in cultured human cells. This method revealed that chromatin movement was coherent across large regions (4–5 µm) for several seconds. Regions of coherent motion extended beyond the boundaries of single-chromosome territories, suggesting elastic coupling of motion over length scales much larger than those of genes. These large-scale, coupled motions were ATP dependent and unidirectional for several seconds, perhaps accounting for ATP-dependent directed movement of single genes. Perturbation of major nuclear ATPases such as DNA polymerase, RNA polymerase II, and topoisomerase II eliminated micron-scale coherence, while causing rapid, local movement to increase; i.e., local motions accelerated but became uncoupled from their neighbors. We observe similar trends in chromatin dynamics upon induction of direct DNA damage; thus we hypothesize that this may be due to DNA damage responses that physically relax chromatin and block long-distance communication of forces. PMID:24019504
Security Applications Of Computer Motion Detection
NASA Astrophysics Data System (ADS)
Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry
1987-05-01
An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data on the frequency of illegal border crossings. Because most detection and tracking routines assume rigid-body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed; the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, and mean and median tests, and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
Peng, Zhen; Braun, Daniel A.
2015-01-01
In a previous study we have shown that human motion trajectories can be characterized by translating continuous trajectories into symbol sequences with well-defined complexity measures. Here we test the hypothesis that the motion complexity individuals generate in their movements might be correlated to the degree of creativity assigned by a human observer to the visualized motion trajectories. We asked participants to generate 55 novel hand movement patterns in virtual reality, where each pattern had to be repeated 10 times in a row to ensure reproducibility. This allowed us to estimate a probability distribution over trajectories for each pattern. We assessed motion complexity not only by the previously proposed complexity measures on symbolic sequences, but we also propose two novel complexity measures that can be directly applied to the distributions over trajectories based on the frameworks of Gaussian Processes and Probabilistic Movement Primitives. In contrast to previous studies, these new methods allow computing complexities of individual motion patterns from very few sample trajectories. We compared the different complexity measures to how a group of independent jurors rank ordered the recorded motion trajectories according to their personal creativity judgment. We found three entropic complexity measures that correlate significantly with human creativity judgment and discuss differences between the measures. We also test whether these complexity measures correlate with individual creativity in divergent thinking tasks, but do not find any consistent correlation. Our results suggest that entropic complexity measures of hand motion may reveal domain-specific individual differences in kinesthetic creativity. PMID:26733896
1979-09-01
a " high performance fast timing" engine thrust with a mismatch between right and left SRfls...examine the dynamic behavior of a blade having a root geometry compatible with low frictional forces at high rotational speeds , somewhat like a "Christmas...Tree" root, but with a gap introduced which will close up only at high speed . Approximate non-linear equations of motion are derived and solved
Multi-level manual and autonomous control superposition for intelligent telerobot
NASA Technical Reports Server (NTRS)
Hirai, Shigeoki; Sato, T.
1989-01-01
Space telerobots are recognized to require cooperation with human operators in various ways. Multi-level manual and autonomous control superposition in telerobot task execution is described. The object model, the structured master-slave manipulation system, and the motion understanding system are proposed to realize the concept. The object model offers interfaces for task-level and object-level human intervention. The structured master-slave manipulation system offers interfaces for motion-level human intervention. The motion understanding system maintains the consistency of the knowledge through all the levels, which supports the robot autonomy while accepting human intervention. The superposed execution of the teleoperational task at multiple levels realizes intuitive and robust task execution for a wide variety of objects and in changing environments. The performance of several examples of operating chemical apparatuses is shown.
Reaction trajectory revealed by a joint analysis of protein data bank.
Ren, Zhong
2013-01-01
Structural motions along a reaction pathway hold the secret of how a biological macromolecule functions. If each static structure were considered a snapshot of the protein molecule in action, a large collection of structures would constitute a multidimensional conformational space of enormous size. Here I present a joint analysis of hundreds of known structures of human hemoglobin in the Protein Data Bank. By applying singular value decomposition to distance matrices of these structures, I demonstrate that this large collection of structural snapshots, derived under a wide range of experimental conditions, arranges in an orderly manner along a reaction pathway. The structural motions along this extensive trajectory, including several helical transformations, arrive at a reverse-engineered mechanism of the cooperative machinery (Ren, companion article), and shed light on pathological properties of the abnormal homotetrameric hemoglobins from α-thalassemia. This method of meta-analysis provides a general approach to structural dynamics based on static protein structures in this post-genomics era. PMID:24244274
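The core computation, singular value decomposition applied to distance matrices, can be sketched as follows: each structure is reduced to its vector of unique pairwise atomic distances, the vectors are stacked and centered across structures, and the leading singular vectors span the dominant structural motions. The flattening and centering choices here are generic, not necessarily those of the paper.

```python
import numpy as np

def structure_svd(coordinate_sets):
    """coordinate_sets: list of (N, 3) arrays of equivalent atom coordinates,
    one per deposited structure. Returns the singular values and the projection
    of each structure onto the leading right singular vectors."""
    rows = []
    for xyz in coordinate_sets:
        d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
        iu = np.triu_indices_from(d, k=1)      # unique pairwise distances
        rows.append(d[iu])
    A = np.array(rows)
    A -= A.mean(axis=0)                        # center across structures
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return s, U * s                            # per-structure projections
```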
2008-07-02
CAPE CANAVERAL, Fla. – Professor Peter Voci, NYIT MOCAP (Motion Capture) team director, (left) hands a component of the Orion Crew Module mockup to one of three technicians inside the mockup. The technicians wear motion capture suits. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.
Protocol for an Experiment on Controlling Motion Sickness Severity in a Ship Motion Simulator
2004-10-01
Yoganandan, Narayan; Pintar, Frank A; Stemper, Brian D; Wolfla, Christopher E; Shender, Barry S; Paskoff, Glenn
2007-05-01
Aging, trauma, or degeneration can affect intervertebral kinematics. While in vivo studies can determine motions, moments are not easily quantified. Previous in vitro studies on the cervical spine have largely used specimens from older individuals with varying levels of degeneration and have shown that moment-rotation responses under lateral bending do not vary significantly by spinal level. The objective of the present in vitro biomechanical study was, therefore, to determine the coronal and axial moment-rotation responses of degeneration-free, normal, intact human cadaveric cervicothoracic spinal columns under the lateral bending mode. Nine human cadaveric cervical columns from C2 to T1 were fixed at both ends. The donors ranged from twenty-three to forty-four years old (mean, thirty-four years) at the time of death. Retroreflective targets were inserted into each vertebra to obtain rotational kinematics in the coronal and axial planes. The specimens were subjected to a pure lateral bending moment with use of established techniques. The range-of-motion and neutral-zone metrics for the coronal and axial rotation components were determined at each level of the spinal column and were evaluated statistically. Statistical analysis indicated that the two metrics were level-dependent (p < 0.05). Coronal motions were significantly greater (p < 0.05) than axial motions. Moment-rotation responses were nonlinear for both coronal and axial rotation components under lateral bending moments. Each segmental curve for both rotation components was well represented by a logarithmic function (R² > 0.95). Range-of-motion metrics compared favorably with those of in vivo investigations. Coronal and axial motions of degeneration-free cervical spinal columns under lateral bending showed substantially different level-dependent responses. The presentation of moment-rotation corridors for both metrics forms a normative dataset for the degeneration-free cervical spine.
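Fitting a logarithmic function to a measured moment-rotation curve and reporting its R² can be done with a standard nonlinear least-squares routine; the functional form rotation = a·ln(1 + b·moment) below is an assumed illustration, not the exact form used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_model(moment, a, b):
    """Assumed logarithmic moment-rotation form: rotation = a * ln(1 + b * moment)."""
    return a * np.log1p(b * moment)

def fit_moment_rotation(moment, rotation):
    """Fit the logarithmic model to one segmental curve; return (a, b) and R^2."""
    params, _ = curve_fit(log_model, moment, rotation, p0=(1.0, 1.0))
    pred = log_model(moment, *params)
    ss_res = np.sum((rotation - pred) ** 2)
    ss_tot = np.sum((rotation - rotation.mean()) ** 2)
    return params, 1.0 - ss_res / ss_tot
```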
Behavioural evidence for distinct mechanisms related to global and biological motion perception.
Miller, Louisa; Agnew, Hannah C; Pilz, Karin S
2018-01-01
The perception of human motion is a vital ability in our daily lives. Human movement recognition is often studied using point-light stimuli in which dots represent the joints of a moving person. Depending on the task and stimulus, the local motion of the single dots and the global form of the stimulus can be used to discriminate point-light stimuli. Previous studies often measured motion coherence for global motion perception and contrasted it with performance in biological motion perception to assess whether difficulties in biological motion processing are related to more general difficulties with motion processing. However, it is so far unknown how performance in global motion tasks relates to the ability to use local motion or global form to discriminate point-light stimuli. Here, we investigated this relationship in more detail. In Experiment 1, we measured participants' ability to discriminate the facing direction of point-light stimuli that contained primarily local motion, global form, or both. In Experiment 2, we embedded point-light stimuli in noise to assess whether previously found relationships in task performance are related to the ability to detect signal in noise. In both experiments, we also assessed motion coherence thresholds from random-dot kinematograms. We found relationships between performance for the different biological motion stimuli, but performance for global and biological motion perception was unrelated. These results are in accordance with previous neuroimaging studies that highlighted distinct areas for global and biological motion perception in the dorsal pathway, and indicate that results regarding the relationship between global motion perception and biological motion perception need to be interpreted with caution. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Energy Harvesting from Upper-Limb Pulling Motions for Miniaturized Human-Powered Generators
Yeo, Jeongjin; Ryu, Mun-ho; Yang, Yoonseok
2015-01-01
The human-powered self-generator provides the best solution for individuals who need an instantaneous power supply for travel, outdoor, and emergency use, since it is less dependent on weather conditions and occupies less space than other renewable power supplies. However, many commercial portable self-generators that employ hand-cranking are not used as much as expected in daily life, despite having sufficient output capacity, because of their intensive workload. This study proposes a portable human-powered generator which is designed to obtain mechanical energy from an upper-limb pulling motion for improved human motion economy as well as efficient human-mechanical power transfer. A coreless axial-flux permanent magnet machine (APMM) and a flywheel magnet rotor were used in conjunction with a one-way clutched power transmission system in order to obtain effective power from the pulling motion. The developed prototype showed an average energy conversion efficiency of 30.98% and an average output power of 0.32 W with a maximum of 1.89 W. Its small form factor (50 mm × 32 mm × 43.5 mm, 0.05 kg) and the substantial electricity produced verify the effectiveness of the proposed method in the utilization of human power. It is expected that the developed generator could provide a mobile power supply. PMID:26151204
Binocular eye movement control and motion perception: what is being tracked?
van der Steen, Johannes; Dits, Joyce
2012-10-19
We investigated under what conditions humans can make independent slow-phase eye movements. The ability to make independent movements of the two eyes is generally attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions. To maintain binocular retinal correspondence, independent slow-phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion for the independence of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion and resulted in a conjugate oblique motion of the eyes. We propose that the ability to make independent slow-phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information is used and whether independent slow-phase eye movements of each eye are produced during binocular tracking. PMID:22997286
Discriminating Rigid from Nonrigid Motion
1989-07-31
motion can be given a three-dimensional interpretation using a constraint of rigidity. Kruppa's result and others (Faugeras & Maybank, 1989; Huang...)
Scavenging energy from human limb motions
NASA Astrophysics Data System (ADS)
Fan, Kangqi; Yu, Bo; Tang, Lihua
2017-04-01
This paper proposes a nonlinear piezoelectric energy harvester (PEH) to scavenge energy from human limb motions. The proposed PEH is composed of a ferromagnetic ball, a sleeve, and two piezoelectric cantilever beams each with a magnetic tip mass. The ball is used to sense the swing motions of human limbs and excite the beams to vibrate. The two beams, which are sensitive to the excitation along the radialis or tibial axis, generate electrical outputs. Theoretical and experimental studies are carried out to examine the performance of the proposed PEH when it is fixed at the wrist, thigh and ankle of a male who travels at constant velocities of 2 km/h, 4 km/h, 6 km/h, and 8 km/h on a treadmill. The results indicate that the low-frequency swing motions of human limbs are converted to higher-frequency vibrations of piezoelectric beams. During each gait cycle, different excitations produced by human limbs can be superposed and multiple peaks in the voltage output can be generated by the proposed PEH. Moreover, the voltage outputs of the PEH increase monotonously with the walking speed, and the maximum effective voltage is obtained when the PEH is mounted at the ankle under the walking speed of 8 km/h.
An Exoskeleton Robot for Human Forearm and Wrist Motion Assist
NASA Astrophysics Data System (ADS)
Ranathunga Arachchilage Ruwan Chandra Gopura; Kiguchi, Kazuo
The exoskeleton robot is worn by the human operator as an orthotic device. Its joints and links correspond to those of the human body. The same system operated in different modes can be used for different fundamental applications: a human amplifier, haptic interface, rehabilitation device, and assistive device sharing a portion of the external load with the operator. We have been developing exoskeleton robots for assisting the motion of physically weak individuals, such as elderly or slightly disabled persons, in daily life. In this paper, we propose a three-degree-of-freedom (3DOF) exoskeleton robot (W-EXOS) for forearm pronation/supination motion, wrist flexion/extension motion, and ulnar/radial deviation. The paper describes the wrist anatomy relevant to the development of the exoskeleton robot, the hardware design of the exoskeleton robot, and the EMG-based control method. The skin-surface electromyographic (EMG) signals of muscles in the forearm of the exoskeleton's user and the hand force/forearm torque are used as input information for the controller. By applying the skin-surface EMG signals as the main input signals to the controller, automatic control of the robot can be realized without manipulating any other equipment. A fuzzy control method has been applied to realize natural and flexible motion assist. Experiments have been performed to evaluate the proposed exoskeleton robot and its control method.
The processing of social stimuli in early infancy: from faces to biological motion perception.
Simion, Francesca; Di Giorgio, Elisa; Leo, Irene; Bardi, Lara
2011-01-01
There are several lines of evidence which suggest that, from birth, the human system detects social agents on the basis of at least two properties: the presence of a face and the way they move. This chapter reviews the infant research on the origin of brain specialization for social stimuli and on the role of innate mechanisms and perceptual experience in shaping the development of the social brain. Two lines of convergent evidence, on face detection and biological motion detection, will be presented to demonstrate the innate predispositions of the human system to detect social stimuli at birth. As for face detection, experiments will be presented to demonstrate that, by virtue of nonspecific attentional biases, a very coarse template of faces is active at birth. As for biological motion detection, studies will be presented to demonstrate that, from birth, the human system is able to detect social stimuli on the basis of properties such as the presence of semi-rigid motion, termed biological motion. Overall, the empirical evidence converges in supporting the notion that the human system begins life broadly tuned to detect social stimuli and that progressive specialization will narrow the system for social stimuli as a function of experience. Copyright © 2011 Elsevier B.V. All rights reserved.
Human pelvis motions when walking and when riding a therapeutic horse.
Garner, Brian A; Rigby, B Rhett
2015-02-01
A prevailing rationale for equine-assisted therapies is that the motion of a horse can provide sensory stimuli and movement patterns that mimic those of natural human activities such as walking. The purpose of this study was to quantitatively measure and compare human pelvis motions when walking to those when riding a horse. Six able-bodied children (inexperienced riders, 8-12 years old) participated in over-ground trials of self-paced walking and leader-paced riding on four different horses. Five kinematic measures were extracted from three-dimensional pelvis motion data: anteroposterior, superoinferior, and mediolateral translations, list angle about the anteroposterior axis, and twist angle about the superoinferior axis. There was generally as much or more variability in motion range observed between riding on the different horses as between riding and walking. Pelvis trajectories exhibited many similar features between walking and riding, including distorted lemniscate patterns in the transverse and frontal planes. In the sagittal plane, the pelvis trajectory during walking exhibited a somewhat circular pattern, whereas during riding it exhibited a more diagonal pattern. This study shows that riding on a horse can generate movement patterns in the human pelvis that emulate many, but not all, characteristics of those during natural walking. Copyright © 2014 Elsevier B.V. All rights reserved.
Schroeder, David; Korsakov, Fedor; Knipe, Carissa Mai-Ping; Thorson, Lauren; Ellingson, Arin M; Nuckley, David; Carlis, John; Keefe, Daniel F
2014-12-01
In biomechanics studies, researchers collect, via experiments or simulations, datasets with hundreds or thousands of trials, each describing the same type of motion (e.g., a neck flexion-extension exercise) but under different conditions (e.g., different patients, different disease states, pre- and post-treatment). Analyzing similarities and differences across all of the trials in these collections is a major challenge. Visualizing a single trial at a time does not work, and the typical alternative of juxtaposing multiple trials in a single visual display leads to complex, difficult-to-interpret visualizations. We address this problem via a new strategy that organizes the analysis around motion trends rather than trials. This new strategy matches the cognitive approach that scientists would like to take when analyzing motion collections. We introduce several technical innovations making trend-centric motion visualization possible. First, an algorithm detects a motion collection's trends via time-dependent clustering. Second, a 2D graphical technique visualizes how trials leave and join trends. Third, a 3D graphical technique, using a median 3D motion plus a visual variance indicator, visualizes the biomechanics of the set of trials within each trend. These innovations are combined to create an interactive exploratory visualization tool, which we designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer. We report on insights generated during this design process and demonstrate the tool's effectiveness via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers who used the tool to analyze the effects of disc degeneration on human spinal kinematics.
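A minimal sketch of the kind of time-dependent clustering described above: trials are clustered independently at each time step so that membership changes (trials leaving or joining a trend) can be tracked. The data, the cluster count, and the use of k-means are assumptions, not the authors' implementation, and cluster labels here are only consistent up to permutation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical collection: 60 trials x 100 time steps of a 1-D motion signal
# (e.g., a joint angle during a flexion-extension exercise) with three latent trends.
n_trials, n_steps = 60, 100
t = np.linspace(0, 1, n_steps)
offsets = rng.choice([0.0, 0.5, 1.0], size=n_trials)
trials = (np.sin(2 * np.pi * t)[None, :] + offsets[:, None]
          + 0.1 * rng.standard_normal((n_trials, n_steps)))

# Cluster the trials independently at every time step.
k = 3
labels = np.empty((n_steps, n_trials), dtype=int)
for i in range(n_steps):
    labels[i] = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(trials[:, i:i + 1])

# Trials whose cluster label changes between consecutive time steps are candidates
# for "leaving one trend and joining another" (a real tool would first match
# cluster identities across steps before counting switches).
switches = np.sum(labels[1:] != labels[:-1], axis=0)
print("mean label switches per trial:", switches.mean())
```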
Efficiencies for parts and wholes in biological-motion perception.
Bromfield, W Drew; Gold, Jason M
2017-10-01
People can reliably infer the actions, intentions, and mental states of fellow humans from body movements (Blake & Shiffrar, 2007). Previous research on such biological-motion perception has suggested that the movements of the feet may play a particularly important role in making certain judgments about locomotion (Chang & Troje, 2009; Troje & Westhoff, 2006). One account of this effect is that the human visual system may have evolved specialized processes that are efficient for extracting information carried by the feet (Troje & Westhoff, 2006). Alternatively, the motion of the feet may simply be more discriminable than that of other parts of the body. To dissociate these two possibilities, we measured people's ability to discriminate the walking direction of stimuli in which individual body parts (feet, hands) were removed or shown in isolation. We then compared human performance to that of a statistically optimal observer (Gold, Tadin, Cook, & Blake, 2008), giving us a measure of humans' discriminative ability independent of the information available (a quantity known as efficiency). We found that efficiency was highest when the hands and the feet were shown in isolation. A series of follow-up experiments suggested that observers were relying on a form-based cue with the isolated hands (specifically, the orientation of their path through space) and a motion-based cue with the isolated feet to achieve such high efficiencies. We relate our findings to previous proposals of a distinction between form-based and motion-based mechanisms in biological-motion perception.
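A minimal sketch of the efficiency measure referred to here, computed as the squared ratio of human to ideal-observer sensitivity (d'); the hit and false-alarm rates below are hypothetical numbers, not data from the study.

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index for a discrimination task."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical rates for a walking-direction discrimination task.
d_human = d_prime(hit_rate=0.78, false_alarm_rate=0.30)
d_ideal = d_prime(hit_rate=0.99, false_alarm_rate=0.02)

efficiency = (d_human / d_ideal) ** 2   # fraction of the available information used
print(f"d'_human={d_human:.2f}, d'_ideal={d_ideal:.2f}, efficiency={efficiency:.3f}")
```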
NASA Astrophysics Data System (ADS)
Lee, Taek-Soo; Frey, Eric C.; Tsui, Benjamin M. W.
2015-04-01
This paper presents two 4D mathematical observer models for the detection of motion defects in 4D gated medical images. Their performance was compared with results from human observers in detecting a regional motion abnormality in simulated 4D gated myocardial perfusion (MP) SPECT images. The first 4D mathematical observer model extends the conventional channelized Hotelling observer (CHO) based on a set of 2D spatial channels and the second is a proposed model that uses a set of 4D space-time channels. Simulated projection data were generated using the 4D NURBS-based cardiac-torso (NCAT) phantom with 16 gates/cardiac cycle. The activity distribution modelled uptake of 99mTc MIBI with normal perfusion and a regional wall motion defect. An analytical projector was used in the simulation and the filtered backprojection (FBP) algorithm was used in image reconstruction followed by spatial and temporal low-pass filtering with various cut-off frequencies. Then, we extracted 2D image slices from each time frame and reorganized them into a set of cine images. For the first model, we applied 2D spatial channels to the cine images and generated a set of feature vectors that were stacked for the images from different slices of the heart. The process was repeated for each of the 1,024 noise realizations, and CHO and receiver operating characteristics (ROC) analysis methodologies were applied to the ensemble of the feature vectors to compute areas under the ROC curves (AUCs). For the second model, a set of 4D space-time channels was developed and applied to the sets of cine images to produce space-time feature vectors to which the CHO methodology was applied. The AUC values of the second model showed better agreement (Spearman’s rank correlation (SRC) coefficient = 0.8) to human observer results than those from the first model (SRC coefficient = 0.4). The agreement with human observers indicates the proposed 4D mathematical observer model provides a good predictor of the performance of human observers in detecting regional motion defects in 4D gated MP SPECT images. The result supports the use of the observer model in the optimization and evaluation of 4D image reconstruction and compensation methods for improving the detection of motion abnormalities in 4D gated MP SPECT images.
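A minimal sketch of a channelized Hotelling observer applied to defect-present/absent channel feature vectors, followed by a nonparametric AUC; the feature data here are random stand-ins rather than NCAT-based images, and no training/testing split is shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical channel outputs (e.g., spatial or space-time channels) for
# defect-absent (class 0) and defect-present (class 1) noise realizations.
n_channels, n_real = 24, 512
mu = 0.3 * rng.standard_normal(n_channels)            # small mean signal in channel space
x0 = rng.standard_normal((n_real, n_channels))        # absent
x1 = rng.standard_normal((n_real, n_channels)) + mu   # present

# Channelized Hotelling observer: template w = S^-1 (mean1 - mean0),
# with S the average intra-class covariance of the channel outputs.
S = 0.5 * (np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False))
w = np.linalg.solve(S, x1.mean(0) - x0.mean(0))

t0, t1 = x0 @ w, x1 @ w                               # observer test statistics

# Nonparametric AUC: probability that a defect-present statistic exceeds an absent one.
auc = np.mean(t1[:, None] > t0[None, :]) + 0.5 * np.mean(t1[:, None] == t0[None, :])
print("CHO AUC:", round(float(auc), 3))
```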
Local statistics of retinal optic flow for self-motion through natural sceneries.
Calow, Dirk; Lappe, Markus
2007-12-01
Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human imitating behavior, and may be useful for analyzing properties of natural vision systems.
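A minimal sketch of a histogram-based mutual-information estimate of the kind used to quantify such dependencies (for example between local retinal speed and scene depth); the paired samples below are synthetic and the toy speed-depth relation is an assumption.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Plug-in estimate of I(X;Y) in bits from paired samples, via a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Synthetic example: retinal speed roughly inversely related to depth, plus noise.
rng = np.random.default_rng(2)
depth = rng.uniform(1.0, 20.0, 50_000)                     # metres
speed = 2.0 / depth + 0.05 * rng.standard_normal(50_000)   # deg/s (toy model)
print("I(speed; depth) ~", round(mutual_information(speed, depth), 3), "bits")
print("I(speed; shuffled depth) ~", round(mutual_information(speed, rng.permutation(depth)), 3), "bits")
```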
Raudies, Florian; Neumann, Heiko
2012-01-01
The analysis of motion crowds is concerned with the detection of potential hazards for individuals of the crowd. Existing methods analyze the statistics of pixel motion to classify non-dangerous or dangerous behavior, to detect outlier motions, or to estimate the mean throughput of people for an image region. We suggest a biologically inspired model for the analysis of motion crowds that extracts motion features indicative for potential dangers in crowd behavior. Our model consists of stages for motion detection, integration, and pattern detection that model functions of the primate primary visual cortex area (V1), the middle temporal area (MT), and the medial superior temporal area (MST), respectively. This model allows for the processing of motion transparency, the appearance of multiple motions in the same visual region, in addition to processing opaque motion. We suggest that motion transparency helps to identify “danger zones” in motion crowds. For instance, motion transparency occurs in small exit passages during evacuation. However, motion transparency occurs also for non-dangerous crowd behavior when people move in opposite directions organized into separate lanes. Our analysis suggests: The combination of motion transparency and a slow motion speed can be used for labeling of candidate regions that contain dangerous behavior. In addition, locally detected decelerations or negative speed gradients of motions are a precursor of danger in crowd behavior as are globally detected motion patterns that show a contraction toward a single point. In sum, motion transparency, image speeds, motion patterns, and speed gradients extracted from visual motion in videos are important features to describe the behavioral state of a motion crowd. PMID:23300930
Surface EMG signals based motion intent recognition using multi-layer ELM
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Qi, Lin; Wang, Xiao
2017-11-01
The upper-limb rehabilitation robot is regarded as a useful tool to help patients with hemiplegia perform repetitive exercise. Surface electromyography (sEMG) contains motion information, as the electrical signals are generated by and related to nerve-muscle activity. These sEMG signals, representing a human's intentions for active motions, are introduced into the rehabilitation robot system to recognize upper-limb movements. Traditionally, feature extraction is an indispensable step for drawing significant information from the original signals, but it is a tedious task requiring rich, domain-related experience. This paper employs a deep learning scheme to extract the internal features of the sEMG signals using an advanced Extreme Learning Machine based auto-encoder (ELM-AE). The information contained in the multi-layer structure of the ELM-AE is used as the high-level representation of the internal features of the sEMG signals, and a simple ELM then post-processes the extracted features, forming the complete multi-layer ELM (ML-ELM) algorithm. The method is subsequently employed for sEMG-based motion intent recognition. The case studies show that the adopted deep learning algorithm (ELM-AE) yields higher classification accuracy than a Principal Component Analysis (PCA) scheme across 5 different types of upper-limb motions. This indicates the effectiveness and the learning capability of the ML-ELM in such motion intent recognition applications.
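A minimal sketch of a generic ELM autoencoder of the kind described: random hidden weights, output weights solved in closed form by ridge regression, and the learned weights reused to project the features for a plain ELM classifier. The data, layer sizes, and regularization values are hypothetical and this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def elm_ae(X, n_hidden, reg=1e-3):
    """ELM autoencoder: learns output weights beta (n_hidden x n_features) by
    reconstructing X from a random hidden layer; beta is reused as an encoder."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = sigmoid(X @ W + b)                                   # random feature map
    # beta solves min ||H beta - X||^2 + reg ||beta||^2 (ridge, closed form).
    return np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)

# Hypothetical sEMG feature matrix: 600 windows x 64 features, 5 motion classes.
X = rng.standard_normal((600, 64))
y = rng.integers(0, 5, 600)

B1 = elm_ae(X, 32)                   # first learned representation
Z1 = sigmoid(X @ B1.T)
B2 = elm_ae(Z1, 16)                  # stacked second layer
Z2 = sigmoid(Z1 @ B2.T)

# Final plain ELM classifier on the deep features (one-hot targets, ridge solution).
W = rng.standard_normal((Z2.shape[1], 128)); b = rng.standard_normal(128)
H = sigmoid(Z2 @ W + b)
T = np.eye(5)[y]
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(128), H.T @ T)
print("training accuracy on the synthetic data:", np.mean(np.argmax(H @ beta, axis=1) == y))
```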
Eye Movements in Darkness Modulate Self-Motion Perception.
Clemens, Ivar Adrianus H; Selen, Luc P J; Pomante, Antonella; MacNeilage, Paul R; Medendorp, W Pieter
2017-01-01
During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first ( n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment ( n = 10), gaze was free during both translation intervals. Results show that the translation with the larger eye-movement excursion was judged more often to be larger than chance, based on an oculomotor choice probability analysis. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation.
Representation of visual gravitational motion in the human vestibular cortex.
Indovina, Iole; Maffei, Vincenzo; Bosco, Gianfranco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco
2005-04-15
How do we perceive the visual motion of objects that are accelerated by gravity? We propose that, because vision is poorly sensitive to accelerations, an internal model that calculates the effects of gravity is derived from graviceptive information, is stored in the vestibular cortex, and is activated by visual motion that appears to be coherent with natural gravity. The acceleration of visual targets was manipulated while brain activity was measured using functional magnetic resonance imaging. In agreement with the internal model hypothesis, we found that the vestibular network was selectively engaged when acceleration was consistent with natural gravity. These findings demonstrate that predictive mechanisms of physical laws of motion are represented in the human brain.
Muscle Motion Solenoid Actuator
NASA Astrophysics Data System (ADS)
Obata, Shuji
It is one of our dreams to mechanically restore lost body functions for injured humans. Realistic humanoid robots composed of such machines require muscle-motion actuators that are controlled entirely by pulling actions. In particular, antagonistic pairs of bi-articular muscles are very important in animal motion. A system of actuators is proposed using the electromagnetic force of solenoids, with a stroke length of over 10 cm and a force of about 20 N, which are needed to move a real human arm. The devised actuators are based on recent developments in electromagnetic materials; older materials could not provide such performance. The composite actuators are controlled by a high-performance computer and software to produce lifelike motions.
Monitoring of atopic dermatitis using leaky coaxial cable.
Dong, Binbin; Ren, Aifeng; Shah, Syed Aziz; Hu, Fangming; Zhao, Nan; Yang, Xiaodong; Haider, Daniyal; Zhang, Zhiya; Zhao, Wei; Abbasi, Qammer Hussain
2017-12-01
In our daily life, inadvertent scratching may increase the severity of skin diseases (such as atopic dermatitis). However, people rarely pay attention to this behaviour, so little is known about how to measure it. Nevertheless, the behaviour and frequency of scratching indicate the degree of itching, and analysis of scratching frequency can help guide the doctor's clinical dosing. In this Letter, a novel system is proposed to monitor the scratching motion of a sleeping human body at night. The core devices of the system are simply a leaky coaxial cable (LCX) and a router. LCX is commonly used in blind or semi-blind areas of wireless communication coverage. The new idea is that the leaky cable is placed on the bed, and the physical-layer state information of the wireless communication channel is then acquired to identify the scratching motion and other small body movements during human sleep. The results show that the system can detect the movement and its duration. Channel state information (CSI) packets are collected by a network card installed in the computer, based on the 802.11n protocol. The characterisation of the scratch motion in the collected CSI is unique, so it can be distinguished from the general amplitude variation of the wireless channel.
Chaminade, Thierry; Ishiguro, Hiroshi; Driver, Jon; Frith, Chris
2012-01-01
Using functional magnetic resonance imaging (fMRI) repetition suppression, we explored the selectivity of the human action perception system (APS), which consists of temporal, parietal and frontal areas, for the appearance and/or motion of the perceived agent. Participants watched body movements of a human (biological appearance and movement), a robot (mechanical appearance and movement) or an android (biological appearance, mechanical movement). With the exception of the extrastriate body area, which showed more suppression for human-like appearance, the APS was not selective for appearance or motion per se. Instead, distinctive responses were found to the mismatch between appearance and motion: whereas suppression effects for the human and robot were similar to each other, they were stronger for the android, notably in bilateral anterior intraparietal sulcus, a key node in the APS. These results could reflect increased prediction error as the brain negotiates an agent that appears human, but does not move biologically, and help explain the ‘uncanny valley’ phenomenon. PMID:21515639
INS integrated motion analysis for autonomous vehicle navigation
NASA Technical Reports Server (NTRS)
Roberts, Barry; Bazakos, Mike
1991-01-01
The use of inertial navigation system (INS) measurements to enhance the quality and robustness of motion analysis techniques used for obstacle detection is discussed with particular reference to autonomous vehicle navigation. The approach to obstacle detection used here employs motion analysis of imagery generated by a passive sensor. Motion analysis of imagery obtained during vehicle travel is used to generate range measurements to points within the field of view of the sensor, which can then be used to provide obstacle detection. Results obtained with an INS integrated motion analysis approach are reviewed.
Paris, Guillaume; Ramseyer, Christophe; Enescu, Mironel
2014-05-01
The conformational dynamics of human serum albumin (HSA) was investigated by principal component analysis (PCA) applied to three molecular dynamics trajectories of 200 ns each. The overlap of the essential subspaces spanned by the first 10 principal components (PCs) of different trajectories was about 0.3, showing that a PCA based on a trajectory length of 200 ns is not completely converged for this protein. The contributions of the relative motion of subdomains and of the subdomain (internal) distortion to the first 10 PCs were found to be comparable. Based on the distribution of the first 3 PCs, 10 protein conformers are identified showing relative root mean square deviations (RMSD) between 2.3 and 4.6 Å. The main PCs are found to be delocalized over the whole protein structure, indicating that the motions of different protein subdomains are coupled. This coupling is considered to be related to the allosteric effects observed upon ligand binding to HSA. On the other hand, the first PC of one of the three trajectories describes a conformational transition of protein domain I that is close to that experimentally observed upon myristate binding. This provides theoretical support for the earlier hypothesis that changes of the protein conformation favorable to binding can precede ligand complexation. A detailed all-atom PCA performed on the primary Sites 1 and 2 confirms the multiconformational character of the HSA binding sites as well as the significant coupling of their motions. Copyright © 2013 Wiley Periodicals, Inc.
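A minimal sketch of the two quantities discussed above: PCA of a coordinate trajectory and an overlap between the essential subspaces of two trajectories, here expressed as the root mean square inner product (RMSIP), which may differ from the exact overlap measure used by the authors. The coordinate arrays are random stand-ins for superposed MD frames.

```python
import numpy as np

def essential_subspace(coords, n_pc=10):
    """PCA of a trajectory: coords has shape (n_frames, 3 * n_atoms), frames pre-superposed."""
    X = coords - coords.mean(axis=0)
    cov = X.T @ X / (len(X) - 1)
    evals, evecs = np.linalg.eigh(cov)                   # ascending eigenvalues
    order = np.argsort(evals)[::-1]
    return evecs[:, order[:n_pc]], evals[order[:n_pc]]

def subspace_overlap(V1, V2):
    """Root mean square inner product between two sets of orthonormal eigenvectors."""
    return np.sqrt(np.sum((V1.T @ V2) ** 2) / V1.shape[1])

# Hypothetical trajectories: 2000 frames of 3N coordinates (N = 100 pseudo-atoms).
rng = np.random.default_rng(4)
traj_a = rng.standard_normal((2000, 300)).cumsum(axis=0) * 1e-2   # correlated drift
traj_b = rng.standard_normal((2000, 300)).cumsum(axis=0) * 1e-2

Va, _ = essential_subspace(traj_a)
Vb, _ = essential_subspace(traj_b)
print("RMSIP of the first 10 PCs:", round(float(subspace_overlap(Va, Vb)), 3))
```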
A Novel Kalman Filter for Human Motion Tracking With an Inertial-Based Dynamic Inclinometer.
Ligorio, Gabriele; Sabatini, Angelo M
2015-08-01
Design and development of a linear Kalman filter to create an inertial-based inclinometer targeted to dynamic conditions of motion. The estimation of the body attitude (i.e., the inclination with respect to the vertical) was treated as a source separation problem to discriminate the gravity and the body acceleration from the specific force measured by a triaxial accelerometer. The sensor fusion between triaxial gyroscope and triaxial accelerometer data was performed using a linear Kalman filter. Wrist-worn inertial measurement unit data from ten participants were acquired while performing two dynamic tasks: 60-s sequence of seven manual activities and 90 s of walking at natural speed. Stereophotogrammetric data were used as a reference. A statistical analysis was performed to assess the significance of the accuracy improvement over state-of-the-art approaches. The proposed method achieved, on an average, a root mean square attitude error of 3.6° and 1.8° in manual activities and locomotion tasks (respectively). The statistical analysis showed that, when compared to few competing methods, the proposed method improved the attitude estimation accuracy. A novel Kalman filter for inertial-based attitude estimation was presented in this study. A significant accuracy improvement was achieved over state-of-the-art approaches, due to a filter design that better matched the basic optimality assumptions of Kalman filtering. Human motion tracking is the main application field of the proposed method. Accurately discriminating the two components present in the triaxial accelerometer signal is well suited for studying both the rotational and the linear body kinematics.
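A minimal sketch of the general gyroscope/accelerometer fusion idea behind such an inclinometer, reduced to a single inclination angle with a gyro-bias state: the gyroscope rate drives the prediction and the accelerometer-derived angle serves as the measurement. This is a generic linear Kalman filter with assumed noise values, not the authors' filter design.

```python
import numpy as np

def tilt_kalman(gyro_rate, acc_angle, dt, q=1e-4, r=2e-2):
    """Fuse gyro rate (rad/s) and accelerometer-derived angle (rad) into an inclination estimate.
    State: [angle, gyro_bias]; q, r are assumed process/measurement noise variances."""
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    B = np.array([dt, 0.0])
    H = np.array([[1.0, 0.0]])
    Q = q * np.diag([dt, dt])
    R = np.array([[r]])
    x, P, out = np.zeros(2), np.eye(2), []
    for w, z in zip(gyro_rate, acc_angle):
        # Predict: integrate the bias-corrected gyro rate.
        x = F @ x + B * w
        P = F @ P @ F.T + Q
        # Update with the accelerometer angle.
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Hypothetical signals: slow 0.5 Hz tilt oscillation, biased gyro, noisy accelerometer angle.
dt, t = 0.01, np.arange(0, 10, 0.01)
true_angle = 0.3 * np.sin(2 * np.pi * 0.5 * t)
gyro = np.gradient(true_angle, dt) + 0.05 + 0.01 * np.random.default_rng(5).standard_normal(t.size)
acc = true_angle + 0.05 * np.random.default_rng(6).standard_normal(t.size)
est = tilt_kalman(gyro, acc, dt)
print("RMS angle error (rad):", float(np.sqrt(np.mean((est - true_angle) ** 2))))
```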
Kainz, Hans; Hajek, Martin; Modenese, Luca; Saxby, David J; Lloyd, David G; Carty, Christopher P
2017-03-01
In human motion analysis, predictive or functional methods are used to estimate the location of the hip joint centre (HJC). It has been shown that the Harrington regression equations (HRE) and the geometric sphere fit (GSF) method are the most accurate predictive and functional methods, respectively. To date, the comparative reliability of the two approaches has not been assessed. The aims of this study were to (1) compare the reliability of the HRE and the GSF methods, (2) analyse the impact of the number of thigh markers used in the GSF method on the reliability, (3) evaluate how alterations to the movements that comprise the functional trials impact HJC estimations using the GSF method, and (4) assess the influence of the initial guess in the GSF method on the HJC estimation. Fourteen healthy adults were tested on two occasions using a three-dimensional motion capture system. Skin surface marker positions were acquired while participants performed quiet stance, perturbed and non-perturbed functional trials, and walking trials. Results showed that the HRE were more reliable in locating the HJC than the GSF method. However, comparison of inter-session hip kinematics during gait did not show any significant difference between the approaches. Different initial guesses in the GSF method did not result in significant differences in the final HJC location. The GSF method was sensitive to the functional trial performance, and therefore it is important to standardize the functional trial performance to ensure a repeatable estimate of the HJC when using the GSF method. Copyright © 2017 Elsevier B.V. All rights reserved.
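A minimal sketch of the geometric sphere fit idea: thigh-marker positions expressed in the pelvis frame are assumed to lie on a sphere centred at the hip joint centre, and the centre is recovered by linear least squares. The marker data are synthetic and this is not the exact GSF algorithm evaluated in the study.

```python
import numpy as np

def sphere_fit(points):
    """Least-squares sphere fit: returns (centre, radius) for an (n, 3) point cloud."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# Synthetic functional trial: a thigh marker sweeping part of a sphere around a known HJC.
rng = np.random.default_rng(7)
hjc_true = np.array([0.08, -0.07, 0.90])            # metres, pelvis frame (hypothetical)
r = 0.25
theta = rng.uniform(0.2, 1.2, 500)                  # limited range of motion
phi = rng.uniform(-0.8, 0.8, 500)
pts = hjc_true + r * np.column_stack([np.sin(theta) * np.cos(phi),
                                      np.sin(theta) * np.sin(phi),
                                      np.cos(theta)])
pts += 0.002 * rng.standard_normal(pts.shape)       # ~2 mm marker noise
centre, radius = sphere_fit(pts)
print("HJC error (mm):", float(np.linalg.norm(centre - hjc_true)) * 1000)
```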
A motion sensing-based framework for robotic manipulation.
Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing
2016-01-01
To date, outside of controlled environments, robots normally perform manipulation tasks under human operation. This pattern requires robot operators to have extensive technical training on varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through a novel and natural gesture interface, inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the actions of robots. For compatibility, a general hardware interface layer was also developed within the framework. Simulation and physical experiments have been conducted for preliminary validation. The results show that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.
Generating Concise Rules for Human Motion Retrieval
NASA Astrophysics Data System (ADS)
Mukai, Tomohiko; Wakisaka, Ken-Ichi; Kuriyama, Shigeru
This paper proposes a method for retrieving human motion data with concise retrieval rules based on the spatio-temporal features of motion appearance. Our method first converts each motion clip into a clausal-language form that represents geometrical relations between body parts and their temporal relationships. A retrieval rule is then learned from a set of manually classified examples using inductive logic programming (ILP). ILP automatically discovers the essential rule in the same clausal form using a user-defined hypothesis-testing procedure. All motions are indexed using this clausal language, and the desired clips are retrieved by subsequence matching using the rule. Such rule-based retrieval offers reasonable performance, and the rule can be intuitively edited in the same language form. Consequently, our method enables efficient and flexible search over a large dataset with a simple query language.
Octopus: A Design Methodology for Motion Capture Wearables.
Marin, Javier; Blanco, Teresa; Marin, Jose J
2017-08-15
Human motion capture (MoCap) is widely recognised for its usefulness and application in different fields, such as health, sports, and leisure; therefore, its inclusion in current wearables (MoCap-wearables) is increasing, and it may be very useful in a context of intelligent objects interconnected with each other and to the cloud in the Internet of Things (IoT). However, capturing human movement adequately requires addressing difficult-to-satisfy requirements, which means that the applications that are possible with this technology are held back by a series of accessibility barriers, some technological and some regarding usability. To overcome these barriers and generate products with greater wearability that are more efficient and accessible, factors are compiled through a review of publications and market research. The result of this analysis is a design methodology called Octopus, which ranks these factors and schematises them. Octopus provides a tool that can help define design requirements for multidisciplinary teams, generating a common framework and offering a new method of communication between them.
No Evidence for Impaired Perception of Biological Motion in Adults with Autistic Spectrum Disorders
ERIC Educational Resources Information Center
Murphy, Patrick; Brady, Nuala; Fitzgerald, Michael; Troje, Nikolaus F.
2009-01-01
A central feature of autistic spectrum disorders (ASDs) is a difficulty in identifying and reading human expressions, including those present in the moving human form. One previous study, by Blake et al. (2003), reports decreased sensitivity for perceiving biological motion in children with autism, suggesting that perceptual anomalies underlie…
NASA Astrophysics Data System (ADS)
Shi, Zhong; Huang, Xuexiang; Hu, Tianjian; Tan, Qian; Hou, Yuzhuo
2016-10-01
Space teleoperation is an important space technology, and human-robot motion similarity can improve the flexibility and intuition of space teleoperation. This paper aims to obtain an appropriate kinematics mapping method of coupled Cartesian-joint space for space teleoperation. First, the coupled Cartesian-joint similarity principles concerning kinematics differences are defined. Then, a novel weighted augmented Jacobian matrix with a variable coefficient (WAJM-VC) method for kinematics mapping is proposed. The Jacobian matrix is augmented to achieve a global similarity of human-robot motion. A clamping weighted least norm scheme is introduced to achieve local optimizations, and the operating ratio coefficient is variable to pursue similarity in the elbow joint. Similarity in Cartesian space and the property of joint constraint satisfaction is analysed to determine the damping factor and clamping velocity. Finally, a teleoperation system based on human motion capture is established, and the experimental results indicate that the proposed WAJM-VC method can improve the flexibility and intuition of space teleoperation to complete complex space tasks.
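A minimal sketch of the general weighted, damped least-norm redundancy resolution that augmented-Jacobian methods of this kind build on: joint velocities are obtained from a task-space velocity through a weighted, damped pseudo-inverse of the Jacobian. The Jacobian, weight matrix, and damping value are hypothetical, and the paper's variable coefficient and clamping scheme are not reproduced here.

```python
import numpy as np

def weighted_damped_ik(J, xdot, W, damping=0.05):
    """Map a task-space velocity xdot to joint velocities qdot, penalising joints
    according to the weight matrix W (larger weight = less motion), with damping
    to stay well-behaved near singular configurations."""
    W_inv = np.linalg.inv(W)
    JW = J @ W_inv @ J.T
    return W_inv @ J.T @ np.linalg.solve(JW + damping**2 * np.eye(J.shape[0]), xdot)

# Hypothetical 7-DOF arm Jacobian (6x7) and a desired end-effector twist.
rng = np.random.default_rng(8)
J = rng.standard_normal((6, 7))
xdot = np.array([0.1, 0.0, -0.05, 0.0, 0.02, 0.0])
W = np.diag([1, 1, 1, 4, 1, 1, 1])       # e.g., discourage motion of the 4th (elbow-like) joint
qdot = weighted_damped_ik(J, xdot, W)
print("joint velocity norm:", float(np.linalg.norm(qdot)),
      "task error:", float(np.linalg.norm(J @ qdot - xdot)))
```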
Limb locomotion--speed distribution analysis as a new method for stance phase detection.
Peham, C; Scheidl, M; Licka, T
1999-10-01
The stance phase is used for the determination of many parameters in motion analysis. In this technical note the authors present a new kinematical method for determination of stance phase. From the high-speed video data, the speed distribution of the horizontal motion of the distal limb is calculated. The speed with the maximum occurrence within the motion cycle defines the stance phase, and this speed is used as threshold for beginning and end of the stance phase. In seven horses the results obtained with the presented method were compared to synchronous stance phase determination using a force plate integrated in a hard track. The mean difference between the results was 10.8 ms, equalling 1.44% of mean stance phase duration. As a test, the presented method was applied to a horse trotting on the treadmill, and to a human walking on concrete. This article describes an easy and safe method for stance phase determination in continuous kinematic data and proves the reliability of the method by comparing it to kinetic stance phase detection. This method may be applied in several species and all gaits, on the treadmill and on firm ground.
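A minimal sketch of the speed-distribution idea in this note: build a histogram of the horizontal speed of a distal-limb marker, take the most frequently occurring speed as the threshold, and flag the frames at or below it as stance candidates. The synthetic trajectory, sampling rate, and bin count are assumptions, not the authors' data or parameters.

```python
import numpy as np

def stance_phase_mask(horizontal_pos, fs, bins=50):
    """Flag candidate stance-phase frames from the horizontal position of a distal-limb marker.
    The most frequently occurring speed (histogram mode) is used as the threshold."""
    speed = np.abs(np.gradient(horizontal_pos, 1.0 / fs))
    counts, edges = np.histogram(speed, bins=bins)
    mode_speed = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    return speed <= mode_speed, mode_speed

# Hypothetical hoof/foot trajectory: near-zero speed in stance, ~2 m/s swing phases.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
swing = (np.sin(2 * np.pi * 1.5 * t) > 0.3).astype(float)      # crude stride rhythm
pos = np.cumsum(swing * 2.0 / fs) + 0.0005 * np.random.default_rng(9).standard_normal(t.size)
mask, thr = stance_phase_mask(pos, fs)
print(f"threshold = {thr:.3f} m/s, flagged stance fraction = {mask.mean():.2f}")
```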
Oguz, Ozgur S; Zhou, Zhehua; Glasauer, Stefan; Wollherr, Dirk
2018-04-03
Human motor control is highly efficient in generating accurate and appropriate motor behavior for a multitude of tasks. This paper examines how kinematic and dynamic properties of the musculoskeletal system are controlled to achieve such efficiency. Even though recent studies have shown that human motor control relies on multiple internal models, how the central nervous system (CNS) controls their combination is not fully understood. In this study, we utilize an Inverse Optimal Control (IOC) framework to find the combination of those internal models and how this combination changes for different reaching tasks. We conducted an experiment where participants executed a comprehensive set of free-space reaching motions. The results show that there is a trade-off between kinematics- and dynamics-based controllers depending on the reaching task. In addition, this trade-off depends on the initial and final arm configurations, which in turn affect the musculoskeletal load to be controlled. Given this insight, we further provide a discomfort metric to demonstrate its influence on the contribution of different inverse internal models. This formulation, together with our analysis, not only supports the multiple internal models (MIMs) hypothesis but also suggests a hierarchical framework for the control of human reaching motions by the CNS.
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1, do not contribute to target-tracking performance in an in-flight refuelling simulation without training, experiment 2. In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable, performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals
Czuba, Thaddeus B.; Cormack, Lawrence K.; Huk, Alexander C.
2016-01-01
Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no “cross-cue” adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. SIGNIFICANCE STATEMENT Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how—or indeed if—these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms. PMID:27798134
2013-01-01
Background: A longitudinal repeated-measures design over pregnancy and post-birth, with a control group, would provide insight into the mechanical adaptations of the body under conditions of changing load during a common female human lifespan condition, while minimizing the influence of inter-individual differences. The objective was to investigate systematic changes in the range of motion of the pelvic and thoracic segments of the spine, the motion between these segments (thoracolumbar spine), and the temporospatial characteristics of step width, stride length, and velocity during walking as pregnancy progresses and post-birth. Methods: Nine pregnant women were investigated while walking along a walkway at a self-selected velocity using an 8-camera motion analysis system on four occasions throughout pregnancy and once post-birth. A control group of twelve non-pregnant nulliparous women was tested on three occasions over the same time period. The existence of linear trends for change was investigated. Results: As pregnancy progressed there was a significant linear trend for increase in step width (p = 0.05) and a significant linear trend for decrease in stride length (p = 0.05). Concurrently there was a significant linear trend for decrease in the range of motion of the pelvic segment (p = 0.03) and thoracolumbar spine (p = 0.01) about a vertical axis (side-to-side rotation), and in the pelvic segment (p = 0.04) range of motion around an anteroposterior axis (side tilt). Post-birth, step width readapted, whereas pelvic (p = 0.02) and thoracic (p < 0.001) segment flexion-extension range of motion decreased and increased, respectively. The magnitude of all changes was greater than that accounted for by natural variability with re-testing. Conclusions: As pregnancy progressed and post-birth, there were significant linear trends in biomechanical changes when walking at a self-determined natural speed that were greater than those accounted for by natural variability with repeated testing. Not all adaptations were resolved by eight weeks post-birth. PMID:23514204
Directional asymmetries in human smooth pursuit eye movements.
Ke, Sally R; Lam, Jessica; Pai, Dinesh K; Spering, Miriam
2013-06-27
Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. In conclusion, our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.
NASA Astrophysics Data System (ADS)
Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji
A prediction mechanism is necessary in human visual motion processing to compensate for delays in the sensory-motor system. In a previous study, "proactive control" was discussed as one example of the predictive function of human beings, in which the motion of the hand preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.
The Default Mode Network Differentiates Biological From Non-Biological Motion.
Dayan, Eran; Sella, Irit; Mukovskiy, Albert; Douek, Yehonatan; Giese, Martin A; Malach, Rafael; Flash, Tamar
2016-01-01
The default mode network (DMN) has been implicated in an array of social-cognitive functions, including self-referential processing, theory of mind, and mentalizing. Yet, the properties of the external stimuli that elicit DMN activity in relation to these domains remain unknown. Previous studies suggested that motion kinematics is utilized by the brain for social-cognitive processing. Here, we used functional MRI to examine whether the DMN is sensitive to parametric manipulations of observed motion kinematics. Preferential responses within core DMN structures differentiating non-biological from biological kinematics were observed for the motion of a realistically looking, human-like avatar, but not for an abstract object devoid of human form. Differences in connectivity patterns during the observation of biological versus non-biological kinematics were additionally observed. Finally, the results additionally suggest that the DMN is coupled more strongly with key nodes in the action observation network, namely the STS and the SMA, when the observed motion depicts human rather than abstract form. These findings are the first to implicate the DMN in the perception of biological motion. They may reflect the type of information used by the DMN in social-cognitive processing. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Modeling heading and path perception from optic flow in the case of independently moving objects
Raudies, Florian; Neumann, Heiko
2013-01-01
Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs. PMID:23554589
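A minimal sketch of a linear least-squares heading (focus-of-expansion) estimate of the kind used as a baseline above: for pure observer translation every flow vector points away from the FOE, so each vector contributes one line constraint on the FOE position. The synthetic flow field and noise level are assumptions; biases of the sort discussed in the abstract arise when a subset of vectors (an independently moving object) violates this rigid-scene model.

```python
import numpy as np

def estimate_foe(points, flow):
    """Least-squares focus of expansion from flow vectors at image points (pure translation):
    each flow vector must be parallel to (point - FOE), i.e. flow x (point - FOE) = 0."""
    A = np.column_stack([flow[:, 1], -flow[:, 0]])
    b = flow[:, 1] * points[:, 0] - flow[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic translational flow over a random depth map, plus measurement noise.
rng = np.random.default_rng(10)
foe_true = np.array([0.12, -0.05])                       # image-plane heading point
pts = rng.uniform(-1, 1, (800, 2))
inv_depth = rng.uniform(0.2, 1.0, 800)                   # 1/Z per point
flow = (pts - foe_true) * inv_depth[:, None] + 0.01 * rng.standard_normal((800, 2))
print("estimated FOE:", estimate_foe(pts, flow), "true FOE:", foe_true)
```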
Appearance-based human gesture recognition using multimodal features for human computer interaction
NASA Astrophysics Data System (ADS)
Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun
2011-03-01
The use of gesture as a natural interface plays an important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous work has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative, and positive meanings from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results show that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
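A minimal sketch contrasting the two fusion strategies described here, using hypothetical facial-expression and hand-motion feature matrices: feature-level fusion weights and concatenates the groups before an LDA-based classifier, while decision-level fusion combines weighted class scores from per-modality classifiers. The data, weights, and use of plain LDA (rather than the paper's condensation-based classifier) are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(11)
n, n_classes = 300, 12
y = rng.integers(0, n_classes, n)
face_feats = rng.standard_normal((n, 40)) + 0.6 * y[:, None] / n_classes   # hypothetical
hand_feats = rng.standard_normal((n, 60)) + 0.9 * y[:, None] / n_classes   # hypothetical
train, test = np.arange(n) < 200, np.arange(n) >= 200

# Feature-level (early) fusion: weight and concatenate the groups, then LDA.
w_face, w_hand = 0.4, 0.6
X_early = np.hstack([w_face * face_feats, w_hand * hand_feats])
early = LinearDiscriminantAnalysis().fit(X_early[train], y[train])

# Decision-level (late) fusion: per-modality classifiers, weighted sum of class posteriors.
clf_face = LinearDiscriminantAnalysis().fit(face_feats[train], y[train])
clf_hand = LinearDiscriminantAnalysis().fit(hand_feats[train], y[train])
scores = (w_face * clf_face.predict_proba(face_feats[test])
          + w_hand * clf_hand.predict_proba(hand_feats[test]))
late_pred = scores.argmax(axis=1)

print("early fusion accuracy:", early.score(X_early[test], y[test]))
print("late fusion accuracy:", float(np.mean(late_pred == y[test])))
```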
Kan, Andrey; Tan, Yan-Hong; Angrisano, Fiona; Hanssen, Eric; Rogers, Kelly L; Whitehead, Lachlan; Mollard, Vanessa P; Cozijnsen, Anton; Delves, Michael J; Crawford, Simon; Sinden, Robert E; McFadden, Geoffrey I; Leckie, Christopher; Bailey, James; Baum, Jake
2014-05-01
Motility is a fundamental part of cellular life and survival, including for Plasmodium parasites--single-celled protozoan pathogens responsible for human malaria. The motile life cycle forms achieve motility, called gliding, via the activity of an internal actomyosin motor. Although gliding is based on the well-studied system of actin and myosin, its core biomechanics are not completely understood. Currently accepted models suggest it results from a specifically organized cellular motor that produces a rearward directional force. When linked to surface-bound adhesins, this force is passaged to the cell posterior, propelling the parasite forwards. Gliding motility is observed in all three life cycle stages of Plasmodium: sporozoites, merozoites and ookinetes. However, it is only the ookinetes--formed inside the midgut of infected mosquitoes--that display continuous gliding without the necessity of host cell entry. This makes them ideal candidates for invasion-free biomechanical analysis. Here we apply a plate-based imaging approach to study ookinete motion in three-dimensional (3D) space to understand Plasmodium cell motility and how movement facilitates midgut colonization. Using single-cell tracking and numerical analysis of parasite motion in 3D, our analysis demonstrates that ookinetes move with a conserved left-handed helical trajectory. Investigation of cell morphology suggests this trajectory may be based on the ookinete subpellicular cytoskeleton, with complementary whole and subcellular electron microscopy showing that, like their motion paths, ookinetes share a conserved left-handed corkscrew shape and underlying twisted microtubular architecture. Through comparisons of 3D movement between wild-type ookinetes and a cytoskeleton-knockout mutant we demonstrate that perturbation of cell shape changes motion from helical to broadly linear. Therefore, while the precise linkages between cellular architecture and actomyosin motor organization remain unknown, our analysis suggests that the molecular basis of cell shape may, in addition to motor force, be a key adaptive strategy for malaria parasite dissemination and, as such, transmission. © 2014 The Authors. Cellular Microbiology published by John Wiley & Sons Ltd.
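A minimal sketch of how the handedness of a 3-D cell track can be quantified from tracking data, using the sign of the scalar triple product of successive coarse-grained displacement vectors (a discrete torsion); under the convention in the code, negative values correspond to left-handed turning. The helical track, noise level, and lag are synthetic assumptions, not the authors' pipeline.

```python
import numpy as np

def handedness(track, lag=1):
    """Mean sign of the scalar triple product of successive displacement vectors:
    positive ~ right-handed turning, negative ~ left-handed. 'lag' coarse-grains the
    track so the estimate is robust to frame-to-frame tracking noise."""
    d = (track[lag:] - track[:-lag])[::lag]          # non-overlapping displacement steps
    triple = np.einsum('ij,ij->i', np.cross(d[:-2], d[1:-1]), d[2:])
    return float(np.mean(np.sign(triple)))

# Synthetic left-handed helical trajectory (100 samples per turn) with tracking noise.
t = np.linspace(0, 12 * np.pi, 600)
left_helix = np.column_stack([np.cos(t), -np.sin(t), 0.15 * t])
left_helix += 0.02 * np.random.default_rng(12).standard_normal(left_helix.shape)
print("handedness score (negative = left-handed):", handedness(left_helix, lag=25))
```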
Individualistic weight perception from motion on a slope
Zintus-art, K.; Shin, D.; Kambara, H.; Yoshimura, N.; Koike, Y.
2016-01-01
Perception of an object’s weight is linked to its form and motion. Studies have shown the relationship between weight perception and motion in horizontal and vertical environments to be universally identical across subjects during passive observation. Here we show a contradicting finding in that not all humans share the same motion-weight pairing. A virtual environment where participants control the steepness of a slope was used to investigate the relationship between sliding motion and weight perception. Our findings showed that distinct, albeit subjective, motion-weight relationships in perception could be identified for slope environments. These individualistic perceptions were found when changes in environmental parameters governing motion were introduced, specifically inclination and surface texture. Differences in environmental parameters, combined with individual factors such as experience, affected participants’ weight perception. This phenomenon may offer evidence of the central nervous system’s ability to choose and combine internal models based on information from the sensory system. The results also point toward the possibility of controlling human perception by presenting strong sensory cues to manipulate the mechanisms managing internal models. PMID:27174036
In-plane and out-of-plane motions of the human tympanic membrane
Khaleghi, Morteza; Cheng, Jeffrey Tao; Furlong, Cosme; Rosowski, John J.
2016-01-01
Computer-controlled digital holographic techniques are developed and used to measure shape and four-dimensional nano-scale displacements of the surface of the tympanic membrane (TM) in cadaveric human ears in response to tonal sounds. The combination of these measurements (shape and sound-induced motions) allows the calculation of the out-of-plane (perpendicular to the surface) and in-plane (tangential) motion components at over 1 000 000 points on the TM surface with a high degree of accuracy and sensitivity. A general conclusion is that the in-plane motion components are 10–20 dB smaller than the out-of-plane motions. The measurement conditions are most often compromised with higher-frequency sound stimuli, where the overall displacements are smaller or the spatial density of holographic fringes is higher, both of which increase the uncertainty of the measurements. The results are consistent with the TM acting as a Kirchhoff–Love thin shell dominated by out-of-plane motion with little in-plane motion, at least with stimulus frequencies up to 8 kHz. PMID:26827009
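Combining shape and displacement data to separate out-of-plane from in-plane motion amounts to projecting each measured 3-D displacement onto the local surface normal. A minimal sketch of that decomposition follows; it assumes unit normals are already available from the shape measurement and is not the holographic processing code itself.

```python
import numpy as np

def decompose_displacement(d, n):
    """Split 3-D displacements into out-of-plane and in-plane parts.

    d : (..., 3) displacement vectors at points on the membrane surface.
    n : (..., 3) surface normals at the same points (from the shape measurement).
    Returns (out_of_plane_magnitude, in_plane_magnitude).
    """
    n = n / np.linalg.norm(n, axis=-1, keepdims=True)
    d_out = np.einsum('...i,...i->...', d, n)        # signed normal component
    d_in = d - d_out[..., None] * n                  # tangential remainder
    return np.abs(d_out), np.linalg.norm(d_in, axis=-1)

# A 10-20 dB difference corresponds to an in-plane/out-of-plane amplitude
# ratio of roughly 0.1-0.3, since ratio_dB = 20 * log10(in / out).
print(20 * np.log10(0.1))   # -20 dB
```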
Phase space reconstruction and estimation of the largest Lyapunov exponent for gait kinematic data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Josiński, Henryk; Świtoński, Adam
The authors describe an example of application of nonlinear time series analysis directed at identifying the presence of deterministic chaos in human motion data by means of the largest Lyapunov exponent. The method was previously verified on the basis of a time series constructed from the numerical solutions of both the Lorenz and the Rössler nonlinear dynamical systems.
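As a rough illustration of the approach, the sketch below combines a Takens time-delay embedding with a Rosenstein-style nearest-neighbour divergence fit to estimate the largest Lyapunov exponent of a scalar series. It is a simplified O(N²) reference implementation with placeholder default parameters, not the authors' verified code; the embedding dimension, delay, fit length and Theiler window must be chosen for the data at hand, and the result is in units of 1/sample.

```python
import numpy as np

def embed(x, dim, tau):
    """Takens time-delay embedding of a scalar series x."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def largest_lyapunov(x, dim=5, tau=10, fit_len=50, theiler=100):
    """Rosenstein-style estimate of the largest Lyapunov exponent (per sample)."""
    Y = embed(np.asarray(x, dtype=float), dim, tau)
    n = len(Y)
    # nearest neighbour of each point, excluding temporally close points
    dists = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    for i in range(n):
        lo, hi = max(0, i - theiler), min(n, i + theiler + 1)
        dists[i, lo:hi] = np.inf
    nn = np.argmin(dists, axis=1)
    # average log divergence of neighbour pairs as they evolve in time
    div = []
    for k in range(fit_len):
        valid = (np.arange(n) + k < n) & (nn + k < n)
        d = np.linalg.norm(Y[np.arange(n)[valid] + k] - Y[nn[valid] + k], axis=1)
        div.append(np.mean(np.log(d[d > 0])))
    # the slope of the divergence curve estimates the exponent
    return np.polyfit(np.arange(fit_len), div, 1)[0]
```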
NASA Technical Reports Server (NTRS)
Kirkpatrick, M.; Brye, R. G.
1974-01-01
A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
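The forced-choice threshold procedure rests on signal detection theory, in which sensitivity is summarized by d'. The helpers below show the standard conversions from proportion correct (two-alternative forced choice) and from hit/false-alarm rates (yes/no detection); they are generic textbook formulas, not the study's analysis code.

```python
from scipy.stats import norm

def dprime_2afc(p_correct):
    """Sensitivity index for a two-alternative forced-choice task:
    d' = sqrt(2) * Phi^-1(proportion correct)."""
    return 2 ** 0.5 * norm.ppf(p_correct)

def dprime_yes_no(hit_rate, false_alarm_rate):
    """Sensitivity index for a yes/no detection task: d' = z(H) - z(F)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

print(dprime_2afc(0.76))          # ~1.0, a commonly used threshold criterion
print(dprime_yes_no(0.8, 0.2))    # ~1.68
```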
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ter-Pogossian, M.M.; Bergmann, S.R.; Sobel, B.E.
1982-12-01
The potential influence of physiological, periodic motions of the heart due to the cardiac cycle, the respiratory cycle, or both on quantitative image reconstruction by positron emission tomography (PET) has been largely neglected. To define their quantitative impact, cardiac PET was performed in 6 dogs after injection of ¹¹C-palmitate under disparate conditions including normal cardiac and respiration cycles and cardiac arrest with and without respiration. Although in vitro assay of myocardial samples demonstrated that palmitate uptake was homogeneous (coefficient of variation of 10.1%), analysis of the reconstructed images demonstrated significant heterogeneity of the apparent cardiac distribution of radioactivity due to both intrinsic cardiac and respiratory motion. Image degradation due to respiratory motion was demonstrated in a healthy human volunteer as well, in whom cardiac tomography was performed with Super PETT I during breath-holding and during normal breathing. The results indicate that quantitatively significant degradation of reconstructions of true tracer distribution occurs in cardiac PET due to both intrinsic cardiac and respiratory-induced motion of the heart. They suggest that avoidance or minimization of these influences can be accomplished by gating with respect to both the cardiac cycle and respiration or by employing brief scan times during breath-holding.
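Gating with respect to the cardiac cycle, as the authors suggest, amounts to binning acquisition timestamps by their phase within each R-R interval. The sketch below illustrates such an assignment for a list of event or frame times; the function and its parameters are hypothetical, and respiratory gating would follow the same pattern using a respiration trace instead of ECG R-peaks.

```python
import numpy as np

def assign_cardiac_gates(event_times, r_peak_times, n_gates=8):
    """Assign each event timestamp to a cardiac gate (phase bin).

    event_times  : 1-D array of event or frame timestamps (s).
    r_peak_times : 1-D sorted array of ECG R-peak times (s).
    Returns an integer gate index in [0, n_gates), or -1 outside the ECG record.
    """
    idx = np.searchsorted(r_peak_times, event_times, side='right') - 1
    gates = np.full(len(event_times), -1, dtype=int)
    valid = (idx >= 0) & (idx < len(r_peak_times) - 1)
    cycle_len = r_peak_times[idx[valid] + 1] - r_peak_times[idx[valid]]
    phase = (event_times[valid] - r_peak_times[idx[valid]]) / cycle_len
    gates[valid] = np.minimum((phase * n_gates).astype(int), n_gates - 1)
    return gates
```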
Miniature low-power inertial sensors: promising technology for implantable motion capture systems.
Lambrecht, Joris M; Kirsch, Robert F
2014-11-01
Inertial and magnetic sensors are valuable for untethered, self-contained human movement analysis. Very recently, complete integration of inertial sensors, magnetic sensors, and processing into single packages has resulted in miniature, low-power devices that could feasibly be employed in an implantable motion capture system. We developed a wearable sensor system based on a commercially available system-in-package inertial and magnetic sensor. We characterized the accuracy of the system in measuring 3-D orientation (with and without magnetometer-based heading compensation) relative to a research-grade optical motion capture system. The root mean square error was less than 4° in dynamic and static conditions about all axes. Using four sensors, recording of seven degrees of freedom of the upper limb (shoulder, elbow, wrist) was demonstrated in one subject during reaching motions. Very high correlation and low error were found across all joints relative to the optical motion capture system. Findings were similar to those of previous publications using inertial sensors, but at a fraction of the power consumption and size of the sensors. Such ultra-small, low-power sensors provide exciting new avenues for movement monitoring in various movement disorders, movement-based command interfaces for assistive devices, and implementation of kinematic feedback systems for assistive interventions like functional electrical stimulation.
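Reporting a root-mean-square orientation error against an optical reference can be done by forming the relative rotation between the two orientation streams sample by sample. A minimal sketch using SciPy's Rotation class is given below; the quaternion ordering and the assumption that the two streams are time-aligned are mine, and this is not the authors' validation code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def orientation_rmse_deg(quat_sensor, quat_reference):
    """Root-mean-square angular error between two orientation streams.

    quat_sensor, quat_reference : (N, 4) arrays of unit quaternions in
    (x, y, z, w) order, e.g. the wearable sensor output and the optical
    motion capture reference, sampled at the same instants.
    """
    # relative rotation at each sample, then its rotation angle in degrees
    r_err = R.from_quat(quat_reference).inv() * R.from_quat(quat_sensor)
    ang = np.degrees(r_err.magnitude())
    return float(np.sqrt(np.mean(ang ** 2)))
```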
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Pan; Xi, Zhaoyong
Research highlights: Chemical synthesis of ¹⁵N/¹⁹F-trifluoromethyl phenylalanine. Site-specific incorporation of ¹⁵N/¹⁹F-trifluoromethyl phenylalanine into SH3. Site-specific backbone and side-chain chemical shift and relaxation analysis. Different internal motions at different sites of the SH3 domain upon ligand binding. -- Abstract: SH3 is a ubiquitous domain mediating protein-protein interactions. Recent solution NMR structural studies have shown that a proline-rich peptide is capable of binding to the human vinexin SH3 domain. Here, an orthogonal amber tRNA/tRNA synthetase pair for ¹⁵N/¹⁹F-trifluoromethyl-phenylalanine (¹⁵N/¹⁹F-tfmF) has been applied to achieve site-specific labeling of SH3 at three different sites. One-dimensional solution NMR spectra of backbone amide (¹⁵N)¹H and side-chain ¹⁹F were obtained for SH3 with three different site-specific labels. Site-specific backbone amide (¹⁵N)¹H and side-chain ¹⁹F chemical shift and relaxation analysis of SH3 in the absence or presence of a peptide ligand demonstrated different internal motions upon ligand binding at the three different sites. This site-specific NMR analysis might be very useful for studying large-sized proteins or protein complexes.
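Relaxation analysis of site-specifically labelled samples typically reduces to fitting peak intensities measured at several relaxation delays with a single-exponential decay. The sketch below shows such a generic fit; it is not tied to the specific ¹⁵N or ¹⁹F experiments of the study, and the starting values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_relaxation_rate(delays_s, intensities):
    """Fit a single-exponential decay I(t) = I0 * exp(-R * t) to peak
    intensities measured at a series of relaxation delays and return the
    relaxation rate R (s^-1). A generic fit, not a specific pulse sequence."""
    decay = lambda t, i0, r: i0 * np.exp(-r * t)
    (i0, r), _ = curve_fit(decay, np.asarray(delays_s, float),
                           np.asarray(intensities, float),
                           p0=(max(intensities), 1.0))
    return r
```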
Automated processing pipeline for neonatal diffusion MRI in the developing Human Connectome Project.
Bastiani, Matteo; Andersson, Jesper L R; Cordero-Grande, Lucilio; Murgasova, Maria; Hutter, Jana; Price, Anthony N; Makropoulos, Antonios; Fitzgibbon, Sean P; Hughes, Emer; Rueckert, Daniel; Victor, Suresh; Rutherford, Mary; Edwards, A David; Smith, Stephen M; Tournier, Jacques-Donald; Hajnal, Joseph V; Jbabdi, Saad; Sotiropoulos, Stamatios N
2018-05-28
The developing Human Connectome Project is set to create and make available to the scientific community a 4-dimensional map of functional and structural cerebral connectivity from 20 to 44 weeks post-menstrual age, to allow exploration of the genetic and environmental influences on brain development, and the relation between connectivity and neurocognitive function. A large set of multi-modal MRI data from fetuses and newborn infants is currently being acquired, along with genetic, clinical and developmental information. In this overview, we describe the neonatal diffusion MRI (dMRI) image processing pipeline and the structural connectivity aspect of the project. Neonatal dMRI data poses specific challenges, and standard analysis techniques used for adult data are not directly applicable. We have developed a processing pipeline that deals directly with neonatal-specific issues, such as severe motion and motion-related artefacts, small brain sizes, high brain water content and reduced anisotropy. This pipeline allows automated analysis of in-vivo dMRI data, probes tissue microstructure, reconstructs a number of major white matter tracts, and includes an automated quality control framework that identifies processing issues or inconsistencies. We here describe the pipeline and present an exemplar analysis of data from 140 infants imaged at 38-44 weeks post-menstrual age. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
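One ingredient of an automated quality-control framework is flagging volumes with excessive subject motion. The sketch below illustrates the idea on per-volume rigid-body motion estimates; the thresholds and inputs are placeholders of my own and do not reflect the dHCP pipeline's actual QC metrics.

```python
import numpy as np

def flag_high_motion_volumes(translations_mm, rotations_deg,
                             trans_thresh=2.0, rot_thresh=2.0):
    """Illustrative QC check: flag dMRI volumes whose estimated rigid-body
    motion relative to the previous volume exceeds a threshold.

    translations_mm : (N, 3) per-volume translation estimates.
    rotations_deg   : (N, 3) per-volume rotation estimates.
    Thresholds are arbitrary placeholders, not the pipeline's values.
    """
    dt = np.linalg.norm(np.diff(translations_mm, axis=0), axis=1)
    dr = np.linalg.norm(np.diff(rotations_deg, axis=0), axis=1)
    return np.where((dt > trans_thresh) | (dr > rot_thresh))[0] + 1
```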
Development of a parametric kinematic model of the human hand and a novel robotic exoskeleton.
Burton, T M W; Vaidyanathan, R; Burgess, S C; Turton, A J; Melhuish, C
2011-01-01
This paper reports the integration of a kinematic model of the human hand during cylindrical grasping, with specific focus on the accurate mapping of thumb movement during grasping motions, and a novel, multi-degree-of-freedom assistive exoskeleton mechanism based on this model. The model includes maximum thumb hyper-extension for grasping large objects (greater than approximately 50 mm). The exoskeleton includes a novel four-bar mechanism designed to reproduce natural thumb opposition and a novel synchro-motion pulley mechanism for coordinated finger motion. A computer-aided design environment is used to allow the exoskeleton to be rapidly customized to the hand dimensions of a specific patient. Trials comparing the kinematic model to observed data of hand movement show the model to be capable of mapping thumb and finger joint flexion angles during grasping motions. Simulations show the exoskeleton to be capable of reproducing the complex motion of the thumb to oppose the fingers during cylindrical and pinch grip motions. © 2011 IEEE
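A four-bar mechanism like the thumb-opposition linkage can be analysed by closing the vector loop numerically: given the input crank angle and the four link lengths, the rocker angle is the root of the coupler-length constraint. The sketch below solves this with a bracketing root finder; the link lengths in the example are illustrative and are not the exoskeleton's actual dimensions.

```python
import numpy as np
from scipy.optimize import brentq

def fourbar_output_angle(theta2, a, b, c, d):
    """Output (rocker) angle of a planar four-bar linkage, solved numerically.

    a: input crank, b: coupler, c: output rocker, d: ground link length.
    theta2: input crank angle (rad), measured at the ground pivot at the origin.
    Returns theta4 (rad) for the configuration in the upper half-plane.
    """
    crank_tip = np.array([a * np.cos(theta2), a * np.sin(theta2)])
    o4 = np.array([d, 0.0])                                   # output pivot

    def closure(theta4):
        rocker_tip = o4 + c * np.array([np.cos(theta4), np.sin(theta4)])
        return np.linalg.norm(rocker_tip - crank_tip) - b     # coupler constraint

    return brentq(closure, 1e-6, np.pi - 1e-6)

# Illustrative crank-rocker geometry (units arbitrary):
print(np.degrees(fourbar_output_angle(np.radians(60), a=2, b=7, c=5, d=8)))
```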
Human heart rate variability relation is unchanged during motion sickness
NASA Technical Reports Server (NTRS)
Mullen, T. J.; Berger, R. D.; Oman, C. M.; Cohen, R. J.
1998-01-01
In a study of 18 human subjects, we applied a new technique, estimation of the transfer function between instantaneous lung volume (ILV) and instantaneous heart rate (HR), to assess autonomic activity during motion sickness. Two control recordings of ILV and electrocardiogram (ECG) were made prior to the development of motion sickness. During the first, subjects were seated motionless, and during the second they were seated rotating sinusoidally about an earth vertical axis. Subjects then wore prism goggles that reverse the left-right visual field and performed manual tasks until they developed moderate motion sickness. Finally, ILV and ECG were recorded while subjects maintained a relatively constant level of sickness by intermittent eye closure during rotation with the goggles. Based on analyses of ILV to HR transfer functions from the three conditions, we were unable to demonstrate a change in autonomic control of heart rate due to rotation alone or due to motion sickness. These findings do not support the notion that moderate motion sickness is manifested as a generalized autonomic response.
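A transfer function between instantaneous lung volume and heart rate can be estimated with standard cross-spectral methods: H(f) = Pxy(f) / Pxx(f), with the coherence indicating where the estimate is reliable. The sketch below applies SciPy's Welch-based estimators to evenly resampled signals; it is a generic H1 estimator, not necessarily the authors' specific technique.

```python
import numpy as np
from scipy import signal

def ilv_hr_transfer_function(ilv, hr, fs, nperseg=256):
    """Estimate the transfer function from instantaneous lung volume (ILV)
    to instantaneous heart rate (HR).

    ilv, hr : evenly resampled signals of equal length; fs : sampling rate (Hz).
    Returns frequencies, complex H(f), and magnitude-squared coherence.
    """
    f, pxx = signal.welch(ilv, fs=fs, nperseg=nperseg)        # ILV auto-spectrum
    _, pxy = signal.csd(ilv, hr, fs=fs, nperseg=nperseg)      # cross-spectrum
    _, coh = signal.coherence(ilv, hr, fs=fs, nperseg=nperseg)
    return f, pxy / pxx, coh
```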
Computational modeling and analysis of the hydrodynamics of human swimming
NASA Astrophysics Data System (ADS)
von Loebbecke, Alfred
Computational modeling and simulations are used to investigate the hydrodynamics of competitive human swimming. The simulations employ an immersed boundary (IB) solver that allows us to simulate viscous, incompressible, unsteady flow past complex, moving/deforming three-dimensional bodies on stationary Cartesian grids. This study focuses on the hydrodynamics of the "dolphin kick". Three female and two male Olympic-level swimmers are used to develop kinematically accurate models of this stroke for the simulations. A simulation of a dolphin undergoing its natural swimming motion is also presented for comparison. CFD enables the calculation of flow variables throughout the domain and over the swimmer's body surface during the entire kick cycle. The feet are responsible for all thrust generation in the dolphin kick. Moreover, it is found that the down-kick (ventral position) produces more thrust than the up-kick. A quantity of interest to the swimming community is the drag of a swimmer in motion (active drag). Accurate estimates of this quantity have been difficult to obtain in experiments but are easily calculated with CFD simulations. Propulsive efficiencies of the human swimmers are found to be in the range of 11% to 30%. The dolphin simulation case has a much higher efficiency of 55%. Investigation of vortex structures in the wake indicates that the down-kick can produce a vortex ring with a jet of accelerated fluid flowing through its center. This vortex ring and the accompanying jet are the primary thrust-generating mechanisms in the human dolphin kick. In an attempt to understand the propulsive mechanisms of surface strokes, we have also conducted a computational analysis of two different styles of arm-pulls in the backstroke and the front crawl. These simulations involve only the arm, and no air-water interface is included. Two of the four strokes are specifically designed to take advantage of lift-based propulsion by undergoing lateral motions of the hand (sculling) and by orienting the palm obliquely to the flow. The focus of the current study is on quantifying the relative contributions of drag and lift to thrust production and using this as a basis for determining the relative effectiveness of the stroke styles.
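Quantifying the relative contributions of drag and lift to thrust reduces to decomposing the resultant hand force into components parallel and perpendicular to the local relative flow and projecting each onto the swimming direction. A minimal sketch of that bookkeeping follows; the vectors would come from the CFD solution, and the function itself is illustrative rather than the study's post-processing code.

```python
import numpy as np

def thrust_contributions(force, flow_dir, swim_dir):
    """Split a resultant hand force into drag and lift components and project
    each onto the swimming direction to compare their contributions to thrust.

    force    : (3,) resultant hydrodynamic force on the hand.
    flow_dir : (3,) direction of the local flow relative to the hand.
    swim_dir : (3,) swimming direction (thrust is the projection onto this axis).
    """
    flow_dir = flow_dir / np.linalg.norm(flow_dir)
    swim_dir = swim_dir / np.linalg.norm(swim_dir)
    drag = np.dot(force, flow_dir) * flow_dir        # component along the flow
    lift = force - drag                              # component normal to the flow
    return np.dot(drag, swim_dir), np.dot(lift, swim_dir)
```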