Computer modeling and simulation of human movement. Applications in sport and rehabilitation.
Neptune, R R
2000-05-01
Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sports equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult, but computer modeling and simulation allows these complex interactions, and the causal relationships between input and output variables, to be identified. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.
Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements
NASA Astrophysics Data System (ADS)
Sato, Naoyuki; Yamaguchi, Yoko
Subjects' episodic memory performance is not simply reflected by eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. Results demonstrate the ability of the model to predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.
NASA Astrophysics Data System (ADS)
Kajiwara, Yusuke; Murata, Hiroaki; Kimura, Haruhiko; Abe, Koji
As communication support tools for cases of amyotrophic lateral sclerosis (ALS), eye-gaze human-computer interfaces have been an active research area. However, since these interfaces cannot distinguish voluntary from involuntary eye movements, their performance is still not sufficient for practical use. This paper presents a high-performance human-computer interface system that unites high-quality recognition of horizontal directional eye movements and voluntary blinks. The experimental results show that the number of incorrect inputs is decreased by 35.1% relative to an existing system that recognizes horizontal and vertical directional eye movements in addition to voluntary blinks, and that character input is sped up by 17.4% over the existing system.
Choice of Human-Computer Interaction Mode in Stroke Rehabilitation.
Mousavi Hondori, Hossein; Khademi, Maryam; Dodakian, Lucy; McKenzie, Alison; Lopes, Cristina V; Cramer, Steven C
2016-03-01
Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse. Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor. Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version. Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients.
Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark
2007-12-01
To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. 
Such a method could be the basis for a potential brain-computer interface based on human natural movement, which might reduce the requirement of long-term training. Effective combinations of computational methods can classify human movement intention from single trial EEG with reasonable accuracy.
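The staged procedure described above (spatial filtering, temporal filtering, feature selection, and pattern classification) can be sketched on synthetic data. This is only an illustrative stand-in, with PCA for the spatial stage, Welch PSD estimation for the temporal stage, and a linear SVM as classifier; all signal parameters are invented:

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 128                                  # sampling rate [Hz]
t = np.arange(256) / fs

# Synthetic "EEG": 100 trials x 8 channels x 256 samples; class 1 trials
# carry extra beta-band (20 Hz) power on channel 0.
trials, labels = [], []
for i in range(100):
    trial = rng.normal(0.0, 1.0, (8, 256))
    if i % 2 == 1:
        trial[0] += 1.5 * np.sin(2 * np.pi * 20 * t)
    trials.append(trial)
    labels.append(i % 2)
X_raw, y = np.array(trials), np.array(labels)

# Temporal stage: Welch PSD per channel, averaged over the beta band.
feats = []
for trial in X_raw:
    f, pxx = welch(trial, fs=fs, nperseg=128, axis=-1)
    feats.append(pxx[:, (f >= 13) & (f <= 30)].mean(axis=-1))
feats = np.array(feats)

# Spatial/dimensionality-reduction stage (PCA) + linear SVM classifier.
feats_pca = PCA(n_components=4).fit_transform(feats)
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, feats_pca, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Because the synthetic beta-band difference is large, the cross-validated accuracy here is near ceiling; real single-trial EEG, as the abstract notes, yields far lower figures.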
Evaluation of an eye-pointer interaction device for human-computer interaction.
Cáceres, Enrique; Carrasco, Miguel; Ríos, Sebastián
2018-03-01
Advances in eye-tracking technology have led to better human-computer interaction, and involve controlling a computer without any kind of physical contact. This research describes the transformation of a commercial eye-tracker for use as an alternative peripheral device in human-computer interactions, implementing a pointer that only needs the eye movements of a user facing a computer screen, thus replacing the need to control the software by hand movements. The experiment was performed with 30 test individuals who used the prototype with a set of educational videogames. The results show that, although most of the test subjects would prefer a mouse to control the pointer, the prototype tested has an empirical precision similar to that of the mouse, either when trying to control its movements or when attempting to click on a point of the screen.
Ma, Yingliang; Paterson, Helena M; Pollick, Frank E
2006-02-01
We present the methods that were used in capturing a library of human movements for use in computer-animated displays of human movement. The library is an attempt to systematically tap into and represent the wide range of personal properties, such as identity, gender, and emotion, that are available in a person's movements. The movements from a total of 30 nonprofessional actors (15 of them female) were captured while they performed walking, knocking, lifting, and throwing actions, as well as their combination in angry, happy, neutral, and sad affective styles. From the raw motion capture data, a library of 4,080 movements was obtained, using techniques based on Character Studio (plug-ins for 3D Studio MAX, AutoDesk, Inc.), MATLAB (The MathWorks, Inc.), or a combination of these two. For the knocking, lifting, and throwing actions, 10 repetitions of the simple action unit were obtained for each affect, and for the other actions, two longer movement recordings were obtained for each affect. We discuss the potential use of the library for computational and behavioral analyses of movement variability, of human character animation, and of how gender, emotion, and identity are encoded and decoded from human movement.
Developing Educational Computer Animation Based on Human Personality Types
ERIC Educational Resources Information Center
Musa, Sajid; Ziatdinov, Rushan; Sozcu, Omer Faruk; Griffiths, Carol
2015-01-01
Computer animation in the past decade has become one of the most noticeable features of technology-based learning environments. By its definition, it refers to simulated motion pictures showing movement of drawn objects, and is often defined as the art in movement. Its educational application known as educational computer animation is considered…
Stanley, James; Gowen, Emma; Miall, R. Christopher
2010-01-01
Behavioural studies suggest that the processing of movement stimuli is influenced by beliefs about the agency behind these actions. The current study examined how activity in social and action related brain areas differs when participants were instructed that identical movement stimuli were either human or computer generated. Participants viewed a series of point-light animation figures derived from motion-capture recordings of a moving actor, while functional magnetic resonance imaging (fMRI) was used to monitor patterns of neural activity. The stimuli were scrambled to produce a range of stimulus realism categories; furthermore, before each trial participants were told that they were about to view either a recording of human movement or a computer-simulated pattern of movement. Behavioural results suggested that agency instructions influenced participants' perceptions of the stimuli. The fMRI analysis indicated different functions within the paracingulate cortex: ventral paracingulate cortex was more active for human compared to computer agency instructed trials across all stimulus types, whereas dorsal paracingulate cortex was activated more highly in conflicting conditions (human instruction, low realism or vice versa). These findings support the hypothesis that ventral paracingulate encodes stimuli deemed to be of human origin, whereas dorsal paracingulate cortex is involved more in the ascertainment of human or intentional agency during the observation of ambiguous stimuli. Our results highlight the importance of prior instructions or beliefs on movement processing and the role of the paracingulate cortex in integrating prior knowledge with bottom-up stimuli. PMID:20398769
Mala, S.; Latha, K.
2014-01-01
Activity recognition is needed in many applications, for example, surveillance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. For selecting a subset of features, Differential Evolution (DE), a very efficient evolutionary optimizer, is used to find informative features in eye movements recorded with electrooculography (EOG). Many researchers use EOG signals in human-computer interaction with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum-redundancy maximum-relevance features, and Differential Evolution based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification for accurate activity recognition. PMID:25574185
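DE-based feature selection of the kind described above can be sketched by letting SciPy's DE optimizer evolve continuous weights that are thresholded into a feature mask. The data, the nearest-centroid objective, and the per-feature penalty are all invented for illustration; this is not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)

# Synthetic feature data: 10 features, only the first 2 informative.
n = 200
y = np.repeat([0, 1], n // 2)
X = rng.normal(0.0, 1.0, (n, 10))
X[y == 1, 0] += 2.0
X[y == 1, 1] -= 2.0

def error_rate(weights):
    """Nearest-centroid error using features with weight > 0.5, plus a
    small per-feature penalty to favour compact subsets."""
    mask = weights > 0.5
    if not mask.any():
        return 1.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float(np.mean(pred != y)) + 0.01 * mask.sum()

result = differential_evolution(error_rate, bounds=[(0, 1)] * 10,
                                seed=2, maxiter=50, tol=1e-6)
selected = np.where(result.x > 0.5)[0]
print("selected features:", selected)
```

The penalty term trades classification error against subset size, so the optimizer is pushed toward the two informative features.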
Distinct timing mechanisms produce discrete and continuous movements.
Huys, Raoul; Studenka, Breanna E; Rheaume, Nicole L; Zelaznik, Howard N; Jirsa, Viktor K
2008-04-25
The differentiation of discrete and continuous movement is one of the pillars of motor behavior classification. Discrete movements have a definite beginning and end, whereas continuous movements do not have such discriminable end points. In the past decade there has been vigorous debate about whether this classification implies different control processes; until now, this debate has been empirically based. Here, we present an unambiguous non-empirical classification, based on theorems in dynamical system theory, that sets discrete and continuous movements apart. Through computational simulations of representative modes of each class and topological analysis of the flow in state space, we show that distinct control mechanisms underwrite discrete and fast rhythmic movements. In particular, we demonstrate that discrete movements require a time keeper while fast rhythmic movements do not. We validate our computational findings experimentally using a behavioral paradigm in which human participants performed finger flexion-extension movements at various movement paces and under different instructions. Our results demonstrate that the human motor system employs different timing control mechanisms (presumably via differential recruitment of neural subsystems) to accomplish varying behavioral functions such as speed constraints.
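The topological distinction the authors draw can be illustrated with two canonical flows: a point attractor, which comes to rest and needs an external "go" signal (a time keeper) to start each movement, and a limit cycle, which sustains its rhythm on its own. This is a minimal sketch with invented parameters, not the authors' actual models:

```python
import numpy as np

def simulate(f, x0, dt=0.001, steps=20000):
    """Euler-integrate dx/dt = f(x) from initial state x0."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x += dt * np.asarray(f(x))
        traj.append(x.copy())
    return np.array(traj)

# Point attractor (damped linear system): the movement ends at rest, so a
# new movement requires an external timing signal.
point = simulate(lambda x: [x[1], -10.0 * x[0] - 5.0 * x[1]], [1.0, 0.0])

# Limit cycle (van der Pol oscillator): a self-sustained rhythm that
# needs no time keeper.
mu = 2.0
cycle = simulate(lambda x: [x[1], mu * (1 - x[0] ** 2) * x[1] - x[0]],
                 [0.1, 0.0])

print("point attractor final amplitude:", np.abs(point[-1]).max())
print("limit cycle late amplitude:", np.abs(cycle[-5000:, 0]).max())
```

The first system decays to the fixed point regardless of initial condition, while the second grows from a small perturbation into a stable oscillation: the two classes of flow the abstract distinguishes.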
Iáñez, Eduardo; Azorin, Jose M.; Perez-Vidal, Carlos
2013-01-01
This paper describes a human-computer interface based on electro-oculography (EOG) that allows interaction with a computer using eye movement. The EOG registers the movement of the eye by measuring, through electrodes, the difference of potential between the cornea and the retina. A new pair of EOG glasses have been designed to improve the user's comfort and to remove the manual procedure of placing the EOG electrodes around the user's eye. The interface, which includes the EOG electrodes, uses a new processing algorithm that is able to detect the gaze direction and the blink of the eyes from the EOG signals. The system reliably enabled subjects to control the movement of a dot on a video screen. PMID:23843986
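A minimal sketch of how gaze direction and blinks might be separated in an EOG trace, using invented amplitudes and thresholds (the paper's actual processing algorithm is not reproduced here): saccades appear as sustained steps in the signal, blinks as large brief transients.

```python
import numpy as np

fs = 250                                   # sampling rate [Hz]
t = np.arange(0, 4, 1 / fs)
eog = np.zeros_like(t)
eog[t >= 1.0] += 200e-6                    # rightward saccade at t = 1 s
eog[t >= 2.0] -= 200e-6                    # leftward saccade at t = 2 s
eog += 400e-6 * np.exp(-((t - 3.0) ** 2) / (2 * 0.03 ** 2))  # blink at 3 s
eog += np.random.default_rng(3).normal(0, 5e-6, t.size)      # sensor noise

def detect_events(signal, fs, sacc_thresh=100e-6, blink_thresh=300e-6):
    """Label saccade direction from 100 ms step changes and blinks from
    large transient peaks (illustrative thresholds, not validated ones)."""
    win = int(0.1 * fs)                    # 100 ms in samples
    step = signal[win:] - signal[:-win]    # amplitude change over 100 ms
    events, i = [], 0
    while i < step.size:
        if signal[i:i + 2 * win].max() > blink_thresh:
            events.append(("blink", round(i / fs, 2)))
            i += 4 * win                   # skip the blink and its tail
        elif step[i] > sacc_thresh:
            events.append(("right", round(i / fs, 2)))
            i += win
        elif step[i] < -sacc_thresh:
            events.append(("left", round(i / fs, 2)))
            i += win
        else:
            i += 1
    return events

events = detect_events(eog, fs)
print(events)
```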
Gertz, Hanna; Hilger, Maximilian; Hegele, Mathias; Fiehler, Katja
2016-09-01
Previous studies have shown that beliefs about the human origin of a stimulus are capable of modulating the coupling of perception and action. Such beliefs can be based on top-down recognition of the identity of an actor or bottom-up observation of the behavior of the stimulus. Instructed human agency has been shown to lead to superior tracking performance of a moving dot as compared to instructed computer agency, especially when the dot followed a biological velocity profile and thus matched the predicted movement, whereas a violation of instructed human agency by a nonbiological dot motion impaired oculomotor tracking (Zwickel et al., 2012). This suggests that the instructed agency biases the selection of predictive models on the movement trajectory of the dot motion. The aim of the present fMRI study was to examine the neural correlates of top-down and bottom-up modulations of perception-action couplings by manipulating the instructed agency (human action vs. computer-generated action) and the observable behavior of the stimulus (biological vs. nonbiological velocity profile). To this end, participants performed an oculomotor tracking task in an MRI environment. Oculomotor tracking activated areas of the eye movement network. A right-hemisphere occipito-temporal cluster comprising the motion-sensitive area V5 showed a preference for the biological as compared to the nonbiological velocity profile. Importantly, a mismatch between instructed human agency and a nonbiological velocity profile primarily activated medial-frontal areas comprising the frontal pole, the paracingulate gyrus, and the anterior cingulate gyrus, as well as the cerebellum and the supplementary eye field as part of the eye movement network. This mismatch effect was specific to the instructed human agency and did not occur in conditions with a mismatch between instructed computer agency and a biological velocity profile. 
Our results support the hypothesis that humans activate a specific predictive model for biological movements based on their own motor expertise. A violation of this predictive model causes costs as the movement needs to be corrected in accordance with incoming (nonbiological) sensory information.
NASA Astrophysics Data System (ADS)
Soh, Ahmad Afiq Sabqi Awang; Jafri, Mohd Zubir Mat; Azraai, Nur Zaidi
2015-04-01
Interest in the study of human kinematics goes back very far in human history, driven by curiosity or by the need to understand the complexity of human body motion, and advances in computing technology now make it possible to obtain new and accurate information about human movement. The martial art silat was chosen and multiple types of movement were studied. This project used cutting-edge 3D motion capture technology to characterize and measure the motions performed by silat practitioners. The cameras detect infrared reflections from a total of 24 markers placed around the performer's body, which appear as dots in the computer software. The detected markers were analyzed using a kinematic-kinetic approach with time as the reference, and graphs of the position, velocity, and acceleration of each marker at time t (seconds) were plotted. From the information obtained, further parameters such as work done, momentum, and the center of mass of the body were determined mathematically. These data can be used to develop more effective movement in martial arts, contributing to its practitioners. Future work could extend this project to, for example, the analysis of martial arts competitions.
Biomechanical behavior of muscle-tendon complex during dynamic human movements.
Fukashiro, Senshi; Hay, Dean C; Nagano, Akinori
2006-05-01
This paper reviews the research findings regarding the force and length changes of the muscle-tendon complex during dynamic human movements, especially those using ultrasonography and computer simulation. The use of ultrasonography demonstrated that the tendinous structures of the muscle-tendon complex are compliant enough to influence the biomechanical behavior (length change, shortening velocity, and so on) of fascicles substantially. It was discussed that the fascicles are a force generator rather than a work generator; the tendinous structures function not only as an energy re-distributor but also as a power amplifier, and the interaction between fascicles and tendinous structures is essential for generating higher joint power outputs during the late pushoff phase in human vertical jumping. This phenomenon could be explained based on the force-length/velocity relationships of each element (contractile and series elastic elements) in the muscle-tendon complex during movements. Through computer simulation using a Hill-type muscle-tendon complex model, the benefit of making a countermovement was examined in relation to the compliance of the muscle-tendon complex and the length ratio between the contractile and series elastic elements. Also, the integral roles of the series elastic element were simulated in a cyclic human heel-raise exercise. It was suggested that the storage and reutilization of elastic energy by the tendinous structures play an important role in enhancing work output and movement efficiency in many sorts of human movements.
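The force-length/velocity relationships and the series elastic element discussed above can be sketched with a minimal Hill-type formulation. All parameter values and curve shapes below are illustrative choices, not values from the review:

```python
import numpy as np

F_MAX = 1500.0     # maximal isometric force [N] (illustrative value)
L_OPT = 0.055      # optimal fascicle length [m] (illustrative value)
SLACK = 0.24       # tendon slack length [m] (illustrative value)

def f_length(l_ce):
    """Gaussian force-length relation of the contractile element."""
    return np.exp(-((l_ce / L_OPT - 1.0) ** 2) / 0.45)

def f_velocity(v_ce):
    """Hill-type force-velocity relation (normalized; shortening is v < 0)."""
    v_max = 10.0 * L_OPT               # maximal shortening velocity [m/s]
    v = v_ce / v_max
    return np.where(v <= 0, (1 + v) / (1 - 3 * v),
                    1.5 - 0.5 * (1 - v) / (1 + 7.5 * v))

def tendon_force(l_see):
    """Quadratic toe-region spring for the series elastic element."""
    strain = np.maximum(l_see - SLACK, 0.0) / SLACK
    return F_MAX * (strain / 0.04) ** 2    # reaches F_MAX at 4% strain

iso = float(F_MAX * f_length(L_OPT) * f_velocity(0.0))
fast = float(F_MAX * f_length(L_OPT) * f_velocity(-0.5 * 10.0 * L_OPT))
see = float(tendon_force(SLACK * 1.04))
print(f"isometric: {iso:.0f} N, at 50% v_max: {fast:.0f} N, "
      f"SEE at 4% strain: {see:.0f} N")
```

The steep drop of force with shortening velocity is the sense in which fascicles act as force generators: a compliant tendon that shortens in their place lets them stay slow and strong.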
Learning rational temporal eye movement strategies.
Hoppe, David; Rothkopf, Constantin A
2016-07-19
During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
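The optimal Bayesian learner with scalar timing uncertainty can be sketched as grid-based inference in which the likelihood's standard deviation grows in proportion to the hypothesized duration (the scalar law). The Weber fraction, event time, and grid are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
WEBER = 0.15               # scalar-timing Weber fraction (illustrative)
TRUE_T = 1.2               # true event time [s]

grid = np.linspace(0.2, 3.0, 1000)    # hypotheses for the event time
log_post = np.zeros_like(grid)        # flat prior

for _ in range(50):
    obs = rng.normal(TRUE_T, WEBER * TRUE_T)   # noisy observed timing
    sigma = WEBER * grid                       # scalar law: sd grows with mean
    log_post += -0.5 * ((obs - grid) / sigma) ** 2 - np.log(sigma)
    log_post -= log_post.max()                 # keep numerically stable

post = np.exp(log_post)
post /= post.sum()
mean_est = float((grid * post).sum())
sd_est = float(np.sqrt(((grid - mean_est) ** 2 * post).sum()))
print(f"posterior mean: {mean_est:.3f} s, sd: {sd_est:.3f} s")
```

Over trials the posterior concentrates on the true event time, mirroring how the learning curves in the study are explained by Bayesian updating under scalar timing noise.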
NASA Astrophysics Data System (ADS)
Felton, E. A.; Radwin, R. G.; Wilson, J. A.; Williams, J. C.
2009-10-01
A brain-computer interface (BCI) is a communication system that takes recorded brain signals and translates them into real-time actions, in this case movement of a cursor on a computer screen. This work applied Fitts' law to the evaluation of performance on a target acquisition task during sensorimotor rhythm-based BCI training. Fitts' law, which has been used as a predictor of movement time in studies of human movement, was used here to determine the information transfer rate, which was based on target acquisition time and target difficulty. The information transfer rate was used to make comparisons between control modalities and subject groups on the same task. Data were analyzed from eight able-bodied and five motor disabled participants who wore an electrode cap that recorded and translated their electroencephalogram (EEG) signals into computer cursor movements. Direct comparisons were made between able-bodied and disabled subjects, and between EEG and joystick cursor control in able-bodied subjects. Fitts' law aptly described the relationship between movement time and index of difficulty for each task movement direction when evaluated separately and averaged together. This study showed that Fitts' law can be successfully applied to computer cursor movement controlled by neural signals.
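The Fitts' law quantities used above can be computed directly. The Shannon formulation of the index of difficulty is assumed here, and the task numbers are made up:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def information_transfer_rate(distance, width, movement_time):
    """Bits per second for a single target acquisition."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical cursor task: 8 cm to a 1 cm wide target, acquired in 1.5 s.
id_bits = index_of_difficulty(8.0, 1.0)
itr = information_transfer_rate(8.0, 1.0, 1.5)
print(f"ID = {id_bits:.2f} bits, ITR = {itr:.2f} bits/s")
```

Harder targets (larger distance, smaller width) raise the index of difficulty, so equal acquisition times imply higher information transfer.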
Eizicovits, Danny; Edan, Yael; Tabak, Iris; Levy-Tzedek, Shelly
2018-01-01
Effective human-robot interactions in rehabilitation necessitate an understanding of how these should be tailored to the needs of the human. We report on a robotic system developed as a partner on a 3-D everyday task, using a gamified approach. Our aims were to: (1) design and test a prototype system, to be ultimately used for upper-limb rehabilitation; (2) evaluate how age affects the response to such a robotic system; and (3) identify whether the robot's physical embodiment is an important aspect in motivating users to complete a set of repetitive tasks. 62 healthy participants, young (<30 yo) and old (>60 yo), played a 3D tic-tac-toe game against an embodied (a robotic arm) and a non-embodied (a computer-controlled lighting system) partner. To win, participants had to place three cups in sequence on a physical 3D grid. Cup picking-and-placing was chosen as a functional task that is often practiced in post-stroke rehabilitation. Movement of the participants was recorded using a Kinect camera. The timing of the participants' movement was primed by the response time of the system: participants moved slower when playing with the slower embodied system (p = 0.006). The majority of participants preferred the robot over the computer-controlled system. The slower response time of the robot compared to the computer-controlled system affected only the young group's motivation to continue playing. We demonstrated the feasibility of the system to encourage the performance of repetitive 3D functional movements, and to track these movements. Young and old participants preferred to interact with the robot, compared with the non-embodied system. We contribute to the growing knowledge concerning personalized human-robot interactions by (1) demonstrating the priming of the human movement by the robotic movement - an important design feature, and (2) identifying response speed as a design variable, the importance of which depends on the age of the user.
A computer-aided movement analysis system.
Fioretti, S; Leo, T; Pisani, E; Corradini, M L
1990-08-01
Interaction with biomechanical data concerning human movement analysis implies the adoption of various experimental equipments and the choice of suitable models, data processing, and graphical data restitution techniques. The integration of measurement setups with the associated experimental protocols and the relative software procedures constitutes a computer-aided movement analysis (CAMA) system. In the present paper such integration is mapped onto the causes that limit the clinical acceptance of movement analysis methods. The structure of the system is presented. A specific CAMA system devoted to posture analysis is described in order to show the attainable features. Scientific results obtained with the support of the described system are also reported.
Leadership in moving human groups.
Boos, Margarete; Pritz, Johannes; Lange, Simon; Belz, Michael
2014-04-01
How is the movement of individuals coordinated as a group? This is a fundamental question of social behaviour, encompassing phenomena such as bird flocking, fish schooling, and the innumerable activities in human groups that require people to synchronise their actions. We have developed an experimental paradigm, the HoneyComb computer-based multi-client game, to empirically investigate human movement coordination and leadership. Using economic games as a model, we set monetary incentives to motivate players on a virtual playfield to reach goals via their movements. We asked (I) whether humans coordinate their movements when information is limited to an individual group member's observation of adjacent group member motion, (II) whether an informed group minority can lead an uninformed group majority to the minority's goal, and, if so, (III) how this minority exerts its influence. We showed that in a human group--on the basis of movement alone--a minority can successfully lead a majority. Minorities lead successfully when (a) their members choose similar initial steps towards their goal field and (b) they are among the first in the whole group to make a move. Using our approach, we empirically demonstrate that the rules of swarming behaviour apply to humans. Even complex human behaviour, such as leadership and directed group movement, follows simple rules that are based on visual perception of local movement.
A Novel Computer-Based Set-Up to Study Movement Coordination in Human Ensembles
Alderisio, Francesco; Lombardi, Maria; Fiore, Gianfranco; di Bernardo, Mario
2017-01-01
Existing experimental works on movement coordination in human ensembles mostly investigate situations where each subject is connected to all the others through direct visual and auditory coupling, so that unavoidable social interaction affects their coordination level. Here, we present a novel computer-based set-up to study movement coordination in human groups so as to minimize the influence of social interaction among participants and implement different visual pairings between them. In so doing, players can only take into consideration the motion of a designated subset of the others. This allows the evaluation of the exclusive effects on coordination of the structure of interconnections among the players in the group and their own dynamics. In addition, our set-up enables the deployment of virtual computer players to investigate dyadic interaction between a human and a virtual agent, as well as group synchronization in mixed teams of human and virtual agents. We show how this novel set-up can be employed to study coordination both in dyads and in groups over different structures of interconnections, in the presence as well as in the absence of virtual agents acting as followers or leaders. Finally, in order to illustrate the capabilities of the architecture, we describe some preliminary results. The platform is available to any researcher who wishes to unfold the mechanisms underlying group synchronization in human ensembles and shed light on its socio-psychological aspects. PMID:28649217
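Group synchronization over different structures of interconnections, as studied with this set-up, is commonly modeled with Kuramoto phase oscillators. A sketch comparing all-to-all coupling against a ring of neighbours, with invented frequencies and coupling strength (not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 8
omega = rng.normal(1.0, 0.1, N)         # natural frequencies
theta0 = rng.uniform(0, 2 * np.pi, N)   # random initial phases

# Two interconnection structures: all-to-all vs a ring of neighbours.
A_full = np.ones((N, N)) - np.eye(N)
A_ring = np.zeros((N, N))
for i in range(N):
    A_ring[i, (i - 1) % N] = A_ring[i, (i + 1) % N] = 1

def simulate(adj, coupling=1.5, dt=0.01, steps=5000):
    """Kuramoto model: each player adjusts phase toward connected players."""
    theta = theta0.copy()
    degree = adj.sum(axis=1)
    for _ in range(steps):
        diff = np.sin(theta[None, :] - theta[:, None])  # theta_j - theta_i
        theta += dt * (omega + coupling / degree * (adj * diff).sum(axis=1))
    # Order parameter r in [0, 1]: 1 means perfect phase synchronization.
    return np.abs(np.exp(1j * theta).mean())

r_full, r_ring = simulate(A_full), simulate(A_ring)
print(f"all-to-all r = {r_full:.3f}, ring r = {r_ring:.3f}")
```

The order parameter r summarizes group-level coordination, so the effect of restricting each player's visual pairings can be read off directly by swapping the adjacency matrix.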
NASA Technical Reports Server (NTRS)
Badler, N. I.; Fishwick, P.; Taft, N.; Agrawala, M.
1985-01-01
The use of computer graphics to simulate the movement of articulated animals and mechanisms has a number of uses ranging over many fields. Human motion simulation systems can be useful in education, medicine, anatomy, physiology, and dance. In biomechanics, computer displays help to understand and analyze performance. Simulations can be used to help understand the effect of external or internal forces. Similarly, zero-gravity simulation systems should provide a means of designing and exploring the capabilities of hypothetical zero-gravity situations before actually carrying out such actions. The advantage of using a simulation of the motion is that one can experiment with variations of a maneuver before attempting to teach it to an individual. The zero-gravity motion simulation problem can be divided into two broad areas: human movement and behavior in zero-gravity, and simulation of articulated mechanisms.
Trends in Human-Computer Interaction to Support Future Intelligence Analysis Capabilities
2011-06-01
… that allows data to be moved between different computing systems and displays. [Figure 4: G-Speak gesture interaction (Oblong, 2011)] 5.2 Multitouch: Multitouch refers to a touchscreen interaction technique in which multiple simultaneous touchpoints and movements can be detected and used to … much of the style of interaction (such as rotate, pinch, zoom and flick movements) found in multitouch devices but can typically recognize more than …
Zörner, S.; Kaltenbacher, M.; Döllinger, M.
2013-01-01
In a partitioned approach to computational fluid–structure interaction (FSI), the coupling between fluid and structure demands substantial computational resources. A convenient alternative is therefore to reduce the problem to a pure flow simulation with a preset movement and appropriate boundary conditions. This work investigates the impact of replacing the fully-coupled interface condition with a one-way coupling. To continue to capture structural movement and its effect on the flow field, prescribed wall movements from separate simulations and/or measurements are used. As an appropriate test case, we apply the different coupling strategies to the human phonation process, which is a highly complex interaction of airflow through the larynx and structural vibration of the vocal folds (VF). We obtain vocal fold vibrations from a fully-coupled simulation and use them as input data for the simplified simulation, i.e. just solving the fluid flow. All computations are performed with our research code CFS++, which is based on the finite element (FE) method. The presented results show that a pure fluid simulation with prescribed structural movement can substitute for the fully-coupled approach. However, care must be taken to ensure accurate boundary conditions on the interface, and we found that only a pressure-driven flow responds correctly to the physical effects when using specified motion. PMID:24204083
NASA Astrophysics Data System (ADS)
Milekovic, Tomislav; Fischer, Jörg; Pistohl, Tobias; Ruescher, Johanna; Schulze-Bonhage, Andreas; Aertsen, Ad; Rickert, Jörn; Ball, Tonio; Mehring, Carsten
2012-08-01
A brain-machine interface (BMI) can be used to control movements of an artificial effector, e.g. movements of an arm prosthesis, by motor cortical signals that control the equivalent movements of the corresponding body part, e.g. arm movements. This approach has been successfully applied in monkeys and humans by accurately extracting parameters of movements from the spiking activity of multiple single neurons. We show that the same approach can be realized using brain activity measured directly from the surface of the human cortex using electrocorticography (ECoG). Five subjects, implanted with ECoG implants for the purpose of epilepsy assessment, took part in our study. Subjects used directionally dependent ECoG signals, recorded during active movements of a single arm, to control a computer cursor in one out of two directions. Significant BMI control was achieved in four out of five subjects with correct directional decoding in 69%-86% of the trials (75% on average). Our results demonstrate the feasibility of an online BMI using decoding of movement direction from human ECoG signals. Thus, to achieve such BMIs, ECoG signals might be used in conjunction with or as an alternative to intracortical neural signals.
NASA Astrophysics Data System (ADS)
Cheok, Adrian David
This chapter details the Human Pacman system to illuminate entertainment computing, which ventures to embed the natural physical world seamlessly within a fantasy virtual playground by capitalizing on infrastructure provided by mobile computing, wireless LAN, and ubiquitous computing. With Human Pacman, we have a physical role-playing computer fantasy combined with real human-social and mobile gaming that emphasizes collaboration and competition between players in a wide outdoor physical area allowing natural wide-area human physical movements. Pacmen and Ghosts are now real human players in the real world experiencing mixed computer-graphics fantasy-reality provided by the wearable computers they carry. Virtual cookies and actual tangible physical objects are incorporated into the game play to provide novel experiences of seamless transitions between the real and virtual worlds. This is an example of a new form of gaming that is anchored in physicality, mobility, social interaction, and ubiquitous computing.
Simplified model of mean double step (MDS) in human body movement
NASA Astrophysics Data System (ADS)
Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; Mugarra González, C. Fernando
In this paper we present a simplified and useful model of human body movement based on the full gait cycle description, called the Mean Double Step (MDS). It enables the parameterization and simplification of human movement. Furthermore, it allows a description of the gait cycle by providing standardized estimators that transform the gait cycle into a periodic movement process. Moreover, methods for simplifying and compressing the MDS model are demonstrated. The simplification is achieved by reducing the number of spectral lines and/or the number of samples describing the MDS, lowering both the computational burden and the data storage requirements. Our MDS model, which is applicable to the gait cycle method for examining patients, is non-invasive and provides the additional advantage of a functional characterization of the relative or absolute movement of any part of the body.
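A minimal sketch of the spectral-compression idea, assuming a simple harmonic-truncation scheme (the paper's exact estimators are not reproduced here): the periodic MDS signal is Fourier-transformed and only the first few harmonics are kept.

```python
import numpy as np

def compress_mds(signal, n_harmonics):
    """Compress a periodic gait signal (one mean double step) by keeping
    only its DC term and first n_harmonics Fourier components."""
    spectrum = np.fft.rfft(signal)
    truncated = np.zeros_like(spectrum)
    truncated[:n_harmonics + 1] = spectrum[:n_harmonics + 1]
    return np.fft.irfft(truncated, n=len(signal))

# Synthetic stand-in for a gait trace: a dominant 2-cycle component plus
# a small 7-cycle ripple (amplitudes are illustrative, not from the paper).
t = np.linspace(0, 1, 200, endpoint=False)
gait = 1.0 + 0.3 * np.sin(2 * np.pi * 2 * t) + 0.05 * np.sin(2 * np.pi * 7 * t)
approx = compress_mds(gait, 3)          # keep DC + first 3 harmonics
err = np.max(np.abs(gait - approx))     # only the dropped ripple remains
```

Dropping the 7-cycle ripple leaves a maximum reconstruction error equal to its amplitude (0.05), while storage shrinks from 200 samples to 4 complex coefficients.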
Understanding Visible Perception
NASA Technical Reports Server (NTRS)
2003-01-01
One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without having to affect the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible (above). Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that: Eye movement can predict human perceptual performance, smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and common factors can make both smooth pursuit and visual perception produce errors in motor responses.
Assisting Movement Training and Execution With Visual and Haptic Feedback.
Ewerton, Marco; Rother, David; Weimar, Jakob; Kollegger, Gerrit; Wiemeyer, Josef; Peters, Jan; Maeda, Guilherme
2018-01-01
In the practice of motor skills in general, errors in the execution of movements may go unnoticed when a human instructor is not available. In this case, a computer system or robotic device able to detect movement errors and propose corrections would be of great help. This paper addresses the problem of how to detect such execution errors and how to provide feedback to the human to correct his/her motor skill using a general, principled methodology based on imitation learning. The core idea is to compare the observed skill with a probabilistic model learned from expert demonstrations. The intensity of the feedback is regulated by the likelihood of the model given the observed skill. Based on demonstrations, our system can, for example, detect errors in the writing of characters with multiple strokes. Moreover, by using a haptic device, the Haption Virtuose 6D, we demonstrate a method to generate haptic feedback based on a distribution over trajectories, which could be used as an auxiliary means of communication between an instructor and an apprentice. Additionally, given a performance measurement, the haptic device can help the human discover and perform better movements to solve a given task. In this case, the human first tries a few times to solve the task without assistance. Our framework, in turn, uses a reinforcement learning algorithm to compute haptic feedback, which guides the human toward better solutions.
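The likelihood-regulated feedback can be sketched as follows, assuming a per-timestep Gaussian model of the expert demonstrations (the paper's actual probabilistic trajectory model is richer; names and scaling here are illustrative):

```python
import numpy as np

def feedback(demos, observed, alpha=1.0):
    """Corrective feedback toward the demonstrations' mean trajectory,
    scaled up when the observation is unlikely under the learned model."""
    mu = demos.mean(axis=0)            # (T,) mean expert trajectory
    sigma = demos.std(axis=0) + 1e-6   # (T,) per-timestep spread
    z = (observed - mu) / sigma        # standardized deviation
    likelihood = np.exp(-0.5 * z**2)   # unnormalized Gaussian likelihood
    return alpha * (1.0 - likelihood) * (mu - observed)

rng = np.random.default_rng(0)
demos = np.sin(np.linspace(0, np.pi, 50)) + 0.01 * rng.standard_normal((20, 50))
good = demos.mean(axis=0)              # an execution matching the experts
bad = good + 0.5                       # a consistently offset execution
f_good = feedback(demos, good)         # -> zero feedback everywhere
f_bad = feedback(demos, bad)           # -> strong feedback back toward mu
```

An execution that matches the model receives no correction, while an unlikely one is pushed back toward the demonstrated skill at nearly full gain.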
Banerjee, Jayeeta; Majumdar, Dhurjati; Majumdar, Deepti; Pal, Madhu Sudan
2010-06-01
We are experiencing a shift of media from the printed paper to the computer screen. This transition is modifying how we read and understand text. It is difficult to draw conclusions about the suitability of font characters from subjective evaluation alone. The present study evaluates the effect of font type on human cognitive workload during perception of individual alphabets on a computer screen. Twenty-six young subjects volunteered for this study. Subjects were shown individual characters of different font types while their eye movements were recorded with a binocular eye movement recorder. The results showed that eye movement parameters such as pupil diameter, number of fixations, and fixation duration were lowest for the font Verdana. The present study therefore recommends the font Verdana for presenting individual alphabets on electronic displays in order to reduce cognitive workload.
A computer vision-based system for monitoring Vojta therapy.
Khan, Muhammad Hassan; Helsper, Julien; Farid, Muhammad Shahid; Grzegorzek, Marcin
2018-05-01
A neurological illness is a disorder of the human nervous system that can result in various diseases, including motor disabilities. Neurological disorders may affect the motor neurons, which are associated with skeletal muscles and control body movement. Consequently, they give rise to diseases such as cerebral palsy, spinal scoliosis, peripheral paralysis of the arms/legs, hip joint dysplasia and various myopathies. Vojta therapy is considered a useful technique to treat motor disabilities. In Vojta therapy, a specific stimulation is applied to the patient's body to elicit certain reflexive pattern movements which the patient is unable to perform in a normal manner. The repetition of stimulation ultimately re-establishes the previously blocked connections between the spinal cord and the brain. After a few therapy sessions, the patient can perform these movements without external stimulation. In this paper, we propose a computer vision-based system to monitor the correct movements of the patient during therapy treatment using RGBD data. The proposed framework works in three steps. In the first step, the patient's body is automatically detected and segmented; two novel techniques are proposed for this purpose. In the second step, a multi-dimensional feature vector is computed to characterize the various movements of the patient's body during therapy. In the final step, a multi-class support vector machine is used to classify these movements. The experimental evaluation carried out on a large captured dataset shows that the proposed system is highly useful in monitoring the patient's body movements during Vojta therapy. Copyright © 2018 Elsevier B.V. All rights reserved.
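The final classification step could look roughly like this; since the paper's features and kernel are not given in the abstract, this stand-in trains minimal one-vs-rest linear SVMs by sub-gradient descent on synthetic "movement feature" clusters:

```python
import numpy as np

def train_ovr_svm(X, y, n_classes, lr=0.01, lam=0.01, epochs=200):
    """One-vs-rest linear SVMs via sub-gradient descent on the hinge loss."""
    n, d = X.shape
    W, b = np.zeros((n_classes, d)), np.zeros(n_classes)
    for c in range(n_classes):
        t = np.where(y == c, 1.0, -1.0)
        for _ in range(epochs):
            margins = t * (X @ W[c] + b[c])
            mask = margins < 1                      # hinge-loss violators
            W[c] += lr * ((t[mask, None] * X[mask]).sum(axis=0) - lam * W[c])
            b[c] += lr * t[mask].sum()
    return W, b

def predict(W, b, X):
    return np.argmax(X @ W.T + b, axis=1)           # highest-scoring class

# Toy stand-in for the movement feature vectors: three separated clusters,
# one per movement class (the paper's real features are multi-dimensional
# descriptors computed from RGBD data).
rng = np.random.default_rng(1)
centers = np.array([[0, 0], [4, 0], [0, 4]])
X = np.vstack([c + 0.3 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)
W, b = train_ovr_svm(X, y, 3)
acc = (predict(W, b, X) == y).mean()
```

In practice a library SVM with a tuned kernel would replace this hand-rolled trainer; the sketch only shows how a multi-class decision emerges from per-class scores.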
Real time eye tracking using Kalman extended spatio-temporal context learning
NASA Astrophysics Data System (ADS)
Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu
2017-06-01
Real time eye tracking has numerous applications in human computer interaction such as a mouse cursor control in a computer system. It is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real time eye tracking system. Our proposed system is an extension of Spatio-Temporal context learning through Kalman Filtering. Spatio-Temporal Context Learning offers state of the art accuracy in general object tracking but its performance suffers due to object occlusion. Addition of the Kalman filter allows the proposed method to model the dynamics of the motion of the eye and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time by eye movements.
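The role of the Kalman filter during occlusion can be sketched with a standard constant-velocity filter, assuming position-only measurements (the actual system couples such a filter with spatio-temporal context learning; all noise values here are illustrative):

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # constant-velocity motion model,
              [0, 1, 0, dt],   # state = [x, y, vx, vy]
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],    # only position is measured
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)           # process noise
R = 0.5 * np.eye(2)            # measurement noise

def kalman_step(x, P, z=None):
    """One predict(+update) cycle; pass z=None during a blink/occlusion."""
    x, P = F @ x, F @ P @ F.T + Q              # predict
    if z is not None:                          # update only when eye visible
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 0.0, 1.0, 0.0]), np.eye(4)
for t in range(1, 11):         # eye moves right 1 px/frame; blink at 5-6
    z = None if t in (5, 6) else np.array([float(t), 0.0])
    x, P = kalman_step(x, P, z)
```

Because the motion model keeps propagating the state while measurements are missing, the estimate coasts through the two-frame "blink" and stays on track, which is exactly the robustness the occluded-eye case needs.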
Multi-step EMG Classification Algorithm for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Ren, Peng; Barreto, Armando; Adjouadi, Malek
A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
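The first of the three classification principles (the channel with dominant energy indicates which muscle contracted) can be sketched as follows; the threshold and amplitudes are illustrative, and the full algorithm additionally uses spectral shape and cross-channel correlation:

```python
import numpy as np

def channel_energies(emg):
    """emg: (3, n_samples) array, one row per electrode."""
    return (emg ** 2).mean(axis=1)

def classify(emg, threshold=0.05):
    """Toy decision rule after the paper's first principle only: the
    dominant-energy channel identifies the contracting muscle."""
    e = channel_energies(emg)
    if e.max() < threshold:
        return None               # baseline activity: no intentional input
    return int(np.argmax(e))      # index of the dominant channel

rng = np.random.default_rng(2)
rest = 0.01 * rng.standard_normal((3, 500))    # quiet baseline, 3 channels
clench = rest.copy()
clench[1] += 0.5 * rng.standard_normal(500)    # burst on channel 1
```

`classify(rest)` yields no command, while the burst on channel 1 is picked out by its energy alone; the real system then disambiguates the five facial movements using the other two principles.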
A Theory of Eye Movements during Target Acquisition
ERIC Educational Resources Information Center
Zelinsky, Gregory J.
2008-01-01
The gaze movements accompanying target localization were examined via human observers and a computational model (target acquisition model [TAM]). Search contexts ranged from fully realistic scenes to toys in a crib to Os and Qs, and manipulations included set size, target eccentricity, and target-distractor similarity. Observers and the model…
A square root ensemble Kalman filter application to a motor-imagery brain-computer interface.
Kamrunnahar, M; Schiff, S J
2011-01-01
We here investigated a non-linear ensemble Kalman filter (SPKF) application to a motor imagery brain computer interface (BCI). A square root central difference Kalman filter (SR-CDKF) was used as an approach for brain state estimation in motor imagery task performance, using scalp electroencephalography (EEG) signals. Healthy human subjects imagined left vs. right hand movements and tongue vs. bilateral toe movements while scalp EEG signals were recorded. Offline data analysis was conducted for training the model as well as for decoding the imagery movements. Preliminary results indicate the feasibility of this approach with a decoding accuracy of 78%-90% for the hand movements and 70%-90% for the tongue-toes movements. Ongoing research includes online BCI applications of this approach as well as combined state and parameter estimation using this algorithm with different system dynamic models.
EYE MOVEMENT RECORDING AND NONLINEAR DYNAMICS ANALYSIS – THE CASE OF SACCADES#
Aştefănoaei, Corina; Pretegiani, Elena; Optican, L.M.; Creangă, Dorina; Rufa, Alessandra
2015-01-01
Evidence of a chaotic behavioral trend in eye movement dynamics was examined in the case of a saccadic temporal series collected from a healthy human subject. Saccades are high-velocity eye movements of very short duration; their recording is relatively accessible, so the resulting data series can be studied computationally to understand neural processing in a motor system. The aim of this study was to assess the degree of complexity in the eye movement dynamics. To do this we analyzed the saccadic temporal series recorded with an infrared camera eye tracker from a healthy human subject in a special experimental arrangement which provides continuous records of eye position, both saccades (eye shifting movements) and fixations (focusing over regions of interest, with rapid, small fluctuations). The semi-quantitative approach used in this paper to study eye functioning from the viewpoint of non-linear dynamics was accomplished with computational tests (power spectrum, portrait in the state space and its fractal dimension, Hurst exponent and largest Lyapunov exponent) derived from chaos theory. A highly complex dynamical trend was found. The largest Lyapunov exponent test suggested bi-stability of the cellular membrane resting potential during the saccadic experiment. PMID:25698889
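The first of the listed computational tests, the power spectrum, might be sketched like this on a synthetic stand-in trace (the study's actual eye recordings are not reproduced; the sampling rate and frequencies are assumptions):

```python
import numpy as np

def dominant_frequency(series, fs):
    """Return the frequency (Hz) carrying the most power in the series,
    via a simple periodogram with the mean removed."""
    spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(len(series), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Stand-in for an eye-position trace: a slow 3 Hz oscillation plus small
# fixational noise, sampled at a typical eye-tracker rate.
fs = 500.0
t = np.arange(0, 2, 1 / fs)
trace = np.sin(2 * np.pi * 3 * t) \
    + 0.01 * np.random.default_rng(3).standard_normal(len(t))
f = dominant_frequency(trace, fs)
```

A broadband (rather than sharply peaked) spectrum on real saccadic data is what first hints at the complex, possibly chaotic dynamics the other tests then quantify.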
Wang, W; Degenhart, A D; Collinger, J L; Vinjamuri, R; Sudre, G P; Adelson, P D; Holder, D L; Leuthardt, E C; Moran, D W; Boninger, M L; Schwartz, A B; Crammond, D J; Tyler-Kabara, E C; Weber, D J
2009-01-01
In this study human motor cortical activity was recorded with a customized micro-ECoG grid during individual finger movements. The quality of the recorded neural signals was characterized in the frequency domain from three different perspectives: (1) coherence between neural signals recorded from different electrodes, (2) modulation of neural signals by finger movement, and (3) accuracy of finger movement decoding. It was found that, for the high frequency band (60-120 Hz), coherence between neighboring micro-ECoG electrodes was 0.3. In addition, the high frequency band showed significant modulation by finger movement both temporally and spatially, and a classification accuracy of 73% (chance level: 20%) was achieved for individual finger movement using neural signals recorded from the micro-ECoG grid. These results suggest that the micro-ECoG grid presented here offers sufficient spatial and temporal resolution for the development of minimally-invasive brain-computer interface applications.
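The movement-related modulation of the high frequency band can be illustrated with a simple band-power ratio on synthetic data (the sampling rate and amplitudes are illustrative assumptions, not values from the paper):

```python
import numpy as np

def band_power(x, fs, lo=60.0, hi=120.0):
    """Mean periodogram power of x in the [lo, hi] Hz band
    (the paper's high frequency band is 60-120 Hz)."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].mean()

# Synthetic channel: broadband noise, with extra high-gamma power added
# "during movement".
fs, n = 1000.0, 2000
rng = np.random.default_rng(4)
rest = rng.standard_normal(n)
move = rest + 2.0 * np.sin(2 * np.pi * 90 * np.arange(n) / fs)
modulation = band_power(move, fs) / band_power(rest, fs)
```

A movement/rest band-power ratio well above 1 is the kind of spatially and temporally specific modulation the micro-ECoG grid exposed per electrode.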
NASA Astrophysics Data System (ADS)
Tsuruoka, Masako; Shibasaki, Ryosuke; Box, Elgene O.; Murai, Shunji; Mori, Eiji; Wada, Takao; Kurita, Masahiro; Iritani, Makoto; Kuroki, Yoshikatsu
1994-08-01
In medical rehabilitation science, quantitative understanding of patient movement in 3-D space is very important. A patient with any joint disorder will experience its influence on other body parts in daily movement, and the alignment of joints in movement can improve under a medical therapy process. In this study, the newly developed system is composed of two non-metric CCD video cameras and a force plate sensor, controlled simultaneously by a personal computer. Time-series digital data from 3-D image photogrammetry, together with foot pressure and its center position, provide efficient information for biomechanical and mathematical analysis of human movement. Specific and common points are identified in each patient's movement. This study suggests a broader quantitative understanding in medical rehabilitation science.
Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng
2013-08-01
Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.
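A minimal sketch of eight-direction classification from horizontal and vertical EOG deflections, assuming simple amplitude thresholds (the device's actual features and thresholds are not specified in the abstract):

```python
def classify_eog(h, v, thresh=50.0):
    """Map horizontal (h) and vertical (v) EOG deflections to one of the
    eight directions, or None when neither channel crosses the threshold.
    The microvolt scale and the 50 uV threshold are illustrative."""
    horiz = "right" if h > thresh else "left" if h < -thresh else ""
    vert = "up" if v > thresh else "down" if v < -thresh else ""
    if not horiz and not vert:
        return None                       # no saccade detected
    return f"{vert}-{horiz}" if horiz and vert else (vert or horiz)

print(classify_eog(-100.0, 120.0))        # -> "up-left"
```

Diagonal gazes fall out naturally when both channels cross threshold at once; a real classifier would add blink rejection and per-user calibration on top of this.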
Eye Tracking and Head Movement Detection: A State-of-Art Survey
2013-01-01
Eye-gaze detection and tracking have been an active research field in the past years as it adds convenience to a variety of applications. It is considered a significant untraditional method of human computer interaction. Head movement detection has also received researchers' attention and interest as it has been found to be a simple and effective interaction method. Both technologies are considered the easiest alternative interface methods. They serve a wide range of severely disabled people who are left with minimal motor abilities. For both eye tracking and head movement detection, several different approaches have been proposed and used to implement different algorithms for these technologies. Despite the amount of research done on both technologies, researchers are still trying to find robust methods to use effectively in various applications. This paper presents a state-of-art survey for eye tracking and head movement detection methods proposed in the literature. Examples of different fields of applications for both technologies, such as human-computer interaction, driving assistance systems, and assistive technologies are also investigated. PMID:27170851
Eye/Brain/Task Testbed And Software
NASA Technical Reports Server (NTRS)
Janiszewski, Thomas; Mainland, Nora; Roden, Joseph C.; Rothenheber, Edward H.; Ryan, Arthur M.; Stokes, James M.
1994-01-01
Eye/brain/task (EBT) testbed records electroencephalograms, movements of eyes, and structures of tasks to provide comprehensive data on neurophysiological experiments. Intended to serve continuing effort to develop means for interactions between human brain waves and computers. Software library associated with testbed provides capabilities to recall collected data, to process data on movements of eyes, to correlate eye-movement data with electroencephalographic data, and to present data graphically. Cognitive processes investigated in ways not previously possible.
Design of a Virtual Player for Joint Improvisation with Humans in the Mirror Game
Zhai, Chao; Alderisio, Francesco; Słowiński, Piotr; Tsaneva-Atanasova, Krasimira; di Bernardo, Mario
2016-01-01
Joint improvisation is often observed among humans performing joint action tasks. Exploring the underlying cognitive and neural mechanisms behind the emergence of joint improvisation is an open research challenge. This paper investigates jointly improvised movements between two participants in the mirror game, a paradigmatic joint task example. First, experiments involving movement coordination of different dyads of human players are performed in order to build a human benchmark. No designation of leader and follower is given beforehand. We find that joint improvisation is characterized by the lack of a leader and high levels of movement synchronization. Then, a theoretical model is proposed to capture some features of their interaction, and a set of experiments is carried out to test and validate the model ability to reproduce the experimental observations. Furthermore, the model is used to drive a computer avatar able to successfully improvise joint motion with a human participant in real time. Finally, a convergence analysis of the proposed model is carried out to confirm its ability to reproduce joint movements between the participants. PMID:27123927
NASA Technical Reports Server (NTRS)
Badler, N. I.
1985-01-01
Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.
Implicit prosody mining based on the human eye image capture technology
NASA Astrophysics Data System (ADS)
Gao, Pei-pei; Liu, Feng
2013-08-01
Eye tracking has become one of the main methods for analyzing recognition issues in human-computer interaction, and human eye image capture is the key problem in eye tracking. Building on this, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of Implicit Prosody mining based on human eye image capture technology: parameters are extracted from images of the eyes during reading to control and drive prosody generation in speech synthesis, establishing a prosodic model with high simulation accuracy. The duration model is a key issue for prosody generation. For it, this paper puts forward a new idea: obtaining the gaze duration of the eyes during reading via eye image capture, and synchronously controlling this duration and the pronunciation duration in speech synthesis. The movement of the human eyes during reading is a comprehensive multi-factor interactive process involving gaze, twitching (saccades) and backsight (regressions). Therefore, which information to extract from the eye images must be considered, and the gaze regularity of the eyes must be obtained as a reference for modeling. Based on an analysis of three current eye movement control models and the characteristics of Implicit Prosody reading, the relative independence between the text speech processing system and the eye movement control system is discussed. It is shown that, at the same level of text familiarity, gaze duration during reading and internal voice pronunciation duration are synchronous. An eye gaze duration model based on the prosodic structure of the Chinese language is presented to replace previous machine learning and probability forecasting methods, capture readers' real internal reading rhythm, and synthesize speech with personalized rhythm. This research enriches the forms of human-computer interaction and has practical significance and application prospects for assistive speech interaction for the disabled. Experiments show that Implicit Prosody mining based on human eye image capture technology gives the synthesized speech more flexible expression.
Vicary, Staci; Sperling, Matthias; von Zimmermann, Jorina; Richardson, Daniel C; Orgs, Guido
2017-01-01
Synchronized movement is a ubiquitous feature of dance and music performance. Much research into the evolutionary origins of these cultural practices has focused on why humans perform rather than watch or listen to dance and music. In this study, we show that movement synchrony among a group of performers predicts the aesthetic appreciation of live dance performances. We developed a choreography that continuously manipulated group synchronization using a defined movement vocabulary based on arm swinging, walking and running. The choreography was performed live to four audiences, as we continuously tracked the performers' movements, and the spectators' affective responses. We computed dynamic synchrony among performers using cross recurrence analysis of data from wrist accelerometers, and implicit measures of arousal from spectators' heart rates. Additionally, a subset of spectators provided continuous ratings of enjoyment and perceived synchrony using tablet computers. Granger causality analyses demonstrate predictive relationships between synchrony, enjoyment ratings and spectator arousal, if audiences form a collectively consistent positive or negative aesthetic evaluation. Controlling for the influence of overall movement acceleration and visual change, we show that dance communicates group coordination via coupled movement dynamics among a group of performers. Our findings are in line with an evolutionary function of dance, and perhaps all performing arts, in transmitting social signals between groups of people. Human movement is the common denominator of dance, music and theatre. Acknowledging the time-sensitive and immediate nature of the performer-spectator relationship, our study makes a significant step towards an aesthetics of joint actions in the performing arts.
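The synchrony measure can be sketched with an unembedded cross-recurrence plot, scoring recurrence along the line of synchrony (the study's full cross recurrence analysis works on embedded accelerometer data; the radius here is illustrative):

```python
import numpy as np

def crp(a, b, radius):
    """Cross-recurrence plot: CRP[i, j] = 1 when performer A at time i is
    within `radius` of performer B at time j (no embedding, for brevity)."""
    return (np.abs(a[:, None] - b[None, :]) < radius).astype(int)

def synchrony(a, b, radius=0.1):
    """Recurrence rate along the main diagonal (the line of synchrony)."""
    return np.diagonal(crp(a, b, radius)).mean()

# Two toy "wrist acceleration" traces: one pair moving identically,
# one pair moving in opposition.
t = np.linspace(0, 4 * np.pi, 200)
in_phase = synchrony(np.sin(t), np.sin(t))      # same movement
anti_phase = synchrony(np.sin(t), -np.sin(t))   # opposed movement
```

Scoring only the diagonal captures moment-to-moment synchrony; full cross-recurrence quantification would also measure off-diagonal structure such as lagged leading and following.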
Movement Coordination during Conversation
Latif, Nida; Barbosa, Adriano V.; Vatikiotis-Bateson, Eric; Castelhano, Monica S.; Munhall, K. G.
2014-01-01
Behavioral coordination and synchrony contribute to a common biological mechanism that maintains communication, cooperation and bonding within many social species, such as primates and birds. Similarly, human language and social systems may also be attuned to coordination to facilitate communication and the formation of relationships. Gross similarities in movement patterns and convergence in the acoustic properties of speech have already been demonstrated between interacting individuals. In the present studies, we investigated how coordinated movements contribute to observers’ perception of affiliation (friends vs. strangers) between two conversing individuals. We used novel computational methods to quantify motor coordination and demonstrated that individuals familiar with each other coordinated their movements more frequently. Observers used coordination to judge affiliation between conversing pairs but only when the perceptual stimuli were restricted to head and face regions. These results suggest that observed movement coordination in humans might contribute to perceptual decisions based on availability of information to perceivers. PMID:25119189
A Computer Graphics Human Figure Application Of Biostereometrics
NASA Astrophysics Data System (ADS)
Fetter, William A.
1980-07-01
A study of improved computer graphic representation of the human figure is being conducted under a National Science Foundation grant. Special emphasis is given to biostereometrics as a primary data base from which applications requiring a variety of levels of detail may be prepared. For example, a human figure represented by a single point can be very useful in overview plots of a population. A crude ten point figure can be adequate for queuing theory studies and simulated movement of groups. A one hundred point figure can usefully be animated to achieve different overall body activities including male and female figures. A one thousand point figure, similarly animated, begins to be useful in anthropometrics and kinesiology for gross body movements. Extrapolations of this order-of-magnitude approach ultimately should achieve very complex data bases and a program which automatically selects the correct level of detail for the task at hand. See Summary Figure 1.
Eye Movements During Everyday Behavior Predict Personality Traits.
Hoppe, Sabrina; Loetscher, Tobias; Morey, Stephanie A; Bulling, Andreas
2018-01-01
Besides allowing us to perceive our surroundings, eye movements are also a window into our mind and a rich source of information on who we are, how we feel, and what we do. Here we show that eye movements during an everyday task predict aspects of our personality. We tracked eye movements of 42 participants while they ran an errand on a university campus and subsequently assessed their personality traits using well-established questionnaires. Using a state-of-the-art machine learning method and a rich set of features encoding different eye movement characteristics, we were able to reliably predict four of the Big Five personality traits (neuroticism, extraversion, agreeableness, conscientiousness) as well as perceptual curiosity only from eye movements. Further analysis revealed new relations between previously neglected eye movement characteristics and personality. Our findings demonstrate a considerable influence of personality on everyday eye movement control, thereby complementing earlier studies in laboratory settings. Improving automatic recognition and interpretation of human social signals is an important endeavor, enabling innovative design of human-computer systems capable of sensing spontaneous natural user behavior to facilitate efficient interaction and personalization.
A Novel Wearable Forehead EOG Measurement System for Human Computer Interfaces
Heo, Jeong; Yoon, Heenam; Park, Kwang Suk
2017-01-01
Amyotrophic lateral sclerosis (ALS) patients whose voluntary muscles are paralyzed commonly communicate with the outside world using eye movement. There have been many efforts to support this method of communication by tracking or detecting eye movement. An electrooculogram (EOG), an electro-physiological signal, is generated by eye movements and can be measured with electrodes placed around the eye. In this study, we proposed a new practical electrode position on the forehead to measure EOG signals, and we developed a wearable forehead EOG measurement system for use in Human Computer/Machine interfaces (HCIs/HMIs). Four electrodes, including the ground electrode, were placed on the forehead. The two channels were arranged vertically and horizontally, sharing a positive electrode. Additionally, a real-time eye movement classification algorithm was developed based on the characteristics of the forehead EOG. Three applications were employed to evaluate the proposed system: a virtual keyboard using a modified Bremen BCI speller and an automatic sequential row-column scanner, and a drivable power wheelchair. The mean typing speeds of the modified Bremen brain–computer interface (BCI) speller and automatic row-column scanner were 10.81 and 7.74 letters per minute, and the mean classification accuracies were 91.25% and 95.12%, respectively. In the power wheelchair demonstration, the user drove the wheelchair through an 8-shape course without collision with obstacles. PMID:28644398
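The real-time classification step can be caricatured as an amplitude-threshold rule on the two forehead channels. The threshold, signal polarities, and labels below are assumptions for illustration only; the paper's algorithm is more elaborate (blink detection, drift handling, etc.):

```python
import numpy as np

def classify_eog(h_ch, v_ch, thresh=100.0):
    """Toy direction classifier for one epoch of two-channel forehead EOG
    (amplitudes in microvolts). Thresholds, polarities and labels are
    illustrative assumptions, not the published algorithm."""
    h_amp, v_amp = np.ptp(h_ch), np.ptp(v_ch)   # peak-to-peak amplitudes
    if max(h_amp, v_amp) < thresh:
        return "rest"
    if h_amp >= v_amp:                          # horizontal channel dominates
        peak = h_ch[np.argmax(np.abs(h_ch))]
        return "right" if peak > 0 else "left"
    peak = v_ch[np.argmax(np.abs(v_ch))]        # vertical channel dominates
    return "up" if peak > 0 else "down"

quiet = np.zeros(100)
saccade = np.concatenate([np.zeros(40), 250 * np.ones(20), np.zeros(40)])
```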
Making IBM's Computer, Watson, Human
Rachlin, Howard
2012-01-01
This essay uses the recent victory of an IBM computer (Watson) in the TV game, Jeopardy, to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered and rejected. The viewpoint of the essay is that of teleological behaviorism. Mental states are defined as temporally extended patterns of overt behavior. From this viewpoint (although Watson does not currently have them), essential human attributes such as consciousness, the ability to love, to feel pain, to sense, to perceive, and to imagine may all be possessed by a computer. Most crucially, a computer may possess self-control and may act altruistically. However, the computer's appearance, its ability to make specific movements, its possession of particular internal structures (e.g., whether those structures are organic or inorganic), and the presence of any nonmaterial “self,” are all incidental to its humanity. PMID:22942530
Contrast and assimilation in motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-09-01
The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
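The two computations can be written as a toy contrast/assimilation pair; the gain and averaging weight below are illustrative, not the values fitted in the study:

```python
def perceived_velocity(target, context, gain=1.0):
    # Perception: context motion is subtracted from target motion (motion contrast).
    return target - gain * context

def pursuit_velocity(target, context, weight=0.5):
    # Steady-state pursuit: target and context velocities are averaged
    # (motion assimilation).
    return (1 - weight) * target + weight * context

target, context = 10.0, 4.0   # deg/s, context drifting in the same direction
perceived = perceived_velocity(target, context)   # 6.0: target appears slower
pursuit = pursuit_velocity(target, context)       # 7.0: eye is pulled toward context
```

The same pair of input signals thus yields opposite-signed context effects for perception and pursuit, which is the paper's central dissociation.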
Shen, Guohua; Zhang, Jing; Wang, Mengxing; Lei, Du; Yang, Guang; Zhang, Shanmin; Du, Xiaoxia
2014-06-01
Multivariate pattern classification analysis (MVPA) has been applied to functional magnetic resonance imaging (fMRI) data to decode brain states from spatially distributed activation patterns. Decoding upper limb movements from non-invasively recorded human brain activation is crucial for implementing a brain-machine interface that directly harnesses an individual's thoughts to control external devices or computers. The aim of this study was to decode the individual finger movements from fMRI single-trial data. Thirteen healthy human subjects participated in a visually cued delayed finger movement task, and only one slight button press was performed in each trial. Using MVPA, the decoding accuracy (DA) was computed separately for the different motor-related regions of interest. For the construction of feature vectors, the feature vectors from two successive volumes in the image series for a trial were concatenated. With these spatial-temporal feature vectors, we obtained a 63.1% average DA (84.7% for the best subject) for the contralateral primary somatosensory cortex and a 46.0% average DA (71.0% for the best subject) for the contralateral primary motor cortex; both of these values were significantly above the chance level (20%). In addition, we implemented searchlight MVPA to search for informative regions in an unbiased manner across the whole brain. Furthermore, by applying searchlight MVPA to each volume of a trial, we visually demonstrated the information for decoding, both spatially and temporally. The results suggest that the non-invasive fMRI technique may provide informative features for decoding individual finger movements and the potential of developing an fMRI-based brain-machine interface for finger movement.
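The decoding scheme (spatial-temporal feature vectors from two concatenated volumes, classification against a 20% chance level) can be sketched on synthetic data. A leave-one-out nearest-centroid classifier stands in here for the study's MVPA classifier; all data and dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def loo_nearest_centroid(X, y):
    """Leave-one-out decoding accuracy with a nearest-centroid classifier,
    a simple stand-in for the MVPA classifier used in the study."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        classes = np.unique(y[mask])
        centroids = np.array([X[mask][y[mask] == c].mean(axis=0) for c in classes])
        pred = classes[np.argmin(np.linalg.norm(centroids - X[i], axis=1))]
        correct += int(pred == y[i])
    return correct / len(y)

# Synthetic "trials": 5 fingers x 20 trials. Two successive fMRI volumes of
# n_voxels each are concatenated into one spatial-temporal feature vector.
n_voxels, n_trials = 30, 20
patterns = rng.standard_normal((5, 2 * n_voxels))          # one pattern per finger
X = np.vstack([p + 0.8 * rng.standard_normal((n_trials, 2 * n_voxels))
               for p in patterns])
y = np.repeat(np.arange(5), n_trials)

acc = loo_nearest_centroid(X, y)   # chance level is 1/5 = 0.2
```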
Alderton, Simon; Noble, Jason; Schaten, Kathrin; Welburn, Susan C; Atkinson, Peter M
2015-01-01
In this research, an agent-based model (ABM) was developed to generate human movement routes between homes and water resources in a rural setting, given commonly available geospatial datasets on population distribution, land cover and landscape resources. ABMs are an object-oriented computational approach to modelling a system, focusing on the interactions of autonomous agents, and aiming to assess the impact of these agents and their interactions on the system as a whole. An A* pathfinding algorithm was implemented to produce walking routes, given data on the terrain in the area. A* is an extension of Dijkstra's algorithm with an enhanced time performance through the use of heuristics. In this example, it was possible to impute daily activity movement patterns to the water resource for all villages in a 75 km long study transect across the Luangwa Valley, Zambia, and the simulated human movements were statistically similar to empirical observations on travel times to the water resource (Chi-squared, 95% confidence interval). This indicates that it is possible to produce realistic data regarding human movements without costly measurement as is commonly achieved, for example, through GPS, or retrospective or real-time diaries. The approach is transferable between different geographical locations, and the product can be useful in providing an insight into human movement patterns, and therefore has use in many human exposure-related applications, specifically epidemiological research in rural areas, where spatial heterogeneity in the disease landscape, and space-time proximity of individuals, can play a crucial role in disease spread.
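The A* routing step can be sketched on a toy terrain grid. The grid, traversal costs, obstacle, and Manhattan heuristic below are illustrative assumptions, not the model's actual terrain data:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; cells hold traversal cost (None = impassable).
    The Manhattan-distance heuristic is admissible for unit-cost moves."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] is not None:
                ng = g + grid[r][c]
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None

terrain = [
    [1, 1,    1,    1],
    [1, None, None, 1],   # e.g. impassable ground blocking direct travel
    [1, 1,    1,    1],
]
route = astar(terrain, (0, 0), (2, 3))   # home to water resource
```

Dijkstra's algorithm is the special case with a zero heuristic; the heuristic is what gives A* its time-performance advantage noted in the abstract.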
Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.
Lee, Leng-Feng; Umberger, Brian R
2016-01-01
Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. 
This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility.
Eye-movements and Voice as Interface Modalities to Computer Systems
NASA Astrophysics Data System (ADS)
Farid, Mohsen M.; Murtagh, Fionn D.
2003-03-01
We investigate the visual and vocal modalities of interaction with computer systems. We focus our attention on the integration of visual and vocal interface as possible replacement and/or additional modalities to enhance human-computer interaction. We present a new framework for employing eye gaze as a modality of interface. While voice commands, as means of interaction with computers, have been around for a number of years, integration of both the vocal interface and the visual interface, in terms of detecting user's eye movements through an eye-tracking device, is novel and promises to open the horizons for new applications where a hand-mouse interface provides little or no apparent support to the task to be accomplished. We present an array of applications to illustrate the new framework and eye-voice integration.
Automatic prediction of tongue muscle activations using a finite element model.
Stavness, Ian; Lloyd, John E; Fels, Sidney
2012-11-15
Computational modeling has improved our understanding of how muscle forces are coordinated to generate movement in musculoskeletal systems. Muscular-hydrostat systems, such as the human tongue, involve very different biomechanics than musculoskeletal systems, and modeling efforts to date have been limited by the high computational complexity of representing continuum-mechanics. In this study, we developed a computationally efficient tracking-based algorithm for prediction of muscle activations during dynamic 3D finite element simulations. The formulation uses a local quadratic-programming problem at each simulation time-step to find a set of muscle activations that generated target deformations and movements in finite element muscular-hydrostat models. We applied the technique to a 3D finite element tongue model for protrusive and bending movements. Predicted muscle activations were consistent with experimental recordings of tongue strain and electromyography. Upward tongue bending was achieved by recruitment of the superior longitudinal sheath muscle, which is consistent with muscular-hydrostat theory. Lateral tongue bending, however, required recruitment of contralateral transverse and vertical muscles in addition to the ipsilateral margins of the superior longitudinal muscle, which is a new proposition for tongue muscle coordination. Our simulation framework provides a new computational tool for systematic analysis of muscle forces in continuum-mechanics models that is complementary to experimental data and shows promise for eliciting a deeper understanding of human tongue function.
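The per-time-step tracking idea can be caricatured in one line of linear algebra: for a linear torque model, the minimum-norm activations that reproduce a target generalized force are given by the pseudoinverse, clipped to the physiological range [0, 1]. The moment arms and target below are invented for illustration and are far simpler than the paper's finite element quadratic program:

```python
import numpy as np

def track_activations(R, tau_target):
    """Minimum-norm activations reproducing a target torque:
    solve min ||a||^2 s.t. R a = tau, then clip a to [0, 1].
    A toy stand-in for the per-time-step quadratic program."""
    a = np.linalg.pinv(R) @ tau_target
    return np.clip(a, 0.0, 1.0)

# One joint driven by two synergist muscles with moment arms 2 and 1
# (arbitrary units, illustrative only).
R = np.array([[2.0, 1.0]])
tau = np.array([1.0])            # desired joint torque
a = track_activations(R, tau)    # -> [0.4, 0.2], favoring the stronger lever
torque = R @ a                   # reproduces the target
```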
Feature Interactions Enable Decoding of Sensorimotor Transformations for Goal-Directed Movement
Barany, Deborah A.; Della-Maggiore, Valeria; Viswanathan, Shivakumar; Cieslak, Matthew
2014-01-01
Neurophysiology and neuroimaging evidence shows that the brain represents multiple environmental and body-related features to compute transformations from sensory input to motor output. However, it is unclear how these features interact during goal-directed movement. To investigate this issue, we examined the representations of sensory and motor features of human hand movements within the left-hemisphere motor network. In a rapid event-related fMRI design, we measured cortical activity as participants performed right-handed movements at the wrist, with either of two postures and two amplitudes, to move a cursor to targets at different locations. Using a multivoxel analysis technique with rigorous generalization tests, we reliably distinguished representations of task-related features (primarily target location, movement direction, and posture) in multiple regions. In particular, we identified an interaction between target location and movement direction in the superior parietal lobule, which may underlie a transformation from the location of the target in space to a movement vector. In addition, we found an influence of posture on primary motor, premotor, and parietal regions. Together, these results reveal the complex interactions between different sensory and motor features that drive the computation of sensorimotor transformations. PMID:24828640
NASA Astrophysics Data System (ADS)
Biess, Armin
2013-01-01
The study of the kinematic and dynamic features of human arm movements provides insights into the computational strategies underlying human motor control. In this paper a differential geometric approach to movement control is taken by endowing arm configuration space with different non-Euclidean metric structures to study the predictions of the generalized minimum-jerk (MJ) model in the resulting Riemannian manifold for different types of human arm movements. For each metric space the solution of the generalized MJ model is given by reparametrized geodesic paths. This geodesic model is applied to a variety of motor tasks ranging from three-dimensional unconstrained movements of a four degree of freedom arm between pointlike targets to constrained movements where the hand location is confined to a surface (e.g., a sphere) or a curve (e.g., an ellipse). For the latter speed-curvature relations are derived depending on the boundary conditions imposed (periodic or nonperiodic) and the compatibility with the empirical one-third power law is shown. Based on these theoretical studies and recent experimental findings, I argue that geodesics may be an emergent property of the motor system and that the sensorimotor system may shape arm configuration space by learning metric structures through sensorimotor feedback.
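In the Euclidean special case, the generalized MJ model reduces to the classic straight-line minimum-jerk profile, which is easy to verify numerically. This is a sketch of that special case only, not the paper's Riemannian formulation:

```python
import numpy as np

def min_jerk(x0, xf, T, t):
    """Minimum-jerk position profile between x0 and xf over duration T
    (the Euclidean special case of the generalized MJ model)."""
    tau = t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # normalized quintic
    return x0 + (xf - x0) * s

t = np.linspace(0.0, 1.0, 101)
x = min_jerk(0.0, 1.0, 1.0, t)
v = np.gradient(x, t)   # bell-shaped speed profile, peak 15/8 at mid-movement
```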
Modelling and monitoring of passive control structures in human movement
NASA Astrophysics Data System (ADS)
Hemami, Hooshang; Hemami, Mahmoud
2014-09-01
Passive tissues, ligaments and cartilage are vital to human movement. Their contribution to stability, joint function and joint integrity is essential. The articulation of their functions and quantitative assessment of what they do in a healthy or injured state are important in athletics, orthopaedics, medicine and health. In this paper, the role of cartilage and ligaments in stability of natural contacts, connections and joints is articulated by including them in two very simple skeletal systems: one- and three-link rigid body systems. Based on the Newton-Euler equations, a state space presentation of the dynamics is discussed that allows inclusion of ligament and cartilage structures in the model, and allows for Lyapunov stability studies for the original and reduced systems. The connection constraints may be holonomic and non-holonomic depending on the structure of the passive elements. The development is pertinent to the eventual design of a computational framework for the study of human movement that involves computer models of all the relevant skeletal, neural and physiological elements of the central nervous system (CNS). Such a structure also permits testing of different hypotheses about the functional neuroanatomy of the CNS, and the study of the effects and dynamics of disease, deterioration, aging and injuries. The formulation here is applied to one- and three-link systems. Digital computer simulations of a two rigid body system are presented to demonstrate the feasibility and effectiveness of the approach and the methods.
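A state-space version of the one-link case with a passive elastic element can be sketched as follows. All parameter values are illustrative, not taken from the paper, and the passive structure is reduced to a linear spring and damper at the joint:

```python
import numpy as np

def simulate_one_link(theta0, k=5.0, b=0.8, m=1.0, L=0.5, g=9.81,
                      dt=0.001, T=5.0):
    """Forward-Euler simulation of a one-link rigid body whose joint carries
    a passive elastic element (stiffness k) and damping b, a toy version of
    the ligament/cartilage structures in the paper's state-space models."""
    I = m * L**2 / 3.0                  # uniform rod, inertia about the joint
    th, thd = theta0, 0.0               # state: angle and angular velocity
    for _ in range(int(T / dt)):
        # Newton-Euler: gravity torque plus passive elastic and damping torques
        thdd = (-m * g * (L / 2) * np.sin(th) - k * th - b * thd) / I
        th, thd = th + dt * thd, thd + dt * thdd
    return th, thd

th, thd = simulate_one_link(theta0=0.5)   # settles toward equilibrium
```

The damped passive elements make the upright equilibrium Lyapunov-stable, which is the kind of stability contribution the abstract attributes to ligaments and cartilage.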
Hybrid soft computing systems for electromyographic signals analysis: a review.
Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates
2014-02-03
Electromyography (EMG) is a bio-signal collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. A hybrid soft computing system (HSCS), the integration of these different techniques, aims to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combination of these techniques, and their other applications in EMG analysis.
Wu, Howard G.
2013-01-01
The planning of goal-directed movements is highly adaptable; however, the basic mechanisms underlying this adaptability are not well understood. Even the features of movement that drive adaptation are hotly debated, with some studies suggesting remapping of goal locations and others suggesting remapping of the movement vectors leading to goal locations. However, several previous motor learning studies and the multiplicity of the neural coding underlying visually guided reaching movements stand in contrast to this either/or debate on the modes of motor planning and adaptation. Here we hypothesize that, during visuomotor learning, the target location and movement vector of trained movements are separately remapped, and we propose a novel computational model for how motor plans based on these remappings are combined during the control of visually guided reaching in humans. To test this hypothesis, we designed a set of experimental manipulations that effectively dissociated the effects of remapping goal location and movement vector by examining the transfer of visuomotor adaptation to untrained movements and movement sequences throughout the workspace. The results reveal that (1) motor adaptation differentially remaps goal locations and movement vectors, and (2) separate motor plans based on these features are effectively averaged during motor execution. We then show that, without any free parameters, the computational model we developed for combining movement-vector-based and goal-location-based planning predicts nearly 90% of the variance in novel movement sequences, even when multiple attributes are simultaneously adapted, demonstrating for the first time the ability to predict how motor adaptation affects movement sequence planning. PMID:23804099
Action perception as hypothesis testing.
Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni
2017-04-01
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing.
Application of high-performance computing to numerical simulation of human movement
NASA Technical Reports Server (NTRS)
Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.
1995-01-01
We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.
Lévy-like diffusion in eye movements during spoken-language comprehension.
Stephen, Damian G; Mirman, Daniel; Magnuson, James S; Dixon, James A
2009-05-01
This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been conventional to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.
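The standard deviation analysis mentioned above can be sketched as follows; the synthetic ensemble and the fitting choices are our assumptions, shown only to illustrate how the scaling exponent separates Brownian from hyperdiffusive motion:

```python
import numpy as np

rng = np.random.default_rng(0)

def sd_scaling_exponent(steps):
    """Standard deviation analysis: fit SD(t) ~ t**delta on log-log axes.
    delta = 0.5 for ordinary (Gaussian/Brownian) diffusion; delta > 0.5
    indicates hyperdiffusion, as reported for these eye movements."""
    walks = np.cumsum(steps, axis=1)      # ensemble of random walks
    t = np.arange(1, steps.shape[1] + 1)
    sd = walks.std(axis=0)                # ensemble spread at each time
    delta = np.polyfit(np.log(t), np.log(sd), 1)[0]
    return float(delta)

# 2000 Brownian walks of 500 Gaussian steps each recover delta ~ 0.5;
# heavy-tailed (Lévy-like) step distributions push the measured exponent higher
delta = sd_scaling_exponent(rng.normal(size=(2000, 500)))
print(round(delta, 2))  # ~0.5
```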
One Dimensional Turing-Like Handshake Test for Motor Intelligence
Karniel, Amir; Avraham, Guy; Peles, Bat-Chen; Levy-Tzedek, Shelly; Nisky, Ilana
2010-01-01
In the Turing test, a computer model is deemed to "think intelligently" if it can generate answers that are not distinguishable from those of a human. However, this test is limited to the linguistic aspects of machine intelligence. A salient function of the brain is the control of movement, and the movement of the human hand is a sophisticated demonstration of this function. Therefore, we propose a Turing-like handshake test for machine motor intelligence. We administer the test through a telerobotic system in which the interrogator is engaged in a task of holding a robotic stylus and interacting with another party (human or artificial). Instead of asking the interrogator whether the other party is a person or a computer program, we employ a two-alternative forced choice method and ask which of two systems is more human-like. We extract a quantitative grade for each model according to its resemblance to the human handshake motion and name it "Model Human-Likeness Grade" (MHLG). We present three methods to estimate the MHLG. (i) By calculating the proportion of subjects' answers that the model is more human-like than the human; (ii) By comparing two weighted sums of human and model handshakes we fit a psychometric curve and extract the point of subjective equality (PSE); (iii) By comparing a given model with a weighted sum of human and random signal, we fit a psychometric curve to the answers of the interrogator and extract the PSE for the weight of the human in the weighted sum. Altogether, we provide a protocol to test computational models of the human handshake. We believe that building a model is a necessary step in understanding any phenomenon and, in this case, in understanding the neural mechanisms responsible for the generation of the human handshake. PMID:21206462
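Method (ii), fitting a psychometric curve and reading off the PSE, can be sketched like this; the logit-linearised fit and the toy response data are our simplifications, not the authors' procedure:

```python
import numpy as np

def pse_from_psychometric(weights, p_human_like):
    """Fit a logistic psychometric curve by linearising with the logit
    transform, then return the point of subjective equality (PSE), i.e.
    the stimulus weight at which 'more human-like' responses reach 50%."""
    p = np.clip(p_human_like, 1e-3, 1 - 1e-3)
    logit = np.log(p / (1 - p))
    slope, intercept = np.polyfit(weights, logit, 1)
    return float(-intercept / slope)

# hypothetical interrogator data: proportion of "more human-like" answers
# for stimuli mixing human and model handshakes with human-weight w
w = np.linspace(0.0, 1.0, 9)
p = 1.0 / (1.0 + np.exp(-10.0 * (w - 0.6)))   # underlying PSE at w = 0.6
pse = pse_from_psychometric(w, p)
print(round(pse, 2))  # 0.6
```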
Ehsani, Hossein; Rostami, Mostafa; Gudarzi, Mohammad
2016-02-01
Computation of muscle force patterns that produce specified movements of muscle-actuated dynamic models is an important and challenging problem. This problem is underdetermined, so a proper optimization is required to calculate the muscle forces. The purpose of this paper is to develop a general model for calculating all muscle activation and force patterns in an arbitrary human body movement. To this end, the forward-dynamics equations of the multibody system representing the skeletal system of the human body model are derived using the Lagrange-Euler formulation. Next, muscle contraction dynamics is added to this model, yielding the forward dynamics of an arbitrary musculoskeletal system. For optimization, the obtained model is used in the computed muscle control algorithm, and a closed-loop system for tracking desired motions is derived. Finally, a popular sport exercise, the biceps curl, is simulated using this algorithm and the validity of the obtained results is evaluated via EMG signals.
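The feedback step at the core of a computed-muscle-control loop can be sketched as a PD tracking law; the gains and the elbow example are illustrative, and in the full algorithm a static optimization then distributes the desired acceleration over muscle excitations:

```python
def cmc_desired_accel(q_ref, qd_ref, qdd_ref, q, qd, kp=100.0, kv=20.0):
    """PD tracking law used in computed muscle control: the desired
    acceleration steers the model back onto the reference trajectory."""
    return qdd_ref + kv * (qd_ref - qd) + kp * (q_ref - q)

# elbow angle lagging the biceps-curl reference by 0.1 rad at zero velocity
a = cmc_desired_accel(1.0, 0.0, 0.0, 0.9, 0.0)
print(round(a, 6))  # 10.0 rad/s^2 corrective acceleration
```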
A Computational Model of Human Table Tennis for Robot Application
NASA Astrophysics Data System (ADS)
Mülling, Katharina; Peters, Jan
Table tennis is a difficult motor skill that requires all basic components of a general motor skill learning system. In order to get a step closer to such a generic approach to the automatic acquisition and refinement of table tennis skills, we study table tennis from a human motor control point of view. We make use of the basic models of discrete human movement phases, virtual hitting points, and the operational timing hypothesis. Using these components, we create a computational model which is aimed at reproducing human-like behavior. We verify the functionality of this model in a physically realistic simulation of a Barrett WAM.
Computer-Based Algorithmic Determination of Muscle Movement Onset Using M-Mode Ultrasonography
2017-05-01
…contraction images were analyzed visually and with three different classes of algorithms: pixel standard deviation (SD), high-pass filter and Teager Kaiser… Linear relationships and agreements between computed and visual muscle onset were calculated. The top algorithms were high-pass filtered with a 30 Hz… suggest that computer automated determination using high-pass filtering is a potential objective alternative to visual determination in human…
Automatic decoding of facial movements reveals deceptive pain expressions
Bartlett, Marian Stewart; Littlewort, Gwen C.; Frank, Mark G.; Lee, Kang
2014-01-01
In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1–3]. Two motor pathways control facial movement [4–7]. A subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions. A cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8–11]. Machine vision may, however, be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here we show that human observers could not discriminate real from faked expressions of pain better than chance, and after training, improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system’s superiority is attributable to its ability to differentiate the dynamics of genuine from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling. PMID:24656830
How should Fitts' Law be applied to human-computer interaction?
NASA Technical Reports Server (NTRS)
Gillan, D. J.; Holden, K.; Adam, S.; Rudisill, M.; Magee, L.
1992-01-01
The paper challenges the notion that any Fitts' Law model can be applied generally to human-computer interaction, and proposes instead that applying Fitts' Law requires knowledge of the users' sequence of movements, direction of movement, and typical movement amplitudes as well as target sizes. Two experiments examined a text selection task with sequences of controlled movements (point-click and point-drag). For the point-click sequence, a Fitts' Law model that used the diagonal across the text object in the direction of pointing (rather than the horizontal extent of the text object) as the target size provided the best fit for the pointing time data, whereas for the point-drag sequence, a Fitts' Law model that used the vertical size of the text object as the target size gave the best fit. Dragging times were fitted well by Fitts' Law models that used either the vertical or horizontal size of the terminal character in the text object. Additional results of note were that pointing in the point-click sequence was consistently faster than in the point-drag sequence, and that pointing in either sequence was consistently faster than dragging. The discussion centres around the need to define task characteristics before applying Fitts' Law to an interface design or analysis, analyses of pointing and of dragging, and implications for interface design.
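For reference, a Fitts' Law model with the Shannon index-of-difficulty formulation looks like this; the coefficients and target sizes below are made-up values meant only to show how the choice of W changes the predicted movement time:

```python
import math

def fitts_mt(a, b, amplitude, width):
    """Fitts' law (Shannon formulation): MT = a + b * log2(A/W + 1)."""
    return a + b * math.log2(amplitude / width + 1.0)

# one 200-pixel pointing movement scored with three candidate definitions
# of target size W (coefficients a = 0.1 s, b = 0.15 s/bit are assumptions)
for label, w in [("horizontal extent", 40.0),
                 ("vertical extent", 12.0),
                 ("diagonal toward pointing", 42.0)]:
    print(f"{label:24s} MT = {fitts_mt(0.1, 0.15, 200.0, w):.3f} s")
```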
On Fitts's and Hooke's laws: simple harmonic movement in upper-limb cyclical aiming.
Guiard, Y
1993-03-01
Can discrete, single-shot movements and continuous, cyclical movements be reduced to a single concept? In the classical, computational approach to human motor behaviour, cyclical aimed movement has generally been considered to derive from discrete primitives through a concatenation mechanism. Much importance, accordingly, has been attached to discrete-movement paradigms and to techniques allowing the segmentation of continuous data. An alternative approach, suggested by the nonlinear dynamical systems theory, views discreteness as a limiting case of cyclicity. Although attempts have been made recently to account for discrete movements in dynamical terms, cyclical paradigms have been favoured. The concatenation interpretation of cyclical aimed movement is criticized on the ground that it implies a complete waste of mechanical energy once in every half-cycle. Some kinematic data from a one-dimensional reciprocal (i.e., cyclical) aiming experiment are reported, suggesting that human subjects do save muscular efforts from one movement to the next in upper-limb cyclical aiming. The experiment demonstrated convergence on simple harmonic motion as aiming tolerance was increased, an outcome interpreted with reference to Hooke's law, in terms of the muscles' capability of storing potential, elastic energy across movement reversals. Not only is the concatenation concept problematic for understanding cyclical aimed movements, but the very reality of discrete movements is questionable too. It is pointed out that discrete motor acts of real life are composed of complete cycles, rather than half-cycles.
Acquisition and improvement of human motor skills: Learning through observation and practice
NASA Technical Reports Server (NTRS)
Iba, Wayne
1991-01-01
Skilled movement is an integral part of the human existence. A better understanding of motor skills and their development is a prerequisite to the construction of truly flexible intelligent agents. We present MAEANDER, a computational model of human motor behavior, that uniformly addresses both the acquisition of skills through observation and the improvement of skills through practice. MAEANDER consists of a sensory-effector interface, a memory of movements, and a set of performance and learning mechanisms that let it recognize and generate motor skills. The system initially acquires such skills by observing movements performed by another agent and constructing a concept hierarchy. Given a stored motor skill in memory, MAEANDER will cause an effector to behave appropriately. All learning involves changing the hierarchical memory of skill concepts to more closely correspond to either observed experience or to desired behaviors. We evaluated MAEANDER empirically with respect to how well it acquires and improves both artificial movement types and handwritten script letters from the alphabet. We also evaluated MAEANDER as a psychological model by comparing its behavior to robust phenomena in humans and by considering the richness of the predictions it makes.
Target switching in curved human arm movements is predicted by changing a single control parameter.
Hoffmann, Heiko
2011-01-01
Straight-line movements have been studied extensively in the human motor-control literature, but little is known about how to generate curved movements and how to adjust them in a dynamic environment. The present work studied, for the first time to my knowledge, how humans adjust curved hand movements to a target that switches location. Subjects (n = 8) sat in front of a drawing tablet and looked at a screen. They moved a cursor on a curved trajectory (spiral or oval shaped) toward a goal point. In half of the trials, this goal switched 200 ms after movement onset to either one of two alternative positions, and subjects smoothly adjusted their movements to the new goal. To explain this adjustment, we compared three computational models: a superposition of curved and minimum-jerk movements (Flash and Henis in J Cogn Neurosci 3(3):220-230, 1991), Vector Planning (Gordon et al. in Exp Brain Res 99(1):97-111, 1994) adapted to curved movements (Rescale), and a nonlinear dynamical system, which could generate arbitrarily curved smooth movements and had a point attractor at the goal. For each model, we predicted the trajectory adjustment to the target switch by changing only the goal position in the model. As a result, the dynamical model could explain the observed switch behavior significantly better than the two alternative models (spiral: P = 0.0002 vs. Flash, P = 0.002 vs. Rescale; oval: P = 0.04 vs. Flash; P values obtained from Wilcoxon test on R² values). We conclude that generalizing arbitrary hand trajectories to new targets may be explained by switching a single control command, without the need to re-plan or re-optimize the whole movement or superimpose movements.
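The point-attractor idea, where a target switch changes only a single goal parameter, can be sketched with a critically damped second-order system; the gains, timing, and integration scheme are our assumptions:

```python
import numpy as np

def simulate_point_attractor(goal_fn, x0=0.0, k=25.0, dt=0.01, steps=200):
    """Critically damped second-order point attractor:
        x'' = k * (g - x) - 2 * sqrt(k) * x'
    Switching the target mid-movement changes only the goal parameter g."""
    x, v = x0, 0.0
    traj = []
    for i in range(steps):
        g = goal_fn(i * dt)
        a = k * (g - x) - 2.0 * np.sqrt(k) * v
        v += a * dt          # semi-implicit Euler integration
        x += v * dt
        traj.append(x)
    return np.array(traj)

# the goal switches from 1.0 to 0.5 at t = 0.2 s after movement onset
traj = simulate_point_attractor(lambda t: 1.0 if t < 0.2 else 0.5)
print(round(float(traj[-1]), 2))  # settles at the switched goal, 0.5
```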
Effect of Different Movement Speed Modes on Human Action Observation: An EEG Study.
Luo, Tian-Jian; Lv, Jitu; Chao, Fei; Zhou, Changle
2018-01-01
Action observation (AO) generates event-related desynchronization (ERD) suppressions in the human brain by activating partial regions of the human mirror neuron system (hMNS). The activation of the hMNS response to AO remains controversial for several reasons. Therefore, this study investigated the activation of the hMNS response to a speed factor of AO by controlling the movement speed modes of a humanoid robot's arm movements. Since hMNS activation is reflected by ERD suppressions, electroencephalography (EEG) with BCI analysis methods for ERD suppressions were used as the recording and analysis modalities. Six healthy individuals were asked to participate in experiments comprising five different conditions. Four incremental-speed AO tasks and a motor imagery (MI) task involving imagining the same movement were presented to the individuals. Occipital and sensorimotor regions were selected for BCI analyses. The experimental results showed that hMNS activation was higher in the occipital region but more robust in the sensorimotor region. Since the attended information impacts the activations of the hMNS during AO, the pattern of hMNS activations first rises and subsequently falls to a stable level during incremental-speed modes of AO. The discipline curves suggested that a moderate speed within a decent inter-stimulus interval (ISI) range produced the highest hMNS activations. Since a brain computer/machine interface (BCI) builds a pathway between human and computer/machine, the discipline curves will help to construct BCIs based on patterns of action observation (AO-BCI). Furthermore, a new method for constructing non-invasive brain-machine-brain interfaces (BMBIs) with moderate AO-BCI and motor imagery BCI (MI-BCI) was inspired by this paper.
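ERD suppression is conventionally quantified as percent band-power change relative to a baseline interval; a minimal sketch with toy power values:

```python
def erd_percent(power_event, power_baseline):
    """Event-related desynchronization as percent band-power change
    relative to a pre-stimulus baseline; negative values = ERD suppression."""
    return 100.0 * (power_event - power_baseline) / power_baseline

# hypothetical mu/theta-band power over sensorimotor cortex
erd = erd_percent(0.8, 1.0)
print(round(erd, 1))  # -20.0, i.e. a 20% ERD suppression
```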
2017-01-01
Decoding neural activities related to voluntary and involuntary movements is fundamental to understanding human brain motor circuits and neuromotor disorders and can lead to the development of neuromotor prosthetic devices for neurorehabilitation. This study explores using recorded deep brain local field potentials (LFPs) for robust movement decoding of Parkinson's disease (PD) and Dystonia patients. The LFP data from voluntary movement activities such as left and right hand index finger clicking were recorded from patients who underwent surgeries for implantation of deep brain stimulation electrodes. Movement-related LFP signal features were extracted by computing instantaneous power related to motor response in different neural frequency bands. An innovative neural network ensemble classifier has been proposed and developed for accurate prediction of finger movement and its forthcoming laterality. The ensemble classifier contains three base neural network classifiers, namely, feedforward, radial basis, and probabilistic neural networks. The majority voting rule is used to fuse the decisions of the three base classifiers to generate the final decision of the ensemble classifier. The overall decoding performance reaches a level of agreement (kappa value) at about 0.729 ± 0.16 for decoding movement from the resting state and about 0.671 ± 0.14 for decoding left and right visually cued movements. PMID:29201041
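The majority-voting fusion of the three base classifiers can be sketched as follows; the labels are toy data, not the study's recordings:

```python
import numpy as np

def majority_vote(*predictions):
    """Fuse per-sample class labels from several base classifiers by
    majority rule (the scheme used to combine the base networks)."""
    stacked = np.stack(predictions)               # (n_classifiers, n_samples)
    return np.array([np.bincount(col).argmax() for col in stacked.T])

# toy labels from three hypothetical base classifiers (0 = left, 1 = right)
ff  = np.array([0, 1, 1, 0])   # feedforward network
rbf = np.array([0, 1, 0, 0])   # radial basis network
pnn = np.array([1, 1, 1, 0])   # probabilistic network
fused = majority_vote(ff, rbf, pnn)
print(fused)  # [0 1 1 0]
```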
Human body motion tracking based on quantum-inspired immune cloning algorithm
NASA Astrophysics Data System (ADS)
Han, Hong; Yue, Lichuan; Jiao, Licheng; Wu, Xing
2009-10-01
Recovering an accurate 3D human body posture from a static monocular camera remains a great challenge in computer vision. This paper presents human posture recognition from video sequences using the Quantum-Inspired Immune Cloning Algorithm (QICA). The algorithm comprises three parts. First, prior knowledge of the human body is used to detect its key joint points automatically from the human contours and from skeletons obtained by thinning those contours. Second, because of the complexity of human movement, a forecasting mechanism for occluded joint points is introduced to obtain optimal 2D key joint points of the human body. Finally, the pose estimate is recovered by using QICA to optimize the match between the 2D projections of the 3D key joint points and the detected 2D key joint points; the algorithm recovers the movement of the human body well because it can find not only locally optimal but also globally optimal solutions.
Tracking the Evolution of Smartphone Sensing for Monitoring Human Movement.
del Rosario, Michael B; Redmond, Stephen J; Lovell, Nigel H
2015-07-31
Advances in mobile technology have led to the emergence of the "smartphone", a new class of device with more advanced connectivity features that have quickly made it a constant presence in our lives. Smartphones are equipped with comparatively advanced computing capabilities, a global positioning system (GPS) receiver, and sensing capabilities (i.e., an inertial measurement unit (IMU) and, more recently, a magnetometer and barometer) which can be found in wearable ambulatory monitors (WAMs). As a result, algorithms initially developed for WAMs that "count" steps (i.e., pedometers), gauge physical activity levels, indirectly estimate energy expenditure, and monitor human movement can be utilised on the smartphone. These algorithms may enable clinicians to "close the loop" by prescribing timely interventions to improve or maintain wellbeing in populations who are at risk of falling or suffer from a chronic disease whose progression is linked to a reduction in movement and mobility. The ubiquitous nature of smartphone technology makes it the ideal platform from which human movement can be remotely monitored without the expense of purchasing, and inconvenience of using, a dedicated WAM. In this paper, an overview of the sensors that can be found in the smartphone is presented, followed by a summary of the developments in this field with an emphasis on the evolution of algorithms used to classify human movement. The limitations identified in the literature will be discussed, as well as suggestions about future research directions.
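A minimal sketch of the kind of step-counting algorithm such monitors use, via thresholded peak detection on the acceleration magnitude; the threshold, refractory interval, and synthetic walking signal are our assumptions:

```python
import numpy as np

def count_steps(acc_magnitude, fs, threshold=1.2, min_interval=0.3):
    """Naive pedometer: count peaks of the acceleration magnitude (in g)
    that exceed a threshold and are at least min_interval seconds apart."""
    min_gap = int(min_interval * fs)
    steps, last = 0, -min_gap
    for i in range(1, len(acc_magnitude) - 1):
        a = acc_magnitude[i]
        if (a > threshold and a >= acc_magnitude[i - 1]
                and a > acc_magnitude[i + 1] and i - last >= min_gap):
            steps += 1
            last = i
    return steps

# synthetic 2 Hz walking bout sampled at 50 Hz: |a| oscillates around 1 g
fs = 50
t = np.arange(0, 5, 1 / fs)
walk = 1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)
n = count_steps(walk, fs)
print(n)  # 10 peaks in 5 s of 2 Hz stepping
```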
Popivanov, D; Mineva, A; Krekule, I
1999-05-21
In experiments with EEG accompanying continuous slow goal-directed voluntary movements we found abrupt short-term transients (STs) of the coefficients of EEG time-varying autoregressive (TVAR) model. The onset of STs indicated (i) a positive EEG wave related to an increase of 3-7 Hz oscillations in time period before the movement start, (ii) synchronization of 35-40 Hz prior to movement start and during the movement when the target is nearly reached. Both these phenomena are expressed predominantly over supplementary motor area, premotor and parietal cortices. These patterns were detected after averaging of EEG segments synchronized to the abrupt changes of the TVAR coefficients computed in the time course of EEG single records. The results are discussed regarding the cognitive aspect of organization of goal-directed movements.
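The idea of tracking abrupt changes in time-varying autoregressive coefficients can be sketched with a windowed least-squares AR(1) fit on a synthetic signal; the window size, model order, and simulated regime switch are our assumptions, far simpler than the TVAR estimator used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def windowed_ar1(y, win=100):
    """Least-squares AR(1) coefficient in consecutive non-overlapping
    windows: a crude time-varying autoregressive (TVAR) estimate."""
    phis = []
    for s in range(0, len(y) - win + 1, win):
        seg = y[s:s + win]
        x, z = seg[:-1], seg[1:]
        phis.append(float(x @ z / (x @ x)))
    return np.array(phis)

# synthetic EEG-like signal whose AR dynamics switch abruptly halfway
n = 1000
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    phi = 0.6 if t < n // 2 else -0.6
    y[t] = phi * y[t - 1] + e[t]

phis = windowed_ar1(y)
jump = int(np.argmax(np.abs(np.diff(phis))))
print(jump)  # 4: the abrupt transient lands between windows 4 and 5
```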
Ocular attention-sensing interface system
NASA Technical Reports Server (NTRS)
Zaklad, Allen; Glenn, Floyd A., III; Iavecchia, Helene P.; Stokes, James M.
1986-01-01
The purpose of the research was to develop an innovative human-computer interface based on eye movement and voice control. By eliminating a manual interface (keyboard, joystick, etc.), OASIS provides a control mechanism that is natural, efficient, accurate, and low in workload.
Theta synchronization networks emerge during human object-place memory encoding.
Sato, Naoyuki; Yamaguchi, Yoko
2007-03-26
Recent rodent hippocampus studies have suggested that theta rhythm-dependent neural dynamics ('theta phase precession') is essential for on-line memory formation. A computational study indicated that the phase precession enables a human object-place association memory with voluntary eye movements, although it is still an open question whether the human brain uses these dynamics. Here we elucidated subsequent memory-correlated activities in human scalp electroencephalography in an object-place association memory task designed according to the former computational study. Our results successfully demonstrated that subsequent memory recall is characterized by an increase in theta power and coherence, and further, that multiple theta synchronization networks emerge. These findings suggest that the human brain shares theta dynamics with rodents in episodic memory formation.
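Theta-band coherence between two channels can be computed along these lines; the sampling rate, noise levels, and shared 6 Hz component are a toy construction, not the study's data:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(7)

fs, dur = 128, 8                        # 8 s of two-channel "EEG" at 128 Hz
t = np.arange(fs * dur) / fs
theta = np.sin(2 * np.pi * 6.0 * t)     # shared 6 Hz theta component
ch1 = theta + 0.5 * rng.normal(size=t.size)
ch2 = theta + 0.5 * rng.normal(size=t.size)

# magnitude-squared coherence between the two channels
f, coh = signal.coherence(ch1, ch2, fs=fs, nperseg=256)
i_theta = int(np.argmin(np.abs(f - 6.0)))    # theta-band bin
i_beta = int(np.argmin(np.abs(f - 20.0)))    # control bin outside theta
print(coh[i_theta] > 0.9, coh[i_theta] > coh[i_beta])  # True True
```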
Takemura, Naohiro; Fukui, Takao; Inui, Toshio
2015-01-01
In human reach-to-grasp movement, visual occlusion of a target object leads to a larger peak grip aperture compared to conditions where online vision is available. However, no previous computational and neural network models for reach-to-grasp movement explain the mechanism of this effect. We simulated the effect of online vision on the reach-to-grasp movement by proposing a computational control model based on the hypothesis that the grip aperture is controlled to compensate for both motor variability and sensory uncertainty. In this model, the aperture is formed to achieve a target aperture size that is sufficiently large to accommodate the actual target; it also includes a margin to ensure proper grasping despite sensory and motor variability. To this end, the model considers: (i) the variability of the grip aperture, which is predicted by the Kalman filter, and (ii) the uncertainty of the object size, which is affected by visual noise. Using this model, we simulated experiments in which the effect of the duration of visual occlusion was investigated. The simulation replicated the experimental result wherein the peak grip aperture increased when the target object was occluded, especially in the early phase of the movement. Both predicted motor variability and sensory uncertainty play important roles in the online visuomotor process responsible for grip aperture control. PMID:26696874
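The aperture-control hypothesis above can be sketched as a scalar Kalman update of the size estimate plus a safety margin over the pooled uncertainty; all numbers and the 1.96-sigma margin are illustrative assumptions:

```python
import math

def kalman_size_update(mean, var, obs, obs_var):
    """One scalar Kalman step: fuse the current belief about object size
    with a new visual sample; occlusion skips this, so variance stays high."""
    k = var / (var + obs_var)
    return mean + k * (obs - mean), (1.0 - k) * var

def planned_aperture(size_est, sensory_var, motor_var, safety_z=1.96):
    """Grip aperture = estimated size plus a margin wide enough to cover
    both sensory uncertainty and predicted motor variability."""
    return size_est + safety_z * math.sqrt(sensory_var + motor_var)

mean, var = 6.0, 4.0                                  # prior size belief (cm)
mean, var = kalman_size_update(mean, var, 6.4, 1.0)   # online vision available
ap_vision = planned_aperture(mean, var, motor_var=0.25)
ap_occluded = planned_aperture(6.0, 4.0, motor_var=0.25)  # no visual update
print(round(ap_vision, 2), round(ap_occluded, 2))  # occlusion widens the grip
```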
Shared periodic performer movements coordinate interactions in duo improvisations.
Eerola, Tuomas; Jakubowski, Kelly; Moran, Nikki; Keller, Peter E; Clayton, Martin
2018-02-01
Human interaction involves the exchange of temporally coordinated, multimodal cues. Our work focused on interaction in the visual domain, using music performance as a case for analysis due to its temporally diverse and hierarchical structures. We made use of two improvising duo datasets, (i) performances of a jazz standard with a regular pulse and (ii) non-pulsed, free improvisations, to investigate whether human judgements of moments of interaction between co-performers are influenced by body movement coordination at multiple timescales. Bouts of interaction in the performances were manually annotated by experts and the performers' movements were quantified using computer vision techniques. The annotated interaction bouts were then predicted using several quantitative movement and audio features. Over 80% of the interaction bouts were successfully predicted by a broadband measure of the energy of the cross-wavelet transform of the co-performers' movements in non-pulsed duos. A more complex model, with multiple predictors that captured more specific, interacting features of the movements, was needed to explain a significant amount of variance in the pulsed duos. The methods developed here have key implications for future work on measuring visual coordination in musical ensemble performances, and can be easily adapted to other musical contexts, ensemble types and traditions.
Kurzynski, Marek; Jaskolska, Anna; Marusiak, Jaroslaw; Wolczowski, Andrzej; Bierut, Przemyslaw; Szumowski, Lukasz; Witkowski, Jerzy; Kisiel-Sajewicz, Katarzyna
2017-08-01
One of the biggest problems of upper limb transplantation is the lack of certainty as to whether a patient will be able to control voluntary movements of the transplanted hands. Based on findings of recent research on brain cortex plasticity, a premise can be drawn that mental training supported with visual and sensory feedback can cause structural and functional reorganization of the sensorimotor cortex, which leads to recovery of function associated with the control of movements performed by the upper limbs. In this study the authors, based on the above observations, propose a computer-aided training (CAT) system which, by generating visual and sensory stimuli, should enhance the effectiveness of mental training applied to humans before upper limb transplantation. The basis for the concept of the computer-aided training system is a virtual hand whose reaching and grasping movements the trained patient can observe on the VR headset screen (visual feedback) and whose contact with virtual objects the patient can feel as a touch (sensory feedback). The computer training system is composed of three main components: (1) the system generating the 3D virtual world in which the patient sees the virtual limb from the perspective as if it were his/her own hand; (2) sensory feedback transforming information about the interaction of the virtual hand with the grasped object into mechanical vibration; (3) the therapist's panel for controlling the training course. Results of the case study demonstrate that mental training supported with visual and sensory stimuli generated by the computer system leads to a beneficial change of the brain activity related to motor control of reaching in the patient with bilateral upper limb congenital transverse deficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
Population Dynamics of Early Human Migration in Britain
Vahia, Mayank N.; Ladiwala, Uma; Mahathe, Pavan; Mathur, Deepak
2016-01-01
Background: Early human migration is largely determined by geography and human needs. These are both deterministic parameters when small populations move into unoccupied areas where conflicts and large group dynamics are not important. The early period of human migration into the British Isles provides such a laboratory which, because of its relative geographical isolation, may allow some insights into the complex dynamics of early human migration and interaction. Method and Results: We developed a simulation code based on human affinity to habitable land, as defined by availability of water sources, altitude, and flatness of land, in choosing the path of migration. Movement of people on the British island over the prehistoric period from their initial entry points was simulated on the basis of data from the megalithic period. Topographical and hydro-shed data from satellite databases was used to define habitability, based on distance from water bodies, flatness of the terrain, and altitude above sea level. We simulated population movement based on assumptions of affinity for more habitable places, with the rate of movement tempered by existing populations. We compared results of our computer simulations with genetic data and show that our simulation can predict fairly accurately the points of contacts between different migratory paths. Such comparison also provides more detailed information about the path of people's movement over ~2000 years before the present era. Conclusions: We demonstrate an accurate method to simulate prehistoric movements of people based upon current topographical satellite data. Our findings are validated by recently-available genetic data. Our method may prove useful in determining early human population dynamics even when no genetic information is available. PMID:27148959
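A toy version of such an affinity-driven migration step might look like this; the habitability weights, decay constants, and crowding rule are our assumptions, not the paper's calibration:

```python
import numpy as np

def habitability(water_dist, altitude, slope, w=(0.5, 0.3, 0.2)):
    """Illustrative habitability score: prefer cells near water, at low
    altitude, on flat terrain (the weights are ours, not the paper's)."""
    return (w[0] * np.exp(-water_dist / 5.0)
            + w[1] * np.exp(-altitude / 500.0)
            + w[2] * np.exp(-slope / 5.0))

def migrate_step(pop, habit, k=0.1):
    """Move a fraction k of each occupied cell toward its most attractive
    4-neighbour; attractiveness is habitability tempered by crowding."""
    new = pop.astype(float).copy()
    rows, cols = pop.shape
    for r in range(rows):
        for c in range(cols):
            if pop[r, c] <= 0:
                continue
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            best = max(nbrs, key=lambda rc: habit[rc] / (1.0 + pop[rc]))
            moved = k * pop[r, c]
            new[r, c] -= moved
            new[best] += moved
    return new

# toy 3x3 map: water along the left edge, flat and low everywhere
water_dist = np.tile(np.array([0.0, 1.0, 2.0]), (3, 1))
habit = habitability(water_dist, np.zeros((3, 3)), np.zeros((3, 3)))
pop = np.full((3, 3), 10.0)
new = migrate_step(pop, habit)
# population is conserved and drifts toward the water column
print(new.sum() == pop.sum(), new[:, 0].sum() > pop[:, 0].sum())  # True True
```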
Porsa, Sina; Lin, Yi-Chung; Pandy, Marcus G
2016-08-01
The aim of this study was to compare the computational performances of two direct methods for solving large-scale, nonlinear, optimal control problems in human movement. Direct shooting and direct collocation were implemented on an 8-segment, 48-muscle model of the body (24 muscles on each side) to compute the optimal control solution for maximum-height jumping. Both algorithms were executed on a freely-available musculoskeletal modeling platform called OpenSim. Direct collocation converged to essentially the same optimal solution up to 249 times faster than direct shooting when the same initial guess was assumed (3.4 h of CPU time for direct collocation vs. 35.3 days for direct shooting). The model predictions were in good agreement with the time histories of joint angles, ground reaction forces and muscle activation patterns measured for subjects jumping to their maximum achievable heights. Both methods converged to essentially the same solution when started from the same initial guess, but computation time was sensitive to the initial guess assumed. Direct collocation demonstrates exceptional computational performance and is well suited to performing predictive simulations of movement using large-scale musculoskeletal models.
Saccadic eye movements analysis as a measure of drug effect on central nervous system function.
Tedeschi, G; Quattrone, A; Bonavita, V
1986-04-01
Peak velocity (PSV) and duration (SD) of horizontal saccadic eye movements are demonstrably under the control of specific brain stem structures. Experimental and clinical evidence suggest the existence of an immediate premotor system for saccade generation located in the paramedian pontine reticular formation (PPRF). Effects on saccadic eye movements have been studied in normal volunteers with barbiturates, benzodiazepines, amphetamine and ethanol. On two occasions computer analysis of PSV, SD, saccade reaction time (SRT) and saccade accuracy (SA) was carried out in comparison with more traditional methods of assessment of human psychomotor performance like choice reaction time (CRT) and critical flicker fusion threshold (CFFT). The computer system proved to be a highly sensitive and objective method for measuring drug effect on central nervous system (CNS) function. It allows almost continuous sampling of data and appears to be particularly suitable for studying rapidly changing drug effects on the CNS.
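Peak saccadic velocity and duration can be extracted from a position trace along these lines; the velocity threshold and the synthetic minimum-jerk saccade are our assumptions:

```python
import numpy as np

def saccade_metrics(position_deg, fs, vel_threshold=30.0):
    """Peak saccadic velocity (deg/s) and duration (ms) from an eye
    position trace, using a simple velocity-threshold criterion."""
    vel = np.gradient(position_deg) * fs       # deg/s
    psv = float(np.abs(vel).max())
    duration_ms = 1000.0 * np.count_nonzero(np.abs(vel) > vel_threshold) / fs
    return psv, duration_ms

# synthetic 10-degree, 50 ms saccade with a minimum-jerk velocity profile
fs = 1000
tau = np.arange(0, 50) / 50.0              # normalised time, 1 kHz sampling
pos = 10.0 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
psv, dur = saccade_metrics(pos, fs)
print(round(psv), round(dur))  # peak velocity near 375 deg/s, duration ~43 ms
```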
Emadi Andani, Mehran; Bahrami, Fariba
2012-10-01
Flash and Hogan (1985) suggested that the CNS employs a minimum jerk strategy when planning any given movement. Later, Nakano et al. (1999) showed that minimum angle jerk predicts the actual arm trajectory curvature better than the minimum jerk model. Friedman and Flash (2009) confirmed this claim. Besides the behavioral support that we will discuss, we will show that this model allows simplicity in planning any given movement. In particular, we prove mathematically that each movement that satisfies the minimum joint angle jerk condition is reproducible by a linear combination of six functions. These functions are calculated independent of the type of the movement and are normalized in the time domain. Hence, we call these six universal functions the Movement Elements (ME). We also show that the kinematic information at the beginning and end of the movement determines the coefficients of the linear combination. On the other hand, in analyzing recorded data from sit-to-stand (STS) transfer, arm-reaching movement (ARM) and gait, we observed that minimum joint angle jerk condition is satisfied only during different successive phases of these movements and not for the entire movement. Driven by these observations, we assumed that any given ballistic movement may be decomposed into several successive phases without overlap, such that for each phase the minimum joint angle jerk condition is satisfied. At the boundaries of each phase the angular acceleration of each joint should obtain its extremum (zero third derivative). As a consequence, joint angles at each phase will be linear combinations of the introduced MEs. Coefficients of the linear combination at each phase are the values of the joint kinematics at the boundaries of that phase. Finally, we conclude that these observations may constitute the basis of a computational interpretation, put differently, of the strategy used by the Central Nervous System (CNS) for motor planning. 
We call this possible interpretation "Coordinated Minimum Angle jerk Policy" or COMAP. Based on this policy, the function of the CNS in generating the desired pattern of any given task (like STS, ARM or gait) can be described computationally using three factors: (1) the kinematics of the motor system at given body states, i.e., at certain movement events/instances, (2) the time length of each phase, and (3) the proposed MEs. From a computational point of view, this model significantly simplifies the processes of movement planning as well as feature abstraction for saving characterizing information of any given movement in memory. Copyright © 2012 Elsevier B.V. All rights reserved.
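The core mathematical claim, that a minimum joint angle jerk segment is fully determined by six boundary values, can be sketched with a quintic polynomial in normalized time: six coefficients, fixed by position, velocity, and acceleration at both ends. This is a generic reconstruction of that idea, not the authors' ME code.

```python
import numpy as np

# A quintic q(tau) = c0 + c1*tau + ... + c5*tau^5 on tau in [0, 1] has six
# coefficients, determined by the boundary kinematics at both ends.
def quintic_coeffs(q0, v0, a0, q1, v1, a1):
    # rows: q(0), q'(0), q''(0), q(1), q'(1), q''(1) applied to [1, tau, ..., tau^5]
    M = np.array([
        [1, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0],
        [0, 0, 2, 0, 0, 0],
        [1, 1, 1, 1, 1, 1],
        [0, 1, 2, 3, 4, 5],
        [0, 0, 2, 6, 12, 20],
    ], dtype=float)
    return np.linalg.solve(M, np.array([q0, v0, a0, q1, v1, a1], float))

# rest-to-rest unit movement
c = quintic_coeffs(0, 0, 0, 1, 0, 0)
```

Solving the rest-to-rest case recovers the classic 10-15-6 minimum-jerk profile q(tau) = 10*tau^3 - 15*tau^4 + 6*tau^5, consistent with the claim that the basis functions themselves are independent of the specific movement while the boundary kinematics fix the coefficients.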
Scripting human animations in a virtual environment
NASA Technical Reports Server (NTRS)
Goldsby, Michael E.; Pandya, Abhilash K.; Maida, James C.
1994-01-01
The current deficiencies of virtual environment (VE) technology are well known: annoying lag time in drawing the current view, drastically simplified environments to reduce that time lag, low resolution, and a narrow field of view. Animation scripting is an application of VE technology which can be carried out successfully despite these deficiencies. The final product is a smoothly moving, high-resolution animation displaying detailed models. In this system, the user is represented by a human computer model with the same body proportions. Using magnetic tracking, the motions of the model's upper torso, head and arms are controlled by the user's movements (18 degrees of freedom). The model's lower torso and global position and orientation are controlled by a spaceball and keypad (12 degrees of freedom). Using this system human motion scripts can be extracted from the user's movements while immersed in a simplified virtual environment. Recorded data is used to define key frames; motion is interpolated between them and post processing adds a more detailed environment. The result is a considerable savings in time and a much more natural-looking movement of a human figure in a smooth and seamless animation.
Spinal circuits can accommodate interaction torques during multijoint limb movements.
Buhrmann, Thomas; Di Paolo, Ezequiel A
2014-01-01
The dynamic interaction of limb segments during movements that involve multiple joints creates torques in one joint due to motion about another. Evidence shows that such interaction torques are taken into account during the planning or control of movement in humans. Two alternative hypotheses could explain the compensation of these dynamic torques. One involves the use of internal models to centrally compute predicted interaction torques and their explicit compensation through anticipatory adjustment of descending motor commands. The alternative, based on the equilibrium-point hypothesis, claims that descending signals can be simple and related to the desired movement kinematics only, while spinal feedback mechanisms are responsible for the appropriate creation and coordination of dynamic muscle forces. Partial supporting evidence exists in each case. However, until now no model has explicitly shown, in the case of the second hypothesis, whether peripheral feedback is really sufficient on its own for coordinating the motion of several joints while at the same time accommodating intersegmental interaction torques. Here we propose a minimal computational model to examine this question. Using a biomechanics simulation of a two-joint arm controlled by spinal neural circuitry, we show for the first time that it is indeed possible for the neuromusculoskeletal system to transform simple descending control signals into muscle activation patterns that accommodate interaction forces depending on their direction and magnitude. This is achieved without the aid of any central predictive signal. Even though the model makes various simplifications and abstractions compared to the complexities involved in the control of human arm movements, the finding lends plausibility to the hypothesis that some multijoint movements can in principle be controlled even in the absence of internal models of intersegmental dynamics or learned compensatory motor signals.
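The size of the interaction torque in question can be illustrated with the standard point-mass two-link arm equations. The function below computes the torque the elbow must counter when only the shoulder moves; the masses and lengths are hypothetical, and the paper's model (muscles plus spinal circuitry) is far richer.

```python
import numpy as np

# Hypothetical two-link planar arm with mass lumped at each link's distal end.
m1, m2 = 2.0, 1.5        # link masses, kg (illustrative)
l1, l2 = 0.3, 0.3        # link lengths, m (illustrative)

def elbow_interaction_torque(q2, dq1, ddq1):
    """Torque the elbow must counter when the shoulder moves with velocity
    dq1 and acceleration ddq1 while the elbow is held still (dq2 = ddq2 = 0).
    q2 is the elbow angle relative to the upper arm (rad)."""
    # coupling inertia between the two joints (point-mass model)
    m12 = m2 * (l2 ** 2 + l1 * l2 * np.cos(q2))
    # centripetal term acting on the elbow from shoulder velocity
    c2 = m2 * l1 * l2 * np.sin(q2) * dq1 ** 2
    return m12 * ddq1 + c2

tau_int = elbow_interaction_torque(q2=np.pi / 2, dq1=2.0, ddq1=5.0)
```

With the elbow flexed at 90 degrees, a modest shoulder movement (2 rad/s, 5 rad/s^2) already produces about 1.2 N·m at a stationary elbow, which is the kind of disturbance the spinal feedback in the model must absorb.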
Physics-based analysis and control of human snoring
NASA Astrophysics Data System (ADS)
Sanchez, Yaselly; Wang, Junshi; Han, Pan; Xi, Jinxiang; Dong, Haibo
2017-11-01
In order to advance the understanding of biological fluid dynamics and its effects on the acoustics of human snoring, the study pursued a physics-based computational approach. From human magnetic resonance image (MRI) scans, the researchers were able to develop both anatomically and dynamically accurate airway-uvula models. With airways defined as rigid, and the uvula defined as flexible, computational models were created with various pharynx thicknesses and geometries. In order to determine vortex shedding with prescribed uvula movement, the uvula fluctuation was categorized by its specific parameters: magnitude, frequency, and phase lag. Uvula vibration modes were based on one oscillation, or one harmonic frequency, and pressure probes were located in seven different positions throughout the airway-uvula model. By taking fast Fourier transforms (FFT) of the pressure probe data, it was seen that four harmonics were created throughout the simulation within one oscillation of uvula movement. Of the four harmonics, there were two pressure probes which maintained high amplitudes, which led the researchers to believe that different vortices formed with different snoring frequencies. This work is supported by the NSF Grant CBET-1605434.
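The harmonic analysis described can be reproduced in miniature on synthetic data: a pressure trace built from a fundamental and three harmonics, with an FFT recovering all four peaks. The frequencies and amplitudes are illustrative, not values from the study.

```python
import numpy as np

# Synthetic pressure signal: a 50 Hz fundamental plus three harmonics with
# decaying amplitudes (illustrative, not the simulated probe data).
fs = 2000.0                      # sampling rate, Hz
f0 = 50.0                        # uvula oscillation frequency, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
p = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in (1, 2, 3, 4))

spectrum = np.abs(np.fft.rfft(p))
freqs = np.fft.rfftfreq(len(p), 1 / fs)
# the four largest spectral peaks sit at the four harmonics
peaks = freqs[np.argsort(spectrum)[-4:]]
```

With one full second of data the frequency resolution is exactly 1 Hz, so each harmonic falls on a single FFT bin and the four peaks come out at 50, 100, 150, and 200 Hz.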
Music and movement share a dynamic structure that supports universal expressions of emotion
Sievers, Beau; Polansky, Larry; Casey, Michael; Wheatley, Thalia
2013-01-01
Music moves us. Its kinetic power is the foundation of human behaviors as diverse as dance, romance, lullabies, and the military march. Despite its significance, the music-movement relationship is poorly understood. We present an empirical method for testing whether music and movement share a common structure that affords equivalent and universal emotional expressions. Our method uses a computer program that can generate matching examples of music and movement from a single set of features: rate, jitter (regularity of rate), direction, step size, and dissonance/visual spikiness. We applied our method in two experiments, one in the United States and another in an isolated tribal village in Cambodia. These experiments revealed three things: (i) each emotion was represented by a unique combination of features, (ii) each combination expressed the same emotion in both music and movement, and (iii) this common structure between music and movement was evident within and across cultures. PMID:23248314
Computer mouse movement patterns: A potential marker of mild cognitive impairment.
Seelye, Adriana; Hagler, Stuart; Mattek, Nora; Howieson, Diane B; Wild, Katherine; Dodge, Hiroko H; Kaye, Jeffrey A
2015-12-01
Subtle changes in cognitively demanding activities occur in MCI but are difficult to assess with conventional methods. In an exploratory study, we examined whether patterns of computer mouse movements obtained from routine home computer use discriminated between older adults with and without MCI. Participants were 42 cognitively intact and 20 older adults with MCI enrolled in a longitudinal study of in-home monitoring technologies. Mouse pointer movement variables were computed during one week of routine home computer use using algorithms that identified and characterized mouse movements within each computer use session. MCI was associated with making significantly fewer total mouse moves (p < .01), and making mouse movements that were more variable, less efficient, and with longer pauses between movements (p < .05). Mouse movement measures were significantly associated with several cognitive domains (p's < .01-.05). Remotely monitored computer mouse movement patterns are a potential early marker of real-world cognitive changes in MCI.
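The study's exact algorithms are not given in the abstract, but two of the named feature families can be sketched plainly: movement efficiency as straight-line distance over travelled path length, and pauses as long gaps between pointer samples. The function names and thresholds here are hypothetical.

```python
import numpy as np

def movement_efficiency(xy):
    """xy: (n, 2) array of pointer coordinates for one mouse movement.
    Returns straight-line distance divided by travelled path length,
    so 1.0 is a perfectly direct movement."""
    path = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
    direct = np.linalg.norm(xy[-1] - xy[0])
    return direct / path if path > 0 else 1.0

def pause_durations(timestamps, threshold=0.5):
    """Gaps (s) between successive samples longer than `threshold`
    count as pauses between movements (threshold is an assumption)."""
    gaps = np.diff(timestamps)
    return gaps[gaps > threshold]

straight = np.array([[0, 0], [5, 0], [10, 0]], float)   # direct movement
detour = np.array([[0, 0], [5, 5], [10, 0]], float)     # inefficient movement
pauses = pause_durations(np.array([0.0, 0.1, 0.2, 1.5]))
```

In this toy example the direct movement scores an efficiency of 1.0, the detour roughly 0.71, and the sample stream contains one 1.3 s pause, the kind of per-session summary the abstract says was aggregated over a week of use.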
Implantable brain computer interface: challenges to neurotechnology translation.
Konrad, Peter; Shanks, Todd
2010-06-01
This article reviews three concepts related to implantable brain computer interface (BCI) devices being designed for human use: neural signal extraction primarily for motor commands, signal insertion to restore sensation, and technological challenges that remain. A significant body of literature has occurred over the past four decades regarding motor cortex signal extraction for upper extremity movement or computer interface. However, little is discussed regarding postural or ambulation command signaling. Auditory prosthesis research continues to represent the majority of literature on BCI signal insertion. Significant hurdles continue in the technological translation of BCI implants. These include developing a stable neural interface, significantly increasing signal processing capabilities, and methods of data transfer throughout the human body. The past few years, however, have provided extraordinary human examples of BCI implant potential. Despite technological hurdles, proof-of-concept animal and human studies provide significant encouragement that BCI implants may well find their way into mainstream medical practice in the foreseeable future.
Rapid and long-lasting plasticity of input-output mapping.
Yamamoto, Kenji; Hoffman, Donna S; Strick, Peter L
2006-11-01
Skilled use of tools requires us to learn an "input-output map" for the device, i.e., how our movements relate to the actions of the device. We used the paradigm of visuo-motor rotation to examine two questions about the plasticity of input-output maps: 1) does extensive practice on one mapping make it difficult to modify and/or to form a new input-output map and 2) once a map has been modified or a new map has been formed, does this map survive a gap in performance? Humans and monkeys made wrist movements to control the position of a cursor on a computer monitor. Humans practiced the task for approximately 1.5 h; monkeys practiced for 3-9 yr. After this practice, we gradually altered the direction of cursor movement relative to wrist movement while subjects moved either to a single target or to four targets. Subjects were unaware of the change in cursor-movement relationship. Despite their prior practice on the task, the humans and the monkeys quickly adjusted their motor output to compensate for the visuo-motor rotation. Monkeys retained the modified input-output map during a 2-wk gap in motor performance. Humans retained the altered map during a gap of >1 yr. Our results show that sensorimotor performance remains flexible despite considerable practice on a specific task, and even relatively short-term exposure to a new input-output mapping leads to a long-lasting change in motor performance.
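The paradigm can be captured in a few lines: the cursor moves in the wrist's direction plus an imposed rotation, and trial-by-trial error correction drives the aim toward the compensating direction. The learning rate and trial count below are illustrative, not fitted to the human or monkey data.

```python
# Toy visuo-motor rotation: the cursor direction is the wrist movement
# direction plus an experimenter-imposed rotation (all in degrees).
def cursor_direction(wrist_deg, rotation_deg):
    return (wrist_deg + rotation_deg) % 360.0

def adapt(rotation_deg, target_deg=90.0, lr=0.3, trials=50):
    """Trial-by-trial correction of the aiming direction (illustrative
    error-driven update, not the study's fitted learning model)."""
    aim = target_deg
    for _ in range(trials):
        error = cursor_direction(aim, rotation_deg) - target_deg
        aim -= lr * error        # reduce the next trial's cursor error
    return aim

adapted_aim = adapt(30.0)        # converges toward 90 - 30 = 60 degrees
```

The fixed point of the update is the fully compensated aim: to land the cursor on a 90-degree target under a 30-degree rotation, the wrist must move at 60 degrees, which is the new input-output map the abstract describes as being retained across gaps in performance.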
Tracking the Evolution of Smartphone Sensing for Monitoring Human Movement
del Rosario, Michael B.; Redmond, Stephen J.; Lovell, Nigel H.
2015-01-01
Advances in mobile technology have led to the emergence of the “smartphone”, a new class of device with more advanced connectivity features that have quickly made it a constant presence in our lives. Smartphones are equipped with comparatively advanced computing capabilities, a global positioning system (GPS) receiver, and sensing capabilities (i.e., an inertial measurement unit (IMU) and, more recently, a magnetometer and barometer) which can be found in wearable ambulatory monitors (WAMs). As a result, algorithms initially developed for WAMs that “count” steps (i.e., pedometers), gauge physical activity levels, indirectly estimate energy expenditure and monitor human movement can be utilised on the smartphone. These algorithms may enable clinicians to “close the loop” by prescribing timely interventions to improve or maintain wellbeing in populations who are at risk of falling or suffer from a chronic disease whose progression is linked to a reduction in movement and mobility. The ubiquitous nature of smartphone technology makes it the ideal platform from which human movement can be remotely monitored without the expense of purchasing, and inconvenience of using, a dedicated WAM. In this paper, an overview of the sensors that can be found in the smartphone is presented, followed by a summary of the developments in this field with an emphasis on the evolution of algorithms used to classify human movement. The limitations identified in the literature will be discussed, as well as suggestions about future research directions. PMID:26263998
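As a flavor of the pedometer-style algorithms the review surveys, here is a minimal peak-detection step counter run on a synthetic acceleration magnitude signal. The thresholds and the synthetic gait are assumptions for illustration, not taken from any cited algorithm.

```python
import numpy as np

def count_steps(acc_mag, fs, threshold=10.8, min_gap=0.3):
    """Count local maxima in |acceleration| (m/s^2) that exceed `threshold`
    and are at least `min_gap` seconds apart (both thresholds assumed)."""
    steps, last = 0, -np.inf
    for i in range(1, len(acc_mag) - 1):
        is_peak = acc_mag[i - 1] < acc_mag[i] >= acc_mag[i + 1]
        if is_peak and acc_mag[i] > threshold and (i - last) / fs >= min_gap:
            steps, last = steps + 1, i
    return steps

# synthetic walking: gravity plus a 2 Hz gait oscillation for 5 s
fs = 50.0
t = np.arange(0, 5, 1 / fs)
acc = 9.81 + 2.0 * np.sin(2 * np.pi * 2.0 * t)
```

On this clean 2 Hz signal the counter finds the expected 10 steps in 5 s; real IMU streams need the filtering and adaptive thresholds that the surveyed algorithms add on top of this basic idea.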
A musculoskeletal model of the elbow joint complex
NASA Technical Reports Server (NTRS)
Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.
1993-01-01
This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir; Runnova, Anastasia; Pchelintseva, Svetlana; Efremova, Tatiana; Zhuravlev, Maksim; Pisarchik, Alexander
2018-04-01
We have considered the time-frequency and spatio-temporal structure of electrical brain activity associated with real and imaginary movements, based on multichannel EEG recordings. We have found that, along with the well-known event-related desynchronization (ERD) in the α/μ-rhythms and the β-rhythm, these types of activity are accompanied by either event-related synchronization (ERS, for real movement) or ERD (for imaginary movement) in the low-frequency δ-band, located mostly in the frontal lobe. This may be caused by the associated processes of decision making, which take place when the subject decides whether to perform the movement or to imagine it. These features were found in untrained subjects, which in turn makes it possible to use our results in the development of brain-computer interfaces for controlling an anthropomorphic robotic arm.
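The ERD/ERS quantities referred to above are conventionally computed as a relative change in band power between rest and task. The sketch below does this with Welch spectra on synthetic signals (a μ-rhythm that weakens during movement); the sampling rate, band edges, and signals are illustrative, not the study's recordings.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    # Welch power spectral density, summed over the band of interest
    f, pxx = welch(x, fs=fs, nperseg=fs)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].sum()

def erd_percent(rest, move, fs, band=(8.0, 13.0)):
    """Percent drop in band power during the task relative to rest;
    positive values indicate desynchronization (ERD)."""
    p_rest = band_power(rest, fs, band)
    return 100.0 * (p_rest - band_power(move, fs, band)) / p_rest

fs = 250                             # Hz, illustrative EEG sampling rate
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(2)
rest = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
move = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
erd = erd_percent(rest, move, fs)    # strongly positive: μ-band ERD
```

The same measure with the sign flipped (a power increase during the task) would register as the ERS the abstract reports in the δ-band for real movements.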
Identifying Stride-To-Stride Control Strategies in Human Treadmill Walking
Dingwell, Jonathan B.; Cusumano, Joseph P.
2015-01-01
Variability is ubiquitous in human movement, arising from internal and external noise, inherent biological redundancy, and from the neurophysiological control actions that help regulate movement fluctuations. Increased walking variability can lead to increased energetic cost and/or increased fall risk. Conversely, biological noise may be beneficial, even necessary, to enhance motor performance. Indeed, encouraging more variability actually facilitates greater improvements in some forms of locomotor rehabilitation. Thus, it is critical to identify the fundamental principles humans use to regulate stride-to-stride fluctuations in walking. This study sought to determine how humans regulate stride-to-stride fluctuations in stepping movements during treadmill walking. We developed computational models based on pre-defined goal functions to compare if subjects, from each stride to the next, tried to maintain the same speed as the treadmill, or instead stay in the same position on the treadmill. Both strategies predicted average behaviors empirically indistinguishable from each other and from that of humans. These strategies, however, predicted very different stride-to-stride fluctuation dynamics. Comparisons to experimental data showed that human stepping movements were generally well-predicted by the speed-control model, but not by the position-control model. Human subjects also exhibited no indications they corrected deviations in absolute position only intermittently: i.e., closer to the boundaries of the treadmill. Thus, humans clearly do not adopt a control strategy whose primary goal is to maintain some constant absolute position on the treadmill. Instead, humans appear to regulate their stepping movements in a way most consistent with a strategy whose primary goal is to try to maintain the same speed as the treadmill at each consecutive stride. 
These findings have important implications both for understanding how biological systems regulate walking in general and for being able to harness these mechanisms to develop more effective rehabilitation interventions to improve locomotor performance. PMID:25910253
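The two goal functions compared above can be caricatured in a few lines: each stride, either match the belt speed (letting position drift as a random walk) or also correct a fraction of the accumulated position error. All parameters and noise magnitudes are invented for illustration; the paper's models are fit to subject data.

```python
import numpy as np

rng = np.random.default_rng(0)
v_t, L0 = 1.2, 0.7          # belt speed (m/s) and preferred stride length (m), assumed

def simulate(strategy, n_strides=5000, gain=0.5, noise=0.01):
    pos, history = 0.0, []
    for _ in range(n_strides):
        L = L0 + rng.normal(0.0, noise)           # noisy stride length
        if strategy == "speed":
            T = L / v_t + rng.normal(0.0, noise)  # match belt speed only
        else:
            # lengthen/shorten stride time to pull position back toward 0
            T = (L + gain * pos) / v_t + rng.normal(0.0, noise)
        pos += L - v_t * T       # net displacement relative to the lab frame
        history.append(pos)
    return np.array(history)

drift_speed = simulate("speed")     # position performs a random walk
drift_pos = simulate("position")    # position stays tightly bounded
```

The two strategies produce the same average behavior (zero mean drift) but very different fluctuation dynamics, which is exactly the distinction the study used to tell them apart in the human data.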
Witchel, Harry J.; Santos, Carlos P.; Ackah, James K.; Westling, Carina E. I.; Chockalingam, Nachiappan
2016-01-01
Background: Estimating engagement levels from postural micromovements has been summarized by some researchers as: increased proximity to the screen is a marker for engagement, while increased postural movement is a signal for disengagement or negative affect. However, these findings are inconclusive: the movement hypothesis challenges other findings of dyadic interaction in humans, and experiments on the positional hypothesis diverge from it. Hypotheses: (1) Under controlled conditions, adding a relevant visual stimulus to an auditory stimulus will preferentially result in Non-Instrumental Movement Inhibition (NIMI) of the head. (2) When instrumental movements are eliminated and computer-interaction rate is held constant, for two identically-structured stimuli, cognitive engagement (i.e., interest) will result in measurable NIMI of the body generally. Methods: Twenty-seven healthy participants were seated in front of a computer monitor and speakers. Discrete 3-min stimuli were presented with interactions mediated via a handheld trackball without any keyboard, to minimize instrumental movements of the participant's body. Music videos and audio-only music were used to test hypothesis (1). Time-sensitive, highly interactive stimuli were used to test hypothesis (2). Subjective responses were assessed via visual analog scales. The computer users' movements were quantified using video motion tracking from the lateral aspect. Repeated measures ANOVAs with Tukey post hoc comparisons were performed. Results: For two equivalently-engaging music videos, eliminating the visual content elicited significantly increased non-instrumental movements of the head (while also decreasing subjective engagement); a highly engaging user-selected piece of favorite music led to further increased non-instrumental movement. 
For two comparable reading tasks, the more engaging reading significantly inhibited (42%) movement of the head and thigh; however, when a highly engaging video game was compared to the boring reading, even though the reading task and the game had similar levels of interaction (trackball clicks), only thigh movement was significantly inhibited, not head movement. Conclusions: NIMI can be elicited by adding a relevant visual accompaniment to an audio-only stimulus or by making a stimulus cognitively engaging. However, these results presume that all other factors are held constant, because total movement rates can be affected by cognitive engagement, instrumental movements, visual requirements, and the time-sensitivity of the stimulus. PMID:26941666
Transitions between discrete and rhythmic primitives in a unimanual task
Sternad, Dagmar; Marino, Hamal; Charles, Steven K.; Duarte, Marcos; Dipietro, Laura; Hogan, Neville
2013-01-01
Given the vast complexity of human actions and interactions with objects, we proposed that control of sensorimotor behavior may utilize dynamic primitives. However, greater computational simplicity may come at the cost of reduced versatility. Evidence for primitives may be garnered by revealing such limitations. This study tested subjects performing a sequence of progressively faster discrete movements in order to “stress” the system. We hypothesized that the increasing pace would elicit a transition to rhythmic movements, assumed to be computationally and neurally more efficient. Abrupt transitions between the two types of movements would support the hypothesis that rhythmic and discrete movements are distinct primitives. Ten subjects performed planar point-to-point arm movements paced by a metronome: starting at 2 s, the metronome intervals decreased by 36 ms per cycle to 200 ms, stayed at 200 ms for several cycles, then increased by similar increments. Instructions emphasized inserting explicit stops between movements, with a stop duration equal to the movement time. The experiment was performed with eyes open and closed, and with short and long metronome sounds, the latter explicitly specifying the dwell duration. Results showed that subjects matched instructed movement times but did not preserve the dwell times. Rather, they progressively reduced dwell time to zero, transitioning to continuous rhythmic movements before movement times reached their minimum. The acceleration profiles showed an abrupt change between discrete and rhythmic profiles. The loss of dwell time occurred earlier with long auditory specification, when subjects also showed evidence of predictive control. While evidence for hysteresis was weak, taken together, the results clearly indicated a transition between discrete and rhythmic movements, supporting the proposal that representation is based on primitives rather than on veridical internal models. PMID:23888139
Central mechanisms for force and motion--towards computational synthesis of human movement.
Hemami, Hooshang; Dariush, Behzad
2012-12-01
Anatomical, physiological and experimental research on the human body can be supplemented by computational synthesis of the human body for all movement: routine daily activities, sports, dancing, and artistic and exploratory involvements. The synthesis requires thorough knowledge about all subsystems of the human body and their interactions, and allows for integration of known knowledge in working modules. It also affords confirmation and/or verification of scientific hypotheses about workings of the central nervous system (CNS). A simple step in this direction is explored here for controlling the forces of constraint. It requires co-activation of agonist-antagonist musculature. The desired trajectories of motion and the force of contact have to be provided by the CNS. The spinal control involves projection onto a muscular subset that induces the force of contact. The projection of force in the sensory motor cortex is implemented via a well-defined neural population unit, and is executed in the spinal cord by a standard integral controller requiring input from tendon organs. The sensory motor cortex structure is extended to the case for directing motion via two neural population units with vision input and spindle efferents. Digital computer simulations show the feasibility of the system. The formulation is modular and can be extended to multi-link limbs, robot and humanoid systems with many pairs of actuators or muscles. It can be expanded to include reticular activating structures and learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
Shared periodic performer movements coordinate interactions in duo improvisations
Jakubowski, Kelly; Moran, Nikki; Keller, Peter E.
2018-01-01
Human interaction involves the exchange of temporally coordinated, multimodal cues. Our work focused on interaction in the visual domain, using music performance as a case for analysis due to its temporally diverse and hierarchical structures. We made use of two improvising duo datasets—(i) performances of a jazz standard with a regular pulse and (ii) non-pulsed, free improvisations—to investigate whether human judgements of moments of interaction between co-performers are influenced by body movement coordination at multiple timescales. Bouts of interaction in the performances were manually annotated by experts and the performers’ movements were quantified using computer vision techniques. The annotated interaction bouts were then predicted using several quantitative movement and audio features. Over 80% of the interaction bouts were successfully predicted by a broadband measure of the energy of the cross-wavelet transform of the co-performers’ movements in non-pulsed duos. A more complex model, with multiple predictors that captured more specific, interacting features of the movements, was needed to explain a significant amount of variance in the pulsed duos. The methods developed here have key implications for future work on measuring visual coordination in musical ensemble performances, and can be easily adapted to other musical contexts, ensemble types and traditions. PMID:29515867
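The cross-wavelet energy measure used in the paper resolves co-variation in both time and frequency; as a simpler, frequency-only stand-in, the cross-spectral density below locates the frequency at which two synthetic "performers" movements co-vary. The signals and parameters are invented for illustration.

```python
import numpy as np
from scipy.signal import csd

# Two synthetic movement traces sharing a 0.5 Hz sway, each with
# independent measurement noise (illustrative, not the duo datasets).
fs = 25.0                         # assumed video frame rate, Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 0.5 * t)
perf_a = shared + 0.3 * rng.standard_normal(t.size)
perf_b = 0.8 * shared + 0.3 * rng.standard_normal(t.size)

# cross-spectral density: large magnitude where the movements co-vary
freqs, pxy = csd(perf_a, perf_b, fs=fs, nperseg=256)
peak_freq = freqs[np.argmax(np.abs(pxy))]
```

The cross-spectrum peaks at the shared 0.5 Hz component because the independent noise averages out across segments; the wavelet version adds the time localization needed to find bouts of interaction rather than a whole-performance average.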
Involuntary human hand movements due to FM radio waves in a moving van.
Huttunen, P; Savinainen, A; Hänninen, Osmo; Myllylä, R
2011-06-01
Involuntary movements of the hands in a moving van on a public road were studied to clarify the possible role of frequency-modulated radio waves in driving. The signals were measured on a straight 2 km test segment of an international road during repeated drives in both directions. The test subjects (n=4) had an ability to sense radio-frequency field intensity variations in the environment. They sat in a minivan with arm movement detectors in their hands. A potentiometer was used to register the hand movements to a computer, which simultaneously collected data on the amplitude of the RF signal from the local FM tower 30 km away, at a frequency of about 100 MHz. Involuntary hand movements of the test subjects correlated with the measured electromagnetic field, i.e., FM radio wave intensity. They also reacted at the location of a geomagnetic anomaly crossing the road, which was identified on the basis of these recordings and confirmed by public geological maps of the area. In conclusion, RF irradiation seems to affect the hand reflexes of sensitive persons in a moving van on a normal public road, which may have significance for traffic safety.
Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition
Munoz-Organero, Mario; Ruiz-Blazquez, Ramona
2017-01-01
Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware. PMID:28208736
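The "time elasticity" idea, that the same movement executed a little faster or slower should count as the same class, can be sketched as a data-augmentation step: smoothly warp the time axis of a recorded series and resample it. The sinusoidal warp below is a placeholder; the paper's generative model is more elaborate.

```python
import numpy as np

def time_warp(series, strength=0.2):
    """Resample `series` along a smoothly perturbed time axis, simulating
    the same movement executed with locally varying speed. The warp shape
    and strength are illustrative, not the paper's model."""
    n = len(series)
    u = np.linspace(0, 1, n)
    # smooth perturbation that vanishes at both endpoints, so the
    # start and end of the movement are preserved
    warped_u = u + strength * np.sin(np.pi * u) * u * (1 - u)
    return np.interp(warped_u, u, series)

acc = np.sin(2 * np.pi * np.arange(100) / 50)   # two synthetic gait cycles
augmented = time_warp(acc)
```

Training a feature learner (such as the paper's stacked auto-encoders) on many such warped copies exposes it to the execution-speed variability that a small recorded dataset would otherwise miss.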
Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition.
Munoz-Organero, Mario; Ruiz-Blazquez, Ramona
2017-02-08
Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware. PMID:28208736
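The time-elasticity idea above can be illustrated with a minimal sketch: generate synthetic variants of an acceleration trace by smoothly warping its time axis. This is only an illustration of the concept; the function and parameter names are invented here and are not taken from the paper.

```python
import numpy as np

def time_elastic_variant(series, max_warp=0.2, rng=None):
    """Generate a synthetic variant of a 1-D acceleration series by
    elastically perturbing its sample positions (illustrative only)."""
    rng = np.random.default_rng(rng)
    n = len(series)
    warp = rng.uniform(-max_warp, max_warp, size=n)
    warp = np.convolve(warp, np.ones(5) / 5, mode="same")  # smooth the warp
    positions = np.clip(np.arange(n) + warp * n / 10, 0, n - 1)
    positions = np.sort(positions)  # keep time monotonic
    return np.interp(positions, np.arange(n), series)

# Example: warp a simple sine-like "movement" trace.
t = np.linspace(0, 1, 100)
original = np.sin(2 * np.pi * 2 * t)
variant = time_elastic_variant(original, rng=0)
print(variant.shape)  # (100,)
```

Synthetic variants like these could then be fed to a stack of auto-encoders as additional training examples, which is the role the generative model plays in the paper.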
Fast attainment of computer cursor control with noninvasively acquired brain signals
NASA Astrophysics Data System (ADS)
Bradberry, Trent J.; Gentili, Rodolphe J.; Contreras-Vidal, José L.
2011-06-01
Brain-computer interface (BCI) systems are allowing humans and non-human primates to drive prosthetic devices such as computer cursors and artificial arms with just their thoughts. Invasive BCI systems acquire neural signals with intracranial or subdural electrodes, while noninvasive BCI systems typically acquire neural signals with scalp electroencephalography (EEG). Some drawbacks of invasive BCI systems are the inherent risks of surgery and gradual degradation of signal integrity. A limitation of noninvasive BCI systems for two-dimensional control of a cursor, in particular those based on sensorimotor rhythms, is the lengthy training time required by users to achieve satisfactory performance. Here we describe a novel approach to continuously decoding imagined movements from EEG signals in a BCI experiment with reduced training time. We demonstrate that, using our noninvasive BCI system and observational learning, subjects were able to accomplish two-dimensional control of a cursor with performance levels comparable to those of invasive BCI systems. Compared to other studies of noninvasive BCI systems, training time was substantially reduced, requiring only a single session of decoder calibration (~20 min) and subject practice (~20 min). In addition, we used standardized low-resolution brain electromagnetic tomography to reveal that the neural sources that encoded observed cursor movement may implicate a human mirror neuron system. These findings offer the potential to continuously control complex devices such as robotic arms with one's mind without lengthy training or surgery.
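Continuous kinematic decoding of the kind described above is often built on a linear mapping from lagged EEG samples to cursor velocity. The following is a minimal least-squares sketch under that assumption; the actual preprocessing, lag window, and validation used in the study are not specified in the abstract.

```python
import numpy as np

def fit_linear_decoder(eeg, velocity, lags=3):
    """Least-squares mapping from current + past EEG samples to cursor
    velocity; a toy version of continuous kinematic decoding."""
    n = eeg.shape[0]
    # Design matrix: eeg[t], eeg[t-1], ..., eeg[t-lags+1] per row.
    X = np.hstack([eeg[lags - k: n - k] for k in range(lags)])
    y = velocity[lags:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Synthetic check: velocity is an exact linear function of the EEG.
rng = np.random.default_rng(1)
eeg = rng.normal(size=(500, 4))
velocity = eeg @ rng.normal(size=4)
w = fit_linear_decoder(eeg, velocity)
X = np.hstack([eeg[3 - k: 500 - k] for k in range(3)])
err = float(np.max(np.abs(X @ w - velocity[3:])))
print(err)  # essentially zero on this exactly-linear toy problem
```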
A Computational Model of Active Vision for Visual Search in Human-Computer Interaction
2010-08-01
processors that interact with the production rules to produce behavior, and (c) parameters that constrain the behavior of the model (e.g., the...velocity of a saccadic eye movement). While the parameters can be task-specific, the majority of the parameters are usually fixed across a wide variety...previously estimated durations. Hooge and Erkelens (1996) review these four explanations of fixation duration control. A variety of research
Decoding Onset and Direction of Movements Using Electrocorticographic (ECoG) Signals in Humans
2012-08-08
Institute, Troy, NY, USA 2 J Crayton Pruitt Family Department of Biomed Engineering, University of Florida, Gainesville, FL, USA 3 BCI R&D Program...INTRODUCTION Brain-computer interfaces ( BCIs ) aim to translate a person’s intentions into meaningful computer commands using brain activity alone...applications for those suffering from neuromuscular disorders (Sejnowski et al., 2007; Tan and Nijholt, 2010). For example, a BCI that detects intended move
Human Movement Recognition Based on the Stochastic Characterisation of Acceleration Data
Munoz-Organero, Mario; Lotfi, Ahmad
2016-01-01
Human activity recognition algorithms based on information obtained from wearable sensors are successfully applied in detecting many basic activities. Identified activities with time-stationary features are characterised inside a predefined temporal window by using different machine learning algorithms on extracted features from the measured data. Better accuracy, precision and recall levels could be achieved by combining the information from different sensors. However, detecting short and sporadic human movements, gestures and actions is still a challenging task. In this paper, a novel algorithm to detect human basic movements from wearable measured data is proposed and evaluated. The proposed algorithm is designed to minimise computational requirements while achieving acceptable accuracy levels based on characterising some particular points in the temporal series obtained from a single sensor. The underlying idea is that this algorithm would be implemented in the sensor device in order to pre-process the sensed data stream before sending the information to a central point combining the information from different sensors to improve accuracy levels. Intra- and inter-person validation is used for two particular cases: single step detection and fall detection and classification using a single tri-axial accelerometer. Relevant results for the above cases and pertinent conclusions are also presented. PMID:27618063
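The idea of characterising particular points in the temporal series can be sketched for the fall-detection case: look for a near-free-fall dip in acceleration magnitude followed by an impact spike. The thresholds below are illustrative defaults, not the paper's tuned values.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def detect_fall(acc_xyz, impact_thresh=3.0 * GRAVITY,
                free_fall_thresh=0.4 * GRAVITY):
    """Flag a candidate fall when a near-free-fall dip in acceleration
    magnitude is followed by an impact spike (illustrative thresholds)."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    dips = np.flatnonzero(mag < free_fall_thresh)
    spikes = np.flatnonzero(mag > impact_thresh)
    return bool(dips.size and spikes.size and dips.min() < spikes.max())

# Synthetic trace: standing (~1 g), brief free fall, then impact.
mag = np.full(200, GRAVITY)
mag[80:95] = 0.2 * GRAVITY   # free-fall phase
mag[95] = 4.5 * GRAVITY      # impact spike
acc = np.zeros((200, 3)); acc[:, 2] = mag
acc_quiet = np.zeros((200, 3)); acc_quiet[:, 2] = GRAVITY
print(detect_fall(acc), detect_fall(acc_quiet))  # True False
```

A check this cheap can run on the sensor node itself, which is the deployment scenario the paper targets: pre-process on the device, then send candidate events to a central point that fuses several sensors.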
NASA Astrophysics Data System (ADS)
Wu, Di; Torres, Elizabeth B.; Jose, Jorge V.
2015-03-01
ASD is a spectrum of neurodevelopmental disorders. The high heterogeneity of the symptoms associated with the disorder impedes efficient diagnoses based on human observations. Recent advances in high-resolution MEMS wearable sensors enable accurate movement measurements that may escape the naked eye, calling for objective metrics to extract physiologically relevant information from the rapidly accumulating data. In this talk we'll discuss the statistical analysis of movement data continuously collected with high-resolution sensors at 240 Hz. We calculated statistical properties of speed fluctuations within the millisecond time range that closely correlate with the subjects' cognitive abilities. We computed the periodicity and synchronicity of the speed fluctuations from their power spectrum and ensemble-averaged two-point cross-correlation function. We built a two-parameter phase space from the temporal statistical analyses of the nearest-neighbor fluctuations that provided a quantitative biomarker distinguishing ASD from normal adult subjects and further classified ASD severity. We also found age-related developmental statistical signatures and potential ASD parental links in our movement dynamical studies. Our results may have direct clinical applications.
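The periodicity measures described above reduce to standard spectral and correlation statistics. A minimal numpy sketch, assuming the study's 240 Hz sampling rate but otherwise invented details:

```python
import numpy as np

def speed_fluctuation_stats(speed, fs=240.0):
    """Power spectrum and normalised autocorrelation of speed
    fluctuations sampled at fs Hz (a sketch of periodicity measures)."""
    fluct = speed - speed.mean()
    power = np.abs(np.fft.rfft(fluct)) ** 2
    freqs = np.fft.rfftfreq(len(fluct), d=1.0 / fs)
    acorr = np.correlate(fluct, fluct, mode="full")[len(fluct) - 1:]
    acorr = acorr / acorr[0]  # normalise to 1 at zero lag
    return freqs, power, acorr

fs = 240.0
t = np.arange(0, 2, 1 / fs)
speed = 1.0 + 0.3 * np.sin(2 * np.pi * 6 * t)   # 6 Hz rhythmic fluctuation
freqs, power, acorr = speed_fluctuation_stats(speed, fs)
peak_hz = float(freqs[np.argmax(power)])
print(peak_hz)  # 6.0
```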
Hennion, P Y; Mollard, R
1993-01-01
Under conditions of prolonged space flight, it may be feasible to restore gravity artificially using centrifugal inertial forces in a spinning vehicle. As a result, the motion of the passengers relative to the vehicle is affected by Coriolis forces. The aim of this study is to propose a theoretical method to evaluate the extent of these effects compared to other inertial or motor forces affecting movement. We investigated typical right upper limb movement in a numerical model with a two-solid-link mechanism, including a spherical joint for the shoulder and a hinge joint for the elbow. The inertial and dimensional characteristics of this model derive from measurements and computations obtained on laboratory subjects. The same is true for the movements assigned to the model; these were inferred from actual recordings of arm movement when the subject presses a button placed in front of him with his index finger. From these relative velocities, the resulting forces and moments applied to the elbow and the shoulder were computed for a 1 rad/s rotational speed of transport motion, using classical kinetic relations. The result is that the Coriolis moments are of the same order of magnitude as the corresponding inertial moments and one-tenth of the value of a typical elbow flexion moment. Thus, they should cause a significant disturbance in movement.
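The central quantity here is the classical Coriolis force in a rotating frame, F = -2 m (ω × v). A short worked example at the study's 1 rad/s rotation rate (the mass and velocity values are illustrative, not taken from the paper):

```python
import numpy as np

def coriolis_force(mass, omega, velocity):
    """Coriolis force on a point mass moving with `velocity` (m/s) in a
    frame rotating at angular velocity `omega` (rad/s):
    F = -2 m (omega x v)."""
    return -2.0 * mass * np.cross(omega, velocity)

# An arm segment of ~2 kg moving forward at 1 m/s in a vehicle
# spinning at 1 rad/s about the vertical axis.
omega = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 0.0])
F = coriolis_force(2.0, omega, v)
print(F)  # [ 0. -4.  0.]
```

The resulting 4 N sideways force, applied over a limb's lever arm, is what produces Coriolis moments comparable to the inertial moments reported in the study.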
Eye movement sequence generation in humans: Motor or goal updating?
Quaia, Christian; Joiner, Wilsaan M.; FitzGibbon, Edmond J.; Optican, Lance M.; Smith, Maurice A.
2011-01-01
Saccadic eye movements are often grouped in pre-programmed sequences. The mechanism underlying the generation of each saccade in a sequence is currently poorly understood. Broadly speaking, two alternative schemes are possible: first, after each saccade the retinotopic location of the next target could be estimated, and an appropriate saccade could be generated. We call this the goal updating hypothesis. Alternatively, multiple motor plans could be pre-computed, and they could then be updated after each movement. We call this the motor updating hypothesis. We used McLaughlin’s intra-saccadic step paradigm to artificially create a condition under which these two hypotheses make discriminable predictions. We found that, when human subjects planned sequences of two saccades, the motor updating hypothesis predicted the landing position of the second saccade much better than the goal updating hypothesis did. This finding suggests that the human saccadic system is capable of executing sequences of saccades to multiple targets by planning multiple motor commands, which are then updated by serial subtraction of ongoing motor output. PMID:21191134
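The contrast between the two hypotheses can be made concrete with a toy calculation. Under goal updating, the second saccade re-aims at the second target from wherever the eye actually is; under motor updating, a pre-planned vector is executed, so any error in the first saccade carries over into the second landing position. The numbers below are purely illustrative.

```python
import numpy as np

def predicted_landing(t1, t2, first_landing, hypothesis):
    """Toy predicted endpoint of the second saccade in a two-saccade
    sequence under the two hypotheses discussed above."""
    if hypothesis == "goal":
        # Re-aim at the second target from the current eye position.
        return t2
    if hypothesis == "motor":
        # Execute the pre-planned vector t2 - t1 from the actual
        # first landing, so first-saccade error propagates.
        return first_landing + (t2 - t1)
    raise ValueError(hypothesis)

t1, t2 = np.array([10.0, 0.0]), np.array([10.0, 10.0])
first_landing = np.array([8.0, 0.0])   # first saccade fell 2 deg short
print(predicted_landing(t1, t2, first_landing, "goal"))   # [10. 10.]
print(predicted_landing(t1, t2, first_landing, "motor"))  # [ 8. 10.]
```

Because the two predictions diverge whenever the first saccade misses its target, an intra-saccadic target step makes the hypotheses experimentally discriminable.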
Sreenivasa, Manish; Ayusawa, Ko; Nakamura, Yoshihiko
2016-05-01
This study develops a multi-level neuromuscular model consisting of topological pools of spiking motor, sensory and interneurons controlling a bi-muscular model of the human arm. The spiking output of motor neuron pools were used to drive muscle actions and skeletal movement via neuromuscular junctions. Feedback information from muscle spindles were relayed via monosynaptic excitatory and disynaptic inhibitory connections, to simulate spinal afferent pathways. Subject-specific model parameters were identified from human experiments by using inverse dynamics computations and optimization methods. The identified neuromuscular model was used to simulate the biceps stretch reflex and the results were compared to an independent dataset. The proposed model was able to track the recorded data and produce dynamically consistent neural spiking patterns, muscle forces and movement kinematics under varying conditions of external forces and co-contraction levels. This additional layer of detail in neuromuscular models has important relevance to the research communities of rehabilitation and clinical movement analysis by providing a mathematical approach to studying neuromuscular pathology.
Seeber, Martin; Scherer, Reinhold; Müller-Putz, Gernot R
2016-11-16
Sequencing and timing of body movements are essential to perform motor tasks. In this study, we investigate the temporal relation between cortical oscillations and human motor behavior (i.e., rhythmic finger movements). High-density EEG recordings were used for source imaging based on individual anatomy. We separated sustained and movement phase-related EEG source amplitudes based on the actual finger movements recorded by a data glove. Sustained amplitude modulations in the contralateral hand area showed a decrease for α (10-12 Hz) and β (18-24 Hz) frequencies, but an increase for high γ (60-80 Hz) frequencies during the entire movement period. Additionally, we found movement phase-related amplitudes, which resembled the flexion and extension sequence of the fingers. Especially for faster movement cadences, movement phase-related amplitudes included high β (24-30 Hz) frequencies in prefrontal areas. Interestingly, the spectral profiles and source patterns of movement phase-related amplitudes differed from sustained activities, suggesting that they represent different frequency-specific large-scale networks. The first type of network, signified by the sustained element, statically modulates its synchrony level during continuous movements. These networks may upregulate neuronal excitability in brain regions specific to the limb, in this study the right hand area. The second type, the movement phase-related networks, modulate their synchrony in relation to the movement sequence. We suggest that these frequency-specific networks are associated with distinct functions, including top-down control, sensorimotor prediction, and integration. The separation of different large-scale networks that we applied in this work improves the interpretation of EEG sources in relation to human motor behavior. EEG recordings provide high temporal resolution suitable to relate cortical oscillations to actual movements.
Investigating EEG sources during rhythmic finger movements, we distinguish sustained from movement phase-related amplitude modulations. We separate these two EEG source elements motivated by our previous findings in gait. Here, we found two types of large-scale networks, representing the right fingers in distinction from the time sequence of the movements. These findings suggest that EEG source amplitudes reconstructed in a cortical patch are the superposition of these simultaneously present network activities. Separating these frequency-specific networks is relevant for studying function and possible dysfunction of the cortical sensorimotor system in humans as well as to provide more advanced features for brain-computer interfaces. Copyright © 2016 the authors.
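Band-limited amplitude measures like the α (10-12 Hz) modulations above can be sketched with numpy alone: a brick-wall FFT bandpass followed by the analytic-signal envelope. The study's actual pipeline involves source imaging on individual anatomy; this is only an illustration of the envelope extraction step.

```python
import numpy as np

def band_amplitude(signal, fs, lo, hi):
    """Amplitude envelope of a frequency band via an FFT brick-wall
    filter and the analytic signal (numpy-only sketch)."""
    n = len(signal)
    spec = np.fft.fft(signal)
    freqs = np.abs(np.fft.fftfreq(n, 1 / fs))
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    # Analytic signal: double positive frequencies, zero negative ones.
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2
    analytic = np.fft.ifft(spec * h)
    return np.abs(analytic)

fs = 256
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 11 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
env = band_amplitude(sig, fs, 10, 12)
print(round(float(env[256:-256].mean()), 2))  # ~1.0 for the unit 11 Hz sine
```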
NASA Astrophysics Data System (ADS)
Boudria, Yacine; Feltane, Amal; Besio, Walter
2014-06-01
Objective. Brain-computer interfaces (BCIs) based on electroencephalography (EEG) have been shown to accurately detect mental activities, but the acquisition of high levels of control requires extensive user training. Furthermore, EEG has low signal-to-noise ratio and low spatial resolution. The objective of the present study was to compare the accuracy between two types of BCIs during the first recording session. EEG and tripolar concentric ring electrode (TCRE) EEG (tEEG) brain signals were recorded and used to control one-dimensional cursor movements. Approach. Eight human subjects were asked to imagine either ‘left’ or ‘right’ hand movement during one recording session to control the computer cursor using TCRE and disc electrodes. Main results. The obtained results show a significant improvement in accuracies using TCREs (44%-100%) compared to disc electrodes (30%-86%). Significance. This study developed the first tEEG-based BCI system for real-time one-dimensional cursor movements and showed high accuracies with little training.
NASA Astrophysics Data System (ADS)
Risto, S.; Kallergi, M.
2015-09-01
The purpose of this project was to model and simulate the knee joint. A computer model of the knee joint was first created, which was controlled by Microsoft's Kinect for Windows. Kinect created a depth map of the knee and lower-leg motion, independent of lighting conditions, through an infrared sensor. A combination of open-source software, including Blender, Python, the Kinect SDK and NI_Mate, was implemented for the creation and control of the simulated knee based on movements of a live physical model. A physical model of the knee and lower leg was also created, the movement of which was controlled remotely by the computer model and Kinect. Real-time communication between the model and the robotic knee was achieved through programming in Python and the Arduino language. The results of this study showed that Kinect can be used in modelling human kinematics and can play a significant role in the development of prosthetics and other assistive technologies.
An integrated framework for detecting suspicious behaviors in video surveillance
NASA Astrophysics Data System (ADS)
Zin, Thi Thi; Tin, Pyke; Hama, Hiromitsu; Toriu, Takashi
2014-03-01
In this paper, we propose an integrated framework for detecting suspicious behaviors in video surveillance systems established in public places such as railway stations, airports, and shopping malls. In particular, people loitering suspiciously, unattended objects left behind, and suspicious objects exchanged between persons are common security concerns in airports and other transit scenarios. These involve understanding the scene/event, analyzing human movements, recognizing controllable objects, and observing the effect of the human movement on those objects. In the proposed framework, a multiple-background modeling technique, a high-level motion feature extraction method, and embedded Markov chain models are integrated for detecting suspicious behaviors in real-time video surveillance systems. Specifically, the proposed framework employs a probability-based multiple-background modeling technique to detect moving objects. Then velocity and distance measures are computed as the high-level motion features of interest. By integrating the computed features with the first-passage-time probabilities of the embedded Markov chain, suspicious behaviors in video surveillance are analyzed to detect loitering persons, objects left behind, and human interactions such as fighting. The proposed framework has been tested using standard public datasets and our own video surveillance scenarios.
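First-passage statistics of a Markov chain, which the framework above uses to flag loitering, can be computed with a standard linear solve: for a target state j, the mean first-passage times m satisfy (I - Q) m = 1, where Q is the transition matrix restricted to the non-target states. The toy chain below is invented for illustration; the paper's actual states and thresholds are not given in the abstract.

```python
import numpy as np

def mean_first_passage(P, target):
    """Mean first-passage times to `target` for a Markov chain with
    transition matrix P, via (I - Q) m = 1 on the non-target states."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]
    m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    out = np.zeros(n)
    out[keep] = m
    return out

# Toy 3-state chain over scene zones; state 2 = "exit" region.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
mfpt = mean_first_passage(P, target=2)
print(mfpt.round(2))  # unusually long passage times suggest loitering
```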
Eye Movements During Everyday Behavior Predict Personality Traits
Hoppe, Sabrina; Loetscher, Tobias; Morey, Stephanie A.; Bulling, Andreas
2018-01-01
Besides allowing us to perceive our surroundings, eye movements are also a window into our mind and a rich source of information on who we are, how we feel, and what we do. Here we show that eye movements during an everyday task predict aspects of our personality. We tracked eye movements of 42 participants while they ran an errand on a university campus and subsequently assessed their personality traits using well-established questionnaires. Using a state-of-the-art machine learning method and a rich set of features encoding different eye movement characteristics, we were able to reliably predict four of the Big Five personality traits (neuroticism, extraversion, agreeableness, conscientiousness) as well as perceptual curiosity only from eye movements. Further analysis revealed new relations between previously neglected eye movement characteristics and personality. Our findings demonstrate a considerable influence of personality on everyday eye movement control, thereby complementing earlier studies in laboratory settings. Improving automatic recognition and interpretation of human social signals is an important endeavor, enabling innovative design of human–computer systems capable of sensing spontaneous natural user behavior to facilitate efficient interaction and personalization. PMID:29713270
Developing the human-computer interface for Space Station Freedom
NASA Technical Reports Server (NTRS)
Holden, Kritina L.
1991-01-01
For the past two years, the Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center has been involved in prototyping and prototype reviews in support of the definition phase of the Space Station Freedom program. On the Space Station, crew members will be interacting with multi-monitor workstations where interaction with several displays at one time will be common. The HCIL has conducted several experiments to begin to address design issues for this complex system. Experiments have dealt with design of ON/OFF indicators, the movement of the cursor across multiple monitors, and the importance of various windowing capabilities for users performing multiple tasks simultaneously.
Enhanced Video-Oculography System
NASA Technical Reports Server (NTRS)
Moore, Steven T.; MacDougall, Hamish G.
2009-01-01
A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.
Dodge, Somayeh; Bohrer, Gil; Weinzierl, Rolf P.; Davidson, Sarah C.; Kays, Roland; Douglas, David C.; Cruz, Sebastian; Han, J.; Brandes, David; Wikelski, Martin
2013-01-01
The movement of animals is strongly influenced by external factors in their surrounding environment such as weather, habitat types, and human land use. With advances in positioning and sensor technologies, it is now possible to capture animal locations at high spatial and temporal granularities. Likewise, scientists have an increasing access to large volumes of environmental data. Environmental data are heterogeneous in source and format, and are usually obtained at different spatiotemporal scales than movement data. Indeed, there remain scientific and technical challenges in developing linkages between the growing collections of animal movement data and the large repositories of heterogeneous remote sensing observations, as well as in the developments of new statistical and computational methods for the analysis of movement in its environmental context. These challenges include retrieval, indexing, efficient storage, data integration, and analytical techniques.
Monitoring and decision making by people in man machine systems
NASA Technical Reports Server (NTRS)
Johannsen, G.
1979-01-01
The analysis of human monitoring and decision-making behavior, as well as its modeling, is described. Classical and optimal-control-theoretic monitoring models are surveyed. The relationship between attention allocation and eye movements is discussed. As an example of applications, the evaluation of predictor displays by means of the optimal control model is explained. Fault detection involving continuous signals and the decision-making behavior of a human operator engaged in fault diagnosis during different operation and maintenance situations are illustrated. Computer-aided decision making is considered as a queueing problem. It is shown to what extent computer aids can be based on the state of human activity as measured by psychophysiological quantities. Finally, management information systems for different application areas are mentioned. The possibilities of mathematical modeling of human behavior in complex man-machine systems are also critically assessed.
On the Visual Input Driving Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean
1996-01-01
Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.
NASA Astrophysics Data System (ADS)
Sorokoumov, P. S.; Khabibullin, T. R.; Tolstaya, A. M.
2017-01-01
Existing psychological theories associate the movements of the human eye with reactions to external changes: what we see, hear and feel. By analyzing gaze, we can compare the external human response (which shows the behavior of a person) with the natural reaction (what the person actually feels). This article describes a complex for detecting visual activity and its application to evaluating the psycho-physiological state of a person. Glasses with a camera capture all movements of the human eye in real time. The data recorded by the camera are transmitted to a computer for processing by software developed by the authors. The result is an informative and understandable report, which can be used for further analysis. The complex shows high efficiency and stable operation and can be used both for pedagogic personnel recruitment and for testing students during the educational process.
Huo, Xueliang; Ghovanloo, Maysam
2010-01-01
The tongue drive system (TDS) is an unobtrusive, minimally invasive, wearable and wireless tongue–computer interface (TCI), which can infer its users' intentions, represented in their volitional tongue movements, by detecting the position of a small permanent magnetic tracer attached to the users' tongues. Any specific tongue movements can be translated into user-defined commands and used to access and control various devices in the users' environments. The latest external TDS (eTDS) prototype is built on a wireless headphone and interfaced to a laptop PC and a powered wheelchair. Using customized sensor signal processing algorithms and graphical user interface, the eTDS performance was evaluated by 13 naive subjects with high-level spinal cord injuries (C2–C5) at the Shepherd Center in Atlanta, GA. Results of the human trial show that an average information transfer rate of 95 bits/min was achieved for computer access with 82% accuracy. This information transfer rate is about two times higher than the EEG-based BCIs that are tested on human subjects. It was also demonstrated that the subjects had immediate and full control over the powered wheelchair to the extent that they were able to perform complex wheelchair navigation tasks, such as driving through an obstacle course. PMID:20332552
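The information transfer rate quoted above (95 bits/min) is conventionally computed with the Wolpaw formula, which combines the number of selectable commands, the selection accuracy, and the selection rate. The command-set size and selection rate below are illustrative assumptions, since the abstract does not state them.

```python
import math

def itr_bits_per_min(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate (bits/min), the standard
    throughput measure for BCIs and related interfaces."""
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Illustrative numbers only: 6 commands, 82% accuracy, 60 selections/min.
print(round(itr_bits_per_min(6, 0.82, 60), 1))  # ~89 bits/min
```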
Computer Model Predicts the Movement of Dust
NASA Technical Reports Server (NTRS)
2002-01-01
A new computer model of the atmosphere can now actually pinpoint where global dust events come from, and can project where they're going. The model may help scientists better evaluate the impact of dust on human health, climate, ocean carbon cycles, ecosystems, and atmospheric chemistry. Also, by seeing where dust originates and where it blows, people with respiratory problems can get advance warning of approaching dust clouds. 'The model is physically more realistic than previous ones,' said Mian Chin, a co-author of the study and an Earth and atmospheric scientist at Georgia Tech and the Goddard Space Flight Center (GSFC) in Greenbelt, Md. 'It is able to reproduce the short term day-to-day variations and long term inter-annual variations of dust concentrations and distributions that are measured from field experiments and observed from satellites.' The above images show both aerosols measured from space (left) and the movement of aerosols predicted by computer model for the same date (right). For more information, read New Computer Model Tracks and Predicts Paths Of Earth's Dust. Images courtesy of Paul Giroux, Georgia Tech/NASA Goddard Space Flight Center.
Urbano, A; Babiloni, C; Onorati, P; Babiloni, F
1998-06-01
Between-electrode cross-covariances of delta (0-3 Hz)- and theta (4-7 Hz)-filtered high resolution EEG potentials related to preparation, initiation, and execution of human unilateral internally triggered one-digit movements were computed to investigate statistical dynamic coupling between these potentials. Significant (P < 0.05, Bonferroni-corrected) cross-covariances were calculated between electrodes of lateral and median scalp regions. For both delta- and theta-bandpassed potentials, covariance modeling indicated a shifting functional coupling between contralateral and ipsilateral frontal-central-parietal scalp regions and between these two regions and the median frontal-central scalp region from the preparation to the execution of the movement (P < 0.05). A maximum inward functional coupling of the contralateral with the ipsilateral frontal-central-parietal scalp region was modeled during the preparation and initiation of the movement, and a maximum outward functional coupling during the movement execution. Furthermore, for theta-bandpassed potentials, rapidly oscillating inward and outward relationships were modeled between the contralateral frontal-central-parietal scalp region and the median frontal-central scalp region across the preparation, initiation, and execution of the movement. We speculate that these cross-covariance relationships might reflect an oscillating dynamic functional coupling of primary sensorimotor and supplementary motor areas during the planning, starting, and performance of unilateral movement. The involvement of these cortical areas is supported by the observation that averaged spatially enhanced delta- and theta-bandpassed potentials were computed from the scalp regions where task-related electrical activation of primary sensorimotor areas and supplementary motor area was roughly represented.
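The core between-electrode statistic above is a lagged cross-covariance. A minimal sketch, leaving out the study's band filtering and Bonferroni-corrected significance testing:

```python
import numpy as np

def lagged_cross_covariance(x, y, max_lag):
    """Cross-covariance c(k) = mean_t x[t+k] * y[t] between two
    electrode signals for lags -max_lag..max_lag (illustrative)."""
    x = x - x.mean()
    y = y - y.mean()
    n = len(x)
    return {k: float(np.dot(x[max(0, k): n + min(0, k)],
                            y[max(0, -k): n - max(0, k)]) / n)
            for k in range(-max_lag, max_lag + 1)}

# Demo: y is a copy of x delayed by 3 samples, so coupling peaks at lag -3.
rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = np.roll(x, 3)  # y[t] = x[t-3]
cc = lagged_cross_covariance(x, y, 5)
best = max(cc, key=cc.get)
print(best)  # -3
```

The sign and lag of the peak are what let the authors talk about "inward" versus "outward" coupling between scalp regions.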
Real Time Eye Tracking and Hand Tracking Using Regular Video Cameras for Human Computer Interaction
2011-01-01
understand us. More specifically, the computer should be able to infer what we wish to see, do, and interact with through our movements, gestures, and...in depth freedom. Our system differs from the majority of other systems in that we do not use infrared, stereo-cameras, specially-constructed
Ravignani, Andrea; Olivera, Vicente Matellán; Gingras, Bruno; Hofer, Riccardo; Hernández, Carlos Rodríguez; Sonnweber, Ruth-Sophie; Fitch, W. Tecumseh
2013-01-01
The possibility of achieving experimentally controlled, non-vocal acoustic production in non-human primates is a key step to enable the testing of a number of hypotheses on primate behavior and cognition. However, no device or solution is currently available, with the use of sensors in non-human animals being almost exclusively devoted to applications in food industry and animal surveillance. Specifically, no device exists which simultaneously allows: (i) spontaneous production of sound or music by non-human animals via object manipulation, (ii) systematical recording of data sensed from these movements, (iii) the possibility to alter the acoustic feedback properties of the object using remote control. We present two prototypes we developed for application with chimpanzees (Pan troglodytes) which, while fulfilling the aforementioned requirements, allow to arbitrarily associate sounds to physical object movements. The prototypes differ in sensing technology, costs, intended use and construction requirements. One prototype uses four piezoelectric elements embedded between layers of Plexiglas and foam. Strain data is sent to a computer running Python through an Arduino board. A second prototype consists in a modified Wii Remote contained in a gum toy. Acceleration data is sent via Bluetooth to a computer running Max/MSP. We successfully pilot tested the first device with a group of chimpanzees. We foresee using these devices for a range of cognitive experiments. PMID:23912427
Decoding Individual Finger Movements from One Hand Using Human EEG Signals
Gonzalez, Jania; Ding, Lei
2014-01-01
Brain computer interface (BCI) is an assistive technology which decodes neurophysiological signals generated by the human brain and translates them into control signals for external devices, e.g., wheelchairs. One problem challenging noninvasive BCI technologies is the limited number of control dimensions, since decoding has mainly targeted movements of large body parts, e.g., upper and lower limbs. It has been reported that complicated dexterous functions, i.e., finger movements, can be decoded from electrocorticography (ECoG) signals, while it remains unclear whether noninvasive electroencephalography (EEG) signals also carry sufficient information to decode the same type of movements. In the present study, broadband power increases and low-frequency-band power decreases were observed in EEG when power spectra were decomposed by principal component analysis (PCA). These movement-related spectral structures and their changes caused by finger movements in EEG are consistent with observations from a previous ECoG study, as well as with the results from ECoG data in the present study. An average decoding accuracy of 77.11% over all subjects was obtained in classifying each pair of fingers from one hand, using movement-related spectral changes as features in a support vector machine (SVM) classifier. The average decoding accuracy in three epilepsy patients using ECoG data was 91.28% with similarly obtained features and the same classifier. Both EEG and ECoG decoding accuracies are significantly higher than the empirical guessing level (51.26%) in all subjects (p<0.05). The present study suggests that EEG exhibits movement-related spectral changes similar to those in ECoG, and demonstrates the feasibility of discriminating finger movements from one hand using EEG. These findings are promising for the development of BCIs with rich control signals using noninvasive technologies. PMID:24416360
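The feature-extraction step described above can be sketched with NumPy alone: PCA of log power spectra via the SVD, followed by a classifier. This is a hedged illustration, not the paper's pipeline; in particular, a nearest-centroid rule stands in for their SVM, and the synthetic spectra bear no relation to real EEG.

```python
import numpy as np

def pca_features(power_spectra, n_components=3):
    """Project trials' log power spectra onto principal spectral components.

    power_spectra: (n_trials, n_freqs) array. Returns (scores, components).
    """
    logp = np.log(power_spectra)
    logp -= logp.mean(axis=0)                      # center across trials
    _, _, vt = np.linalg.svd(logp, full_matrices=False)
    components = vt[:n_components]
    return logp @ components.T, components

def nearest_centroid(train_scores, train_labels, test_scores):
    """Classify each test trial by its closest class centroid
    (a simple stand-in for the SVM used in the study)."""
    classes = np.unique(train_labels)
    centroids = np.array([train_scores[train_labels == c].mean(axis=0)
                          for c in classes])
    dists = np.linalg.norm(test_scores[:, None] - centroids[None], axis=2)
    return classes[dists.argmin(axis=1)]
```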
Sandwich Hologram Interferometry For Determination Of Sacroiliac Joint Movements
NASA Astrophysics Data System (ADS)
Vukicevic, S.; Vinter, I.; Vukicevic, D.
1983-12-01
Investigations were carried out on embalmed and fresh specimens of human pelvises with preserved lumbar spines, hip joints and all the ligaments. Specimens were tested under static vertical loading by pulsed laser interferometry. The deformations and behaviour of particular pelvic parts were interpreted with the aid of computer interferogram models. Results indicate rotation and tilting of the sacrum in the dorso-ventral direction and small but significant movements in the cranio-caudal direction. Sandwich holography proved to be the only applicable method when there is a combination of translation and tilt in the range of 200 μm to 1.5 mm.
Rhesus Monkeys Behave As If They Perceive the Duncker Illusion
Zivotofsky, A. Z.; Goldberg, M. E.; Powell, K. D.
2008-01-01
The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make. PMID:16102233
Otolith and Vertical Canal Contributions to Dynamic Postural Control
NASA Technical Reports Server (NTRS)
Black, F. Owen
1999-01-01
The objective of this project is to determine: 1) how do normal subjects adjust postural movements in response to changing or altered otolith input, for example, due to aging? and 2) how do patients adapt postural control after altered unilateral or bilateral vestibular sensory inputs such as ablative inner ear surgery or ototoxicity, respectively? The following hypotheses are under investigation: 1) selective alteration of otolith input or abnormalities of otolith receptor function will result in distinctive spatial, frequency, and temporal patterns of head movements and body postural sway dynamics. 2) subjects with reduced, altered, or absent vertical semicircular canal receptor sensitivity but normal otolith receptor function or vice versa, should show predictable alterations of body and head movement strategies essential for the control of postural sway and movement. The effect of altered postural movement control upon compensation and/or adaptation will be determined. These experiments provide data for the development of computational models of postural control in normals, vestibular deficient subjects and normal humans exposed to unusual force environments, including orbital space flight.
A PC-based system for predicting movement from deep brain signals in Parkinson's disease.
Loukas, Constantinos; Brown, Peter
2012-07-01
There is much current interest in deep brain stimulation (DBS) of the subthalamic nucleus (STN) for the treatment of Parkinson's disease (PD). This type of surgery has enabled unprecedented access to deep brain signals in the awake human. In this paper we present an easy-to-use computer based system for recording, displaying, archiving, and processing electrophysiological signals from the STN. The system was developed for predicting self-paced hand-movements in real-time via the online processing of the electrophysiological activity of the STN. It is hoped that such a computerised system might have clinical and experimental applications. For example, those sites within the STN most relevant to the processing of voluntary movement could be identified through the predictive value of their activities with respect to the timing of future movement. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Design considerations for a real-time ocular counterroll instrument
NASA Technical Reports Server (NTRS)
Hatamian, M.; Anderson, D. J.
1983-01-01
A real-time algorithm for measuring three-dimensional movement of the human eye, especially torsional movement, is presented. As its input, the system uses images of the eyeball taken at video rate. The amount of horizontal and vertical movement is extracted using a pupil tracking technique. The torsional movement is then measured by computing the discrete cross-correlation function between circular samples of successive images of the iris patterns and searching for the position of the peak of the function. A local least-squares interpolation around the peak of the cross-correlation function is used to produce nearly unbiased estimates of torsion angle with an accuracy of about 3-4 arcmin. Accuracies of better than 0.03 deg are achievable in torsional measurement with SNR higher than 36 dB. Horizontal and vertical rotations of up to ±13 deg can occur simultaneously with torsion without introducing any appreciable error in the counterrolling measurement process.
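The core torsion estimate described above (circular cross-correlation of iris samples, peak search, local interpolation) can be sketched in a few lines of NumPy. This is a minimal illustration: the FFT-based correlation and the parabolic vertex fit are standard choices consistent with, but not taken from, the paper.

```python
import numpy as np

def torsion_shift(ref, cur):
    """Estimate the rotation (in samples) between two circular iris profiles.

    ref, cur: intensities sampled at N equally spaced angles on a circle
    concentric with the pupil. Circular cross-correlation is computed via
    the FFT; the integer peak is refined by a parabolic fit, mirroring the
    paper's local least-squares interpolation around the peak.
    """
    n = len(ref)
    ref = ref - ref.mean()
    cur = cur - cur.mean()
    # c[m] = sum_j cur[j+m] * ref[j]  (circular cross-correlation)
    xcorr = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref))).real
    k = int(np.argmax(xcorr))
    c_m, c_0, c_p = xcorr[k - 1], xcorr[k], xcorr[(k + 1) % n]
    delta = 0.5 * (c_m - c_p) / (c_m - 2.0 * c_0 + c_p)   # parabolic vertex
    shift = (k + delta) % n
    return shift if shift <= n / 2 else shift - n          # signed shift
```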
Quantitative assessment of human motion using video motion analysis
NASA Technical Reports Server (NTRS)
Probe, John D.
1993-01-01
In the study of the dynamics and kinematics of the human body a wide variety of technologies has been developed. Photogrammetric techniques are well documented and are known to provide reliable positional data from recorded images. Often these techniques are used in conjunction with cinematography and videography for analysis of planar motion, and to a lesser degree three-dimensional motion. Cinematography has been the most widely used medium for movement analysis. Excessive operating costs and the lag time required for film development, coupled with recent advances in video technology, have allowed video based motion analysis systems to emerge as a cost effective method of collecting and analyzing human movement. The Anthropometric and Biomechanics Lab at Johnson Space Center utilizes the video based Ariel Performance Analysis System (APAS) to develop data on shirtsleeved and space-suited human performance in order to plan efficient on-orbit intravehicular and extravehicular activities. APAS is a fully integrated system of hardware and software for biomechanics and the analysis of human performance and generalized motion measurement. Major components of the complete system include the video system, the AT compatible computer, and the proprietary software.
Semireal Time Monitoring Of The Functional Movements Of The Mandible
NASA Astrophysics Data System (ADS)
Isaacson, Robert J.; Baumrind, Sheldon; Curry, Sean; Molthen, Robert A.
1983-07-01
Many branches of dental practice would benefit from the availability of a relatively accurate, precise, and efficient method for monitoring the movements of the human mandible during function. Mechanical analog systems have been utilized in the past but these are difficult to quantify, have limited accuracy due to frictional resistance of the components, and contain information only on the borders of the envelopes of possible movement of the landmarks measured (rather than on the functional paths of the landmarks which lie within their envelopes). Those electronic solutions which have been attempted thus far have been prohibitively expensive and time consuming for clinical use, have had lag times between data acquisition and display, or have involved such restrictions of freedom of motion as to render ambiguous the meaning of the data obtained. We report work aimed at developing a relatively non-restrictive semi-real time acoustical system for monitoring the functional movement of the mandible relative to the rest of the head. A set of three sparking devices is mounted to the mandibular component of a light, relatively non-constraining extra-oral harness and another set of three sparkers is attached to the harness' cranial or skull component. The sparkers are fired sequentially by a multiplexer and the sound associated with each firing is recorded by an array of three or more microphones. Computations based on the known speed of sound are used to evaluate the distances between the sparkers and the microphones. These data can then be transformed by computer to provide numeric or graphic information on the movement of selected mandibular landmarks with respect to the skull. Total elapsed time between the firing of the sparkers and the display of graphic information need not exceed 30-60 seconds using even a relatively modest modern computer.
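The distance computation described above (speed of sound times arrival delay, then position from several microphones) reduces to a trilateration problem. The sketch below restricts it to two dimensions with three microphones for clarity; differencing the circle equations yields a linear system in the source coordinates. Geometry and values are illustrative, not from the paper.

```python
import numpy as np

def locate_sparker(mics, distances):
    """Locate a spark source in the microphone plane.

    mics: (3, 2) array of microphone x,y positions.
    distances: length-3 array of sparker-to-mic distances, i.e. the
    speed of sound times each measured time of flight.
    """
    (x1, y1), (x2, y2), (x3, y3) = mics
    d1, d2, d3 = distances
    # Subtracting mic 1's circle equation from the others linearizes the system.
    a = 2.0 * np.array([[x2 - x1, y2 - y1],
                        [x3 - x1, y3 - y1]])
    b = np.array([x2**2 - x1**2 + y2**2 - y1**2 + d1**2 - d2**2,
                  x3**2 - x1**2 + y3**2 - y1**2 + d1**2 - d3**2])
    return np.linalg.solve(a, b)
```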
Tongrod, Nattapong; Lokavee, Shongpun; Watthanawisuth, Natthapol; Tuantranont, Adisorn; Kerdcharoen, Teerakiat
2013-03-01
Current trends in Human-Computer Interface (HCI) have brought on a wave of new consumer devices that can track the motion of our hands. These devices have enabled more natural interfaces with computer applications. Data gloves are commonly used as input devices, equipped with sensors that detect hand movements and a communication unit that interfaces those movements with a computer. Unfortunately, the high cost of sensor technology places such gloves beyond the reach of most general users. In this research, we propose a low-cost data glove concept based on printed polymeric sensors, with pressure and bending sensors fabricated by a consumer ink-jet printer. These sensors were realized using a conductive polymer (poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) [PEDOT:PSS]) thin film printed on glossy photo paper. Their performance can be enhanced by the addition of dimethyl sulfoxide (DMSO) to the aqueous dispersion of PEDOT:PSS. The concept of surface resistance was successfully adopted for the design and fabrication of the sensors. To demonstrate the printed sensors, we constructed a data glove using them and developed software for real-time hand tracking. Wireless networks based on low-cost Zigbee technology were used to transfer data from the glove to a computer. To our knowledge, this is the first report of a low-cost data glove based on paper pressure sensors. This low-cost implementation of both sensors and communication network should pave the way toward widespread use of data gloves for real-time hand tracking applications.
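A resistive sensor like the printed PEDOT:PSS elements above is typically read through a voltage divider and an ADC. The sketch below recovers sensor resistance from a 10-bit ADC count; the divider topology, reference voltage, and fixed-resistor value are hypothetical, since the abstract does not publish the circuit.

```python
def sensor_resistance(adc_value, r_fixed=10_000.0, v_ref=3.3, adc_max=1023):
    """Recover a resistive sensor's value from an ADC reading.

    Assumes the sensor sits on the high side of a divider, with a fixed
    resistor r_fixed to ground, read by a 10-bit ADC (all assumed values).
    """
    v_out = v_ref * adc_value / adc_max
    if v_out <= 0 or v_out >= v_ref:
        raise ValueError("ADC reading at rail; cannot infer resistance")
    # Divider: v_out = v_ref * r_fixed / (r_sensor + r_fixed); solve for r_sensor.
    return r_fixed * (v_ref - v_out) / v_out
```

Bending or pressing the printed film changes its surface resistance, which this mapping turns into a number the hand-tracking software can calibrate against joint angle.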
Gesture controlled human-computer interface for the disabled.
Szczepaniak, Oskar M; Sawicki, Dariusz J
2017-02-28
The possibility of using a computer by a disabled person is one of the difficult problems of human-computer interaction (HCI), while professional activity (employment) is one of the most important factors affecting quality of life, especially for disabled people. The aim of the project was to propose a new HCI system that would allow people who have lost the ability to operate a standard computer to resume employment. The basic requirement was to replace all functions of a standard mouse without the need for precise hand movements or the use of fingers. Microsoft's Kinect motion controller was selected as the device to recognize hand movements. Several tests were made in order to create an optimal working environment with the new device. A new communication system consisting of the Kinect device and appropriate software was built. The proposed system was tested by means of standard subjective evaluations and objective metrics according to standard ISO 9241-411:2012. The overall rating of the new HCI system shows acceptance of the solution. The objective tests show that although the new system is a bit slower, it may effectively replace the computer mouse. The new HCI system fulfilled its task for a specific disabled person, resulting in the ability to return to work. Additionally, the project confirmed the possibility of effective but nonstandard use of the Kinect device. Med Pr 2017;68(1):1-21. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
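The objective metric at the heart of ISO 9241-411 pointing-device evaluations is Fitts-law throughput. A common way to compute it, sketched below, uses the effective index of difficulty with effective target width taken as 4.133 times the standard deviation of selection endpoints; the numbers in the test are illustrative, not data from the study.

```python
import math
import statistics

def effective_throughput(amplitude, endpoints, movement_times):
    """Fitts-law throughput (bits/s) for one pointing condition.

    amplitude: nominal movement distance; endpoints: recorded selection
    coordinates along the movement axis; movement_times: per-trial
    times in seconds. Effective width = 4.133 * SD of endpoints (the
    standard adjustment corresponding to a 96% hit rate).
    """
    w_e = 4.133 * statistics.stdev(endpoints)
    id_e = math.log2(amplitude / w_e + 1)        # effective index of difficulty
    return id_e / statistics.mean(movement_times)
```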
Estimating net joint torques from kinesiological data using optimal linear system theory.
Runge, C F; Zajac, F E; Allum, J H; Risher, D W; Bryson, A E; Honegger, F
1995-12-01
Net joint torques (NJT) are frequently computed to provide insights into the motor control of dynamic biomechanical systems. An inverse dynamics approach is almost always used, whereby the NJT are computed from 1) kinematic measurements (e.g., position of the segments), 2) kinetic measurements (e.g., ground reaction forces) that are, in effect, constraints defining unmeasured kinematic quantities based on a dynamic segmental model, and 3) numerical differentiation of the measured kinematics to estimate velocities and accelerations that are, in effect, additional constraints. Due to errors in the measurements, the segmental model, and the differentiation process, estimated NJT rarely produce the observed movement in a forward simulation when the dynamics of the segmental system are inherently unstable (e.g., human walking). Forward dynamic simulations are, however, essential to studies of muscle coordination. We have developed an alternative approach, using the linear quadratic follower (LQF) algorithm, which computes the NJT such that a stable simulation of the observed movement is produced and the measurements are replicated as well as possible. The LQF algorithm does not employ constraints depending on explicit differentiation of the kinematic data, but rather employs those depending on specification of a cost function, based on quantitative assumptions about data confidence. We illustrate the usefulness of the LQF approach by using it to estimate NJT exerted by standing humans perturbed by support-surface movements. We show that unless the number of kinematic and force variables recorded is sufficiently high, the confidence that can be placed in the estimates of the NJT, obtained by any method (e.g., LQF, or the inverse dynamics approach), may be unsatisfactorily low.
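The inverse dynamics baseline that the LQF approach improves on can be made concrete for the simplest possible case: a single rigid link swinging about one joint. Kinematics are differentiated numerically (the error-sensitive step the abstract criticizes) and substituted into the link's equation of motion. Parameter values are illustrative, not taken from the study.

```python
import numpy as np

def net_joint_torque(theta, dt, inertia=0.5, mass=3.0, com=0.2, g=9.81):
    """Single-link inverse dynamics: NJT from a sampled joint angle.

    theta: joint angle samples (rad) at spacing dt; inertia about the
    joint (kg m^2), segment mass (kg), and center-of-mass distance (m)
    are illustrative parameters. tau = I*alpha + m*g*lc*sin(theta).
    """
    omega = np.gradient(theta, dt)   # angular velocity (numerical differentiation)
    alpha = np.gradient(omega, dt)   # angular acceleration
    return inertia * alpha + mass * g * com * np.sin(theta)
```

With noisy measured kinematics, the double differentiation amplifies error, which is exactly why feeding such torques back into a forward simulation of an unstable system (e.g., walking) tends to diverge.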
Ishihara, Koji; Morimoto, Jun
2018-03-01
Humans use multiple muscles to generate joint movements such as elbow motion. With multiple lightweight and compliant actuators, joint movements can also be generated efficiently. Similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement. For such a movement, the desired joint torque must be properly distributed to each actuator. One approach to this torque distribution problem is optimal control. However, solving the optimal control problem at each control time step has not been deemed practical due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator while coping with the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
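The torque distribution problem has a closed form in its simplest, static version: split a desired torque between two actuators to minimize a weighted quadratic effort cost. This is a one-step simplification for intuition only; the paper's formulation includes actuator dynamics and a singular perturbation decomposition, and the weights below are illustrative.

```python
def distribute_torque(tau, w_pneumatic=1.0, w_electric=4.0):
    """Split a desired joint torque between two actuators.

    Minimizes w_p*u_p**2 + w_e*u_e**2 subject to u_p + u_e = tau.
    Setting the derivative to zero gives u_p = tau*w_e/(w_p+w_e) and
    u_e = tau*w_p/(w_p+w_e): penalizing the electric motor more pushes
    load onto the pneumatic actuator.
    """
    total = w_pneumatic + w_electric
    u_p = tau * w_electric / total
    u_e = tau * w_pneumatic / total
    return u_p, u_e
```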
Watson and Siri: The Rise of the BI Smart Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Troy Hiltbrand
Over the past few years, the industry has seen significant evolution in the area of human-computer interaction. The era of the smart machine is upon us, with automation taking on a more advanced role than ever before and permeating areas that have traditionally been fulfilled only by human beings. This movement has the potential to fundamentally alter the way that business intelligence is executed across the industry and the role that business intelligence plays in all aspects of decision making.
Eye Tracking Based Control System for Natural Human-Computer Interaction
Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan
2017-01-01
Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching for an article and browsing multimedia web pages) were performed to compare the proposed eye control tool with an existing system. Technology Acceptance Model (TAM) measures were used to evaluate the perceived effectiveness of the system. It is demonstrated that the proposed system is very effective with regard to usability and interface design. PMID:29403528
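A standard building block for such eye-only mouse replacement is dwell-based selection: a "click" fires when the gaze stays within a small region for long enough. The sketch below is a generic illustration of that rule; the radius and dwell duration are guesses, since the abstract does not publish the system's trigger parameters.

```python
def dwell_click(gaze_points, radius=30.0, dwell_samples=30):
    """Return True when the last `dwell_samples` gaze points all lie
    within `radius` pixels of their centroid, i.e., a dwell 'click'.

    gaze_points: list of (x, y) screen coordinates, most recent last.
    At a 30 Hz tracker, dwell_samples=30 corresponds to a 1 s dwell.
    """
    if len(gaze_points) < dwell_samples:
        return False
    window = gaze_points[-dwell_samples:]
    cx = sum(p[0] for p in window) / dwell_samples
    cy = sum(p[1] for p in window) / dwell_samples
    return all((p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2
               for p in window)
```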
Development and evaluation of a musculoskeletal model of the elbow joint complex
NASA Technical Reports Server (NTRS)
Gonzalez, Roger V.; Hutchins, E. L.; Barr, Ronald E.; Abraham, Lawrence D.
1993-01-01
This paper describes the development and evaluation of a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. The length, velocity, and moment arm for each of the eight musculotendon actuators were based on skeletal anatomy and position. Musculotendon parameters were determined for each actuator and verified by comparing analytical torque-angle curves with experimental joint torque data. The parameters and skeletal geometry were also utilized in the musculoskeletal model for the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by parameterized optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing ballistic elbow joint complex movements.
Computer animations stimulate contagious yawning in chimpanzees
Campbell, Matthew W.; Carter, J. Devyn; Proctor, Darby; Eisenberg, Michelle L.; de Waal, Frans B. M.
2009-01-01
People empathize with fictional displays of behaviour, including those of cartoons and computer animations, even though the stimuli are obviously artificial. However, the extent to which other animals also may respond empathetically to animations has yet to be determined. Animations provide a potentially useful tool for exploring non-human behaviour, cognition and empathy because computer-generated stimuli offer complete control over variables and the ability to program stimuli that could not be captured on video. Establishing computer animations as a viable tool requires that non-human subjects identify with and respond to animations in a way similar to the way they do to images of actual conspecifics. Contagious yawning has been linked to empathy and poses a good test of involuntary identification and motor mimicry. We presented 24 chimpanzees with three-dimensional computer-animated chimpanzees yawning or displaying control mouth movements. The apes yawned significantly more in response to the yawn animations than to the controls, implying identification with the animations. These results support the phenomenon of contagious yawning in chimpanzees and suggest an empathic response to animations. Understanding how chimpanzees connect with animations, to both empathize and imitate, may help us to understand how humans do the same. PMID:19740888
Human movement tracking based on Kalman filter
NASA Astrophysics Data System (ADS)
Zhang, Yi; Luo, Yuan
2006-11-01
During rehabilitation of post-stroke patients, movements need to be localized and learned so that incorrect movements can be modified or tuned immediately; tracking these movements is therefore vital to the rehabilitative course. Within human movement tracking, predicting the position of the moving body is particularly important. In this paper, we first analyze the configuration of a human movement tracking system and the choice of sensors. The Kalman filter algorithm and a modified version of it are then proposed for predicting the position of human movement. Finally, an analysis of the method's performance shows that it can be applied to human movement tracking systems.
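The prediction step the abstract relies on can be sketched as a minimal one-dimensional constant-velocity Kalman filter. This is a generic textbook formulation, not the paper's modified algorithm; the noise parameters are illustrative.

```python
import numpy as np

def track_position(measurements, dt=0.05, q=1e-3, r=0.01):
    """Constant-velocity Kalman filter over 1-D position measurements.

    State is [position, velocity]; q scales the process noise and r is
    the measurement variance (assumed values). Returns the filtered
    position estimates.
    """
    f = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    h = np.array([[1.0, 0.0]])                # we measure position only
    q_mat = q * np.array([[dt**3 / 3, dt**2 / 2],
                          [dt**2 / 2, dt]])   # discretized process noise
    x = np.zeros(2)
    p = np.eye(2)
    estimates = []
    for z in measurements:
        x = f @ x                             # predict
        p = f @ p @ f.T + q_mat
        s = h @ p @ h.T + r                   # innovation covariance
        k = (p @ h.T) / s                     # Kalman gain, shape (2, 1)
        x = x + (k * (z - h @ x)).ravel()     # update with measurement
        p = (np.eye(2) - k @ h) @ p
        estimates.append(x[0])
    return np.array(estimates)
```

The predicted state `f @ x` is what a rehabilitation system would use to anticipate the patient's position one sample ahead of the sensors.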
Feedback and feedforward adaptation to visuomotor delay during reaching and slicing movements.
Botzer, Lior; Karniel, Amir
2013-07-01
It has been suggested that the brain and in particular the cerebellum and motor cortex adapt to represent the environment during reaching movements under various visuomotor perturbations. It is well known that significant delay is present in neural conductance and processing; however, the possible representation of delay and adaptation to delayed visual feedback has been largely overlooked. Here we investigated the control of reaching movements in human subjects during an imposed visuomotor delay in a virtual reality environment. In the first experiment, when visual feedback was unexpectedly delayed, the hand movement overshot the end-point target, indicating a vision-based feedback control. Over the ensuing trials, movements gradually adapted and became accurate. When the delay was removed unexpectedly, movements systematically undershot the target, demonstrating that adaptation occurred within the vision-based feedback control mechanism. In a second experiment designed to broaden our understanding of the underlying mechanisms, we revealed similar after-effects for rhythmic reversal (out-and-back) movements. We present a computational model accounting for these results based on two adapted forward models, each tuned for a specific modality delay (proprioception or vision), and a third feedforward controller. The computational model, along with the experimental results, refutes delay representation in a pure forward vision-based predictor and suggests that adaptation occurred in the forward vision-based predictor, and concurrently in the state-based feedforward controller. Understanding how the brain compensates for conductance and processing delays is essential for understanding certain impairments concerning these neural delays as well as for the development of brain-machine interfaces. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
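The overshoot reported in the first experiment follows directly from vision-based feedback control with an uncompensated delay, which a toy simulation can illustrate: a hand that moves proportionally to the visually sensed error keeps pushing past the target until the overshoot becomes visible. All parameters below are illustrative; this is not the paper's computational model.

```python
def simulate_reach(delay_steps, gain=0.2, target=1.0, steps=200):
    """Discrete point-to-point reach with delayed visual position feedback.

    The controller moves the hand toward the target in proportion to the
    error in the *seen* (delayed) position. With delay_steps=0 the reach
    approaches the target monotonically; with a delay it overshoots,
    mimicking the unadapted behavior in the first experiment.
    """
    x = [0.0]
    for t in range(steps):
        seen = x[t - delay_steps] if t >= delay_steps else 0.0
        x.append(x[t] + gain * (target - seen))
    return x
```

Adaptation in the paper's account amounts to the forward model learning to predict the delayed consequence of the command, which removes exactly this overshoot.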
Brain–computer interfaces: communication and restoration of movement in paralysis
Birbaumer, Niels; Cohen, Leonardo G
2007-01-01
The review describes the status of brain–computer or brain–machine interface research. We focus on non-invasive brain–computer interfaces (BCIs) and their clinical utility for direct brain communication in paralysis and motor restoration in stroke. A large gap between the promises of invasive animal and human BCI preparations and the clinical reality characterizes the literature: while intact monkeys learn to execute more or less complex upper limb movements with spike patterns from motor brain regions alone without concomitant peripheral motor activity usually after extensive training, clinical applications in human diseases such as amyotrophic lateral sclerosis and paralysis from stroke or spinal cord lesions show only limited success, with the exception of verbal communication in paralysed and locked-in patients. BCIs based on electroencephalographic potentials or oscillations are ready to undergo large clinical studies and commercial production as an adjunct or a major assisted communication device for paralysed and locked-in patients. However, attempts to train completely locked-in patients with BCI communication after entering the complete locked-in state with no remaining eye movement failed. We propose that a lack of contingencies between goal directed thoughts and intentions may be at the heart of this problem. Experiments with chronically curarized rats support our hypothesis; operant conditioning and voluntary control of autonomic physiological functions turned out to be impossible in this preparation. In addition to assisted communication, BCIs consisting of operant learning of EEG slow cortical potentials and sensorimotor rhythm were demonstrated to be successful in drug resistant focal epilepsy and attention deficit disorder. 
First studies of non-invasive BCIs using sensorimotor rhythm of the EEG and MEG in restoration of paralysed hand movements in chronic stroke and single cases of high spinal cord lesions show some promise, but need extensive evaluation in well-controlled experiments. Invasive BMIs based on neuronal spike patterns, local field potentials or electrocorticogram may constitute the strategy of choice in severe cases of stroke and spinal cord paralysis. Future directions of BCI research should include the regulation of brain metabolism and blood flow and electrical and magnetic stimulation of the human brain (invasive and non-invasive). A series of studies using BOLD response regulation with functional magnetic resonance imaging (fMRI) and near infrared spectroscopy demonstrated a tight correlation between voluntary changes in brain metabolism and behaviour. PMID:17234696
Learning gestures for customizable human-computer interaction in the operating room.
Schwarz, Loren Arthur; Bigdelou, Ali; Navab, Nassir
2011-01-01
Interaction with computer-based medical devices in the operating room is often challenging for surgeons due to sterility requirements and the complexity of interventional procedures. Typical solutions, such as delegating the interaction task to an assistant, can be inefficient. We propose a method for gesture-based interaction in the operating room that surgeons can customize to personal requirements and interventional workflow. Given training examples for each desired gesture, our system learns low-dimensional manifold models that enable recognizing gestures and tracking particular poses for fine-grained control. By capturing the surgeon's movements with a few wireless body-worn inertial sensors, we avoid issues of camera-based systems, such as sensitivity to illumination and occlusions. Using a component-based framework implementation, our method can easily be connected to different medical devices. Our experiments show that the approach is able to robustly recognize learned gestures and to distinguish these from other movements.
Airborne sensors for detecting large marine debris at sea.
Veenstra, Timothy S; Churnside, James H
2012-01-01
The human eye is an excellent, general-purpose airborne sensor for detecting marine debris larger than 10 cm on or near the surface of the water. Coupled with the human brain, it can adjust for light conditions and sea-surface roughness, track persistence, differentiate color and texture, detect change in movement, and combine all of the available information to detect and identify marine debris. Matching this performance with computers and sensors is difficult at best. However, there are distinct advantages over the human eye and brain that sensors and computers can offer such as the ability to use finer spectral resolution, to work outside the spectral range of human vision, to control the illumination, to process the information in ways unavailable to the human vision system, to provide a more objective and reproducible result, to operate from unmanned aircraft, and to provide a permanent record that can be used for later analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
Adaptive control for eye-gaze input system
NASA Astrophysics Data System (ADS)
Zhao, Qijie; Tu, Dawei; Yin, Hairong
2004-01-01
The characteristics of vision-based human-computer interaction systems are analyzed, and current practical applications and their limiting factors are discussed. Information-processing methods are put forward. To make the communication flexible and spontaneous, an algorithm for adaptive compensation of the user's head movement was designed, and event-based methods and an object-oriented programming language were used to develop the system software. Experimental testing showed that, under the given conditions, these methods and algorithms meet the needs of the HCI.
Comparison of Urban Human Movements Inferring from Multi-Source Spatial-Temporal Data
NASA Astrophysics Data System (ADS)
Cao, Rui; Tu, Wei; Cao, Jinzhou; Li, Qingquan
2016-06-01
Quantifying human movements is difficult because traditional data are sparse and the data-collection process is labour intensive. Recently, abundant spatial-temporal data have provided an opportunity to observe human movement. This research investigates the relationship between city-wide human movements inferred from two types of spatial-temporal data at the traffic analysis zone (TAZ) level. The first type of human movement is inferred from long-term smart card transaction data recording boarding actions. The second type is extracted from city-wide, time-sequenced mobile phone data with a 30-minute interval. Travel volume, travel distance and travel time are used to measure aggregated human movements in the city. To further examine the relationship between the two types of inferred movements, a linear correlation analysis is conducted on the hourly travel volume. The obtained results show that human movements inferred from smart card data and mobile phone data have a correlation of 0.635. However, there are still non-negligible differences in some specific areas. This research not only reveals the city-wide spatial-temporal human dynamics but also benefits the understanding of the reliability of inferring human movements from big spatial-temporal data.
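The hourly-volume correlation step above can be illustrated with a short sketch; the `pearson_r` helper and the two volume series are hypothetical stand-ins, not the study's data or code:

```python
# Sketch: correlating hourly travel volumes inferred from two data sources.
# The volume arrays below are illustrative, not the study's actual data.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hourly travel volumes for one TAZ, inferred from smart card and phone data.
smart_card = [120, 340, 560, 410, 300, 280, 390, 520]
mobile_phone = [100, 360, 540, 430, 310, 250, 420, 500]

r = pearson_r(smart_card, mobile_phone)
```

In the study this correlation would be computed per TAZ (or city-wide) over the full observation period rather than over a single illustrative day.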
ADP and brucellosis indemnity systems development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, W.M.; Harlan, B.L.
1976-01-01
Our initial study of the USDA/TAHC Brucellosis Indemnity Program in Texas has shown that both the efficiency and rate of claim payments can be increased by the application of present day computer technologies. Two main factors contribute to these increases: the number of discrepancies that are caused by poor penmanship, transposition of numbers, and other human errors can be monitored and minimized; and the documented information can be indexed, sorted, and searched faster, more efficiently, and without human error. The overall flow of documentation that is used to control the movement of infected or exposed animals through commerce should be studied. A new system should be designed that fully utilizes present day computer and electronic technologies.
Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition
NASA Astrophysics Data System (ADS)
Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso
2005-04-01
Human movement analysis is generally performed with marker-based systems, which allow reconstructing, with high levels of accuracy, the trajectories of markers placed on specific points of the human body. Marker-based systems, however, show some drawbacks that can be overcome by video systems applying markerless techniques. In this paper, a computer vision technique specifically designed for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre Decomposition, and a Principal Component Analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction in computational costs with no significant loss of tracking accuracy.
Detection of self-paced reaching movement intention from EEG signals.
Lew, Eileen; Chavarriaga, Ricardo; Silvoni, Stefano; Millán, José Del R
2012-01-01
Future neuroprosthetic devices, in particular upper limb, will require decoding and executing not only the user's intended movement type, but also when the user intends to execute the movement. This work investigates the potential use of brain signals recorded non-invasively for detecting the time before a self-paced reaching movement is initiated which could contribute to the design of practical upper limb neuroprosthetics. In particular, we show the detection of self-paced reaching movement intention in single trials using the readiness potential, an electroencephalography (EEG) slow cortical potential (SCP) computed in a narrow frequency range (0.1-1 Hz). Our experiments with 12 human volunteers, two of them stroke subjects, yield high detection rates prior to the movement onset and low detection rates during the non-movement intention period. With the proposed approach, movement intention was detected around 500 ms before actual onset, which clearly matches previous literature on readiness potentials. Interestingly, the result obtained with one of the stroke subjects is coherent with those achieved in healthy subjects, with single-trial performance of up to 92% for the paretic arm. These results suggest that, apart from contributing to our understanding of voluntary motor control for designing more advanced neuroprostheses, our work could also have a direct impact on advancing robot-assisted neurorehabilitation.
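Extracting a slow cortical potential in the stated 0.1-1 Hz band could look roughly like the following sketch; the sampling rate, filter order, and synthetic signal are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Sketch: isolating a slow cortical potential (SCP) in the 0.1-1 Hz band
# from a single EEG channel. Sampling rate and signal content are assumed.

fs = 256.0                          # sampling rate in Hz (assumption)
t = np.arange(0, 10, 1 / fs)

# Synthetic channel: a 0.5 Hz slow drift buried under 10 Hz alpha activity.
slow = 5.0 * np.sin(2 * np.pi * 0.5 * t)
eeg = slow + 2.0 * np.sin(2 * np.pi * 10.0 * t)

# Second-order Butterworth band-pass with normalized cutoffs at 0.1 and 1 Hz.
b, a = butter(2, [0.1 / (fs / 2), 1.0 / (fs / 2)], btype="bandpass")
scp = filtfilt(b, a, eeg)           # zero-phase filtering keeps SCP timing
```

Zero-phase filtering matters here because the detector's timing claim (around 500 ms before onset) depends on the SCP not being delayed by the filter.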
Preconditioning electromyographic data for an upper extremity model using neural networks
NASA Technical Reports Server (NTRS)
Roberson, D. J.; Fernjallah, M.; Barr, R. E.; Gonzalez, R. V.
1994-01-01
A back propagation neural network has been employed to precondition the electromyographic signal (EMG) that drives a computational model of the human upper extremity. This model is used to determine the complex relationship between EMG and muscle activation, and generates an optimal muscle activation scheme that simulates the actual activation. While the experimental and model-predicted results of the ballistic muscle movement are very similar, the activation function between the start and the finish is not. This neural network preconditions the signal in an attempt to more closely model the actual activation function over the entire course of the muscle movement.
Accuracy of planar reaching movements. I. Independence of direction and extent variability.
Gordon, J; Ghilardi, M F; Ghez, C
1994-01-01
This study examined the variability in movement end points in a task in which human subjects reached to targets in different locations on a horizontal surface. The primary purpose was to determine whether patterns in the variable errors would reveal the nature and origin of the coordinate system in which the movements were planned. Six subjects moved a hand-held cursor on a digitizing tablet. Target and cursor positions were displayed on a computer screen, and vision of the hand and arm was blocked. The screen cursor was blanked during movement to prevent visual corrections. The paths of the movements were straight and thus directions were largely specified at the onset of movement. The velocity profiles were bell-shaped, and peak velocities and accelerations were scaled to target distance, implying that movement extent was also programmed in advance of the movement. The spatial distributions of movement end points were elliptical in shape. The major axes of these ellipses were systematically oriented in the direction of hand movement with respect to its initial position. This was true for both fast and slow movements, as well as for pointing movements involving rotations of the wrist joint. Using principal components analysis to compute the axes of these ellipses, we found that the eccentricity of the elliptical dispersions was uniformly greater for small than for large movements: variability along the axis of movement, representing extent variability, increased markedly but nonlinearly with distance. Variability perpendicular to the direction of movement, which results from directional errors, was generally smaller than extent variability, but it increased in proportion to the extent of the movement. Therefore, directional variability, in angular terms, was constant and independent of distance. 
Because the patterns of variability were similar for both slow and fast movements, as well as for movements involving different joints, we conclude that they result largely from errors in the planning process. We also argue that they cannot be simply explained as consequences of the inertial properties of the limb. Rather they provide evidence for an organizing mechanism that moves the limb along a straight path. We further conclude that reaching movements are planned in a hand-centered coordinate system, with direction and extent of hand movement as the planned parameters. Since the factors which influence directional variability are independent of those that influence extent errors, we propose that these two variables can be separately specified by the brain.
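The principal-components step used to find the axes of the end-point ellipses can be sketched as follows, on synthetic end points (not the study's recordings) scattered more along the movement direction than across it:

```python
import numpy as np

# Sketch of the end-point variability analysis: PCA of 2-D movement end
# points gives the axes of the dispersion ellipse. Data are synthetic.

rng = np.random.default_rng(0)

# Simulate end points with more spread along the movement direction (x,
# extent errors) than perpendicular to it (y, directional errors).
endpoints = rng.normal(loc=[100.0, 0.0], scale=[8.0, 2.0], size=(200, 2))

centered = endpoints - endpoints.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

major_sd = np.sqrt(eigvals[-1])          # SD along major axis (extent)
minor_sd = np.sqrt(eigvals[0])           # SD along minor axis (direction)
major_axis = eigvecs[:, -1]              # unit vector of the major axis
```

The eccentricity reported in the study corresponds to the ratio of these two axis lengths, and the orientation of `major_axis` relative to the hand's start position is what revealed the hand-centered planning coordinates.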
Configural processing of biological motion in human superior temporal sulcus.
Thompson, James C; Clarke, Michele; Stewart, Tennille; Puce, Aina
2005-09-28
Observers recognize subtle changes in the movements of others with relative ease. However, tracking a walking human is computationally difficult, because the degree of articulation is high and scene changes can temporarily occlude parts of the moving figure. Here, we used functional magnetic resonance imaging to test the hypothesis that the superior temporal sulcus (STS) uses form cues to aid biological movement tracking. The same 10 healthy subjects detected human gait changes in a walking mannequin in two experiments. In experiment 1, we tested the effects of configural change and occlusion. The walking mannequin was presented intact or with the limbs and torso apart in visual space and either unoccluded or occluded by a set of vertical white bars. In experiment 2, the effects of inversion and occlusion were investigated, using an intact walking mannequin. Subjects reliably detected gait changes under all stimulus conditions. The intact walker produced significantly greater activation in the STS, inferior temporal sulcus (ITS), and inferior parietal cortex relative to the apart walker, regardless of occlusion. Interestingly, STS and ITS activation to the upright versus inverted walker was not significantly different. In contrast, superior parietal lobule and parieto-occipital cortex showed greater activation to the apart relative to intact walker. In the absence of an intact body configuration, parietal cortex activity increased to the independent movements of the limbs and torso. Our data suggest that the STS may use a body configuration-based model to process biological movement, thus forming a representation that survives partial occlusion.
Peng, Zhen; Braun, Daniel A.
2015-01-01
In a previous study we have shown that human motion trajectories can be characterized by translating continuous trajectories into symbol sequences with well-defined complexity measures. Here we test the hypothesis that the motion complexity individuals generate in their movements might be correlated to the degree of creativity assigned by a human observer to the visualized motion trajectories. We asked participants to generate 55 novel hand movement patterns in virtual reality, where each pattern had to be repeated 10 times in a row to ensure reproducibility. This allowed us to estimate a probability distribution over trajectories for each pattern. We assessed motion complexity not only by the previously proposed complexity measures on symbolic sequences, but we also propose two novel complexity measures that can be directly applied to the distributions over trajectories based on the frameworks of Gaussian Processes and Probabilistic Movement Primitives. In contrast to previous studies, these new methods allow computing complexities of individual motion patterns from very few sample trajectories. We compared the different complexity measures to how a group of independent jurors rank ordered the recorded motion trajectories according to their personal creativity judgment. We found three entropic complexity measures that correlate significantly with human creativity judgment and discuss differences between the measures. We also test whether these complexity measures correlate with individual creativity in divergent thinking tasks, but do not find any consistent correlation. Our results suggest that entropic complexity measures of hand motion may reveal domain-specific individual differences in kinesthetic creativity. PMID:26733896
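The idea of translating continuous trajectories into symbol sequences with an entropic complexity measure might be sketched as below; the four-direction binning and the sample trajectory are illustrative choices, not the authors' actual encoding:

```python
import math
from collections import Counter

# Sketch: discretize a 2-D trajectory into movement-direction symbols and
# use Shannon entropy as a complexity measure. Binning is an assumption.

def direction_symbols(points):
    """Map each step of a 2-D trajectory to one of 4 direction symbols
    (bins centred on right, up, left, down)."""
    symbols = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        bins = (angle + math.pi + math.pi / 4) / (math.pi / 2)
        symbols.append(int(bins) % 4)
    return symbols

def shannon_entropy(symbols):
    """Entropy (bits) of the empirical symbol distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A straight-line trajectory uses a single symbol, so its entropy is 0 bits.
line_entropy = shannon_entropy(direction_symbols([(i, 0) for i in range(6)]))
```

A trajectory that visits all four directions equally often would instead score the maximum of 2 bits under this encoding, which is the intuition behind ranking more varied motion patterns as more complex.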
Brain-controlled body movement assistance devices and methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leuthardt, Eric C.; Love, Lonnie J.; Coker, Rob
Methods, devices, systems, and apparatus, including computer programs encoded on a computer storage medium, for brain-controlled body movement assistance devices. In one aspect, a device includes a brain-controlled body movement assistance device with a brain-computer interface (BCI) component adapted to be mounted to a user, a body movement assistance component operably connected to the BCI component and adapted to be worn by the user, and a feedback mechanism provided in connection with at least one of the BCI component and the body movement assistance component, the feedback mechanism being configured to output information relating to a usage session of the brain-controlled body movement assistance device.
NASA Astrophysics Data System (ADS)
Laakso, Ilkka; Kännälä, Sami; Jokela, Kari
2013-04-01
Medical staff working near magnetic resonance imaging (MRI) scanners are exposed both to the static magnetic field itself and also to electric currents that are induced in the body when the body moves in the magnetic field. However, there are currently limited data available on the induced electric field for realistic movements. This study computationally investigates the movement induced electric fields for realistic movements in the magnetic field of a 3 T MRI scanner. The path of movement near the MRI scanner is based on magnetic field measurements using a coil sensor attached to a human volunteer. Utilizing realistic models for both the motion of the head and the magnetic field of the MRI scanner, the induced fields are computationally determined using the finite-element method for five high-resolution numerical anatomical models. The results show that the time-derivative of the magnetic flux density (dB/dt) is approximately linearly proportional to the induced electric field in the head, independent of the position of the head with respect to the magnet. This supports the use of dB/dt measurements for occupational exposure assessment. For the path of movement considered herein, the spatial maximum of the induced electric field is close to the basic restriction for the peripheral nervous system and exceeds the basic restriction for the central nervous system in the international guidelines. The 99th percentile electric field is a considerably less restrictive metric for the exposure than the spatial maximum electric field; the former is typically 60-70% lower than the latter. However, the 99th percentile electric field may exceed the basic restriction for dB/dt values that can be encountered during tasks commonly performed by MRI workers. 
It is also shown that the movement-induced eddy currents may reach magnitudes that could electrically stimulate the vestibular system, which could play a significant role in the generation of vertigo-like sensations reported by people moving in a strong static magnetic field.
Tandem internal models execute motor learning in the cerebellum.
Honda, Takeru; Nagao, Soichi; Hashimoto, Yuji; Ishikawa, Kinya; Yokota, Takanori; Mizusawa, Hidehiro; Ito, Masao
2018-06-25
In performing skillful movement, humans use predictions from internal models formed by repetition learning. However, the computational organization of internal models in the brain remains unknown. Here, we demonstrate that a computational architecture employing a tandem configuration of forward and inverse internal models enables efficient motor learning in the cerebellum. The model predicted learning adaptations observed in hand-reaching experiments in humans wearing a prism lens and explained the kinetic components of these behavioral adaptations. The tandem system also predicted a form of subliminal motor learning that was experimentally validated after training intentional misses of hand targets. Patients with cerebellar degeneration disease showed behavioral impairments consistent with tandemly arranged internal models. These findings validate computational tandemization of internal models in motor control and its potential uses in more complex forms of learning and cognition. Copyright © 2018 the Author(s). Published by PNAS.
Critical Postmodernism in Human Movement, Physical Education, and Sport.
ERIC Educational Resources Information Center
Fernandez-Balboa, Juan-Miguel, Ed.
This collection of texts proposes alternative ways to examine human movement, discussing the traditional role of human movement professionals as agents of social and cultural reproduction. Part 1, The Human Movement Profession in the Postmodern Era: Critical Analyses, includes the first 10 chapters: (1) "Introduction: The Human Movement…
Neurobionics and the brain-computer interface: current applications and future horizons.
Rosenfeld, Jeffrey V; Wong, Yan Tat
2017-05-01
The brain-computer interface (BCI) is an exciting advance in neuroscience and engineering. In a motor BCI, electrical recordings from the motor cortex of paralysed humans are decoded by a computer and used to drive robotic arms or to restore movement in a paralysed hand by stimulating the muscles in the forearm. Simultaneously integrating a BCI with the sensory cortex will further enhance dexterity and fine control. BCIs are also being developed to: provide ambulation for paraplegic patients through controlling robotic exoskeletons; restore vision in people with acquired blindness; detect and control epileptic seizures; and improve control of movement disorders and memory enhancement. High-fidelity connectivity with small groups of neurons requires microelectrode placement in the cerebral cortex. Electrodes placed on the cortical surface are less invasive but produce inferior fidelity. Scalp surface recording using electroencephalography is much less precise. BCI technology is still in an early phase of development and awaits further technical improvements and larger multicentre clinical trials before wider clinical application and impact on the care of people with disabilities. There are also many ethical challenges to explore as this technology evolves.
System for assisted mobility using eye movements based on electrooculography.
Barea, Rafael; Boquete, Luciano; Mazo, Manuel; López, Elena
2002-12-01
This paper describes an eye-control method based on electrooculography (EOG) to develop a system for assisted mobility. One of its most important features is its modularity, making it adaptable to the particular needs of each user according to the type and degree of handicap involved. An eye model based on electroculographic signal is proposed and its validity is studied. Several human-machine interfaces (HMI) based on EOG are commented, focusing our study on guiding and controlling a wheelchair for disabled people, where the control is actually effected by eye movements within the socket. Different techniques and guidance strategies are then shown with comments on the advantages and disadvantages of each one. The system consists of a standard electric wheelchair with an on-board computer, sensors and a graphic user interface run by the computer. On the other hand, this eye-control method can be applied to handle graphical interfaces, where the eye is used as a mouse computer. Results obtained show that this control technique could be useful in multiple applications, such as mobility and communication aid for handicapped persons.
Research on the position estimation of human movement based on camera projection
NASA Astrophysics Data System (ADS)
Yi, Zhang; Yuan, Luo; Hu, Huosheng
2005-06-01
During the rehabilitation of post-stroke patients, their movements need to be localized and learned so that incorrect movements can be instantly modified or tuned; tracking these movements is therefore vital to the rehabilitative course. In human movement tracking, the position estimation of human movement is very important. In this paper, the characteristics of the human movement system are first analyzed. Next, a camera and an inertial sensor are used to independently measure the position of human movement, and a Kalman filter algorithm is proposed to fuse the two measurements into an optimal estimate of the position. Finally, the performance of the method is analyzed.
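A minimal one-dimensional version of the camera/inertial Kalman fusion idea could look like this; the noise parameters, time step, and simulated motion are assumptions for illustration, not values from the paper:

```python
# Sketch: a 1-D Kalman filter predicts position from inertial velocity and
# corrects the prediction with camera measurements. Parameters are assumed.

def kalman_fuse(camera_pos, inertial_vel, dt=0.1, q=0.01, r=0.25):
    """Fuse camera position measurements with inertial velocity estimates."""
    x, p = camera_pos[0], r            # seed with the first camera reading
    estimates = [x]
    for z, v in zip(camera_pos[1:], inertial_vel[1:]):
        x = x + v * dt                 # predict: integrate inertial velocity
        p = p + q                      # prediction inflates uncertainty
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # correct with the camera measurement
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Illustrative check: constant-velocity motion with alternating camera noise.
true_pos = [0.1 * i for i in range(50)]
camera = [p + (0.1 if i % 2 == 0 else -0.1) for i, p in enumerate(true_pos)]
fused = kalman_fuse(camera, [1.0] * 50)
```

The fused estimate settles well inside the camera's noise band, which is the point of combining the two sensors rather than trusting either alone.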
Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J
2007-03-28
A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. 
The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury.
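The assist-as-needed update described above, error-based assistance with a forgetting factor plus an error weighting that disregards natural variability, can be sketched as a simple dead-band update rule; all parameter values here are illustrative, not the paper's identified gains:

```python
# Sketch: assistance decays through a forgetting factor and is restored in
# proportion to weighted error. Gains and dead-band width are assumptions.

def update_assistance(force, error, gain=0.5, forget=0.9, dead_band=0.02):
    """One update of the robot's assistive force from the last step's error."""
    # Error weighting: ignore errors within the natural variability band.
    weighted_error = 0.0 if abs(error) < dead_band else error
    # Forgetting factor < 1 systematically withdraws assistance when the
    # weighted error is small; the gain term restores it for large errors.
    return forget * force + gain * weighted_error

# With errors inside the variability band, assistance fades toward zero.
force = 1.0
for _ in range(50):
    force = update_assistance(force, error=0.01)
```

The dead-band is what lets assistance fade to zero despite ongoing movement variability, and the forgetting rate must outpace the human's learning rate, as the abstract notes.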
Equilibrium-point control hypothesis examined by measured arm stiffness during multijoint movement.
Gomi, H; Kawato
1996-04-05
For the last 20 years, it has been hypothesized that well-coordinated, multijoint movements are executed without complex computation by the brain, with the use of springlike muscle properties and peripheral neural feedback loops. However, it has been technically and conceptually difficult to examine this "equilibrium-point control" hypothesis directly in physiological or behavioral experiments. A high-performance manipulandum was developed and used here to measure human arm stiffness, the magnitude of which during multijoint movement is important for this hypothesis. Here, the equilibrium-point trajectory was estimated from the measured stiffness, the actual trajectory, and the generated torque. Its velocity profile differed from that of the actual trajectory. These results argue against the hypothesis that the brain sends as a motor command only an equilibrium-point trajectory similar to the actual trajectory.
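For a single joint, the equilibrium-point estimate described above follows from the spring-like relation tau = k * (theta_eq - theta), so the equilibrium angle can be recovered from measured position, torque, and stiffness; the numbers below are illustrative, not measured values:

```python
# Single-joint sketch of recovering an equilibrium-point trajectory from
# measured stiffness, actual trajectory, and torque. Values are illustrative.

def equilibrium_point(theta, torque, stiffness):
    """Equilibrium position implied by a spring-like muscle model."""
    return theta + torque / stiffness

# At an actual position of 0.3 rad, with 2.4 Nm of torque against a joint
# stiffness of 12 Nm/rad, the equilibrium point leads the arm by 0.2 rad.
theta_eq = equilibrium_point(theta=0.3, torque=2.4, stiffness=12.0)
```

Applying this point-by-point along a movement yields the equilibrium-point trajectory whose velocity profile the authors compared against the actual one.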
Ottosen, S R; Nicholls, J I; Steiner, J C
1999-06-01
This study was designed to compare the changes in canal configuration resulting from instrumentation by either Profile or Naviflex instruments. Forty mesial canals in extracted human molar teeth were embedded and sectioned at two root levels. Reassembled teeth were instrumented with a modified crown-down technique as described in the Profile training video for Profile files and in a similar manner for Naviflex instruments. Superimposed pre- and postinstrumented cross-sectional root images were projected, traced, and scanned into a computer for analysis. Canal movement, in relation to the furca, and canal area change were recorded. The results showed no significant difference in canal center movement or canal area change between the Profile or Naviflex groups. The degree of canal curvature had no effect on canal center movement or canal area change.
Chang, Won-Du; Cha, Ho-Seung; Im, Chang-Hwan
2016-01-01
This paper introduces a method to remove the unwanted interdependency between vertical and horizontal eye-movement components in electrooculograms (EOGs). EOGs have been widely used to estimate eye movements without a camera in a variety of human-computer interaction (HCI) applications using pairs of electrodes generally attached either above and below the eye (vertical EOG) or to the left and right of the eyes (horizontal EOG). It has been well documented that the vertical EOG component has less stability than the horizontal EOG one, making accurate estimation of the vertical location of the eyes difficult. To address this issue, an experiment was designed in which ten subjects participated. Visual inspection of the recorded EOG signals showed that the vertical EOG component is highly influenced by horizontal eye movements, whereas the horizontal EOG is rarely affected by vertical eye movements. Moreover, the results showed that this interdependency could be effectively removed by introducing an individual constant value. It is therefore expected that the proposed method can enhance the overall performance of practical EOG-based eye-tracking systems. PMID:26907271
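The proposed correction, subtracting a horizontally induced component scaled by an individually calibrated constant from the vertical channel, can be sketched as follows; the constant and the signals are illustrative, not subject data:

```python
# Sketch: the vertical EOG channel is contaminated by horizontal eye
# movements; an individually calibrated constant removes the crosstalk.

def remove_crosstalk(vertical, horizontal, c):
    """Subtract the horizontally induced component from the vertical EOG."""
    return [v - c * h for v, h in zip(vertical, horizontal)]

# Calibration (hypothetical): during pure horizontal saccades the vertical
# channel shows a proportional artifact, giving this subject c = 0.3.
horizontal = [0.0, 1.0, 2.0, 1.0, 0.0]
vertical_raw = [0.5, 0.8, 1.1, 0.8, 0.5]   # true vertical is constant 0.5
vertical_clean = remove_crosstalk(vertical_raw, horizontal, c=0.3)
```

Because the paper reports the crosstalk as approximately constant per individual, a one-off calibration per user would suffice before gaze estimation.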
Kim, Sung-Phil; Simeral, John D; Hochberg, Leigh R; Donoghue, John P; Black, Michael J
2010-01-01
Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. PMID:19015583
Representation of virtual arm movements in precuneus.
Dohle, Christian; Stephan, Klaus Martin; Valvoda, Jakob T; Hosseiny, Omid; Tellmann, Lutz; Kuhlen, Torsten; Seitz, Rüdiger J; Freund, Hans-Joachim
2011-02-01
Arm movements can easily be adapted to different biomechanical constraints. However, the cortical representation of the processing of visual input and its transformation into motor commands remains poorly understood. In a visuo-motor dissociation paradigm, subjects were presented with a 3-D computer-graphical representation of a human arm, presenting movements of the subjects' right arm either as right or left arm. In order to isolate possible effects of coordinate transformations, coordinate mirroring at the body midline was implemented independently. In each of the resulting four conditions, 10 normal, right-handed subjects performed three runs of circular movements, while being scanned with O(15)-Butanol-PET. Kinematic analysis included orientation and accuracy of a fitted ellipsoid trajectory. Imaging analysis was performed with SPM 99 with an activation threshold at P < 0.0001 (uncorrected). The shape of the trajectory was dependent on the laterality of the arm, irrespective of movement mirroring, and accompanied by a robust activation difference in the contralateral precuneus. Movement mirroring decreased movement accuracy, which was related to increased activation in the left insula. Those two movement conditions that cannot be observed in reality were related to an activation focus at the left middle temporal gyrus, but showed no influence on movement kinematics. These findings demonstrate the prominent role of the precuneus for mediating visuo-motor transformations and have implications for the use of mirror therapy and virtual reality techniques, especially avatars, such as Nintendo Wii in neurorehabilitation.
Restoring cortical control of functional movement in a human with quadriplegia.
Bouton, Chad E; Shaikhouni, Ammar; Annetta, Nicholas V; Bockbrader, Marcia A; Friedenberg, David A; Nielson, Dylan M; Sharma, Gaurav; Sederberg, Per B; Glenn, Bradley C; Mysiw, W Jerry; Morgan, Austin G; Deogaonkar, Milind; Rezai, Ali R
2016-05-12
Millions of people worldwide suffer from diseases that lead to paralysis through disruption of signal pathways between the brain and the muscles. Neuroprosthetic devices are designed to restore lost function and could be used to form an electronic 'neural bypass' to circumvent disconnected pathways in the nervous system. It has previously been shown that intracortically recorded signals can be decoded to extract information related to motion, allowing non-human primates and paralysed humans to control computers and robotic arms through imagined movements. In non-human primates, these types of signal have also been used to drive activation of chemically paralysed arm muscles. Here we show that intracortically recorded signals can be linked in real-time to muscle activation to restore movement in a paralysed human. We used a chronically implanted intracortical microelectrode array to record multiunit activity from the motor cortex in a study participant with quadriplegia from cervical spinal cord injury. We applied machine-learning algorithms to decode the neuronal activity and control activation of the participant's forearm muscles through a custom-built high-resolution neuromuscular electrical stimulation system. The system provided isolated finger movements and the participant achieved continuous cortical control of six different wrist and hand motions. Furthermore, he was able to use the system to complete functional tasks relevant to daily living. Clinical assessment showed that, when using the system, his motor impairment improved from the fifth to the sixth cervical (C5-C6) to the seventh cervical to first thoracic (C7-T1) level unilaterally, conferring on him the critical abilities to grasp, manipulate, and release objects. This is the first demonstration to our knowledge of successful control of muscle activation using intracortically recorded signals in a paralysed human. 
These results have significant implications in advancing neuroprosthetic technology for people worldwide living with the effects of paralysis.
Weightman, Andrew; Preston, Nick; Levesley, Martin; Bhakta, Bipin; Holt, Raymond; Mon-Williams, Mark
2014-05-01
To compare upper limb kinematics of children with spastic cerebral palsy (CP) using a passive rehabilitation joystick with those of adults and able-bodied children, to better understand the design requirements of computer-based rehabilitation devices. A blocked comparative study involving seven children with spastic CP, nine able-bodied adults and nine able-bodied children, using a joystick system to play a computer game whilst the kinematics of their upper limb were recorded. The translational kinematics of the joystick's end point and the participant's shoulder movement (protraction/retraction) and elbow rotational kinematics (flexion/extension) were analysed for each group. Children with spastic CP matched their able-bodied peers in the time taken to complete the computer task, but this was due to a failure to adhere to the task instructions of travelling along a prescribed straight line when moving between targets. The spastic CP group took longer to initiate the first movement, showed jerkier trajectories, and demonstrated qualitatively different movement patterns when using the joystick, with shoulder movements of significantly greater magnitude than those of the able-bodied participants. Children with spastic CP generate large shoulder and hence trunk movements when using a joystick to undertake computer-generated arm exercises. This finding has implications for the development and use of assistive technologies to encourage exercise and the instructions given to users of such systems. A kinematic analysis of upper limb function of children with CP when using joystick devices is presented. Children with CP may use upper body movements to compensate for limitations in voluntary shoulder and elbow movements when undertaking computer games designed to encourage the practice of arm movement.
The design of rehabilitative computer exercise systems should consider movement of the torso/shoulder as it may have implications for the quality of therapy in the rehabilitation of the upper limb in children with CP.
Learning optimal eye movements to unusual faces
Peterson, Matthew F.; Eckstein, Miguel P.
2014-01-01
Eye movements, which guide the fovea’s high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer’s default face identification eye movement behavior to the new optimal fixation point and the observer’s peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. PMID:24291712
RCA: A route city attraction model for air passengers
NASA Astrophysics Data System (ADS)
Huang, Feihu; Xiong, Xi; Peng, Jian; Guo, Bing; Tong, Bo
2018-02-01
Human movement patterns are a research hotspot in social computing and have practical value in various fields, such as traffic planning. Previous studies mainly focus on the travel activities of human beings on the ground rather than in the air. In this paper, we use the reservation records of air passengers to explore their movement characteristics. After analyzing the effect of route-trip length on throughput, we find that most passengers eventually return to their original departure city and that the mobility of air passengers is not related to route length. Based on these characteristics, we present a route city attraction (RCA) model, in which GDP or population is used to calculate the attraction. The sub-models of the RCA model predict throughput better than the radiation model and the gravity model.
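The RCA model is benchmarked against the classical gravity model, in which the flow between two cities scales with their "masses" (e.g. GDP or population) and decays with distance. A minimal sketch of that gravity baseline follows; the parameter names and default exponents are illustrative assumptions, not values from the paper:

```python
def gravity_flow(mass_i, mass_j, distance, k=1.0, alpha=1.0, beta=1.0, gamma=2.0):
    """Gravity-model estimate of passenger flow between cities i and j.

    mass_* can be GDP or population; k, alpha, beta, gamma are fitted
    constants (illustrative defaults here, not the paper's values).
    """
    return k * (mass_i ** alpha) * (mass_j ** beta) / (distance ** gamma)
```

With the defaults, doubling either city's mass doubles the predicted flow, while doubling the distance quarters it.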
High-fidelity, low-cost, automated method to assess laparoscopic skills objectively.
Gray, Richard J; Kahol, Kanav; Islam, Gazi; Smith, Marshall; Chapital, Alyssa; Ferrara, John
2012-01-01
We sought to define the extent to which a motion analysis-based assessment system constructed with simple equipment could measure technical skill objectively and quantitatively. An "off-the-shelf" digital video system was used to capture the hand and instrument movement of surgical trainees (beginner level = PGY-1, intermediate level = PGY-3, and advanced level = PGY-5/fellows) while they performed a peg transfer exercise. The video data were passed through a custom computer vision algorithm that analyzed incoming pixels to measure movement smoothness objectively. The beginner-level group had the poorest performance, whereas those in the advanced group generated the highest scores. Intermediate-level trainees scored significantly (p < 0.04) better than beginner trainees. Advanced-level trainees scored significantly better than intermediate-level trainees and beginner-level trainees (p < 0.04 and p < 0.03, respectively). A computer vision-based analysis of surgical movements provides an objective basis for technical expertise-level analysis with construct validity. The technology to capture the data is simple, low cost, and readily available, and it obviates the need for expert human assessment in this setting. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
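The study's computer-vision algorithm for movement smoothness is custom and not specified in the abstract, but smoothness is commonly quantified from the third derivative of position (jerk). A hedged sketch of a standard jerk-cost proxy on sampled trajectory data (an illustrative stand-in, not the authors' exact metric):

```python
def jerk_cost(positions, dt):
    """Mean squared jerk (third derivative of position) of a 1-D
    trajectory sampled at interval dt; lower values indicate smoother
    movement. Requires at least 4 samples."""
    def diff(xs):
        return [(b - a) / dt for a, b in zip(xs, xs[1:])]
    jerk = diff(diff(diff(positions)))
    return sum(j * j for j in jerk) / len(jerk)
```

A constant-velocity hand path scores near zero, while a jittery path of the same average speed scores much higher, which is the kind of contrast that separates novice from expert trainees.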
Somatosensory Contribution to the Initial Stages of Human Motor Learning
Bernardi, Nicolò F.; Darainy, Mohammad
2015-01-01
The early stages of motor skill acquisition are often marked by uncertainty about the sensory and motor goals of the task, as is the case in learning to speak or learning the feel of a good tennis serve. Here we present an experimental model of this early learning process, in which targets are acquired by exploration and reinforcement rather than sensory error. We use this model to investigate the relative contribution of motor and sensory factors to human motor learning. Participants make active reaching movements or matched passive movements to an unseen target using a robot arm. We find that learning through passive movements paired with reinforcement is comparable with learning associated with active movement, both in terms of magnitude and durability, with improvements due to training still observable at a 1 week retest. Motor learning is also accompanied by changes in somatosensory perceptual acuity. No stable changes in motor performance are observed for participants that train, actively or passively, in the absence of reinforcement, or for participants who are given explicit information about target position in the absence of somatosensory experience. These findings indicate that the somatosensory system dominates learning in the early stages of motor skill acquisition. SIGNIFICANCE STATEMENT The research focuses on the initial stages of human motor learning, introducing a new experimental model that closely approximates the key features of motor learning outside of the laboratory. The finding indicates that it is the somatosensory system rather than the motor system that dominates learning in the early stages of motor skill acquisition. This is important given that most of our computational models of motor learning are based on the idea that learning is motoric in origin. 
This is also a valuable finding for rehabilitation of patients with limited mobility as it shows that reinforcement in conjunction with passive movement results in benefits to motor learning that are as great as those observed for active movement training. PMID:26490869
A multimodal dataset for authoring and editing multimedia content: The MAMEM project.
Nikolopoulos, Spiros; Petrantonakis, Panagiotis C; Georgiadis, Kostas; Kalaganis, Fotis; Liaros, Georgios; Lazarou, Ioulietta; Adam, Katerina; Papazoglou-Chalikias, Anastasios; Chatzilari, Elisavet; Oikonomou, Vangelis P; Kumar, Chandan; Menges, Raphael; Staab, Steffen; Müller, Daniel; Sengupta, Korok; Bostantjopoulou, Sevasti; Katsarou, Zoe; Zeilig, Gabi; Plotnik, Meir; Gotlieb, Amihai; Kizoni, Racheli; Fountoukidou, Sofia; Ham, Jaap; Athanasiou, Dimitrios; Mariakaki, Agnes; Comanducci, Dario; Sabatini, Edoardo; Nistico, Walter; Plank, Markus; Kompatsiaris, Ioannis
2017-12-01
We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed within the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.
NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun
2006-06-01
This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement by changing positions and numbers of light sources on the head. When the users utilize the head-mounted display to browse a computer screen, the system will catch the images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the program in the computer will locate each center point of the pupils in the images, and record the information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, so the system catches images of the user's head by using a CCD camera in front of the user. The computer program will locate the center point of the head, transferring it to the screen coordinates, and then the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface system for the virtual reality applications.
The enchanted loom. [Book on evolution of intelligence]
NASA Technical Reports Server (NTRS)
Jastrow, R.
1981-01-01
The evolution of intelligence began with the movement of Crossopterygian fish onto land. The eventual appearance of large dinosaurs eliminated all but the smallest of mammalian creatures, with the survivors forced to move only nocturnally, when enhanced olfactory and aural faculties were favored and involved a larger grey matter/body mass ratio than possessed by the dinosaurs. Additionally, the mammals made comparisons between the inputs of various senses, implying the presence of significant memory capacity and an ability to abstract survival information. More complex behavior occurred with the advent of tree dwellers (forward-looking eyes), hands, color vision, and the ability to grip and manipulate objects. An extra pound of brain evolved in the human skull in less than a million years. The neural processes that can lead to an action by a creature with a brain are mimicked by the basic AND and OR gates in computers, which are rapidly approaching the circuit density of the human brain. It is suggested that humans will eventually produce computers of higher intelligence than people possess, and computer spacecraft, alive in an electronic sense, will travel outward to explore the universe.
Wang, Nancy X. R.; Olson, Jared D.; Ojemann, Jeffrey G.; Rao, Rajesh P. N.; Brunton, Bingni W.
2016-01-01
Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Implementing Brain Computer Interfaces (BCIs) outside carefully controlled experiments in laboratory settings requires adaptive and scalable strategies with minimal supervision. Here we describe an unsupervised approach to decoding neural states from naturalistic human brain recordings. We analyzed continuous, long-term electrocorticography (ECoG) data recorded over many days from the brain of subjects in a hospital room, with simultaneous audio and video recordings. We discovered coherent clusters in high-dimensional ECoG recordings using hierarchical clustering and automatically annotated them using speech and movement labels extracted from audio and video. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Interpretable behaviors were decoded from ECoG data, including moving, speaking and resting; the results were assessed by comparison with manual annotation. Discovered clusters were projected back onto the brain revealing features consistent with known functional areas, opening the door to automated functional brain mapping in natural settings. PMID:27148018
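The study discovers behavioral states by hierarchically clustering high-dimensional ECoG features. The paper's feature extraction and linkage choices are not given in the abstract; the following is an illustrative pure-Python sketch of single-linkage agglomerative clustering with a distance cutoff, the same family of method:

```python
import math

def single_linkage(points, cutoff):
    """Agglomeratively merge clusters until the closest pair of
    clusters is farther apart than `cutoff`. Returns a list of
    clusters, each a list of point indices."""
    clusters = [[i] for i in range(len(points))]

    def cluster_dist(ca, cb):
        # single linkage: minimum pairwise distance between members
        return min(math.dist(points[a], points[b]) for a in ca for b in cb)

    while len(clusters) > 1:
        i, j = min(
            ((i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters))),
            key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]),
        )
        if cluster_dist(clusters[i], clusters[j]) > cutoff:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

In the paper's pipeline, the discovered clusters are then annotated post hoc with speech/movement labels extracted from the audio and video streams.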
Subthalamic nucleus detects unnatural android movement.
Ikeda, Takashi; Hirata, Masayuki; Kasaki, Masashi; Alimardani, Maryam; Matsushita, Kojiro; Yamamoto, Tomoyuki; Nishio, Shuichi; Ishiguro, Hiroshi
2017-12-19
An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.
Kinect-based sign language recognition of static and dynamic hand movements
NASA Astrophysics Data System (ADS)
Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.
2017-02-01
A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers also discuss factors they encountered that caused some misclassification of signs.
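At the core of normalized-correlation matching, a captured patch is scored against each stored template, with a score of 1 for a perfect match independent of brightness offset or scaling. A minimal sketch of that core (the study's actual Kinect/MATLAB pipeline details beyond this formula are not given in the abstract):

```python
import math

def normalized_correlation(patch, template):
    """Normalized cross-correlation of two equal-length intensity
    vectors; returns a score in [-1, 1], with 1 a perfect match."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    den = math.sqrt(sum((p - mp) ** 2 for p in patch) *
                    sum((t - mt) ** 2 for t in template))
    return num / den

def best_match(patch, templates):
    """Index of the stored template scoring highest against the patch."""
    return max(range(len(templates)),
               key=lambda i: normalized_correlation(patch, templates[i]))
```

The recognized sign is then the label attached to the best-matching template in the database.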
Encoding of speed and direction of movement in the human supplementary motor area
Tankus, Ariel; Yeshurun, Yehezkel; Flash, Tamar; Fried, Itzhak
2010-01-01
Object: The supplementary motor area (SMA) plays an important role in planning, initiation, and execution of motor acts. Patients with SMA lesions are impaired in various kinematic parameters, such as velocity and duration of movement. However, the relationships between neuronal activity and these parameters in the human brain have not been fully characterized. This is a study of single-neuron activity during a continuous volitional motor task, with the goal of clarifying these relationships for SMA neurons and other frontal lobe regions in humans. Methods: The participants were 7 patients undergoing evaluation for epilepsy surgery requiring implantation of intracranial depth electrodes. Single-unit recordings were conducted while the patients played a computer game involving movement of a cursor in a simple maze. Results: In the SMA proper, most of the recorded units exhibited a monotonic relationship between the unit firing rate and hand motion speed. The vast majority of SMA proper units with this property showed an inverse relation, that is, firing rate decrease with speed increase. In addition, most of the SMA proper units were selective to the direction of hand motion. These relationships were far less frequent in the pre-SMA, anterior cingulate gyrus, and orbitofrontal cortex. Conclusions: The findings suggest that the SMA proper takes part in the control of kinematic parameters of end-effector motion, and thus lend support to the idea of connecting neuroprosthetic devices to the human SMA. PMID:19231930
Adaptive Variability in Skilled Human Movements
NASA Astrophysics Data System (ADS)
Kudo, Kazutoshi; Ohtsuki, Tatsuyuki
Human movements are produced in variable external and internal environments. Because of this variability, the same motor command can result in quite different movement patterns; to produce skilled movements, humans must therefore coordinate the variability rather than try to exclude it. In addition, because human movements are produced by redundant and complex systems, combinations of variability should be observed at different anatomical and physiological levels. In this paper, we introduce our research on human movement variability, which shows remarkable coordination among components, and between organism and environment. We also introduce nonlinear dynamical models that can describe a variety of movements as the self-organization of a dynamical system, because the dynamical systems approach is a leading candidate for understanding the principles by which varying systems with huge numbers of degrees of freedom are organized.
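The nonlinear dynamical models referred to here are often illustrated in the movement-coordination literature with the Haken-Kelso-Bunz (HKB) equation, where the relative phase phi between two limbs evolves as dphi/dt = -a*sin(phi) - 2b*sin(2*phi) and movement patterns emerge as attractors. A hedged Euler-integration sketch with illustrative parameter values (not tied to this paper's specific models):

```python
import math

def hkb_relax(phi0, a=1.0, b=1.0, dt=0.01, steps=5000):
    """Integrate the HKB relative-phase equation
    d(phi)/dt = -a*sin(phi) - 2*b*sin(2*phi)
    from phi0 and return the final relative phase."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * math.sin(phi) - 2.0 * b * math.sin(2.0 * phi))
    return phi
```

With b/a large, both in-phase (phi = 0) and anti-phase (phi = pi) coordination are stable attractors; lowering b below a/4 destabilizes anti-phase, which is the model's account of the observed switch to in-phase movement at high frequencies.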
Extracting Depth From Motion Parallax in Real-World and Synthetic Displays
NASA Technical Reports Server (NTRS)
Hecht, Heiko; Kaiser, Mary K.; Aiken, William; Null, Cynthia H. (Technical Monitor)
1994-01-01
In psychophysical studies on human sensitivity to visual motion parallax (MP), the use of computer displays is pervasive. However, a number of potential problems are associated with such displays: cue conflicts arise when observers accommodate to the screen surface, and observer head and body movements are often not reflected in the displays. We investigated observers' sensitivity to depth information in MP (slant, depth order, relative depth) using various real-world displays and their computer-generated analogs. Angle judgments of real-world stimuli were consistently superior to judgments that were based on computer-generated stimuli. Similar results were found for perceived depth order and relative depth. Perceptual competence of observers tends to be underestimated in research that is based on computer generated displays. Such findings cannot be generalized to more realistic viewing situations.
Mechanisms of human cerebellar dysmetria: experimental evidence and current conceptual bases
Manto, Mario
2009-01-01
The human cerebellum contains more neurons than any other region in the brain and is a major actor in motor control. Cerebellar circuitry is unique by its stereotyped architecture and its modular organization. Understanding the motor codes underlying the organization of limb movement and the rules of signal processing applied by the cerebellar circuits remains a major challenge for the forthcoming decades. One of the cardinal deficits observed in cerebellar patients is dysmetria, designating the inability to perform accurate movements. Patients overshoot (hypermetria) or undershoot (hypometria) the aimed target during voluntary goal-directed tasks. The mechanisms of cerebellar dysmetria are reviewed, with an emphasis on the roles of cerebellar pathways in controlling fundamental aspects of movement control such as anticipation, timing of motor commands, sensorimotor synchronization, maintenance of sensorimotor associations and tuning of the magnitudes of muscle activities. An overview of recent advances in our understanding of the contribution of cerebellar circuitry in the elaboration and shaping of motor commands is provided, with a discussion on the relevant anatomy, the results of the neurophysiological studies, and the computational models which have been proposed to approach cerebellar function. PMID:19364396
Functional Characterization of the Human Speech Articulation Network.
Basilakos, Alexandra; Smith, Kimberly G; Fillmore, Paul; Fridriksson, Julius; Fedorenko, Evelina
2018-05-01
A number of brain regions have been implicated in articulation, but their precise computations remain debated. Using functional magnetic resonance imaging, we examine the degree of functional specificity of articulation-responsive brain regions to constrain hypotheses about their contributions to speech production. We find that articulation-responsive regions (1) are sensitive to articulatory complexity, but (2) are largely nonoverlapping with nearby domain-general regions that support diverse goal-directed behaviors. Furthermore, premotor articulation regions show selectivity for speech production over some related tasks (respiration control), but not others (nonspeech oral-motor [NSO] movements). This overlap between speech and nonspeech movements concords with electrocorticographic evidence that these regions encode articulators and their states, and with patient evidence whereby articulatory deficits are often accompanied by oral-motor deficits. In contrast, the superior temporal regions show strong selectivity for articulation relative to nonspeech movements, suggesting that these regions play a specific role in speech planning/production. Finally, articulation-responsive portions of posterior inferior frontal gyrus show some selectivity for articulation, in line with the hypothesis that this region prepares an articulatory code that is passed to the premotor cortex. Taken together, these results inform the architecture of the human articulation system.
ERIC Educational Resources Information Center
Tinning, Richard
2011-01-01
Across the full range of human movement studies and their many sub-disciplines, established institutional practices and forms of pedagogy are used to (re)produce valued knowledge about human movement. "Pedagogy and Human Movement" explores this pedagogy in detail to reveal its applications and meanings within individual fields. This unique book…
Inai, Takuma; Takabayashi, Tomoya; Edama, Mutsuaki; Kubo, Masayoshi
2018-04-27
The association between repetitive hip moment impulse and the progression of hip osteoarthritis is a recently recognized area of study. A sit-to-stand movement is essential for daily life and requires hip extension moment. Although a change in the sit-to-stand movement time may influence the hip moment impulse in the sagittal plane, this effect has not been examined. The purpose of this study was to clarify the relationship between sit-to-stand movement time and hip moment impulse in the sagittal plane. Twenty subjects performed the sit-to-stand movement at a self-selected natural speed. The hip, knee, and ankle joint angles obtained from experimental trials were used to perform two computer simulations. In the first simulation, the actual sit-to-stand movement time obtained from the experiment was entered. In the second simulation, sit-to-stand movement times ranging from 0.5 to 4.0 s at intervals of 0.25 s were entered. Hip joint moments and hip moment impulses in the sagittal plane during sit-to-stand movements were calculated for both computer simulations. The reliability of the simulation model was confirmed, as indicated by the similarities in the hip joint moment waveforms (r = 0.99) and the hip moment impulses in the sagittal plane between the first computer simulation and the experiment. In the second computer simulation, the hip moment impulse in the sagittal plane decreased with a decrease in the sit-to-stand movement time, although the peak hip extension moment increased with a decrease in the movement time. These findings clarify the association between the sit-to-stand movement time and hip moment impulse in the sagittal plane and may contribute to the prevention of the progression of hip osteoarthritis.
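The hip moment impulse is the time integral of the hip joint moment over the movement, so a shorter sit-to-stand can lower the impulse even as the peak moment rises. A minimal sketch with trapezoidal integration; the half-sine waveform below is a generic illustrative assumption, not the paper's simulated moment profile:

```python
import math

def moment_impulse(moments, dt):
    """Trapezoidal time-integral of a sampled joint moment (N*m*s)."""
    return sum((a + b) / 2.0 * dt for a, b in zip(moments, moments[1:]))

def half_sine_moment(peak, duration, n=1000):
    """Generic half-sine extension-moment waveform over `duration` s."""
    dt = duration / n
    moments = [peak * math.sin(math.pi * i / n) for i in range(n + 1)]
    return moments, dt
```

Analytically, the integral of peak*sin(pi*t/T) over [0, T] is 2*peak*T/pi, so halving the movement time at a fixed peak halves the impulse, and even a markedly higher peak over a shorter time can yield a smaller impulse, consistent with the simulation result above.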
Dreyer, Jakob K.; Jennings, Katie A.; Syed, Emilie C. J.; Wade-Martins, Richard; Cragg, Stephanie J.; Bolam, J. Paul; Magill, Peter J.
2016-01-01
Midbrain dopaminergic neurons are essential for appropriate voluntary movement, as epitomized by the cardinal motor impairments arising in Parkinson’s disease. Understanding the basis of such motor control requires understanding how the firing of different types of dopaminergic neuron relates to movement and how this activity is deciphered in target structures such as the striatum. By recording and labeling individual neurons in behaving mice, we show that the representation of brief spontaneous movements in the firing of identified midbrain dopaminergic neurons is cell-type selective. Most dopaminergic neurons in the substantia nigra pars compacta (SNc), but not in ventral tegmental area or substantia nigra pars lateralis, consistently represented the onset of spontaneous movements with a pause in their firing. Computational modeling revealed that the movement-related firing of these dopaminergic neurons can manifest as rapid and robust fluctuations in striatal dopamine concentration and receptor activity. The exact nature of the movement-related signaling in the striatum depended on the type of dopaminergic neuron providing inputs, the striatal region innervated, and the type of dopamine receptor expressed by striatal neurons. Importantly, in aged mice harboring a genetic burden relevant for human Parkinson’s disease, the precise movement-related firing of SNc dopaminergic neurons and the resultant striatal dopamine signaling were lost. These data show that distinct dopaminergic cell types differentially encode spontaneous movement and elucidate how dysregulation of their firing in early Parkinsonism can impair their effector circuits. PMID:27001837
Cortex Inspired Model for Inverse Kinematics Computation for a Humanoid Robotic Finger
Gentili, Rodolphe J.; Oh, Hyuk; Molina, Javier; Reggia, James A.; Contreras-Vidal, José L.
2013-01-01
In order to approach human hand performance levels, artificial anthropomorphic hands/fingers have increasingly incorporated human biomechanical features. However, the performance of finger reaching movements to visual targets involving the complex kinematics of multi-jointed, anthropomorphic actuators is a difficult problem. This is because the relationship between sensory and motor coordinates is highly nonlinear and often includes mechanical coupling of the last two joints. Recently, we developed a cortical model that learns the inverse kinematics of a simulated anthropomorphic finger. Here, we expand this previous work by assessing whether this cortical model is able to learn the inverse kinematics of an actual anthropomorphic humanoid finger having its last two joints coupled and controlled by pneumatic muscles. The findings revealed that single 3D reaching movements, as well as more complex patterns of motion of the humanoid finger, were accurately and robustly performed by this cortical model while producing kinematics comparable to those of humans. This work contributes to the development of a bioinspired controller providing adaptive, robust and flexible control of dexterous robotic and prosthetic hands. PMID:23366569
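For orientation, the inverse-kinematics problem the cortical model learns can be stated for a planar two-link finger, where classical iterative methods such as the Jacobian-transpose rule solve it numerically. The sketch below is that classical baseline, not the paper's cortical model, and the link lengths and gains are illustrative assumptions:

```python
import math

def fingertip(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link finger."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

def jacobian_transpose_ik(target, q1=0.3, q2=0.3, rate=0.05, steps=5000):
    """Iteratively reduce the tip error with dq = rate * J^T * e,
    i.e. gradient descent on the squared reaching error."""
    for _ in range(steps):
        x, y = fingertip(q1, q2)
        ex, ey = target[0] - x, target[1] - y
        # Jacobian of the tip position with respect to (q1, q2)
        j11 = -math.sin(q1) - math.sin(q1 + q2)
        j12 = -math.sin(q1 + q2)
        j21 = math.cos(q1) + math.cos(q1 + q2)
        j22 = math.cos(q1 + q2)
        q1 += rate * (j11 * ex + j21 * ey)
        q2 += rate * (j12 * ex + j22 * ey)
    return q1, q2
```

The cortical model replaces this explicit Jacobian computation with a learned sensorimotor mapping, which is what allows it to absorb nonlinearities such as the pneumatic coupling of the last two joints.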
Ulloa, Antonio; Bullock, Daniel
2003-10-01
We developed a neural network model to simulate temporal coordination of human reaching and grasping under variable initial grip apertures and perturbations of object size and object location/orientation. The proposed model computes reach-grasp trajectories by continuously updating vector positioning commands. The model hypotheses are (1) hand/wrist transport, grip aperture, and hand orientation control modules are coupled by a gating signal that fosters synchronous completion of the three sub-goals. (2) Coupling from transport and orientation velocities to aperture control causes maximum grip apertures that scale with these velocities and exceed object size. (3) Part of the aperture trajectory is attributable to an aperture-reducing passive biomechanical effect that is stronger for larger apertures. (4) Discrepancies between internal representations of targets partially inhibit the gating signal, leading to movement time increases that compensate for perturbations. Simulations of the model replicate key features of human reach-grasp kinematics observed under three experimental protocols. Our results indicate that no precomputation of component movement times is necessary for online temporal coordination of the components of reaching and grasping.
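Hypothesis (1) above, a shared gating signal fostering synchronous completion of the transport, aperture, and orientation sub-goals, can be illustrated with a toy vector-positioning update in which each channel's velocity is scaled by one common gate. This is a deliberately simplified sketch, not the published network:

```python
def gated_channels(targets, g=2.0, dt=0.01, steps=120):
    """Integrate positioning channels (e.g. transport, aperture,
    orientation) from zero toward their targets; every channel's
    velocity is scaled by the same gating signal g."""
    x = [0.0] * len(targets)
    for _ in range(steps):
        x = [xi + dt * g * (ti - xi) for xi, ti in zip(x, targets)]
    return x
```

Because all channels share the gate, the fractional progress x_i / target_i is identical across channels at every instant, so sub-goals of very different magnitudes complete together; inhibiting the gate after a perturbation slows all channels at once, reproducing the compensatory movement-time increases of hypothesis (4).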
Rowe, P J; Crosbie, J; Fowler, V; Durward, B; Baer, G
1999-05-01
This paper reports the development, construction and use of a new system for the measurement of linear kinematics in one, two or three dimensions. The system uses a series of rotary shaft encoders and inelastic tensioned strings to measure the linear displacement of key anatomical points in space. The system is simple, inexpensive, portable, accurate and flexible. It is therefore suitable for inclusion in a variety of motion analysis studies. Details of the construction, calibration and interfacing of the device to an IBM PC computer are given as is a full mathematical description of the appropriate measurement theory for one, two and three dimensions. Examples of the results obtained from the device during gait, running, rising to stand, sitting down and pointing with the upper limb are given. Finally it is proposed that, provided the constraints of the system are considered, this method has the potential to measure a variety of functional human movements simply and inexpensively and may therefore be a valuable addition to the methods available to the motion scientist.
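Each rotary shaft encoder in such a system reads out the payed-out length of its tensioned string, i.e. the distance from a fixed anchor to the anatomical marker, and position follows from two or more such distances by trilateration. A hedged 2-D sketch (the anchor geometry is an illustrative assumption, not the published design):

```python
import math

def marker_position(r1, r2, d):
    """2-D marker position from two string lengths r1, r2 measured
    from anchors at (0, 0) and (d, 0); assumes the marker lies on
    the positive-y side of the anchor baseline."""
    x = (r1 ** 2 - r2 ** 2 + d ** 2) / (2.0 * d)
    y = math.sqrt(max(r1 ** 2 - x ** 2, 0.0))
    return x, y
```

The 3-D case is analogous with a third string resolving the remaining coordinate, subject to the sign ambiguity noted in the comment.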
The speed-curvature power law in Drosophila larval locomotion.
Zago, Myrka; Lacquaniti, Francesco; Gomez-Marin, Alex
2016-10-01
We report the discovery that the locomotor trajectories of Drosophila larvae follow the power-law relationship between speed and curvature previously found in the movements of human and non-human primates. Using high-resolution behavioural tracking in controlled but naturalistic sensory environments, we tested the law in maggots tracing different trajectory types, from reaching-like movements to scribbles. For most but not all flies, we found that the law holds robustly, with an exponent close to three-quarters rather than to the usual two-thirds found in almost all human situations, suggesting dynamic effects adding on purely kinematic constraints. There are different hypotheses for the origin of the law in primates, one invoking cortical computations, another viscoelastic muscle properties coupled with central pattern generators. Our findings are consistent with the latter view and demonstrate that the law is possible in animals with nervous systems orders of magnitude simpler than in primates. Scaling laws might exist because natural selection favours processes that remain behaviourally efficient across a wide range of neural and body architectures in distantly related species. © 2016 The Authors.
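Writing the law as speed = k · curvature^(−β), it becomes a straight line of slope −β in log-log coordinates, so the exponent can be estimated by linear regression. A sketch on synthetic data follows; the variable names are ours, and the β = 2/3 used to generate the data is the classic human reference value, not the larval result:

```python
import numpy as np

def fit_power_law(speed, curvature):
    """Estimate (beta, k) in speed = k * curvature**(-beta) by a
    straight-line fit in log-log coordinates."""
    slope, intercept = np.polyfit(np.log(curvature), np.log(speed), 1)
    return -slope, np.exp(intercept)

# synthetic trajectory obeying the law exactly, with beta = 2/3
rng = np.random.default_rng(0)
curv = rng.uniform(0.1, 10.0, 500)
speed = 1.5 * curv ** (-2.0 / 3.0)
beta, k = fit_power_law(speed, curv)
```

On real trajectories the fit is noisy, so confidence intervals on β are what distinguish a 3/4 exponent from the canonical 2/3.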
Human Visuospatial Updating After Passive Translations In Three-Dimensional Space
Klier, Eliana M.; Hess, Bernhard J. M.; Angelaki, Dora E.
2013-01-01
To maintain a stable representation of the visual environment as we move, the brain must update the locations of targets in space using extra-retinal signals. Humans can accurately update after intervening active whole-body translations. But can they also update for passive translations (i.e., without efference copy signals of an outgoing motor command)? We asked six head-fixed subjects to remember the location of a briefly flashed target (five possible targets were located at depths of 23, 33, 43, 63 and 150 cm in front of the cyclopean eye) as they moved 10 cm left, right, up, down, forward or backward, while fixating a head-fixed target at 53 cm. After the movement, the subjects made a saccade to the remembered location of the flash with a combination of version and vergence eye movements. We computed an updating ratio where 0 indicates no updating and 1 indicates perfect updating. For lateral and vertical whole-body motion, where updating performance is judged by the size of the version movement, the updating ratios were similar for leftward and rightward translations, averaging 0.84±0.28 (mean±SD), as compared to 0.51±0.33 for downward and 1.05±0.50 for upward translations. For forward/backward movements, where updating performance is judged by the size of the vergence movement, the average updating ratio was 1.12±0.45. Updating ratios tended to be larger for far targets than near targets, although both intra- and inter-subject variabilities were smallest for near targets. Thus, in addition to self-generated movements, extra-retinal signals involving otolith and proprioceptive cues can also be used for spatial constancy. PMID:18256164
Page, Alvaro; de Rosario, Helios; Gálvez, José A; Mata, Vicente
2011-02-24
We propose to model planar movements between two human segments by means of rolling-without-slipping kinematic pairs. We compute the path traced by the instantaneous center of rotation (ICR) as seen from the proximal and distal segments, thus obtaining the fixed and moving centrodes, respectively. The joint motion is then represented by the rolling-without-slipping of one centrode on the other. The resulting joint kinematic model is based on the real movement and accounts for nonfixed axes of rotation; therefore it could improve current models based on revolute pairs in those cases where joint movement implies displacement of the ICR. Previous authors have used the ICR to characterize human joint motion, but they only considered the fixed centrode. Such an approach is not adequate for reproducing motion because the fixed centrode by itself does not convey information about body position. The combination of the fixed and moving centrodes gathers the kinematic information needed to reproduce the position and velocities of moving bodies. To illustrate our method, we applied it to the flexion-extension movement of the head relative to the thorax. The model provides a good estimation of motion both for position variables (mean R(pos)=0.995) and for velocities (mean R(vel)=0.958). This approach is more realistic than other models of neck motion based on revolute pairs, such as the dual-pivot model. The geometry of the centrodes can provide some information about the nature of the movement. For instance, the ascending and descending curves of the fixed centrode suggest a sequential movement of the cervical vertebrae. Copyright © 2010 Elsevier Ltd. All rights reserved.
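For a planar rigid body, the ICR follows from the velocity of any tracked point together with the segment's angular velocity; evaluating it frame by frame in the proximal and distal reference frames traces out the fixed and moving centrodes. A minimal sketch of the standard formula (notation ours, not the paper's):

```python
import numpy as np

def icr(p, v, omega):
    """Instantaneous centre of rotation of a planar rigid body, given
    the position p and velocity v of any point on it and the angular
    velocity omega (rad/s, counter-clockwise positive)."""
    p = np.asarray(p, float)
    v = np.asarray(v, float)
    # from v = omega x (p - c), solved for the centre c
    return np.array([p[0] - v[1] / omega, p[1] + v[0] / omega])

# a point at (1, 0) on a body spinning at 2 rad/s about the origin
# moves with velocity (0, 2); the ICR is recovered at the origin
centre = icr([1.0, 0.0], [0.0, 2.0], 2.0)
```

Note the formula degenerates as omega approaches zero (the ICR goes to infinity), which is why ICR-based joint models need care near pure-translation phases.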
Altered sense of Agency in children with spastic cerebral palsy
2011-01-01
Background Children diagnosed with spastic Cerebral Palsy (CP) often show perceptual and cognitive problems, which may contribute to their functional deficit. Here we investigated if altered ability to determine whether an observed movement is performed by themselves (sense of agency) contributes to the motor deficit in children with CP. Methods Three groups: 1) CP children, 2) healthy peers, and 3) healthy adults produced straight drawing movements on a pen-tablet that was not visible to the subjects. The produced movement was presented as a virtual moving object on a computer screen. Subjects had to evaluate after each trial whether the movement of the object on the computer screen was generated by themselves or by a computer program which randomly manipulated the visual feedback by angling the trajectories 0, 5, 10, 15, 20 degrees away from target. Results Healthy adults executed the movements in 310 seconds, whereas healthy children and especially CP children were significantly slower (p < 0.002) (on average 456 seconds and 543 seconds respectively). There was also a statistically significant difference between the healthy children and age-matched CP children (p = 0.037). When the trajectory of the object generated by the computer corresponded to the subject's own movements all three groups reported that they were responsible for the movement of the object. When the trajectory of the object deviated by more than 10 degrees from target, healthy adults and children more frequently than CP children reported that the computer was responsible for the movement of the object. CP children consequently also attempted more frequently to compensate for the perturbation generated by the computer. Conclusions We conclude that CP children have a reduced ability to determine whether movement of a virtual moving object is caused by themselves or an external source.
We suggest that this may be related to a poor integration of their intention of movement with visual and proprioceptive information about the performed movement and that altered sense of agency may be an important functional problem in children with CP. PMID:22129483
Simple summation rule for optimal fixation selection in visual search.
Najemnik, Jiri; Geisler, Wilson S
2009-06-01
When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3):4, 1-14]; however, it seems unlikely that a biological nervous system would implement the computations for the Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher, and produces fixation statistics similar to those of humans.
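The heuristic itself is a single filtering step: convolve the current posterior over target location with the squared detectability (d′) map and fixate the maximum. Here is a numpy sketch using circular FFT convolution for brevity; the boundary handling and the Gaussian d′ map below are simplifying assumptions, not the paper's constrained human maps:

```python
import numpy as np

def elm_fixation(posterior, dprime_map):
    """Next fixation = argmax of the posterior filtered by the squared
    detectability map.  dprime_map is centred on the grid; ifftshift
    moves its peak to the origin for circular convolution."""
    kernel = np.fft.ifftshift(dprime_map ** 2)
    score = np.real(np.fft.ifft2(np.fft.fft2(posterior) * np.fft.fft2(kernel)))
    return np.unravel_index(np.argmax(score), score.shape)

# toy example: foveated (Gaussian) detectability, posterior peaked at (10, 12)
n = 32
yy, xx = np.mgrid[0:n, 0:n]
dprime = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / 50.0)
posterior = np.zeros((n, n))
posterior[10, 12] = 1.0
fix = elm_fixation(posterior, dprime)
```

Because the kernel is fixed, each fixation choice costs one convolution, which is the sense in which the rule is biologically plausible compared with the ideal searcher's full expected-entropy computation.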
Murata, Atsuo; Fukunaga, Daichi
2018-04-01
This study attempted to investigate the effects of the target shape and the movement direction on the pointing time using an eye-gaze input system and extend Fitts' model so that these factors are incorporated into the model and the predictive power of Fitts' model is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shape included: a circle, and rectangles with an aspect ratio of 1:1, 1:2, 1:3, and 1:4. The movement direction included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data for identifying the effects of the target shape and the movement direction on the pointing time, an attempt was made to develop a generalized and extended Fitts' model that took into account the movement direction and the target shape. As a result, the generalized and extended model was found to fit the experimental data better, and to be more effective for predicting the pointing time for a variety of human-computer interaction (HCI) tasks using an eye-gaze input system. Copyright © 2017. Published by Elsevier Ltd.
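The baseline being extended is Fitts' law in its Shannon formulation, MT = a + b · ID with ID = log2(D/W + 1); the paper's contribution is adding target-shape and movement-direction terms to this. A sketch of the baseline only, with placeholder coefficients rather than the paper's fitted values:

```python
import numpy as np

def fitts_id(distance, width):
    """Shannon-form index of difficulty, in bits."""
    return np.log2(distance / width + 1.0)

def pointing_time(distance, width, a=0.4, b=0.15):
    """Baseline Fitts prediction MT = a + b * ID.  The intercept and
    slope here are made-up placeholders; the paper fits them (plus
    shape and direction terms) to eye-gaze pointing data."""
    return a + b * fitts_id(distance, width)
```

The extended model would, in effect, let a and b (or additional additive terms) vary with direction and aspect ratio, fitted per condition.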
Tomiak, Tomasz; Abramovych, Tetiana I.; Gorkovenko, Andriy V.; Vereshchaka, Inna V.; Mishchenko, Viktor S.; Dornowski, Marcin; Kostyukov, Alexander I.
2016-01-01
We studied slow circular movements of the hand, with a fixed wrist joint, produced in a horizontal plane under visual guidance while an elastic load acted tangentially to the movement trajectory. The positional dependencies of the averaged surface EMGs in the muscles of the elbow and shoulder joints were compared for four possible combinations in the directions of load and movements. The EMG intensities were largely correlated with the waves of the force moment computed for a corresponding joint in the framework of a simple geometrical model of the system: arm - experimental setup. At the same time, in some cases the averaged EMGs exit from the segments of the trajectory restricted by the force moment singular points (FMSPs), in which the moments exhibited altered signs. The EMG activities display clear differences for the eccentric and concentric zones of contraction that are separated by the joint angle singular points (JASPs), which present extrema in the joint angle traces. We assumed that the modeled patterns of FMSPs and JASPs may be applied for an analysis of the synergic interaction between the motor commands arriving at different muscles in arbitrary two-joint movements. PMID:27375496
A 3D visualization and simulation of the individual human jaw.
Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo
2003-01-01
A new biomechanical three-dimensional (3D) model for the human mandible based on a computer-generated virtual model is proposed. Using maps obtained from special kinds of photos of the face of a real subject, it is possible to attribute personality to the virtual character, while computer animation offers movements and characteristics within the confines of space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain the biomechanics, where the muscular forces through occlusion and condylar surfaces are in a state of 3D equilibrium. In the model all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw, along with the force level necessary for chewing, as a kind of mandible balance, preventing dislocation and loading of nonarticular tissues. This work uses a new approach to computer-generated animation of virtual 3D characters (called "Body SABA") that combines minimal cost and ease of operation in a single object package.
Design and Control of a New Biomimetic Transfemoral Knee Prosthesis Using an Echo-Control Scheme
2018-01-01
Passive knee prostheses require a significant amount of additional metabolic energy to carry out a gait cycle, therefore affecting natural human walking performance. Current active knee prostheses are still limited because they do not accurately replicate natural human knee movement, and their time response is relatively large. This paper presents the design and control of a new biomimetic-controlled transfemoral knee prosthesis based on a polycentric-type mechanism. The aim was to develop a knee prosthesis able to provide additional power and to accurately mimic natural human knee movement using a stable control strategy. The design of the knee mechanism was obtained from body-guidance kinematic synthesis based on real human walking patterns obtained from computer vision and 3D reconstruction. A biomechanical evaluation of the synthesized prosthesis was then carried out. For the activation and control of the prosthesis, an echo-control strategy was proposed and developed. In this echo-control strategy, the sound-side leg is sensed and synchronized with the activation of the knee prosthesis. An experimental prototype was built and evaluated in a test rig. The results revealed that the prosthetic knee is able to mimic the biomechanics of the human knee. PMID:29854368
NASA Astrophysics Data System (ADS)
Stapley, Paul; Pozzo, Thierry
In normal gravity conditions the execution of voluntary movement involves the displacement of body segments as well as the maintenance of a stable reference value for equilibrium control. It has been suggested that centre of mass (CM) projection within the supporting base (BS) is the stabilised reference for voluntary action, and is conserved in weightlessness. The purpose of this study was to determine if the CM is stabilised during whole body reaching movements executed in weightlessness. The reaching task was conducted by two cosmonauts aboard the Russian orbital station MIR, during the Franco-Russian mission ALTAIR, 1993. Movements of reflective markers were recorded using a videocamera, successive images being reconstructed by computer every 40 ms. The position of the CM, ankle joint torques and shank and thigh angles were computed for each subject pre-, in- and post-flight using a 7-link mathematical model. Results showed that both cosmonauts adopted a backward leaning posture prior to reaching movements. Inflight, the CM was displaced through values along the horizontal axis three times those of pre-flight measures. In addition, ankle dorsiflexor torques inflight increased to values double those of pre- and post-flight tests. This study concluded that CM displacements do not remain stable during complex postural equilibrium tasks executed in weightlessness. Furthermore, in the absence of gravity, subjects changed their strategy for producing ankle torque during spaceflight from a forward to a backward leaning posture.
A computer-generated animated face stimulus set for psychophysiological research
Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland, James
2014-01-01
Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer-generated, dynamic face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event related potential (ERP), which is known to reflect differences in early stages of visual processing, and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces, matched on low-level image characteristics, that is applicable to a range of research questions in social perception. PMID:25028164
Object motion computation for the initiation of smooth pursuit eye movements in humans.
Wallace, Julian M; Stone, Leland S; Masson, Guillaume S
2005-04-01
Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements are extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For a type II diamond (where the direction of true object motion is dramatically different from the vector average of the one-dimensional edge motions, i.e., VA ≠ IOC = 2DFT), the ocular tracking is initiated in the vector average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated with the introduction of more 2D information to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
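The two combination rules can be made concrete: each edge contributes only its normal speed s_i = n_i · v, IOC solves those constraints for the object velocity v, while VA averages the one-dimensional motion vectors. A small numpy sketch (our notation) showing why the two rules disagree when the edge normals are tilted to one side of the true motion, as in a type II stimulus:

```python
import numpy as np

def ioc(normals, speeds):
    """Intersection of constraints: solve n_i . v = s_i for the 2D
    object velocity v (least squares when more than two edges)."""
    v, *_ = np.linalg.lstsq(np.asarray(normals, float),
                            np.asarray(speeds, float), rcond=None)
    return v

def vector_average(normals, speeds):
    """Average of the 1D (normal-component) edge-motion vectors."""
    n = np.asarray(normals, float)
    s = np.asarray(speeds, float)
    return (n * s[:, None]).mean(axis=0)

# "type II"-like configuration: both edge normals lie on one side of
# the true (upward) object motion
a1, a2 = np.deg2rad(20.0), np.deg2rad(70.0)
normals = [(np.cos(a1), np.sin(a1)), (np.cos(a2), np.sin(a2))]
v_true = np.array([0.0, 1.0])
speeds = [np.dot(n, v_true) for n in normals]

v_ioc = ioc(normals, speeds)            # recovers the true motion
v_va = vector_average(normals, speeds)  # biased toward the edge normals
```

The initial pursuit direction reported in the abstract corresponds to v_va; the later, converged direction corresponds to v_ioc.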
Burioka, Naoto; Cornélissen, Germaine; Halberg, Franz; Kaplan, Daniel T; Suyama, Hisashi; Sako, Takanori; Shimizu, Eiji
2003-01-01
The breath-to-breath variability of respiratory parameters changes with sleep stage. This study investigates any alteration in the approximate entropy (ApEn) of respiratory movement as a gauge of complexity in respiration, by stage of consciousness, in the light of putative brain interactions. Eight healthy men, who were between the ages of 23 and 29 years, were investigated. The signals of chest wall movement and EEG were recorded from 10:30 PM to 6:00 AM. After analog-to-digital conversion, the ApEn of respiratory movement (3 min) and EEG (20 s) were computed. Surrogate data were tested for nonlinearity in the original time series. The most impressive reduction in the ApEn of respiratory movement was associated with stage IV sleep, when the ApEn of the EEG was also statistically significantly decreased. A statistically significant linear relation is found between the ApEn of both variables. Surrogate data indicated that respiratory movement had nonlinear properties during all stages of consciousness that were investigated. Respiratory movement and EEG signals are more regular during stage IV sleep than during other stages of consciousness. The change in complexity described by the ApEn of respiration depends in part on the ApEn of the EEG, suggesting the involvement of nonlinear dynamic processes in the coordination between brain and lungs.
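ApEn(m, r) compares how often m-point templates of the series repeat (within tolerance r) against how often (m+1)-point templates do; regular signals lose little predictability when the template grows, giving values near zero. A compact numpy sketch of the standard Pincus formulation (the parameter choices below are common defaults, not necessarily this study's):

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy: lower values = more regular signal
    (e.g. respiratory movement during stage IV sleep)."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()  # a common tolerance choice

    def phi(k):
        n = len(x) - k + 1
        emb = np.array([x[i:i + k] for i in range(n)])       # delay embeddings
        dist = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        frac = (dist <= r).mean(axis=1)                      # similar templates
        return np.log(frac).mean()

    return phi(m) - phi(m + 1)

# a strictly periodic signal is far more regular than noise
regular = apen(np.tile([0.0, 1.0], 100))
noisy = apen(np.random.default_rng(1).uniform(size=200))
```

The surrogate-data test mentioned in the abstract would compare such values against phase-randomized copies of the same series.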
Computational Exposure Science: An Emerging Discipline to ...
Background: Computational exposure science represents a frontier of environmental science that is emerging and quickly evolving. Objectives: In this commentary, we define this burgeoning discipline, describe a framework for implementation, and review some key ongoing research elements that are advancing the science with respect to exposure to chemicals in consumer products. Discussion: The fundamental elements of computational exposure science include the development of reliable, computationally efficient predictive exposure models; the identification, acquisition, and application of data to support and evaluate these models; and generation of improved methods for extrapolating across chemicals. We describe our efforts in each of these areas and provide examples that demonstrate both progress and potential. Conclusions: Computational exposure science, linked with comparable efforts in toxicology, is ushering in a new era of risk assessment that greatly expands our ability to evaluate chemical safety and sustainability and to protect public health. The National Exposure Research Laboratory’s (NERL’s) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA’s mission to protect human health and the environment. HEASD’s research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA’s strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source
NASA Technical Reports Server (NTRS)
1990-01-01
While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator is moving his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped full body garment that greatly increases the sphere of performance for virtual reality simulations.
Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J
2007-01-01
Background A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Methods Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. Results We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. 
Conclusion The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury. PMID:17391527
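The controller described above reduces to a simple per-step update: assistance is driven up by weighted error and decays through a forgetting factor, so it fades to zero once errors stay inside the natural-variability dead-band. A schematic sketch; the update form and constants are illustrative, not the paper's identified parameters:

```python
def update_assistance(force, error, gain=0.5, forget=0.8, deadband=0.02):
    """One step of an assist-as-needed update, F <- f*F + g*w(e).
    The dead-band weighting w() zeroes errors within normal movement
    variability, letting assistance fade even when movements vary."""
    weighted = 0.0 if abs(error) < deadband else error
    return forget * force + gain * weighted

# with errors inside the dead-band, assistance decays geometrically
force = 1.0
for _ in range(5):
    force = update_assistance(force, 0.01)  # error within variability
```

The paper's key condition, that the robot must relax faster than the human learns, corresponds here to choosing the forgetting factor small enough relative to the learner's own error-reduction rate.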
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise, in order to solve large-scale realistic computational problems. This type of a meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment and yet maintains adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication, for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions that are required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
Duann, Jeng-Ren; Chiou, Jin-Chern
2016-01-01
Electroencephalographic (EEG) event-related desynchronization (ERD) induced by movement imagery or by observing biological movements performed by someone else has recently been used extensively for brain-computer interface-based applications, such as applications used in stroke rehabilitation training and motor skill learning. However, the ERD responses induced by the movement imagery and observation might not be as reliable as the ERD responses induced by movement execution. Given that studies on the reliability of the EEG ERD responses induced by these activities are still lacking, here we conducted an EEG experiment with movement imagery, movement observation, and movement execution, performed multiple times each in a pseudorandomized order in the same experimental runs. Then, independent component analysis (ICA) was applied to the EEG data to find the common motor-related EEG source activity shared by the three motor tasks. Finally, conditional EEG ERD responses associated with the three movement conditions were computed and compared. Among the three motor conditions, the EEG ERD responses induced by motor execution revealed the alpha power suppression with highest strengths and longest durations. The ERD responses of the movement imagery and movement observation only partially resembled the ERD pattern of the movement execution condition, with slightly better detectability for the ERD responses associated with the movement imagery and faster ERD responses for movement observation. This may indicate different levels of involvement in the same motor-related brain circuits during different movement conditions. In addition, because the resulting conditional EEG ERD responses from the ICA preprocessing came with minimal contamination from the non-related and/or artifactual noisy components, this result can serve as a reference for devising a brain-computer interface using the EEG ERD features of movement imagery or observation.
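The ERD quantity compared across conditions is classically the relative band-power change, ERD% = 100 · (A − R) / R, where A is alpha-band power in the task epoch and R the power in a rest reference interval; negative values mean desynchronization. A minimal sketch (the study's pipeline additionally applies ICA to isolate the motor-related source before computing this):

```python
import numpy as np

def erd_percent(task_power, baseline_power):
    """Relative band-power change; negative values indicate
    event-related desynchronization (ERD)."""
    a = np.mean(task_power)
    r = np.mean(baseline_power)
    return 100.0 * (a - r) / r

# alpha power halving during movement execution -> 50% desynchronization
drop = erd_percent([0.5, 0.5], [1.0, 1.0])
```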
Miksztai-Réthey, Brigitta; Faragó, Kinga Bettina
2015-01-01
We studied an artificial-intelligence-assisted interaction between a computer and a human with severe speech and physical impairments (SSPI). In order to speed up AAC, we extended a former study of typing performance optimization using a framework that included head movement controlled assistive technology and an onscreen writing device. Quantitative and qualitative data were collected and analysed with mathematical methods, manual interpretation and semi-supervised machine video annotation. As a result of our research, in contrast to the former experiment's conclusions, we found that our participant had at least two different typing strategies. To maximize his communication efficiency, a more complex assistive tool is suggested, which takes the different methods into consideration.
An Intelligent Computer-Based System for Sign Language Tutoring
ERIC Educational Resources Information Center
Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy
2012-01-01
A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…
A biorobotic model of the human larynx.
Manti, M; Cianchetti, M; Nacci, A; Ursino, F; Laschi, C
2015-08-01
This work focuses on a physical model of the human larynx that replicates its main components and functions. The prototype reproduces the multilayer vocal folds and the ab/adduction movements. In particular, the vocal fold prototype is made of soft materials whose mechanical properties were tuned to be similar to those of natural tissue in terms of viscoelasticity. A computational model was used to study the fluid-structure interaction between the vocal folds and the airflow. This tool allowed us to compare theoretical and experimental results. Measurements were performed with this prototype in an experimental platform comprising a controlled air flow, pressure sensors and a high-speed camera for measuring vocal fold vibrations. Data included the oscillation frequency at the onset pressure and the glottal width. Results show that the combination of vocal fold geometry, mechanical properties and dimensions exhibits an oscillation frequency close to that of the human vocal fold. Moreover, the computational results show a high correlation with the experimental ones.
EOG-sEMG Human Interface for Communication
Tamura, Hiroki; Yan, Mingmin; Sakurai, Keiko; Tanno, Koichi
2016-01-01
The aim of this study is to present electrooculogram (EOG) and surface electromyogram (sEMG) signals that can be used as a human-computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important for increasing the quality of life of patients suffering from amyotrophic lateral sclerosis, muscular dystrophy, or other illnesses. In this paper, we propose an EOG-sEMG human-computer interface system for communication using both cross-channels and parallel-line channels on the face with the same electrodes. This system can record EOG and sEMG signals simultaneously as a “dual modality” for pattern recognition. Although as many as four patterns could be recognized, in consideration of the patients' condition we chose only two classes of EOG (left and right motion) and two classes of sEMG (left and right blink), which are easy to realize in simulation and monitoring tasks. From the simulation results, our system achieved four-pattern classification with an accuracy of 95.1%. PMID:27418924
Development of a parametric kinematic model of the human hand and a novel robotic exoskeleton.
Burton, T M W; Vaidyanathan, R; Burgess, S C; Turton, A J; Melhuish, C
2011-01-01
This paper reports the integration of a kinematic model of the human hand during cylindrical grasping, with specific focus on the accurate mapping of thumb movement during grasping motions, and a novel, multi-degree-of-freedom assistive exoskeleton mechanism based on this model. The model includes thumb maximum hyper-extension for grasping large objects (greater than ~50 mm). The exoskeleton includes a novel four-bar mechanism designed to reproduce natural thumb opposition and a novel synchro-motion pulley mechanism for coordinated finger motion. A computer-aided design environment is used to allow the exoskeleton to be rapidly customized to the hand dimensions of a specific patient. Trials comparing the kinematic model to observed data of hand movement show the model to be capable of mapping thumb and finger joint flexion angles during grasping motions. Simulations show the exoskeleton to be capable of reproducing the complex motion of the thumb to oppose the fingers during cylindrical and pinch grip motions. © 2011 IEEE
Decoding flexion of individual fingers using electrocorticographic signals in humans
NASA Astrophysics Data System (ADS)
Kubánek, J.; Miller, K. J.; Ojemann, J. G.; Wolpaw, J. R.; Schalk, G.
2009-12-01
Brain signals can provide the basis for a non-muscular communication and control system, a brain-computer interface (BCI), for people with motor disabilities. A common approach to creating BCI devices is to decode kinematic parameters of movements using signals recorded by intracortical microelectrodes. Recent studies have shown that kinematic parameters of hand movements can also be accurately decoded from signals recorded by electrodes placed on the surface of the brain (electrocorticography (ECoG)). In the present study, we extend these results by demonstrating that it is also possible to decode the time course of the flexion of individual fingers using ECoG signals in humans, and by showing that these flexion time courses are highly specific to the moving finger. These results provide additional support for the hypothesis that ECoG could be the basis for powerful clinically practical BCI systems, and also indicate that ECoG is useful for studying cortical dynamics related to motor function.
López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław
2017-01-01
The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which is a combination of a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), and nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes, where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined these methods allow automatic coding and classification of behaviors, which results in a more efficient manner of analyzing movements than manual coding.
NASA Astrophysics Data System (ADS)
Setscheny, Stephan
The interaction between humans and technology is a central aspect of human life. The most common form of this human-technology interface is the graphical user interface, controlled through the mouse and the keyboard. As a consequence of continuous miniaturization and the increasing performance of microcontrollers and sensors for detecting human interactions, developers gain new possibilities for realising innovative interfaces. In the course of this development, the relevance of computers in the conventional sense, and of graphical user interfaces, is decreasing. The impact of this technical evolution can be seen especially in the area of ubiquitous computing and interaction through tangible user interfaces. Apart from this, tangible and experienceable interaction offers users an interactive and intuitive method for controlling technical objects. The implementation of microcontrollers for control functions, together with sensors, enables the realisation of these experienceable interfaces. Besides theories of tangible user interfaces, the consideration of sensors and the Arduino platform forms a main aspect of this work.
Method and apparatus for predicting the direction of movement in machine vision
NASA Technical Reports Server (NTRS)
Lawton, Teri B. (Inventor)
1992-01-01
A computer-simulated cortical network is presented. The network is capable of computing the visibility of shifts in the direction of movement. Additionally, the network can compute the following: (1) the magnitude of the position difference between the test and background patterns; (2) localized contrast differences at different spatial scales analyzed by computing temporal gradients of the difference and sum of the outputs of paired even- and odd-symmetric bandpass filters convolved with the input pattern; and (3) the direction of a test pattern moved relative to a textured background. The direction of movement of an object in the field of view of a robotic vision system is detected in accordance with nonlinear Gabor function algorithms. The movement of objects relative to their background is used to infer the 3-dimensional structure and motion of object surfaces.
Experience, Context, and the Visual Perception of Human Movement
ERIC Educational Resources Information Center
Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie
2004-01-01
Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…
A common optimization principle for motor execution in healthy subjects and parkinsonian patients.
Baraduc, Pierre; Thobois, Stéphane; Gan, Jing; Broussolle, Emmanuel; Desmurget, Michel
2013-01-09
Recent research on Parkinson's disease (PD) has emphasized that parkinsonian movement, although bradykinetic, shares many attributes with healthy behavior. This observation led to the suggestion that bradykinesia in PD could be due to a reduction in motor motivation. This hypothesis can be tested in the framework of optimal control theory, which accounts for many characteristics of healthy human movement while providing a link between the motor behavior and a cost/benefit trade-off. This approach offers the opportunity to interpret movement deficits of PD patients in the light of a computational theory of normal motor control. We studied 14 PD patients with bilateral subthalamic nucleus (STN) stimulation and 16 age-matched healthy controls, and tested whether reaching movements were governed by similar rules in these two groups. A single optimal control model accounted for the reaching movements of healthy subjects and PD patients, whatever the condition of STN stimulation (on or off). The choice of movement speed was explained in all subjects by the existence of a preset dynamic range for the motor signals. This range was idiosyncratic and applied to all movements regardless of their amplitude. In PD patients this dynamic range was abnormally narrow and correlated with bradykinesia. STN stimulation reduced bradykinesia and widened this range in all patients, but did not restore it to a normal value. These results, consistent with the motor motivation hypothesis, suggest that constrained optimization of motor effort is the main determinant of movement planning (choice of speed) and movement production, in both healthy and PD subjects.
A head movement image (HMI)-controlled computer mouse for people with disabilities.
Chen, Yu-Luen; Chen, Weoi-Luen; Kuo, Te-Son; Lai, Jin-Shin
2003-02-04
This study proposes image processing and microprocessor technology for use in developing a head movement image (HMI)-controlled computer mouse system for people with spinal cord injury (SCI). The system controls the movement and direction of the mouse cursor by capturing head movement images using a marker installed on the user's headset. In a clinical trial, this new mouse system was compared with an infrared-controlled mouse system on various tasks with nine subjects with SCI. The results favoured the new mouse system: the differences between the new mouse system and the infrared-controlled mouse reached statistical significance in each of the test situations (p<0.05). The HMI-controlled computer mouse improves input speed. People with disabilities need only wear the headset and move their heads to freely control the movement of the mouse cursor.
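The core mapping of such a marker-based head mouse can be sketched as follows (a hypothetical reconstruction, not the authors' implementation; the brightness threshold, gain, and deadzone values are invented): locate the bright marker in each frame and translate its inter-frame displacement into a cursor move.

```python
import numpy as np

def marker_centroid(frame, threshold=200):
    """Centroid (x, y) of bright marker pixels in a grayscale frame, or None."""
    ys, xs = np.nonzero(frame >= threshold)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def cursor_step(prev, curr, gain=4.0, deadzone=1.0):
    """Map marker displacement between two frames to a cursor move (dx, dy)."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    if np.hypot(dx, dy) < deadzone:   # ignore tremor-scale jitter
        return 0.0, 0.0
    return gain * dx, gain * dy

# Two synthetic frames: the 4x4 marker moves 10 px to the right
frame1 = np.zeros((120, 160)); frame1[60:64, 80:84] = 255
frame2 = np.zeros((120, 160)); frame2[60:64, 90:94] = 255
c1, c2 = marker_centroid(frame1), marker_centroid(frame2)
print(cursor_step(c1, c2))  # (40.0, 0.0): cursor moves right
```

The deadzone suppresses involuntary head tremor, while the gain trades pointing speed against fine control, a tuning the clinical comparison in the abstract implicitly evaluates.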
Blend Shape Interpolation and FACS for Realistic Avatar
NASA Astrophysics Data System (ADS)
Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila
2015-03-01
The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans and advanced 3D tools has imparted further impetus towards the rapid advancement of complex virtual human facial models. Face-to-face communication being the most natural form of human interaction, facial animation systems have become attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors remains a challenging problem. A facial expression carries the signature of happiness, sadness, anger, cheerfulness, and so on. The mood of a particular person in the midst of a large group can be identified immediately via very subtle changes in facial expression. Facial expressions, being a complex as well as important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach that integrates blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face, while the FACS is employed to reflect the exact facial muscle movements for four basic natural emotional expressions (anger, happiness, sadness and fear) with high fidelity. The results in perceiving realistic facial expressions for virtual human emotions, based on facial skin color and texture, may contribute towards the development of virtual reality and game environments in computer-aided graphics animation systems.
Human recognition based on head-shoulder contour extraction and BP neural network
NASA Astrophysics Data System (ADS)
Kong, Xiao-fang; Wang, Xiu-qin; Gu, Guohua; Chen, Qian; Qian, Wei-xian
2014-11-01
In practical application scenarios like video surveillance and human-computer interaction, human body movements are uncertain because the human body is a non-rigid object. Based on the fact that the head-shoulder part of the human body is less affected by movement and is seldom obscured by other objects, a head-shoulder model with its stable characteristics can be applied as a detection feature to describe the human body in human detection and recognition. In order to extract the head-shoulder contour accurately, this paper proposes a method for establishing a head-shoulder model that combines edge detection with the mean-shift image-clustering algorithm. First, an adaptive mixture-of-Gaussians background-update method is used to extract targets from the video sequence. Second, edge detection is used to extract the contour of moving objects, combined with the mean-shift algorithm to cluster parts of the target's contour. Third, the head-shoulder model is established according to the width-to-height ratio of the human head-shoulder region combined with the projection histogram of the binary image, and the eigenvectors of the head-shoulder contour are acquired. Finally, the relationship between the head-shoulder contour eigenvectors and the moving objects is learned by training a back-propagation (BP) neural network classifier, and the head-shoulder model can then be used for human detection and recognition. Experiments have shown that the proposed method, combining edge detection and the mean-shift algorithm, can extract the complete head-shoulder contour with low computational complexity and high efficiency.
Computer use changes generalization of movement learning.
Wei, Kunlin; Yan, Xiang; Kong, Gaiqing; Yin, Cong; Zhang, Fan; Wang, Qining; Kording, Konrad Paul
2014-01-06
Over the past few decades, one of the most salient lifestyle changes for us has been the use of computers. For many of us, manual interaction with a computer occupies a large portion of our working time. Through neural plasticity, this extensive movement training should change our representation of movements (e.g., [1-3]), just like search engines affect memory [4]. However, how computer use affects motor learning is largely understudied. Additionally, as virtually all participants in studies of perception and actions are computer users, a legitimate question is whether insights from these studies bear the signature of computer-use experience. We compared non-computer users with age- and education-matched computer users in standard motor learning experiments. We found that people learned equally fast but that non-computer users generalized significantly less across space, a difference negated by two weeks of intensive computer training. Our findings suggest that computer-use experience shaped our basic sensorimotor behaviors, and this influence should be considered whenever computer users are recruited as study participants. Copyright © 2014 Elsevier Ltd. All rights reserved.
Monitoring of atopic dermatitis using leaky coaxial cable.
Dong, Binbin; Ren, Aifeng; Shah, Syed Aziz; Hu, Fangming; Zhao, Nan; Yang, Xiaodong; Haider, Daniyal; Zhang, Zhiya; Zhao, Wei; Abbasi, Qammer Hussain
2017-12-01
In our daily life, inadvertent scratching may increase the severity of skin diseases such as atopic dermatitis. However, people rarely pay attention to this behaviour, so little is known about how to measure it. Nevertheless, the behaviour and frequency of scratching reflect the degree of itching, and analysis of scratching frequency is helpful for the doctor's clinical dosing decisions. In this Letter, a novel system is proposed to monitor the scratching motion of a sleeping human body at night. The core of the system is just a leaky coaxial cable (LCX) and a router. LCX is commonly used in blind or semi-blind fields in wireless communication. The new idea is that the leaky cable is placed on the bed, and the physical-layer state information of the wireless communication channels is then acquired to identify scratching and other small body movements during human sleep. The results show that the system can detect the movement and its duration. Channel state information (CSI) packets are collected by a card installed in the computer, based on the 802.11n protocol. The characterisation of the scratching motion in the collected CSI is unique, so it can be distinguished from the wireless channel amplitude variation trend.
Birgiolas, Justas; Jernigan, Christopher M.; Gerkin, Richard C.; Smith, Brian H.; Crook, Sharon M.
2017-01-01
Many scientifically and agriculturally important insects use antennae to detect the presence of volatile chemical compounds and extend their proboscis during feeding. The ability to rapidly obtain high-resolution measurements of natural antenna and proboscis movements and assess how they change in response to chemical, developmental, and genetic manipulations can aid the understanding of insect behavior. By extending our previous work on assessing aggregate insect swarm or animal group movements from natural and laboratory videos using the video analysis software SwarmSight, we developed a novel, free, and open-source software module, SwarmSight Appendage Tracking (SwarmSight.org) for frame-by-frame tracking of insect antenna and proboscis positions from conventional web camera videos using conventional computers. The software processes frames about 120 times faster than humans, performs at better than human accuracy, and, using 30 frames per second (fps) videos, can capture antennal dynamics up to 15 Hz. The software was used to track the antennal response of honey bees to two odors and found significant mean antennal retractions away from the odor source about 1 s after odor presentation. We observed antenna position density heat map cluster formation and cluster and mean angle dependence on odor concentration. PMID:29364251
On the Use of Electrooculogram for Efficient Human Computer Interfaces
Usakli, A. B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F.
2010-01-01
The aim of this study is to present electrooculogram signals that can be used efficiently for a human-computer interface. Establishing an efficient alternative channel for communication without overt speech and hand movements is important to increase the quality of life of patients suffering from Amyotrophic Lateral Sclerosis or other illnesses that prevent correct limb and facial muscular responses. We performed several experiments to compare the P300-based BCI speller with the new EOG-based system. A five-letter word can be written in 25 seconds on average with the new system, compared with 105 seconds with the EEG-based device. Giving a message such as “clean-up” could be performed in 3 seconds with the new system. The new system is more efficient than the P300-based BCI system in terms of accuracy, speed, applicability, and cost efficiency. Using EOG signals, it is possible to improve the communication abilities of those patients who can move their eyes. PMID:19841687
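A minimal sketch of how a horizontal-EOG channel might be mapped to left/right commands follows (an illustration only, not the authors' method; the polarity convention, threshold, and synthetic waveform are assumptions made for the example):

```python
import numpy as np

def classify_eog(window, thresh):
    """Classify a horizontal-EOG window as 'left', 'right', or 'rest' by peak deflection."""
    peak = window[np.argmax(np.abs(window))]
    if abs(peak) < thresh:
        return "rest"
    return "right" if peak > 0 else "left"  # polarity convention is an assumption

fs = 100
t = np.arange(0, 1.0, 1.0 / fs)
saccade = 300e-6 * np.exp(-((t - 0.5) ** 2) / 0.005)  # ~300 uV rightward deflection
print(classify_eog(saccade, thresh=100e-6))           # right
print(classify_eog(-saccade, thresh=100e-6))          # left
print(classify_eog(0.2 * saccade, thresh=100e-6))     # rest (60 uV below threshold)
```

In a real system the threshold must sit above blink and drift artifacts, which is why EOG interfaces typically add filtering and calibration stages before this decision step.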
Dynamic visual attention: motion direction versus motion magnitude
NASA Astrophysics Data System (ADS)
Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.
2008-02-01
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector) are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing advantages and inconveniences of each method as well as preferred domain of application.
Modulation of post‐movement beta rebound by contraction force and rate of force development
Fry, Adam; Mullinger, Karen J.; O'Neill, George C.; Barratt, Eleanor L.; Morris, Peter G.; Bauer, Markus; Folland, Jonathan P.
2016-01-01
Movement-induced modulation of the beta rhythm is one of the most robust neural oscillatory phenomena in the brain. In the preparation and execution phases of movement, a loss in beta amplitude is observed [movement-related beta decrease (MRBD)]. This is followed by a rebound above baseline on movement cessation [post-movement beta rebound (PMBR)]. These effects have been measured widely, and recent work suggests that they may have significant importance. Specifically, they have the potential to form the basis of biomarkers for disease, and have been used in neuroscience applications ranging from brain-computer interfaces to markers of neural plasticity. However, despite the robust nature of both MRBD and PMBR, the phenomena themselves are poorly understood. In this study, we characterise MRBD and PMBR during a carefully controlled isometric wrist-flexion paradigm, isolating two fundamental movement parameters: force output and the rate of force development (RFD). Our results show that neither altered force output nor RFD has a significant effect on MRBD. In contrast, PMBR was altered by both parameters. Higher force output results in greater PMBR amplitude, and greater RFD results in a PMBR which is higher in amplitude and shorter in duration. These findings demonstrate that careful control of movement parameters can systematically change PMBR. Further, for temporally protracted movements, the PMBR can be over 7 s in duration. This means accurate control of movement and judicious selection of paradigm parameters are critical in future clinical and basic neuroscientific studies of sensorimotor beta oscillations. Hum Brain Mapp 37:2493-2511, 2016. © 2016 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc. PMID:27061243
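The beta-amplitude time course underlying MRBD and PMBR is typically obtained by band-pass filtering and envelope extraction. The sketch below (a generic illustration on a synthetic signal, not the study's MEG pipeline; the band edges, timings, and amplitude profile are invented) recovers a dip-then-rebound profile of the kind the abstract describes:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_envelope(x, fs, band=(15.0, 30.0)):
    """Beta-band amplitude envelope via band-pass filter + Hilbert transform."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

fs = 250
t = np.arange(0, 6.0, 1.0 / fs)
# Toy amplitude profile: baseline 1.0, MRBD dip during "movement" (2-3 s),
# PMBR overshoot after movement offset (3-4.5 s)
amp = np.ones_like(t)
amp[(t >= 2) & (t < 3)] = 0.4
amp[(t >= 3) & (t < 4.5)] = 1.6
env = beta_envelope(amp * np.sin(2 * np.pi * 20 * t), fs)
dip = env[(t >= 2.2) & (t < 2.8)].mean()      # MRBD window
rebound = env[(t >= 3.3) & (t < 4.2)].mean()  # PMBR window
print(round(dip, 2), round(rebound, 2))
```

The same envelope computation, applied trial-by-trial and averaged, is the standard route from raw sensor data to MRBD/PMBR amplitude and duration estimates.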
NASA Astrophysics Data System (ADS)
Kim, Sung-Phil; Simeral, John D.; Hochberg, Leigh R.; Donoghue, John P.; Black, Michael J.
2008-12-01
Computer-mediated connections between human motor cortical neurons and assistive devices promise to improve or restore lost function in people with paralysis. Recently, a pilot clinical study of an intracortical neural interface system demonstrated that a tetraplegic human was able to obtain continuous two-dimensional control of a computer cursor using neural activity recorded from his motor cortex. This control, however, was not sufficiently accurate for reliable use in many common computer control tasks. Here, we studied several central design choices for such a system including the kinematic representation for cursor movement, the decoding method that translates neuronal ensemble spiking activity into a control signal and the cursor control task used during training for optimizing the parameters of the decoding method. In two tetraplegic participants, we found that controlling a cursor's velocity resulted in more accurate closed-loop control than controlling its position directly and that cursor velocity control was achieved more rapidly than position control. Control quality was further improved over conventional linear filters by using a probabilistic method, the Kalman filter, to decode human motor cortical activity. Performance assessment based on standard metrics used for the evaluation of a wide range of pointing devices demonstrated significantly improved cursor control with velocity rather than position decoding. Disclosure. JPD is the Chief Scientific Officer and a director of Cyberkinetics Neurotechnology Systems (CYKN); he holds stock and receives compensation. JDS has been a consultant for CYKN. LRH receives clinical trial support from CYKN.
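The velocity-decoding idea can be sketched with a toy linear model (an illustration under invented parameters, not the study's actual neural data or tuning model): simulate firing rates that depend linearly on cursor velocity, then run a standard Kalman filter to recover the velocity from the noisy rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: 20 neurons, rates z_t = H v_t + noise; velocity is a smooth random walk.
n_neurons, T = 20, 300
A = 0.95 * np.eye(2)                  # velocity dynamics (smoothness prior)
W = 0.02 * np.eye(2)                  # process noise covariance
H = rng.normal(size=(n_neurons, 2))   # tuning matrix (velocity -> firing rates)
Q = 0.5 * np.eye(n_neurons)           # observation noise covariance

# Simulate true velocities and observed "neural" activity
v_true = np.zeros((T, 2))
for step in range(1, T):
    v_true[step] = A @ v_true[step - 1] + rng.multivariate_normal(np.zeros(2), W)
z = v_true @ H.T + rng.multivariate_normal(np.zeros(n_neurons), Q, size=T)

# Kalman filter decode: predict with the dynamics, correct with the observation
v_hat = np.zeros((T, 2))
P = np.eye(2)
for step in range(1, T):
    v_pred = A @ v_hat[step - 1]
    P = A @ P @ A.T + W
    S = H @ P @ H.T + Q
    K = P @ H.T @ np.linalg.inv(S)
    v_hat[step] = v_pred + K @ (z[step] - H @ v_pred)
    P = (np.eye(2) - K @ H) @ P

corr = np.corrcoef(v_true[:, 0], v_hat[:, 0])[0, 1]
print(round(corr, 3))
```

The velocity (rather than position) state representation is what the abstract reports as yielding more accurate closed-loop control; the Kalman filter supplies the probabilistic decoding step.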
Predictive Simulations of Neuromuscular Coordination and Joint-Contact Loading in Human Gait.
Lin, Yi-Chung; Walter, Jonathan P; Pandy, Marcus G
2018-04-18
We implemented direct collocation on a full-body neuromusculoskeletal model to calculate muscle forces, ground reaction forces and knee contact loading simultaneously for one cycle of human gait. A data-tracking collocation problem was solved for walking at the normal speed to establish the practicality of incorporating a 3D model of articular contact and a model of foot-ground interaction explicitly in a dynamic optimization simulation. The data-tracking solution then was used as an initial guess to solve predictive collocation problems, where novel patterns of movement were generated for walking at slow and fast speeds, independent of experimental data. The data-tracking solutions accurately reproduced joint motion, ground forces and knee contact loads measured for two total knee arthroplasty patients walking at their preferred speeds. RMS errors in joint kinematics were < 2.0° for rotations and < 0.3 cm for translations while errors in the model-computed ground-reaction and knee-contact forces were < 0.07 BW and < 0.4 BW, respectively. The predictive solutions were also consistent with joint kinematics, ground forces, knee contact loads and muscle activation patterns measured for slow and fast walking. The results demonstrate the feasibility of performing computationally-efficient, predictive, dynamic optimization simulations of movement using full-body, muscle-actuated models with realistic representations of joint function.
An embodiment effect in computer-based learning with animated pedagogical agents.
Mayer, Richard E; DaPra, C Scott
2012-09-01
How do social cues such as gesturing, facial expression, eye gaze, and human-like movement affect multimedia learning with onscreen agents? To help address this question, students were asked to twice view a 4-min narrated presentation on how solar cells work in which the screen showed an animated pedagogical agent standing to the left of 11 successive slides. Across three experiments, learners performed better on a transfer test when a human-voiced agent displayed human-like gestures, facial expression, eye gaze, and body movement than when the agent did not, yielding an embodiment effect. In Experiment 2 the embodiment effect was found when the agent spoke in a human voice but not in a machine voice. In Experiment 3, the embodiment effect was found both when students were told the onscreen agent was consistent with their choice of agent characteristics and when inconsistent. Students who viewed a highly embodied agent also rated the social attributes of the agent more positively than did students who viewed a nongesturing agent. The results are explained by social agency theory, in which social cues in a multimedia message prime a feeling of social partnership in the learner, which leads to deeper cognitive processing during learning, and results in a more meaningful learning outcome as reflected in transfer test performance.
Humans and Robots. Educational Brief.
ERIC Educational Resources Information Center
National Aeronautics and Space Administration, Washington, DC.
This brief discusses human movement and robotic human movement simulators. The activity for students in grades 5-12 provides a history of robotic movement and includes making an End Effector for the robotic arms used on the Space Shuttle and the International Space Station (ISS). (MVL)
Spatial constancy mechanisms in motor control
Medendorp, W. Pieter
2011-01-01
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye–head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals. PMID:21242137
Effects of training pre-movement sensorimotor rhythms on behavioral performance
NASA Astrophysics Data System (ADS)
McFarland, Dennis J.; Sarnacki, William A.; Wolpaw, Jonathan R.
2015-12-01
Objective. Brain-computer interface (BCI) technology might contribute to rehabilitation of motor function. This speculation is based on the premise that modifying the electroencephalographic (EEG) activity will modify behavior, a proposition for which there is limited empirical data. The present study asked whether learned modulation of pre-movement sensorimotor rhythm (SMR) activity can affect motor performance in normal human subjects. Approach. Eight individuals first performed a joystick-based cursor-movement task with variable warning periods. Targets appeared randomly on a video monitor and subjects moved the cursor to the target and pressed a select button within 2 s. SMR features in the pre-movement EEG that correlated with performance speed and accuracy were identified. The subjects then learned to increase or decrease these features to control a two-target BCI task. Following successful BCI training, they were asked to increase or decrease SMR amplitude in order to initiate the joystick task. Main results. After BCI training, pre-movement SMR amplitude was correlated with performance in subjects with initial poor performance: lower amplitude was associated with faster and more accurate movement. The beneficial effect on performance of lower SMR amplitude was greater in subjects with lower initial performance levels. Significance. These results indicate that BCI-based SMR training can affect a standard motor behavior. They provide a rationale for studies that integrate such training into rehabilitation protocols and examine its capacity to enhance restoration of useful motor function.
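One common way to quantify pre-movement SMR amplitude of the kind studied above is spectral band power in the mu band (8-12 Hz). The sketch below is hypothetical: it simulates trials with varying mu amplitude, estimates band power with Welch's method, and correlates it with a reaction time that is assumed (for illustration only) to slow as amplitude grows, mirroring the reported direction of the effect.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical sketch: pre-movement mu-band (8-12 Hz) power vs reaction time.
# Signal parameters and the amplitude-speed relation are invented.
rng = np.random.default_rng(1)
fs = 256                      # sampling rate (Hz)
n_trials = 60

mu_power = np.zeros(n_trials)
reaction_time = np.zeros(n_trials)
for i in range(n_trials):
    amp = rng.uniform(0.5, 2.0)            # trial-specific mu amplitude
    t = np.arange(fs) / fs                 # 1 s pre-movement window
    eeg = amp * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=fs)
    f, pxx = welch(eeg, fs=fs, nperseg=128)
    band = (f >= 8) & (f <= 12)
    mu_power[i] = pxx[band].mean()
    # assumed relation: higher pre-movement SMR -> slower response
    reaction_time[i] = 0.3 + 0.1 * amp + 0.02 * rng.normal()

r = np.corrcoef(mu_power, reaction_time)[0, 1]
print(r > 0.5)
```

A real analysis would extract the same band-power feature from recorded EEG epochs preceding each joystick movement rather than from synthetic sinusoids.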
The Motor System: The Whole and its Parts
Otten, E.
2001-01-01
Our knowledge of components of the human motor system has been growing steadily, but our understanding of its integration into a system is lagging behind. It is suggested that a combination of measurements of forces and movements of the motor system in a functionally meaningful environment in conjunction with computer simulations of the motor system may help us in understanding motor system properties. Neurotrauma can be seen as a natural deviation, with recovery as a slow path to yet another deviant state of the motor system. In that form they may be useful in explaining the close interaction between form and function of the human motor system. PMID:11530882
Venture, Gentiane; Nakamura, Yoshihiko; Yamane, Katsu; Hirashima, Masaya
2007-01-01
Though seldom identified, the dynamics of human joints is important in the fields of medical robotics and medical research. We present a general solution for estimating, in vivo and simultaneously, the passive dynamics of the joints of the human limbs. It is based on a multi-body description of the human body and its kinematics and dynamics computations. The linear passive joint dynamics of the shoulders and elbows (stiffness, viscosity, and friction) are estimated simultaneously using the linear least-squares method. Movements were acquired with an optical motion-capture studio on one examinee during the clinical diagnosis of neuromuscular diseases. Experimental results are given and discussed.
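The linear least-squares step described above can be sketched on simulated data. The passive torque model below (stiffness, viscosity, Coulomb friction) and all "true" parameter values are invented for illustration; the paper's actual regressors come from its multi-body dynamics computation.

```python
import numpy as np

# Minimal sketch: estimate stiffness K, viscosity B, and Coulomb friction C
# by linear least squares, assuming tau = K*theta + B*dtheta + C*sign(dtheta).
rng = np.random.default_rng(2)
K_true, B_true, C_true = 4.0, 0.8, 0.3

t = np.linspace(0, 10, 1000)
theta = 0.5 * np.sin(2 * np.pi * 0.5 * t)                     # joint angle (rad)
dtheta = 0.5 * 2 * np.pi * 0.5 * np.cos(2 * np.pi * 0.5 * t)  # joint velocity
tau = (K_true * theta + B_true * dtheta
       + C_true * np.sign(dtheta) + 0.05 * rng.normal(size=t.size))

# Regressor matrix: each column multiplies one unknown parameter
X = np.column_stack([theta, dtheta, np.sign(dtheta)])
(K_hat, B_hat, C_hat), *_ = np.linalg.lstsq(X, tau, rcond=None)
print(round(K_hat, 1), round(B_hat, 1), round(C_hat, 1))
```

Because the model is linear in the parameters, one solve recovers all three simultaneously, which is what makes the in-vivo estimation tractable.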
Three-dimensional microscopic tomographic imagings of the cataract in a human lens in vivo
NASA Astrophysics Data System (ADS)
Masters, Barry R.
1998-10-01
The problem of three-dimensional visualization of a human lens in vivo has been solved by a technique of volume rendering a transformed series of 60 rotated Scheimpflug (a dual-slit reflected light microscope) digital images. The data set was obtained by rotating the Scheimpflug camera about the optic axis of the lens in 3-degree increments. The transformed set of optical sections was first aligned to correct for small eye movements, and then rendered into a volume reconstruction with volume-rendering computer graphics techniques. To help visualize the distribution of lens opacities (cataracts) in the living human lens, the intensity of light scattering was pseudocolor coded and the cataract opacities were displayed as a movie.
Generating human-like movements on an anthropomorphic robot using an interior point method
NASA Astrophysics Data System (ADS)
Costa e Silva, E.; Araújo, J. P.; Machado, D.; Costa, M. F.; Erlhagen, W.; Bicho, E.
2013-10-01
In previous work we have presented a model for generating human-like arm and hand movements on an anthropomorphic robot involved in human-robot collaboration tasks. This model was inspired by the Posture-Based Motion-Planning Model of human movements. Numerical results and simulations for reach-to-grasp movements with two different grip types have been presented previously. In this paper we extend our model in order to address the generation of more complex movement sequences which are challenged by scenarios cluttered with obstacles. The numerical results were obtained using the IPOPT solver, which was integrated in our MATLAB simulator of an anthropomorphic robot.
Analysis of Human Swing Movement and Transferring into Robot
NASA Astrophysics Data System (ADS)
Shimodaira, Jun; Amaoka, Yuki; Hamatani, Shinsuke; Takeuchi, Masahiro; Hirai, Hiroaki; Miyazaki, Fumio
Based on the Generalized Motor Program, we analyzed the skill of human table-tennis movement. We hypothesized that it can be divided into an arm swing and a translational movement, produced by upper- and lower-body movements, respectively. Using Principal Component Analysis, we expressed the 3D position of the racket with only one parameter. Measurement of trunk position confirmed that the lower body plays the role of keeping a fixed relative position between the ball and the trunk at each hitting time. By applying these human skills in upper- and lower-body movements, we could make the robot properly play table tennis with a human.
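The dimensionality-reduction step above can be sketched with PCA via the singular value decomposition. The synthetic trajectory (a noisy, roughly straight swing path) is invented for illustration; a real racket path would be curved but still dominated by one principal direction.

```python
import numpy as np

# Sketch: reduce a 3D racket trajectory to a single PCA parameter.
rng = np.random.default_rng(3)
s = np.linspace(0, 1, 100)                 # swing phase
direction = np.array([0.6, 0.3, 0.74])     # dominant swing direction (invented)
pos = np.outer(s, direction) + 0.01 * rng.normal(size=(100, 3))

# PCA via SVD of the mean-centred positions
centred = pos - pos.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
score = centred @ Vt[0]                    # the single swing parameter

# fraction of variance captured by the first principal component
explained = S[0] ** 2 / np.sum(S ** 2)
print(explained > 0.99)
```

When one component explains nearly all the variance, the scalar `score` is a faithful stand-in for the full 3D racket position, which is what allows the swing to be parameterized by a single number.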
A brain-controlled lower-limb exoskeleton for human gait training.
Liu, Dong; Chen, Weihai; Pei, Zhongcai; Wang, Jianhua
2017-10-01
Brain-computer interfaces have been a novel approach to translating human intentions into movement commands in robotic systems. This paper describes an electroencephalogram-based brain-controlled lower-limb exoskeleton for gait training, as a proof of concept towards rehabilitation with human-in-the-loop. Instead of using conventional single electroencephalography correlates, e.g., evoked P300 or spontaneous motor imagery, we propose a novel framework integrating two asynchronous signal modalities, i.e., sensorimotor rhythms (SMRs) and movement-related cortical potentials (MRCPs). We executed experiments in a biologically inspired and customized lower-limb exoskeleton in which subjects (N = 6) actively controlled the robot using their brain signals. Each subject performed three consecutive sessions composed of offline training, online visual-feedback testing, and online robot-control recordings. Post hoc evaluations were conducted, including mental-workload assessment, feature analysis, and statistical tests. An average robot-control accuracy of 80.16% ± 5.44% was obtained with the SMR-based method, while estimation using the MRCP-based method yielded an average performance of 68.62% ± 8.55%. The experimental results showed the feasibility of the proposed framework, with all subjects successfully controlling the exoskeleton. The current paradigm could be further extended to paraplegic patients in clinical trials.
Unifying Speed-Accuracy Trade-Off and Cost-Benefit Trade-Off in Human Reaching Movements.
Peternel, Luka; Sigaud, Olivier; Babič, Jan
2017-01-01
Two basic trade-offs interact while our brain decides how to move our body. First, with the cost-benefit trade-off, the brain trades between the importance of moving faster toward a target that is more rewarding and the increased muscular cost resulting from a faster movement. Second, with the speed-accuracy trade-off, the brain trades between how accurate the movement needs to be and the time it takes to achieve such accuracy. So far, these two trade-offs have been well studied in isolation, despite their obvious interdependence. To overcome this limitation, we propose a new model that is able to simultaneously account for both trade-offs. The model assumes that the central nervous system maximizes the expected utility resulting from the potential reward and the cost over the repetition of many movements, taking into account the probability of missing the target. The resulting model is able to account for both the speed-accuracy and the cost-benefit trade-offs. To validate the proposed hypothesis, we compare the properties of the computational model with data from an experimental study in which subjects had to reach for targets by performing arm movements in a horizontal plane. The results qualitatively show that the proposed model successfully accounts for both cost-benefit and speed-accuracy trade-offs.
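The expected-utility idea above can be illustrated with a toy calculation: choose a movement duration that trades time-discounted reward against effort, weighted by the probability of hitting the target. All functional forms and constants below are invented for illustration and are not the model's actual equations.

```python
import numpy as np

# Toy expected-utility model over candidate movement durations T.
reward, target_width, distance = 10.0, 0.02, 0.3

T = np.linspace(0.1, 2.0, 500)              # candidate durations (s)
# speed-accuracy trade-off: endpoint noise shrinks with longer duration
sigma = 0.05 * distance / T
p_hit = 1.0 - np.exp(-(target_width / sigma) ** 2)
# cost-benefit trade-off: temporal discounting of reward plus effort cost
utility = reward * p_hit / (1.0 + T) - 0.5 * (distance / T) ** 2

T_opt = T[np.argmax(utility)]
print(0.5 < T_opt < 1.8)
```

The optimum is interior: very short movements miss the target (speed-accuracy), while very long ones waste discounted reward (cost-benefit), so maximizing expected utility fixes a preferred duration.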
NASA Astrophysics Data System (ADS)
Komogortsev, Oleg V.; Karpov, Alexey; Holland, Corey D.
2012-06-01
The widespread use of computers throughout modern society introduces the necessity for usable and counterfeit-resistant authentication methods to ensure secure access to personal resources such as bank accounts, e-mail, and social media. Current authentication methods require tedious memorization of lengthy pass phrases, are often prone to shoulder-surfing, and may be easily replicated (either by counterfeiting parts of the human body or by guessing an authentication token based on readily available information). This paper describes preliminary work toward a counterfeit-resistant usable eye movement-based (CUE) authentication method. CUE does not require any passwords (improving the memorability aspect of the authentication system), and aims to provide high resistance to spoofing and shoulder-surfing by employing the combined biometric capabilities of two behavioral biometric traits: 1) oculomotor plant characteristics (OPC), which represent the internal, non-visible, anatomical structure of the eye; 2) complex eye movement patterns (CEM), which represent the strategies employed by the brain to guide visual attention. Both OPC and CEM are extracted from the eye movement signal provided by an eye tracking system. Preliminary results indicate that the fusion of OPC and CEM traits is capable of providing a 30% reduction in authentication error when compared to the authentication accuracy of individual traits.
Sensor Control of Robot Arc Welding
NASA Technical Reports Server (NTRS)
Sias, F. R., Jr.
1983-01-01
The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.
Tal'nov, A N; Cherkassky, V L; Kostyukov, A I
1997-08-01
Electromyograms were recorded by surface electrodes in healthy human subjects from the mm. biceps brachii (caput longum et brevis), brachioradialis, and triceps brachii (caput longum) during slow transition movements of the elbow joint against a weak extending torque. The test movements (flexion transitions between two steady states) were performed under visual control by combining on a monitor screen a signal from a joint-angle sensor with a corresponding command generated by a computer. Movement velocities ranged between 5 and 80 degrees/s; subjects were asked to move the forearm without activating the elbow extensors. Surface electromyograms were full-wave rectified, filtered, and averaged within sets of 10 identical tests. Amplitudes of the dynamic and steady-state components of the electromyograms were determined as a function of the final joint angle, and slow and fast movements were compared. An exponential-like increase of the dynamic component was observed in electromyograms recorded from m. biceps brachii; the component increased with movement velocity and with load increment. In many experiments, a statistically significant decrease of the static component could be noticed within the middle range of joint angles (40-60 degrees), followed by a well-expressed increment for larger movements. This pattern of the static component could vary between experiments, even in the same subjects. A steady discharge in m. brachioradialis during the ramp phase was usually recorded only under a notable load. The variable and often unpredictable character of the static components of the electromyograms recorded from elbow flexors during the transition movements makes it difficult to use the equilibrium-point hypothesis to describe the central processes of movement. It is assumed that during active muscle shortening the dynamic components of the arriving efferent activity play a predominant role.
A simple scheme can be proposed for the transition to a steady state after shortening. A decrease of the efferent inflow can evoke internal lengthening of the contractile elements in the muscle and, as a result, a hysteresis-related increase in the efficiency of muscle contraction. Effectiveness in maintaining the steady position also seems to be enhanced by muscle thixotropy and friction processes in the joint. Hysteresis after-effects in the elbow flexors were demonstrated as a difference in the steady-state levels of electromyograms between oppositely directed approaches to the same joint position.
Williams, Matthew R.; Kirsch, Robert F.
2013-01-01
We investigated the performance of three user interfaces for restoration of cursor control in individuals with tetraplegia: head orientation, EMG from face and neck muscles, and a standard computer mouse (for comparison). Subjects engaged in a 2D, center-out, Fitts' Law style task and performance was evaluated using several measures. Overall, head-orientation-commanded motion resembled mouse-commanded cursor motion (smooth, accurate movements to all targets), although with somewhat lower performance. EMG-commanded movements exhibited a higher average speed, but other performance measures were lower, particularly for diagonal targets. Compared to head orientation, EMG as a cursor command source was less accurate, was more affected by target direction and was more prone to overshoot the target. In particular, EMG commands for diagonal targets were more sequential, moving first in one direction and then the other rather than moving simultaneously in both directions. While the relative performance of each user interface differs, each has specific advantages depending on the application. PMID:18990652
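The standard metrics behind a Fitts'-style comparison like the one above are the index of difficulty and throughput. The sketch below uses the common Shannon formulation; the distances, widths, and movement time are made-up example values, not data from the study.

```python
import math

# Fitts'-style pointing metrics: index of difficulty (ID) and throughput.
def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D/W + 1), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s for one condition."""
    return index_of_difficulty(distance, width) / movement_time

# e.g. a 512-px reach to a 32-px target completed in 1.2 s
ID = index_of_difficulty(512, 32)
tp = throughput(512, 32, 1.2)
print(round(ID, 2), round(tp, 2))
```

Throughput folds speed and accuracy into one number, which is why it is a convenient basis for ranking dissimilar input devices such as head orientation, EMG, and a mouse.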
NASA Astrophysics Data System (ADS)
Strulik, Konrad L.; Cho, Min H.; Collins, Brian T.; Khan, Noureen; Banovac, Filip; Slack, Rebecca; Cleary, Kevin
2008-03-01
To track respiratory motion during CyberKnife stereotactic radiosurgery in the lung, several (three to five) cylindrical gold fiducials are implanted near the planned target volume (PTV). Since these fiducials remain in the human body after treatment, we hypothesize that tracking fiducial movement over time may correlate with the tumor response to treatment and pulmonary fibrosis, thereby serving as an indicator of treatment success. In this paper, we investigate fiducial migration in 24 patients through examination of computed tomography (CT) volume images at four time points: pre-treatment and three, six, and twelve months post-treatment. We developed a MATLAB-based GUI environment to display the images, identify the fiducials, and compute our performance measure. After we semi-automatically segmented and detected fiducial locations in CT images of the same patient over time, we identified them according to their configuration and introduced a relative performance measure (ACD: average center distance) to detect their migration. We found that the migration tended to result in a movement towards the fiducial center of the radiated tissue area (indicating tumor regression) and may potentially be linked to the patient prognosis.
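A measure in the spirit of the paper's ACD can be sketched as the mean distance of each fiducial from the fiducial centroid at one time point; a shrinking value between scans then indicates movement toward the center. The coordinates below are invented, and this is my reading of the measure's definition rather than the paper's exact formula.

```python
import numpy as np

# Sketch of an average-center-distance (ACD) style migration measure.
def average_center_distance(fiducials):
    """fiducials: (n, 3) array of fiducial positions in one CT volume."""
    center = fiducials.mean(axis=0)
    return np.linalg.norm(fiducials - center, axis=1).mean()

pre = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [5.0, 8.0, 0.0]])
# post-treatment: fiducials moved 30% toward their common center (regression)
post = pre + 0.3 * (pre.mean(axis=0) - pre)

acd_pre = average_center_distance(pre)
acd_post = average_center_distance(post)
print(acd_post < acd_pre)  # shrinking ACD suggests movement toward the center
```

Being relative to the configuration's own centroid, such a measure is insensitive to whole-body shifts between scans, which matters when registering CT volumes acquired months apart.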
Security Applications Of Computer Motion Detection
NASA Astrophysics Data System (ADS)
Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry
1987-05-01
An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
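The simplest of the motion-detection stages named above, frame differencing, can be sketched in a few lines: flag pixels whose grey level changed by more than a threshold between consecutive frames. The tiny synthetic frames and the threshold value are invented for illustration.

```python
import numpy as np

# Minimal frame-differencing motion detector on synthetic frames:
# a bright 10x10 "object" moves across an otherwise static scene.
rng = np.random.default_rng(4)
background = rng.integers(40, 60, size=(64, 64)).astype(float)

frame1 = background.copy()
frame1[10:20, 10:20] += 100        # object at position A
frame2 = background.copy()
frame2[10:20, 30:40] += 100        # object moved to position B

diff = np.abs(frame2 - frame1)
motion_mask = diff > 50            # threshold chosen for illustration

# the object shows up both where it appeared and where it disappeared
print(motion_mask.sum())
```

The double blob (appearance plus disappearance) is one reason frame differencing alone is not accurate enough in cluttered scenes, motivating the tracking and expert-system stages described in the abstract.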
Xu, Ren; Jiang, Ning; Dosen, Strahinja; Lin, Chuang; Mrachacz-Kersting, Natalie; Dremstrup, Kim; Farina, Dario
2016-08-01
In this study, we present a novel multi-class brain-computer interface (BCI) for communication and control. In this system, the information processing is shared by the algorithm (computer) and the user (human). Specifically, an electro-tactile cycle was presented to the user, providing the choice (class) by delivering timely sensory input. The user discriminated these choices by his/her endogenous sensory ability and selected the desired choice with an intuitive motor task. This selection was detected by a fast brain switch based on real-time detection of movement-related cortical potentials from scalp EEG. We demonstrated the feasibility of such a system with a four-class BCI, yielding a true positive rate of ∼ 80% and ∼ 70%, and an information transfer rate of ∼ 7 bits/min and ∼ 5 bits/min, for the movement and imagination selection command, respectively. Furthermore, when the system was extended to eight classes, the throughput of the system was improved, demonstrating the capability of accommodating a large number of classes. Combining the endogenous sensory discrimination with the fast brain switch, the proposed system could be an effective, multi-class, gaze-independent BCI system for communication and control applications.
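Information transfer rates like the bits/min figures above are conventionally computed with the Wolpaw formula, which converts the number of classes and the selection accuracy into bits per selection. The sketch below applies it to illustrative numbers (a four-class system at 80% accuracy and an assumed 10 selections per minute), not to the study's own trial timing.

```python
import math

# Wolpaw information-transfer-rate formula: bits conveyed per selection.
def itr_bits_per_trial(n_classes, accuracy):
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0          # at or below chance, no information conveyed
    if p == 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

bits = itr_bits_per_trial(4, 0.80)
print(round(bits * 10, 1))  # bits/min at an assumed 10 selections/min
```

The formula makes the trade-off in the abstract concrete: adding classes raises log2(n) but usually lowers accuracy, and throughput improves only when the first effect wins.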
NASA Astrophysics Data System (ADS)
Yildirim, Serdar; Montanari, Simona; Andersen, Elaine; Narayanan, Shrikanth S.
2003-10-01
Understanding the fine details of children's speech and gestural characteristics helps, among other things, in creating natural computer interfaces. We analyze the acoustic, lexical/non-lexical and spoken/gestural discourse characteristics of young children's speech using audio-video data gathered with a Wizard of Oz technique from 4 to 6 year old children engaged in resolving a series of age-appropriate cognitive challenges. Fundamental and formant frequencies exhibited greater variations between subjects, consistent with previous results on read speech [Lee et al., J. Acoust. Soc. Am. 105, 1455-1468 (1999)]. Also, our analysis showed that, in a given bandwidth, the phonemic information contained in the speech of young children is significantly less than that of older children and adults. To enable an integrated analysis, a multi-track annotation board was constructed using the ANVIL tool kit [M. Kipp, Eurospeech 1367-1370 (2001)]. Along with speech transcriptions and acoustic analysis, non-lexical and discourse characteristics and the child's gestures (facial expressions, body movements, hand/head movements) were annotated in a synchronized multilayer system. Initial results showed that younger children rely more on gestures to emphasize their verbal assertions. Younger children use non-lexical speech (e.g., um, huh) associated with frustration and pondering/reflecting more frequently than older ones. Younger children also repair more with humans than with computers.
Eye movement analysis of reading from computer displays, eReaders and printed books.
Zambarbieri, Daniela; Carniglia, Elena
2012-09-01
To compare eye movements during silent reading of three eBooks and a printed book. The three different eReading tools were a desktop PC, iPad tablet and Kindle eReader. Video-oculographic technology was used for recording eye movements. In the case of reading from the computer display the recordings were made by a video camera placed below the computer screen, whereas for reading from the iPad tablet, eReader and printed book the recording system was worn by the subject and had two cameras: one for recording the movement of the eyes and the other for recording the scene in front of the subject. Data analysis provided quantitative information in terms of number of fixations, their duration, and the direction of the movement, the latter to distinguish between fixations and regressions. Mean fixation duration was different only in reading from the computer display, and was similar for the Tablet, eReader and printed book. The percentage of regressions with respect to the total amount of fixations was comparable for eReading tools and the printed book. The analysis of eye movements during reading an eBook from different eReading tools suggests that subjects' reading behaviour is similar to reading from a printed book. © 2012 The College of Optometrists.
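The regression count used in the reading analysis above reduces, at its simplest, to classifying successive fixation-to-fixation jumps by direction: for left-to-right reading, a leftward jump is a regression. The fixation positions below are invented for illustration; real analysis would first detect fixations from raw gaze samples.

```python
import numpy as np

# Toy regression count from horizontal fixation positions on one text line.
fix_x = np.array([50, 120, 190, 150, 260, 330, 400, 360, 470])  # pixels

steps = np.diff(fix_x)
n_forward = int((steps > 0).sum())
n_regressions = int((steps < 0).sum())
pct_regressions = 100.0 * n_regressions / steps.size

print(n_regressions, round(pct_regressions, 1))
```

Reporting regressions as a percentage of all fixations, as the study does, makes the measure comparable across reading media with different total fixation counts.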
An information gathering system for medical image inspection
NASA Astrophysics Data System (ADS)
Lee, Young-Jin; Bajcsy, Peter
2005-04-01
We present an information gathering system for medical image inspection that consists of software tools for capturing computer-centric and human-centric information. Computer-centric information includes (1) static annotations, such as (a) image drawings enclosing any selected area, a set of areas with similar colors, a set of salient points, and (b) textual descriptions associated with either image drawings or links between pairs of image drawings, and (2) dynamic (or temporal) information, such as mouse movements, zoom level changes, image panning and frame selections from an image stack. Human-centric information is represented by video and audio signals that are acquired by computer-mounted cameras and microphones. The short-term goal of the presented system is to facilitate learning of medical novices from medical experts, while the long-term goal is to data mine all information about image inspection for assisting in making diagnoses. In this work, we built basic software functionality for gathering computer-centric and human-centric information of the aforementioned variables. Next, we developed the information playback capabilities of all gathered information for educational purposes. Finally, we prototyped text-based and image template-based search engines to retrieve information from recorded annotations, for example, (a) find all annotations containing the word "blood vessels", or (b) search for similar areas to a selected image area. The information gathering system for medical image inspection reported here has been tested with images from the Histology Atlas database.
Discriminating movements of liquid and gas in the rabbit colon with impedance manometry.
Mohd Rosli, R; Leibbrandt, R E; Wiklendt, L; Costa, M; Wattchow, D A; Spencer, N J; Brookes, S J; Omari, T I; Dinning, P G
2018-05-01
High-resolution impedance manometry is a technique that is well established in esophageal motility studies for relating motor patterns to bolus flow. The use of this technique in the colon has not been established. In isolated segments of rabbit proximal colon, we recorded motor patterns and the movement of liquid or gas boluses with a high-resolution impedance manometry catheter. These detected movements were compared to video-recorded changes in gut diameter. Using the characteristic shapes of the admittance (inverse of impedance) and pressure signals associated with gas or liquid flow, we developed a computational algorithm for the automated detection of these events. Propagating contractions detected by video were also recorded by manometry and impedance. Neither pressure nor admittance signals alone could distinguish between liquid and gas transit; however, the precise relationship between admittance and pressure signals during bolus flow could. Training our computational algorithm on these characteristic shapes yielded a detection accuracy of 87.7% when compared to gas or liquid bolus events detected by manual analysis. Characterizing the relationship between admittance and pressure recorded with high-resolution impedance manometry can not only help in detecting luminal transit in real time, but also distinguish between liquid and gaseous content. This technique holds promise for determining the propulsive nature of human colonic motor patterns. © 2017 John Wiley & Sons Ltd.
Computational estimation of magnetically induced electric fields in a rotating head
NASA Astrophysics Data System (ADS)
Ilvonen, Sami; Laakso, Ilkka
2009-01-01
Change in a magnetic field, or similarly, movement in a strong static magnetic field induces electric fields in human tissues, which could potentially cause harmful effects. In this paper, the fields induced by different rotational movements of a head in a strong homogeneous magnetic field are computed numerically. Average field magnitudes near the retinas and inner ears are studied in order to gain insight into the causes of phosphenes and vertigo-like effects, which are associated with extremely low-frequency (ELF) magnetic fields. The induced electric fields are calculated in four different anatomically realistic head models using an efficient finite-element method (FEM) solver. The results are compared with basic restriction limits by IEEE and ICNIRP. Under rotational movement of the head, with a magnetic flux rate of change of 1 T s-1, the maximum IEEE-averaged electric field and maximum ICNIRP-averaged current density were 337 mV m-1 and 8.84 mA m-2, respectively. The limits by IEEE seem significantly stricter than those by ICNIRP. The results show that a magnetic flux rate of change of 1 T s-1 may induce electric fields in the range of 50 mV m-1 near the retinas, and possibly even larger values near the inner ears. These results provide information for approximating the threshold electric field values of phosphenes and vertigo-like effects.
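The order of magnitude of the reported fields can be checked with a back-of-the-envelope Faraday's-law estimate: for a circular loop of radius r in a uniform field changing at rate dB/dt, the induced electric field along the loop is E = (r/2)·dB/dt. The head is of course neither homogeneous nor circular, which is exactly why the paper uses anatomical FEM models; the loop radius below is an assumed head-scale value.

```python
# Faraday's-law estimate of the induced E-field on a circular loop:
# E = (r / 2) * dB/dt, from the loop EMF divided by its circumference.
def loop_induced_field(radius_m, dB_dt):
    return 0.5 * radius_m * dB_dt

# r ~ 0.1 m (assumed head-sized loop), dB/dt = 1 T/s as in the paper
E = loop_induced_field(0.1, 1.0)
print(E)  # 0.05 V/m = 50 mV/m, the order of magnitude reported near the retinas
```

Agreement with the ~50 mV/m figure near the retinas shows the FEM results sit where simple electromagnetics predicts, with anatomy accounting for the larger local values.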
NASA Technical Reports Server (NTRS)
Adams, Richard J.
2015-01-01
The patent-pending Glove-Enabled Computer Operations (GECO) design leverages extravehicular activity (EVA) glove design features as platforms for instrumentation and tactile feedback, enabling the gloves to function as human-computer interface devices. Flexible sensors in each finger enable control inputs that can be mapped to any number of functions (e.g., a mouse click, a keyboard strike, or a button press). Tracking of hand motion is interpreted alternatively as movement of a mouse (change in cursor position on a graphical user interface) or a change in hand position on a virtual keyboard. Programmable vibro-tactile actuators aligned with each finger enrich the interface by creating the haptic sensations associated with control inputs, such as recoil of a button press.
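The sensor-to-input mapping described above can be sketched as a small dispatch table. The finger assignments and threshold below are hypothetical illustrations, not the GECO design's actual bindings:

```python
def glove_event(flex, threshold=0.8):
    """Map the first finger whose flex-sensor reading crosses a threshold to a
    configurable control input (mouse click, keyboard strike, button press).
    Finger-to-action assignments here are made up for illustration."""
    actions = {0: "mouse_click", 1: "key_strike", 2: "button_press"}
    for finger, value in enumerate(flex):
        if value >= threshold:
            return actions.get(finger, "unmapped")
    return None  # no finger flexed past threshold
```

A real implementation would debounce the readings and pair each recognized input with the corresponding vibro-tactile pulse.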
de Souza Baptista, Roberto; Bo, Antonio P L; Hayashibe, Mitsuhiro
2017-06-01
Performance assessment of human movement is critical in diagnosis and motor-control rehabilitation. Recent developments in portable sensor technology enable clinicians to measure spatiotemporal aspects of movement to aid in neurological assessment. However, the extraction of quantitative information from such measurements is usually done manually through visual inspection. This paper presents a novel framework for automatic human movement assessment that executes segmentation and motor performance parameter extraction in time series of measurements from a sequence of human movements. We use the elements of a Switching Linear Dynamic System model as building blocks to translate formal definitions and procedures from human movement analysis. Our approach provides a method for users with no expertise in signal processing to create movement models from a labeled dataset and later use them for automatic assessment. We validated our framework in preliminary tests involving six healthy adult subjects, who executed common movements in functional tests and rehabilitation exercise sessions such as sit-to-stand and lateral elevation of the arms, and five elderly subjects, two of whom had limited mobility, who executed the sit-to-stand movement. The proposed method worked on random motion sequences for the dual purpose of movement segmentation (accuracy of 72%-100%) and motor performance assessment (mean error of 0%-12%).
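Segmentation with a switching model of this kind typically reduces to finding the most likely sequence of dynamic modes given per-step likelihoods. A minimal sketch of that inference step (standard Viterbi decoding over modes, assuming the per-mode log-likelihoods have already been computed; this is generic machinery, not the paper's specific framework):

```python
import numpy as np

def viterbi_modes(log_lik, log_trans):
    """Most likely mode sequence for a switching model.
    log_lik:   (T, K) per-step log-likelihood of each of K modes.
    log_trans: (K, K) mode-transition log-probabilities."""
    T, K = log_lik.shape
    dp = np.zeros((T, K))            # best score ending in mode k at time t
    ptr = np.zeros((T, K), dtype=int)
    dp[0] = log_lik[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans  # scores[i, j]: i -> j
        ptr[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_lik[t]
    modes = np.empty(T, dtype=int)
    modes[-1] = dp[-1].argmax()
    for t in range(T - 2, -1, -1):   # backtrack
        modes[t] = ptr[t + 1, modes[t + 1]]
    return modes
```

With "sticky" transitions, decoded mode boundaries correspond to movement segment boundaries.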
Richter, Angelika; Hamann, Melanie; Wissel, Jörg; Volk, Holger A
2015-01-01
Dystonia is defined as a neurological syndrome characterized by involuntary sustained or intermittent muscle contractions causing twisting, often repetitive movements, and postures. Paroxysmal dyskinesias are episodic movement disorders encompassing dystonia, chorea, athetosis, and ballism in conscious individuals. Several decades of research have enhanced the understanding of the etiology of human dystonia and dyskinesias that are associated with dystonia, but the pathophysiology remains largely unknown. The spontaneous occurrence of hereditary dystonia and paroxysmal dyskinesia is well documented in rodents used as animal models in basic dystonia research. Several hyperkinetic movement disorders, described in dogs, horses and cattle, show similarities to these human movement disorders. Although dystonia is regarded as the third most common movement disorder in humans, it is often misdiagnosed because of the heterogeneity of etiology and clinical presentation. Since these conditions are poorly known in veterinary practice, their prevalence may be underestimated in veterinary medicine. In order to attract attention to these movement disorders, i.e., dystonia and paroxysmal dyskinesias associated with dystonia, and to enhance interest in translational research, this review gives a brief overview of the current literature regarding dystonia/paroxysmal dyskinesia in humans and summarizes similar hereditary movement disorders reported in domestic animals.
A Computational Framework for Quantitative Evaluation of Movement during Rehabilitation
NASA Astrophysics Data System (ADS)
Chen, Yinpeng; Duff, Margaret; Lehrer, Nicole; Sundaram, Hari; He, Jiping; Wolf, Steven L.; Rikakis, Thanassis
2011-06-01
This paper presents a novel generalized computational framework for quantitative kinematic evaluation of movement in a rehabilitation clinic setting. The framework integrates clinical knowledge and computational data-driven analysis in a systematic manner. The framework provides three key benefits to rehabilitation: (a) the resulting continuous normalized measure allows the clinician to monitor movement quality on a fine scale and easily compare impairments across participants, (b) the framework reveals the effect of individual movement components on the composite movement performance, helping the clinician decide the training foci, and (c) the evaluation runs in real time, which allows the clinician to constantly track a patient's progress and make appropriate adaptations to the therapy protocol. The creation of such an evaluation is difficult because of the sparse amount of recorded clinical observations, the high dimensionality of movement, and high variations in subjects' performance. We address these issues by modeling the evaluation function as a linear combination of multiple normalized kinematic attributes, y = Σᵢ wᵢφᵢ(xᵢ), and estimating each attribute normalization function φᵢ(·) by integrating distributions of idealized movement and deviated movement. The weights wᵢ are derived from a therapist's pair-wise comparisons using a modified RankSVM algorithm. We have applied this framework to evaluate upper limb movement for stroke survivors with excellent results: the evaluation results are highly correlated to the therapist's observations.
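The evaluation function y = Σᵢ wᵢφᵢ(xᵢ) is straightforward to compute once the normalization functions and weights are known. A minimal sketch, with a hypothetical Gaussian-style normalization standing in for the paper's distribution-derived φᵢ and made-up weights in place of the RankSVM-fitted ones:

```python
import math

def evaluate_movement(x, weights, phis):
    """Composite movement score y = sum_i w_i * phi_i(x_i), where each phi_i
    maps a raw kinematic attribute to a normalized [0, 1] quality value."""
    return sum(w * phi(xi) for w, phi, xi in zip(weights, phis, x))

def gaussian_phi(ideal, width):
    """Hypothetical normalization: closeness of an attribute to an idealized
    value, falling off smoothly with deviation."""
    return lambda v: math.exp(-((v - ideal) / width) ** 2)
```

For example, with two attributes both normalized against an ideal value of 1.0, a perfect movement scores 1.0 and any deviation lowers the composite score in proportion to the therapist-derived weights.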
Soft Smart Garments for Lower Limb Joint Position Analysis.
Totaro, Massimo; Poliero, Tommaso; Mondini, Alessio; Lucarotti, Chiara; Cairoli, Giovanni; Ortiz, Jesùs; Beccai, Lucia
2017-10-12
Revealing human movement requires lightweight, flexible systems capable of detecting mechanical parameters (such as strain and pressure) while being worn comfortably by the user and not interfering with his/her activity. In this work we address such a multifaceted challenge by developing smart garments for lower limb motion detection: a textile kneepad and anklet in which soft sensors and readout electronics are embedded for retrieving movement of the specific joint. Stretchable capacitive sensors with a three-electrode configuration are built by combining conductive textiles and elastomeric layers, and are distributed around the knee and ankle. The sensors show excellent behavior in the ~30% strain range, enabling correlation of their responses with optically tracked Euler angles for basic lower limb movements. Bending during knee flexion/extension is detected and discriminated from any external contact by a low-computational-cost algorithm implemented in real time. The smart anklet is designed to address joint motion detection both in and out of the sagittal plane. Ankle dorsi/plantar flexion, adduction/abduction, and rotation are retrieved. Both knee and ankle smart garments show high accuracy in movement detection, with an RMSE of less than 4° in the worst case.
Tommasino, Paolo; Campolo, Domenico
2017-01-01
A major challenge in robotics and computational neuroscience concerns the posture/movement problem in the presence of kinematic redundancy. We recently addressed this issue using a principled approach which, in conjunction with nonlinear inverse optimization, allowed capturing postural strategies such as Donders' law. In this work, after presenting this general model and specifying it as an extension of the Passive Motion Paradigm, we show how, once fitted to capture experimental postural strategies, the model is able to also predict movements. More specifically, the Passive Motion Paradigm embeds two main intrinsic components: joint damping and joint stiffness. In previous work we showed that joint stiffness is responsible for static postures and, in this sense, its parameters are regressed to fit experimental postural strategies. Here, we show how joint damping, in particular its anisotropy, directly affects task-space movements. Rather than using damping parameters to fit task-space motions a posteriori, we make the a priori hypothesis that damping is proportional to stiffness. Remarkably, this allows a posture-fitted model to also capture dynamic performance such as curvature and hysteresis of task-space trajectories during wrist pointing tasks, confirming and extending previous findings in the literature. PMID:29249954
The Human Potential Movement: Forms of Body/Movement/Nonverbal Experiencing.
ERIC Educational Resources Information Center
Caldwell, Stratton F.
A social, humanistic movement has emerged which focuses on the desire of many affluent and advantaged citizens for personal, interpersonal, transpersonal, and organizational growth. It has been termed the "Human Potential Movement." Growth centers, which emphasize the integrated totality of the person, have developed all over the United…
Primate beta oscillations and rhythmic behaviors.
Merchant, Hugo; Bartolo, Ramón
2018-03-01
The study of non-human primates in complex behaviors such as rhythm perception and entrainment is critical to understanding the neurophysiological basis of human cognition. In addition to reviewing the role of beta oscillations in human beat perception, here we discuss the role of primate putaminal oscillatory activity in the control of rhythmic movements that are guided by a sensory metronome or internally gated. The analysis of the local field potentials of behaving macaques showed that gamma oscillations reflect local computations associated with stimulus processing of the metronome, whereas beta activity involves the entrainment of large putaminal circuits, probably in conjunction with other elements of the cortico-basal ganglia-thalamo-cortical circuit, during internally driven rhythmic tapping. Thus, this review emphasizes the need for parametric neurophysiological observations in non-human primates that display well-controlled behavior during high-level cognitive processes.
NASA Astrophysics Data System (ADS)
Choi, Hoseok; Lee, Jeyeon; Park, Jinsick; Lee, Seho; Ahn, Kyoung-ha; Kim, In Young; Lee, Kyoung-Min; Jang, Dong Pyo
2018-02-01
Objective. In arm movement BCIs (brain-computer interfaces), unimanual control has been much more extensively studied than its bimanual counterpart. However, it is well known that the bimanual brain state is different from the unimanual one. Conventional methodology used in unimanual studies does not take the brain state into consideration and therefore appears to be insufficient for decoding bimanual movements. In this paper, we propose the use of a two-staged (effector-then-trajectory) decoder, which combines the classification of movement conditions with a hand trajectory predicting algorithm for unimanual and bimanual movements, for application in real-world BCIs. Approach. Two micro-electrode patches (32 channels) were inserted over the dura mater of the left and right hemispheres of two rhesus monkeys, covering the motor-related cortex, for epidural electrocorticography (ECoG). Six motion sensors (inertial measurement units) were used to record the movement signals. The monkeys performed three types of arm movement tasks: left unimanual, right unimanual, and bimanual. To decode these movements, we used a two-staged decoder, which combines an effector classifier for four states (left unimanual, right unimanual, bimanual movement, and stationary) with a movement predictor using regression. Main results. Using this approach, we successfully decoded both arm positions with the proposed decoder. The results showed that decoding performance for bimanual movements was improved compared to the conventional method, which does not consider the effector, and the decoding performance was significant and stable over a period of four months. In addition, we demonstrated the feasibility of epidural ECoG signals, which provided an adequate level of decoding accuracy. Significance. These results provide evidence that brain signals differ depending on the movement conditions or effectors. Thus, the two-staged method could be useful if BCIs are to generalize over both unimanual and bimanual operations in human applications and in various neuro-prosthetic fields.
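The effector-then-trajectory idea can be sketched as a two-step pipeline. The condition names and callables below are illustrative stand-ins for the trained classifier and per-condition regressors, not the authors' actual models:

```python
def two_stage_decode(features, classify, regressors):
    """Stage 1: classify the effector condition (e.g. left unimanual, right
    unimanual, bimanual, or rest) from the neural features.
    Stage 2: predict the hand trajectory with the regressor that was trained
    for that condition. Returns (condition, prediction)."""
    condition = classify(features)
    if condition == "rest":
        return condition, None  # stationary state: no trajectory to predict
    return condition, regressors[condition](features)
```

The key design point is that the trajectory regressor is selected per brain state rather than shared, which is what improved the bimanual decoding over the conventional single-model approach.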
Gentili, Rodolphe J.; Papaxanthis, Charalambos; Ebadzadeh, Mehdi; Eskiizmirliler, Selim; Ouanezar, Sofiane; Darlot, Christian
2009-01-01
Background: Several authors have suggested that gravitational forces are centrally represented in the brain for planning, control and sensorimotor prediction of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model) whereas others suggested that it computes sensorimotor predictions (internal forward model). Methodology/Principal Findings: This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions without calculating an explicit inverse computation. Using supervised learning, this model learns to control an anthropomorphic robot arm actuated by two antagonist McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movement to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements. Conclusions/Significance: This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field. PMID:19384420
Laban movement analysis to classify emotions from motion
NASA Astrophysics Data System (ADS)
Dewan, Swati; Agarwal, Shubham; Singh, Navjyoti
2018-04-01
In this paper, we present a study of Laban Movement Analysis (LMA) to understand basic human emotions from nonverbal human behaviors. While there are many studies on understanding behavioral patterns based on natural language processing and speech processing, understanding emotion or behavior from non-verbal human motion remains a challenging and largely unexplored field. LMA provides a rich overview of the scope of movement possibilities. Its basic elements can be used for generating or describing movement, and they provide an inroad to understanding movement and to developing movement efficiency and expressiveness. Each human being combines these movement factors in his/her own unique way and organizes them to create phrases and relationships which reveal personal, artistic, or cultural style. In this work, we build a motion descriptor based on a deep understanding of Laban theory. The proposed descriptor builds on previous work and encodes experiential features by using temporal windows. We present a more conceptually elaborate formulation of Laban theory and test it in a relatively new domain of behavioral research with applications in human-machine interaction. The recognition of affective human communication may provide developers with a rich source of information for creating systems that are capable of interacting well with humans. We test our algorithm on the UCLIC dataset, which consists of body motions of 13 non-professional actors portraying angry, fearful, happy and sad emotions. We achieve an accuracy of 87.30% on this dataset.
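A windowed motion descriptor in the spirit described above, though far simpler than the paper's Laban-based features, can be sketched as per-window kinematic statistics (mean speed as a rough proxy for the Time effort, mean acceleration magnitude as a rough proxy for Weight/Flow; the mapping to Laban factors is an assumption for illustration):

```python
import numpy as np

def motion_descriptor(positions, win):
    """Toy windowed descriptor for a (T, D) trajectory of joint positions:
    per non-overlapping window, the mean speed and the mean acceleration
    magnitude, stacked into a (num_windows, 2) feature matrix."""
    vel = np.diff(positions, axis=0)   # frame-to-frame displacement
    acc = np.diff(vel, axis=0)         # change in displacement
    feats = []
    for s in range(0, len(acc) - win + 1, win):
        feats.append([np.linalg.norm(vel[s:s + win], axis=1).mean(),
                      np.linalg.norm(acc[s:s + win], axis=1).mean()])
    return np.array(feats)
```

Descriptors of this form are then fed to a classifier trained on labeled emotional portrayals.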
Smooth leader or sharp follower? Playing the mirror game with a robot.
Kashi, Shir; Levy-Tzedek, Shelly
2018-01-01
The increasing number of opportunities for human-robot interactions in various settings, from industry through home use to rehabilitation, creates a need to understand how best to personalize human-robot interactions to fit both the user and the task at hand. In the current experiment, we explored a human-robot collaborative task of joint movement, in the context of an interactive game. We set out to test people's preferences when interacting with a robotic arm, playing a leader-follower imitation game (the mirror game). Twenty-two young participants played the mirror game with the robotic arm, where one player (person or robot) followed the movements of the other. Each partner (person and robot) led part of the time and followed part of the time. When the robotic arm was leading the joint movement, it performed movements that were either sharp or smooth, which participants were later asked to rate. The greatest preference was given to smooth movements. Half of the participants preferred to lead, and half preferred to follow. Importantly, we found that the movements of the robotic arm primed the subsequent movements performed by the participants. This priming effect by the robot on the movements of the human should be considered when designing interactions with robots. Our results demonstrate individual differences in preferences regarding the role of the human and the joint motion path of the robot and the human when performing the mirror-game collaborative task, and highlight the importance of personalized human-robot interactions.
Techniques for the Analysis of Human Movement.
ERIC Educational Resources Information Center
Grieve, D. W.; And Others
This book presents the major analytical techniques that may be used in the appraisal of human movement. Chapter 1 is devoted to the photographic analysis of movement with particular emphasis on cine filming. Cine film may be taken with little or no restriction on the performer's range of movement; information on the film is permanent and…
Human Motion Capture Data Tailored Transform Coding.
Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He
2015-07-01
Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
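The data-dependent orthogonal bases described above can be illustrated with a truncated SVD per clip: the left singular vectors scaled by the singular values act as transform coefficients, and the right singular vectors act as the basis. This sketch covers only the transform stage and omits the quantization and entropy coding; it is a generic stand-in, not the paper's exact algorithm:

```python
import numpy as np

def compress_clip(clip, k):
    """Represent a mocap clip (frames x channels matrix) in a data-dependent
    orthogonal basis via SVD, keeping only the k strongest components.
    Returns (coefficients, basis)."""
    U, s, Vt = np.linalg.svd(clip, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]

def decompress_clip(coeffs, basis):
    """Reconstruct the clip from its transform coefficients and basis."""
    return coeffs @ basis
```

Because human motion within a clip is highly correlated across channels, a small k preserves most of the signal energy, which is what makes such data-dependent transforms outperform fixed transforms like the DCT on mocap data.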
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
Motion perception: behavior and neural substrate.
Mather, George
2011-05-01
Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. 
WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. For further resources related to this article, please visit the WIREs website. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
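The motion sensors referred to above are classically modeled as correlation-type (Reichardt) detectors, which correlate the signal at one location with a delayed copy of the signal at a neighboring location; opponent subtraction of the two mirror-image subunits yields a direction-signed output. A minimal discrete-time sketch of that textbook scheme (not the specific model in the article):

```python
def reichardt(left, right, delay=1):
    """Minimal correlation-type motion sensor over two adjacent inputs.
    Positive output: stimulus moved left -> right; negative: right -> left."""
    out = 0.0
    for t in range(delay, len(left)):
        # correlate each input with the delayed copy of its neighbor,
        # then subtract the mirror-image subunit (opponent stage)
        out += left[t - delay] * right[t] - right[t - delay] * left[t]
    return out
```

A single detector's output is ambiguous about speed and contrast (the univariance problem noted above), which is why the visual system must pool many sensors tuned to different directions and speeds.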
Automatic detection of confusion in elderly users of a web-based health instruction video.
Postma-Nilsenová, Marie; Postma, Eric; Tates, Kiek
2015-06-01
Because of cognitive limitations and lower health literacy, many elderly patients have difficulty understanding verbal medical instructions. Automatic detection of facial movements provides a nonintrusive basis for building technological tools supporting confusion detection in healthcare delivery applications on the Internet. Twenty-four elderly participants (70-90 years old) were recorded while watching Web-based health instruction videos involving easy and complex medical terminology. Relevant fragments of the participants' facial expressions were rated by 40 medical students for perceived level of confusion and analyzed with automatic software for facial movement recognition. A computer classification of the automatically detected facial features performed more accurately and with a higher sensitivity than the human observers (automatic detection and classification, 64% accuracy, 0.64 sensitivity; human observers, 41% accuracy, 0.43 sensitivity). A drill-down analysis of cues to confusion indicated the importance of the eye and eyebrow region. Confusion caused by misunderstanding of medical terminology is signaled by facial cues that can be automatically detected with currently available facial expression detection technology. The findings are relevant for the development of Web-based services for healthcare consumers.
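The reported accuracy and sensitivity figures follow the standard confusion-matrix definitions, sketched here for reference:

```python
def accuracy_sensitivity(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts:
    accuracy = (TP + TN) / all, sensitivity (recall) = TP / (TP + FN)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    return accuracy, sensitivity
```

Under these definitions, the automatic classifier (64% accuracy, 0.64 sensitivity) detected a larger share of the truly confused fragments than the human raters did (41% accuracy, 0.43 sensitivity).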
Multiagent Work Practice Simulation: Progress and Challenges
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Shaffe, Michael G. (Technical Monitor)
2001-01-01
Modeling and simulating complex human-system interactions requires going beyond formal procedures and information flows to analyze how people interact with each other. Such work practices include conversations, modes of communication, informal assistance, impromptu meetings, workarounds, and so on. To make these social processes visible, we have developed a multiagent simulation tool, called Brahms, for modeling the activities of people belonging to multiple groups, situated in a physical environment (geographic regions, buildings, transport vehicles, etc.) consisting of tools, documents, and a computer system. We are finding many useful applications of Brahms for system requirements analysis, instruction, implementing software agents, and as a workbench for relating cognitive and social theories of human behavior. Many challenges remain for representing work practices, including modeling: memory over multiple days, scheduled activities combining physical objects, groups, and locations on a timeline (such as a Space Shuttle mission), habitat vehicles with trajectories (such as the Shuttle), agent movement in 3D space (e.g., inside the International Space Station), agent posture and line of sight, coupled movements (such as carrying objects), and learning (mimicry, forming habits, detecting repetition, etc.).
Personality and emotion-based high-level control of affective story characters.
Su, Wen-Poh; Pham, Binh; Wardhani, Aster
2007-01-01
Human emotional behavior, personality, and body language are the essential elements in the recognition of a believable synthetic story character. This paper presents an approach using story scripts and action descriptions in a form similar to the content description of storyboards to predict specific personality and emotional states. By adopting the Abridged Big Five Circumplex (AB5C) Model of personality from the study of psychology as a basis for a computational model, we construct a hierarchical fuzzy rule-based system to facilitate the personality and emotion control of the body language of a dynamic story character. The story character can consistently perform specific postures and gestures based on his/her personality type. Story designers can devise a story context in the form of our story interface which predictably motivates personality and emotion values to drive the appropriate movements of the story characters. Our system takes advantage of relevant knowledge described by psychologists and researchers of storytelling, nonverbal communication, and human movement. Our ultimate goal is to facilitate the high-level control of a synthetic character.
Biotensegrity and myofascial chains: A global approach to an integrated kinetic chain.
Dischiavi, S L; Wright, A A; Hegedus, E J; Bleakley, C M
2018-01-01
Human movement is a complex orchestration of events involving many different body systems. Understanding how these systems interact during musculoskeletal movements can directly inform a variety of research fields including: injury etiology, injury prevention and therapeutic exercise prescription. Traditionally scientists have examined human movement through a reductionist lens whereby movements are broken down and observed in isolation. The process of reductionism fails to capture the interconnected complexities and the dynamic interactions found within complex systems such as human movement. An emerging idea is that human movement may be better understood using a holistic philosophy. In this regard, the properties of a given system cannot be determined or explained by its components alone, rather, it is the complexity of the system as a whole, that determines how the individual component parts behave. This paper hypothesizes that human movement can be better understood through holism; and provides available observational evidence in musculoskeletal science, which help to frame human movement as a globally interconnected complex system. Central to this, is biotensegrity, a concept where the bones of the skeletal system are postulated to be held together by the resting muscle tone of numerous viscoelastic muscular chains in a tension dependent manner. The design of a biotensegrity system suggests that when human movement occurs, the entire musculoskeletal system constantly adjusts during this movement causing global patterns to occur. This idea further supported by recent anatomical evidence suggesting that the muscles of the human body can no longer by viewed as independent anatomical structures that simply connect one bone to another bone. Rather, the body consists of numerous muscles connected in series, and end to end, which span the entire musculoskeletal system, creating long polyarticular viscoelastic myofascial muscle chains. 
Although theoretical, the concept of the human body being connected by these muscular chains, within a biotensegrity design, could be a potential underpinning theory for analyzing human movement in a more holistic manner. Indeed, preliminary research has now used the concept of myofascial pathways to enhance musculoskeletal examination, and provides a vivid example of how range of motion at a peripheral joint is dependent upon the positioning of the entire body, offering supportive evidence that the body's kinetic chain is globally interconnected. Theoretical models that introduce a complex systems approach should be welcomed by the movement science field in an attempt to help explain clinical questions that have been resistant to a linear model. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.
Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L
2016-10-01
Entropy is an effective tool for investigation of human movement variability. However, before applying entropy, it can be beneficial to employ analyses to confirm that observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with that produced using surrogate methods. Unlike continuous movement, no appropriate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying fine-scale dynamics of the observed signal, while maintaining macro structural characteristics. Comparison of entropy estimates indicated observed signals had greater regularity than surrogates and were not only the result of stochastic but also deterministic processes. The proposed surrogate method is both a valid and reliable technique to investigate determinism in other discrete human movement time series.
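The surrogate-versus-observed comparison described in this abstract can be illustrated with a minimal sketch. The authors' surrogate method for discrete movement preserves macro structural characteristics; the random shuffle below is a cruder construction that only preserves the amplitude distribution, used here to show how entropy estimates separate a deterministic signal from its surrogate.

```python
import math
import random

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r): -ln(A/B), where B counts pairs of
    matching templates of length m and A those of length m + 1, with a
    tolerance of r times the signal's standard deviation."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    tol = r * sd

    def matches(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if all(abs(x[i + k] - x[j + k]) <= tol for k in range(length)):
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(1)
observed = [math.sin(0.3 * i) + 0.05 * random.gauss(0, 1) for i in range(200)]
surrogate = observed[:]
random.shuffle(surrogate)  # destroys temporal structure, keeps the distribution

# a largely deterministic signal is more regular (lower entropy) than its surrogate
print(sample_entropy(observed) < sample_entropy(surrogate))
```

A discrete joint-angle series whose entropy is indistinguishable from its surrogates' would be consistent with a purely stochastic process; the lower entropy of the observed signal here mirrors the paper's conclusion of deterministic structure.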
Moving Just Like You: Motor Interference Depends on Similar Motility of Agent and Observer
Kupferberg, Aleksandra; Huber, Markus; Helfer, Bartosz; Lenz, Claus; Knoll, Alois; Glasauer, Stefan
2012-01-01
Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another’s movement. This so-called “action-perception coupling” is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one’s own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what ‘human-like’ means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents. PMID:22761853
Richter, Angelika; Hamann, Melanie; Wissel, Jörg; Volk, Holger A.
2015-01-01
Dystonia is defined as a neurological syndrome characterized by involuntary sustained or intermittent muscle contractions causing twisting, often repetitive movements, and postures. Paroxysmal dyskinesias are episodic movement disorders encompassing dystonia, chorea, athetosis, and ballism in conscious individuals. Several decades of research have enhanced the understanding of the etiology of human dystonia and dyskinesias that are associated with dystonia, but the pathophysiology remains largely unknown. The spontaneous occurrence of hereditary dystonia and paroxysmal dyskinesia is well documented in rodents used as animal models in basic dystonia research. Several hyperkinetic movement disorders, described in dogs, horses and cattle, show similarities to these human movement disorders. Although dystonia is regarded as the third most common movement disorder in humans, it is often misdiagnosed because of the heterogeneity of etiology and clinical presentation. Since these conditions are poorly known in veterinary practice, their prevalence may be underestimated in veterinary medicine. In order to attract attention to these movement disorders, i.e., dystonia and paroxysmal dyskinesias associated with dystonia, and to enhance interest in translational research, this review gives a brief overview of the current literature regarding dystonia/paroxysmal dyskinesia in humans and summarizes similar hereditary movement disorders reported in domestic animals. PMID:26664992
Understanding Movement: A Sociocultural Approach to Exploring Moving Humans
ERIC Educational Resources Information Center
Larsson, Hakan; Quennerstedt, Mikael
2012-01-01
The purpose of the article is to outline a sociocultural way of exploring human movement. Our ambition is to develop an analytical framework where moving humans are explored in terms of what it means to move as movements are performed by somebody, for a certain purpose, and in a certain situation. We find this approach in poststructural…
Toward a Mobile Agent Relay Network
2010-03-01
in the study of particle movement. In computer science, flocking movement has been adapted for use in the collective, cooperative movement of...(MARN). For our approach, we utilize a modified flocking behavior to generate cooperative movement that utilizes the agent’s relay capability. We...Summary Our testing focuses on measuring effective cooperative movement and robustness against malicious agents. The movement testing demonstrated that a
Using computer-based video analysis in the study of fidgety movements.
Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander Refsum; Taraldsen, Gunnar; Støen, Ragnhild
2009-09-01
Absence of fidgety movements (FM) in high-risk infants is a strong marker for later cerebral palsy (CP). FMs can be classified by the General Movement Assessment (GMA), based on Gestalt perception of the infant's movement pattern. More objective movement analysis may be provided by computer-based technology. The aim of this study was to explore the feasibility of a computer-based video analysis of infants' spontaneous movements in classifying non-fidgety versus fidgety movements. GMA was performed from video material of the fidgety period in 82 term and preterm infants at low and high risks of developing CP. The same videos were analysed using the developed software called General Movement Toolbox (GMT) with visualisation of the infant's movements for qualitative analyses. Variables derived from the calculation of displacement of pixels from one video frame to the next were used for quantitative analyses. Visual representations from GMT showed easily recognisable patterns of FMs. Of the eight quantitative variables derived, the variability in displacement of a spatial centre of active pixels in the image had the highest sensitivity (81.5) and specificity (70.0) in classifying FMs. By setting triage thresholds at 90% sensitivity and specificity for FM, the need for further referral was reduced by 70%. Video recordings can be used for qualitative and quantitative analyses of FMs provided by GMT. GMT is easy to implement in clinical practice, and may provide assistance in detecting infants without FMs.
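The per-frame computation behind the quantitative variables in this abstract (displacement of pixels between frames, and the spatial centre of active pixels) can be sketched as follows. The GMT software's internals are not given in the abstract, so this is the generic frame-differencing idea, not the toolbox's actual code; frames are toy grayscale grids.

```python
# Frame differencing: count pixels whose intensity changed beyond a
# threshold between consecutive frames, and track the spatial centre
# ("centroid") of those active pixels.
def motion_features(prev, curr, thresh=10):
    rows, cols = len(curr), len(curr[0])
    active = [(r, c) for r in range(rows) for c in range(cols)
              if abs(curr[r][c] - prev[r][c]) > thresh]
    if not active:
        return 0, None
    cy = sum(r for r, _ in active) / len(active)
    cx = sum(c for _, c in active) / len(active)
    return len(active), (cx, cy)

frame_a = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
frame_b = [[0, 0, 0], [0, 80, 90], [0, 0, 0]]  # movement in the middle row
quantity, centre = motion_features(frame_a, frame_b)
```

The paper's best-performing variable was the variability of this centre across the video; tracking `centre` over many frame pairs and taking its standard deviation would give that quantity.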
Jia, Rui; Monk, Paul; Murray, David; Noble, J Alison; Mellon, Stephen
2017-09-06
Optoelectronic motion capture systems are widely employed to measure the movement of human joints. However, there can be a significant discrepancy between the data obtained by a motion capture system (MCS) and the actual movement of underlying bony structures, which is attributed to soft tissue artefact. In this paper, a computer-aided tracking and motion analysis with ultrasound (CAT & MAUS) system with an augmented globally optimal registration algorithm is presented to dynamically track the underlying bony structure during movement. The augmented registration part of CAT & MAUS was validated with a system accuracy of 80%. The Euclidean distance between the marker-based bony landmark and the bony landmark tracked by CAT & MAUS was calculated to quantify the measurement error of an MCS caused by soft tissue artefact during movement. The average Euclidean distance between the target bony landmark measured by the CAT & MAUS system and by the MCS alone varied from 8.32 mm to 16.87 mm in gait. This indicates the discrepancy between the MCS-measured bony landmark and the actual underlying bony landmark. Moreover, Procrustes analysis was applied to demonstrate that CAT & MAUS reduces the deformation of the body segment shape modeled by markers during motion. The augmented CAT & MAUS system shows its potential to dynamically detect and locate actual underlying bony landmarks, which reduces the MCS measurement error caused by soft tissue artefact during movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
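The Procrustes analysis mentioned in this abstract compares segment shapes after removing translation, scale, and rotation. A minimal 2-D version (the closed-form rotation via atan2; the paper's landmark data and dimensionality are not given, so the shapes below are illustrative) can be sketched as:

```python
import math

def procrustes_2d(X, Y):
    """Ordinary Procrustes analysis for 2-D landmark sets: remove
    translation and scale, rotate Y onto X, and return the residual
    sum of squares as a shape-difference measure."""
    n = len(X)

    def normalise(P):
        mx = sum(p[0] for p in P) / n
        my = sum(p[1] for p in P) / n
        Q = [(x - mx, y - my) for x, y in P]
        norm = math.sqrt(sum(x * x + y * y for x, y in Q))
        return [(x / norm, y / norm) for x, y in Q]

    A, B = normalise(X), normalise(Y)
    # optimal rotation angle maximising sum of a . (R b)
    num = sum(ay * bx - ax * by for (ax, ay), (bx, by) in zip(A, B))
    den = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(A, B))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    return sum((ax - (c * bx - s * by)) ** 2 + (ay - (s * bx + c * by)) ** 2
               for (ax, ay), (bx, by) in zip(A, B))

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
t = math.radians(40)
# same shape, rotated 40 degrees, scaled by 3, translated: residual near zero
moved = [(3 * (x * math.cos(t) - y * math.sin(t)) + 5,
          3 * (x * math.sin(t) + y * math.cos(t)) - 2) for x, y in square]
residual = procrustes_2d(square, moved)
```

A marker set deformed by soft tissue artefact would give a larger residual than the rigidly moved copy above, which is the sense in which the paper uses Procrustes analysis to quantify segment-shape deformation.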
Relationship between speed and EEG activity during imagined and executed hand movements
NASA Astrophysics Data System (ADS)
Yuan, Han; Perdoni, Christopher; He, Bin
2010-04-01
The relationship between primary motor cortex and movement kinematics has been shown in nonhuman primate studies of hand reaching or drawing tasks. Studies have demonstrated that the neural activities accompanying or immediately preceding the movement encode the direction, speed and other information. Here we investigated the relationship between the kinematics of imagined and actual hand movement, i.e. the clenching speed, and the EEG activity in ten human subjects. Study participants were asked to perform and imagine clenching of the left hand and right hand at various speeds. The EEG activity in the alpha (8-12 Hz) and beta (18-28 Hz) frequency bands was found to be linearly correlated with the speed of imagined clenching. Similar parametric modulation was also found during the execution of hand movements. A single equation relating the EEG activity to the speed and the hand (left versus right) was developed. This equation, which contained a linear independent combination of the two parameters, described the time-varying neural activity during the tasks. Based on the model, a regression approach was developed to decode the two parameters from the multiple-channel EEG signals. We demonstrated the continuous decoding of dynamic hand and speed information of the imagined clenching. In particular, the time-varying clenching speed was reconstructed in a bell-shaped profile. Our findings suggest an application to providing continuous and complex control of a noninvasive brain-computer interface for movement-impaired paralytics.
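The calibrate-then-decode idea in this abstract (a linear relation between band power and clenching speed, inverted at decoding time) can be sketched in one dimension. The numbers below are made up for illustration; the study fitted a multi-parameter model across many EEG channels, not this single-feature toy.

```python
# Fit band power = intercept + slope * speed on training trials, then
# decode the speed of a new trial by inverting the fitted line.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope  # (intercept, slope)

speeds = [0.5, 1.0, 1.5, 2.0]            # clenching speeds (training trials)
alpha_power = [-1.1, -2.0, -2.9, -4.1]   # log band power: stronger desync at higher speed
intercept, slope = fit_line(speeds, alpha_power)

new_trial_power = -3.0
decoded_speed = (new_trial_power - intercept) / slope
```

Applying the same inversion sample-by-sample over a trial is what produces a continuous, time-varying speed estimate such as the bell-shaped profile the authors report.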
Bertomeu-Motos, Arturo; Blanco, Andrea; Badesa, Francisco J; Barios, Juan A; Zollo, Loredana; Garcia-Aracil, Nicolas
2018-02-20
End-effector robots are commonly used in robot-assisted neuro-rehabilitation therapies for upper limbs, where the patient's hand can be easily attached to a splint. Nevertheless, they are not able to estimate and control the kinematic configuration of the upper limb during the therapy. However, the Range of Motion (ROM) together with the clinical assessment scales offers a comprehensive assessment to the therapist. Our aim is to present a robust and stable kinematic reconstruction algorithm to accurately measure the upper limb joints using only an accelerometer placed onto the upper arm. The proposed algorithm is based on the inverse of the augmented Jacobian, following the algorithm of Papaleo et al. (Med Biol Eng Comput 53(9):815-28, 2015). However, the estimation of the elbow joint location is performed through the computation of the rotation measured by the accelerometer during the arm movement, making the algorithm more robust against shoulder movements. Furthermore, we present a method to compute the initial configuration of the upper limb necessary to start the integration method, a protocol to manually measure the upper arm and forearm lengths, and a shoulder position estimation. An optoelectronic system was used to test the accuracy of the proposed algorithm whilst healthy subjects were performing upper limb movements holding the end effector of the seven Degrees of Freedom (DoF) robot. In addition, the previous and the proposed algorithms were studied during a neuro-rehabilitation therapy assisted by the 'PUPArm' planar robot with three post-stroke patients. The proposed algorithm reports a Root Mean Square Error (RMSE) of 2.13 cm in the elbow joint location and 1.89 cm in the wrist joint location, with high correlation. These errors lead to an RMSE of about 3.5 degrees (mean of the seven joints), with high correlation in all the joints, with respect to the real upper limb acquired through the optoelectronic system.
The estimation of the upper limb joints through both algorithms reveals an instability in the previous algorithm when shoulder movements appear, due to the inevitable trunk compensation in post-stroke patients. The proposed algorithm is able to accurately estimate the human upper limb joints during a neuro-rehabilitation therapy assisted by end-effector robots. In addition, the implemented protocol can be followed in a clinical environment without optoelectronic systems, using only one accelerometer attached to the upper arm. Thus, the ROM can be accurately determined and could become an objective assessment parameter for a comprehensive assessment.
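The two validation metrics quoted in these records (RMSE and correlation against the optoelectronic reference) are straightforward to state in code. The angle values below are invented for illustration; only the metric definitions are from the abstract.

```python
import math

def rmse(est, ref):
    """Root mean square error between estimated and reference joint angles."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(est, ref)) / len(est))

def pearson(xs, ys):
    """Pearson correlation, reported alongside RMSE to validate the estimate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

reference = [10.0, 20.0, 30.0, 40.0]  # e.g. elbow flexion from the optoelectronic system, degrees
estimated = [11.0, 19.0, 31.5, 38.5]  # e.g. accelerometer-based reconstruction
```

A small RMSE with high correlation, as in the paper's roughly 3.5-degree mean joint error, indicates the reconstruction tracks both the level and the shape of the reference trajectory.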
Development of a Computer Writing System Based on EOG
López, Alberto; Ferrero, Francisco; Yangüela, David; Álvarez, Constantina; Postolache, Octavian
2017-01-01
The development of a novel computer writing system based on eye movements is introduced herein. A system of these characteristics requires the consideration of three subsystems: (1) A hardware device for the acquisition and transmission of the signals generated by eye movement to the computer; (2) A software application that allows, among other functions, data processing in order to minimize noise and classify signals; and (3) A graphical interface that allows the user to write text easily on the computer screen using eye movements only. This work analyzes these three subsystems and proposes innovative and low cost solutions for each one of them. This computer writing system was tested with 20 users and its efficiency was compared to a traditional virtual keyboard. The results have shown an important reduction in the time spent on writing, which can be very useful, especially for people with severe motor disorders. PMID:28672863
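Subsystem (2) of the writing system classifies EOG signals after noise reduction. A toy classifier in that spirit is sketched below; the threshold, labels, and the swing-based rule are assumptions for illustration, not the authors' actual signal-processing pipeline.

```python
# Classify one horizontal-EOG window by its largest sample-to-sample swing:
# a large positive swing reads as a rightward eye movement, a large negative
# swing as leftward, and anything below threshold as no event.
def detect_eog_event(channel, thresh=100):
    steps = [b - a for a, b in zip(channel, channel[1:])]
    biggest = max(steps, key=abs)
    if abs(biggest) < thresh:
        return "none"
    return "right" if biggest > 0 else "left"
```

A graphical keyboard (subsystem 3) would then map a stream of such events to cursor moves and key selections on screen.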
A Collaborative Brain-Computer Interface for Improving Human Performance
Wang, Yijun; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG) based brain-computer interfaces (BCI) have been studied since the 1970s. Currently, the main focus of BCI research lies on the clinical use, which aims to provide a new communication channel to patients with motor disabilities to improve their quality of life. However, the BCI technology can also be used to improve human performance for normal healthy users. Although this application has been proposed for a long time, little progress has been made in real-world practices due to technical limits of EEG. To overcome the bottleneck of low single-user BCI performance, this study proposes a collaborative paradigm to improve overall BCI performance by integrating information from multiple users. To test the feasibility of a collaborative BCI, this study quantitatively compares the classification accuracies of collaborative and single-user BCI applied to the EEG data collected from 20 subjects in a movement-planning experiment. This study also explores three different methods for fusing and analyzing EEG data from multiple subjects: (1) Event-related potentials (ERP) averaging, (2) Feature concatenating, and (3) Voting. In a demonstration system using the Voting method, the classification accuracy of predicting movement directions (reaching left vs. reaching right) was enhanced substantially from 66% to 80%, 88%, 93%, and 95% as the numbers of subjects increased from 1 to 5, 10, 15, and 20, respectively. Furthermore, the decision of reaching direction could be made around 100–250 ms earlier than the subject's actual motor response by decoding the ERP activities arising mainly from the posterior parietal cortex (PPC), which are related to the processing of visuomotor transmission. Taken together, these results suggest that a collaborative BCI can effectively fuse brain activities of a group of people to improve the overall performance of natural human behavior. PMID:21655253
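The third fusion method in this abstract, Voting, is easy to sketch, together with a Monte-Carlo illustration of why accuracy climbs with group size. The simulation assumes independent single-user classifiers with identical accuracy, which is an idealisation; real subjects' errors are correlated, so the 66% to 95% gains reported in the study are not reproduced exactly here.

```python
import random
from collections import Counter

def majority_vote(predictions):
    """Fuse per-subject classifier outputs by simple majority (method 3)."""
    return Counter(predictions).most_common(1)[0][0]

def simulated_accuracy(n_subjects, p_single=0.66, trials=2000):
    """Fraction of trials on which the voted decision matches the true
    movement direction, with each subject independently correct with
    probability p_single (use an odd n_subjects to avoid ties)."""
    random.seed(0)
    correct = 0
    for _ in range(trials):
        votes = ["L" if random.random() < p_single else "R"
                 for _ in range(n_subjects)]
        correct += majority_vote(votes) == "L"
    return correct / trials
```

Running `simulated_accuracy` for 1, 5, and 15 subjects shows the monotone improvement that motivates the collaborative paradigm.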
NASA Astrophysics Data System (ADS)
Bai, Ou; Lin, Peter; Vorbach, Sherry; Floeter, Mary Kay; Hattori, Noriaki; Hallett, Mark
2008-03-01
To explore the reliability of a high-performance brain-computer interface (BCI) that uses non-invasive EEG signals associated with natural human motor behavior and does not require extensive training, we propose a new BCI method where users either sustain or stop a motor task with time locking to a predefined time window. Nine healthy volunteers, one stroke survivor with right-sided hemiparesis and one patient with amyotrophic lateral sclerosis (ALS) participated in this study. Subjects did not receive BCI training before participating in this study. We investigated tasks of both physical movement and motor imagery. The surface Laplacian derivation was used for enhancing EEG spatial resolution. A model-free threshold setting method was used for the classification of motor intentions. The performance of the proposed BCI was validated by an online sequential binary-cursor-control game for two-dimensional cursor movement. Event-related desynchronization and synchronization were observed when subjects sustained or stopped either motor execution or motor imagery. Feature analysis showed that EEG beta band activity over the sensorimotor area provided the largest discrimination. With simple model-free classification of beta band EEG activity from a single electrode (with surface Laplacian derivation), the online classifications of the EEG activity with motor execution/motor imagery were: >90%/~80% for six healthy volunteers, >80%/~80% for the stroke patient and ~90%/~80% for the ALS patient. The EEG activities of the other three healthy volunteers were not classifiable. The sensorimotor beta rhythm of EEG associated with natural human motor behavior can be used for a reliable and high-performance BCI for both healthy subjects and patients with neurological disorders. Significance: The proposed new non-invasive BCI method highlights a practical BCI for clinical applications, where the user does not require extensive training.
Comparison of ANN and SVM for classification of eye movements in EOG signals
NASA Astrophysics Data System (ADS)
Qi, Lim Jia; Alias, Norma
2018-03-01
Nowadays, the electrooculogram is regarded as one of the most important biomedical signals for measuring and analyzing eye movement patterns, and is thus helpful in designing EOG-based Human Computer Interfaces (HCI). In this research, electrooculography (EOG) data were obtained from five volunteers. The EOG data were then preprocessed before feature extraction methods were employed to further reduce the dimensionality of the data. Three feature extraction approaches were put forward, namely statistical parameters, autoregressive (AR) coefficients using the Burg method, and power spectral density (PSD) using the Yule-Walker method. These features then became input to both an artificial neural network (ANN) and a support vector machine (SVM). The performance of the combination of different feature extraction methods and classifiers was presented and analyzed. It was found that statistical parameters + SVM achieved the highest classification accuracy of 69.75%.
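The best-performing feature set in this study was the statistical parameters. The abstract does not list which parameters were used, so the particular choice below (mean, standard deviation, range, mean energy) is an assumption; any such vector would then be fed to the SVM or ANN classifier.

```python
import statistics

# Statistical-parameter features for one EOG window: a low-dimensional
# summary replacing the raw samples before classification.
def stat_features(window):
    mu = statistics.mean(window)
    sd = statistics.pstdev(window)       # population standard deviation
    rng = max(window) - min(window)
    energy = sum(v * v for v in window) / len(window)
    return [mu, sd, rng, energy]
```

Computing this vector per labelled window yields the (feature, class) pairs on which the reported 69.75% accuracy was measured.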
Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo
NASA Astrophysics Data System (ADS)
Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu
2005-04-01
We develop an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of the balance control system simulator, the 3D eye movement simulator, and the extraction method of nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching the database for records matching the nystagmus response for the observed eye image sequence of the patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained by using the balance control system simulator that allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. Then the eye movement image sequence is displayed on the CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and are stored in the database. In order to enhance the diagnosis accuracy, the nystagmus response for a newly simulated sequence is matched with that for the observed sequence. From the matched simulation conditions, the causes and conditions of BPPV are estimated. We apply our image-based computer-assisted diagnosis system to two real eye movement image sequences for patients with BPPV to show its validity.
Smooth leader or sharp follower? Playing the mirror game with a robot
Kashi, Shir; Levy-Tzedek, Shelly
2017-01-01
Background: The increasing number of opportunities for human-robot interactions in various settings, from industry through home use to rehabilitation, creates a need to understand how to best personalize human-robot interactions to fit both the user and the task at hand. In the current experiment, we explored a human-robot collaborative task of joint movement, in the context of an interactive game. Objective: We set out to test people’s preferences when interacting with a robotic arm, playing a leader-follower imitation game (the mirror game). Methods: Twenty-two young participants played the mirror game with the robotic arm, where one player (person or robot) followed the movements of the other. Each partner (person and robot) was leading part of the time, and following part of the time. When the robotic arm was leading the joint movement, it performed movements that were either sharp or smooth, which participants were later asked to rate. Results: The greatest preference was given to smooth movements. Half of the participants preferred to lead, and half preferred to follow. Importantly, we found that the movements of the robotic arm primed the subsequent movements performed by the participants. Conclusion: The priming effect by the robot on the movements of the human should be considered when designing interactions with robots. Our results demonstrate individual differences in preferences regarding the role of the human and the joint motion path of the robot and the human when performing the mirror game collaborative task, and highlight the importance of personalized human-robot interactions. PMID:29036853
Gloved Human-Machine Interface
NASA Technical Reports Server (NTRS)
Adams, Richard (Inventor); Hannaford, Blake (Inventor); Olowin, Aaron (Inventor)
2015-01-01
Certain exemplary embodiments can provide a system, machine, device, manufacture, circuit, composition of matter, and/or user interface adapted for and/or resulting from, and/or a method and/or machine-readable medium comprising machine-implementable instructions for, activities that can comprise and/or relate to: tracking movement of a gloved hand of a human; interpreting a gloved finger movement of the human; and/or in response to interpreting the gloved finger movement, providing feedback to the human.
Feasible Muscle Activation Ranges Based on Inverse Dynamics Analyses of Human Walking
Simpson, Cole S.; Sohn, M. Hongchul; Allen, Jessica L.; Ting, Lena H.
2015-01-01
Although it is possible to produce the same movement using an infinite number of different muscle activation patterns owing to musculoskeletal redundancy, the degree to which observed variations in muscle activity can deviate from optimal solutions computed from biomechanical models is not known. Here, we examined the range of biomechanically permitted activation levels in individual muscles during human walking using a detailed musculoskeletal model and experimentally-measured kinetics and kinematics. Feasible muscle activation ranges define the minimum and maximum possible level of each muscle’s activation that satisfy inverse dynamics joint torques assuming that all other muscles can vary their activation as needed. During walking, 73% of the muscles had feasible muscle activation ranges that were greater than 95% of the total muscle activation range over more than 95% of the gait cycle, indicating that, individually, most muscles could be fully active or fully inactive while still satisfying inverse dynamics joint torques. Moreover, the shapes of the feasible muscle activation ranges did not resemble previously-reported muscle activation patterns nor optimal solutions, i.e. static optimization and computed muscle control, that are based on the same biomechanical constraints. Our results demonstrate that joint torque requirements from standard inverse dynamics calculations are insufficient to define the activation of individual muscles during walking in healthy individuals. Identifying feasible muscle activation ranges may be an effective way to evaluate the impact of additional biomechanical and/or neural constraints on possible versus actual muscle activity in both normal and impaired movements. PMID:26300401
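The feasible-range computation in this abstract can be caricatured for a single joint at a single time step. Here each muscle contributes activation times a fixed torque-per-activation value, and the feasible range of one muscle is whatever still lets the others (each bounded in [0, 1]) make up the required joint torque. The moment values and required torque below are invented; the paper used a detailed musculoskeletal model, not this closed form.

```python
def feasible_range(torque_per_activation, T, i):
    """Feasible activation range [lo, hi] of muscle i such that activations
    a_j in [0, 1] can satisfy sum_j a_j * torque_per_activation[j] == T.
    Returns None if no activation of muscle i is feasible."""
    m = torque_per_activation
    others = [m[j] for j in range(len(m)) if j != i]
    rest_lo = sum(min(0.0, x) for x in others)  # other muscles at their least
    rest_hi = sum(max(0.0, x) for x in others)  # and most helpful extremes
    # need: rest_lo <= T - a * m[i] <= rest_hi, with a in [0, 1]
    bounds = sorted([(T - rest_hi) / m[i], (T - rest_lo) / m[i]])
    lo, hi = max(bounds[0], 0.0), min(bounds[1], 1.0)
    return (lo, hi) if lo <= hi else None

# two flexors and one extensor (arbitrary torque units), required torque 1.0:
# the first flexor can be anywhere from fully off to fully on
print(feasible_range([2.0, 1.5, -1.0], 1.0, 0))
```

The full-range result for this toy joint mirrors the paper's finding that most muscles could individually be fully active or fully inactive while the rest of the system still satisfies the inverse dynamics torques.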
In good company? Perception of movement synchrony of a non-anthropomorphic robot.
Lehmann, Hagen; Saez-Pons, Joan; Syrdal, Dag Sverre; Dautenhahn, Kerstin
2015-01-01
Recent technological developments like cheap sensors and the decreasing costs of computational power have brought the possibility of robotic home companions within reach. In order to be accepted it is vital for these robots to be able to participate meaningfully in social interactions with their users and to make them feel comfortable during these interactions. In this study we investigated how people respond to a situation where a companion robot is watching its user. Specifically, we tested the effect of robotic behaviours that are synchronised with the actions of a human. We evaluated the effects of these behaviours on the robot's likeability and perceived intelligence using an online video survey. The robot used was Care-O-bot3, a non-anthropomorphic robot with a limited range of expressive motions. We found that even minimal, positively synchronised movements during an object-oriented task were interpreted by participants as engagement and created a positive disposition towards the robot. However, even negatively synchronised movements of the robot led to more positive perceptions of the robot, as compared to a robot that does not move at all. The results emphasise a) the powerful role that robot movements in general can have on participants' perception of the robot, and b) that synchronisation of body movements can be a powerful means to enhance the positive attitude towards a non-anthropomorphic robot.
Martin, Katherine B; Hammal, Zakia; Ren, Gang; Cohn, Jeffrey F; Cassell, Justine; Ogihara, Mitsunori; Britton, Jennifer C; Gutierrez, Anibal; Messinger, Daniel S
2018-01-01
Deficits in motor movement in children with autism spectrum disorder (ASD) have typically been characterized qualitatively by human observers. Although clinicians have noted the importance of atypical head positioning (e.g. social peering and repetitive head banging) when diagnosing children with ASD, a quantitative understanding of head movement in ASD is lacking. Here, we conduct a quantitative comparison of head movement dynamics in children with and without ASD using automated, person-independent computer-vision based head tracking (Zface). Because children with ASD often exhibit preferential attention to nonsocial versus social stimuli, we investigated whether children with and without ASD differed in their head movement dynamics depending on stimulus sociality. The current study examined differences in head movement dynamics in children with (n = 21) and without ASD (n = 21). Children were video-recorded while watching a 16-min video of social and nonsocial stimuli. Three dimensions of rigid head movement (pitch: head nods; yaw: head turns; roll: lateral head inclinations) were tracked using Zface. The root mean square of pitch, yaw, and roll was calculated to index the magnitude of head angular displacement (quantity of head movement) and angular velocity (speed). Compared with children without ASD, children with ASD exhibited greater yaw displacement, indicating greater head turning, and greater velocity of yaw and roll, indicating faster head turning and inclination. Follow-up analyses indicated that differences in head movement dynamics were specific to the social rather than the nonsocial stimulus condition. Head movement dynamics (displacement and velocity) were greater in children with ASD than in children without ASD, providing a quantitative foundation for previous clinical reports. Head movement differences were evident in lateral (yaw and roll) but not vertical (pitch) movement and were specific to a social rather than nonsocial condition.
When presented with social stimuli, children with ASD had higher levels of head movement and moved their heads more quickly than children without ASD. Children with ASD may use head movement to modulate their perception of social scenes.
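The displacement and velocity measures used in this record (RMS of pitch, yaw, and roll) are simple to compute. The sketch below assumes only a generic frames-by-3 array of Euler angles in degrees, not Zface's actual output format, and the data are synthetic.

```python
import numpy as np

def head_movement_dynamics(angles_deg, fps):
    """RMS angular displacement (deg) and RMS angular velocity (deg/s)
    for each of the three rotation axes (pitch, yaw, roll).

    angles_deg: array of shape (n_frames, 3), one column per axis.
    """
    angles = np.asarray(angles_deg, dtype=float)
    # Displacement is measured about each axis' mean orientation.
    centered = angles - angles.mean(axis=0)
    rms_disp = np.sqrt((centered ** 2).mean(axis=0))
    # Angular velocity from frame-to-frame differences.
    velocity = np.diff(angles, axis=0) * fps
    rms_vel = np.sqrt((velocity ** 2).mean(axis=0))
    return rms_disp, rms_vel

# Synthetic 30 fps recording with more yaw than roll or pitch movement.
t = np.arange(0, 10, 1 / 30)
pose = np.column_stack([2 * np.sin(np.pi * t),   # pitch: small nods
                        8 * np.sin(np.pi * t),   # yaw: large turns
                        4 * np.sin(np.pi * t)])  # roll
disp, vel = head_movement_dynamics(pose, fps=30)
```

With these synthetic amplitudes, yaw shows the largest displacement and velocity, mirroring the kind of axis-wise comparison the study reports.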
Enabling Disabled Persons to Gain Access to Digital Media
NASA Technical Reports Server (NTRS)
Beach, Glenn; O'Grady, Ryan
2011-01-01
A report describes the first phase in an effort to enhance the NaviGaze software to enable profoundly disabled persons to operate computers. (Running on a Windows-based computer equipped with a video camera aimed at the user's head, the original NaviGaze software processes the user's head movements and eye blinks into cursor movements and mouse clicks to enable hands-free control of the computer.) To accommodate large variations in movement capabilities among disabled individuals, one of the enhancements was the addition of a graphical user interface for selection of parameters that affect the way the software interacts with the computer and tracks the user's movements. Tracking algorithms were improved to reduce sensitivity to rotations and reduce the likelihood of tracking the wrong features. Visual feedback to the user was improved to provide an indication of the state of the computer system. It was found that users can quickly learn to use the enhanced software, performing single clicks, double clicks, and drags within minutes of first use. Available programs that could increase the usability of NaviGaze were identified. One of these enables entry of text by using NaviGaze as a mouse to select keys on a virtual keyboard.
Is Law a Humanity: (Or Is It More like Engineering)?
ERIC Educational Resources Information Center
Howarth, David
2004-01-01
Law often appears to be in a limbo between the Social Sciences and the Humanities. Movements within legal scholarship itself, the law and economics movement and the law and literature movement, represent efforts to portray law as a social science or as a humanity. But if one looks at what lawyers do, one finds that law is more like…
Perge, János A; Zhang, Shaomin; Malik, Wasim Q; Homer, Mark L; Cash, Sydney; Friehs, Gerhard; Eskandar, Emad N; Donoghue, John P; Hochberg, Leigh R
2014-08-01
Action potentials and local field potentials (LFPs) recorded in primary motor cortex contain information about the direction of movement. LFPs are assumed to be more robust to signal instabilities than action potentials, which makes LFPs, along with action potentials, a promising signal source for brain-computer interface applications. Still, relatively little research has directly compared the utility of LFPs to action potentials in decoding movement direction in human motor cortex. We conducted intracortical multi-electrode recordings in motor cortex of two persons (T2 and S3) as they performed a motor imagery task. We then compared the offline decoding performance of LFPs and spiking extracted from the same data recorded across a one-year period in each participant. We obtained offline prediction accuracy of movement direction and endpoint velocity in multiple LFP bands, with the best performance in the highest (200-400 Hz) LFP frequency band, presumably also containing low-pass filtered action potentials. Cross-frequency correlations of preferred directions and directional modulation index showed high similarity of directional information between action potential firing rates (spiking) and high frequency LFPs (70-400 Hz), and increasing disparity with lower frequency bands (0-7, 10-40 and 50-65 Hz). Spikes predicted the direction of intended movement more accurately than any individual LFP band; however, combined decoding of all LFPs was statistically indistinguishable from spike-based performance. As the quality of spiking signals (i.e. signal amplitude) and the number of significantly modulated spiking units decreased, the offline decoding performance decreased by 3.6%/month for T2 and 5.65%/month for S3. The decrease in the number of significantly modulated LFP signals and their decoding accuracy followed a similar trend (2.4%/month and 2.85%/month for T2 and S3, respectively; ANCOVA, p = 0.27 and 0.03). Field potentials provided comparable offline decoding performance to unsorted spikes.
Thus, LFPs may provide useful external device control using current human intracortical recording technology. (NCT00912041).
The Ecology of Human Mobility.
Meekan, Mark G; Duarte, Carlos M; Fernández-Gracia, Juan; Thums, Michele; Sequeira, Ana M M; Harcourt, Rob; Eguíluz, Víctor M
2017-03-01
Mobile phones and other geolocated devices have produced unprecedented volumes of data on human movement. Analysis of pooled individual human trajectories using big data approaches has revealed a wealth of emergent features that have ecological parallels in animals across a diverse array of phenomena including commuting, epidemics, the spread of innovations and culture, and collective behaviour. Movement ecology, which explores how animals cope with and optimize variability in resources, has the potential to provide a theoretical framework to aid an understanding of human mobility and its impacts on ecosystems. In turn, big data on human movement can be explored in the context of animal movement ecology to provide solutions for urgent conservation problems and management challenges. Copyright © 2016 Elsevier Ltd. All rights reserved.
Eye-head coordination during free exploration in human and cat.
Einhäuser, Wolfgang; Moeller, Gudrun U; Schumann, Frank; Conradt, Jörg; Vockeroth, Johannes; Bartl, Klaus; Schneider, Erich; König, Peter
2009-05-01
Eye, head, and body movements jointly control the direction of gaze and the stability of retinal images in most mammalian species. The contribution of the individual movement components, however, will largely depend on the ecological niche the animal occupies and the layout of the animal's retina, in particular its photoreceptor density distribution. Here the relative contribution of eye-in-head and head-in-world movements in cats is measured, and the results are compared to recent human data. For the cat, a lightweight custom-made head-mounted video setup was used (CatCam). Human data were acquired with the novel EyeSeeCam device, which measures eye position to control a gaze-contingent camera in real time. For both species, analysis was based on simultaneous recordings of eye and head movements during free exploration of a natural environment. Despite the substantial differences in ecological niche, photoreceptor density, and saccade frequency, eye-movement characteristics in both species are remarkably similar. Coordinated eye and head movements dominate the dynamics of the retinal input. Interestingly, compensatory (gaze-stabilizing) movements play a more dominant role in humans than they do in cats. This finding was interpreted to be a consequence of substantially different timescales for head movements, with cats' head movements showing about a 5-fold faster dynamics than humans. For both species, models and laboratory experiments therefore need to account for this rich input dynamic to obtain validity for ecologically realistic settings.
Psychophysics and Neuronal Bases of Sound Localization in Humans
Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.
2013-01-01
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698
Accelerometry-based classification of human activities using Markov modeling.
Mannini, Andrea; Sabatini, Angelo Maria
2011-01-01
Accelerometers are a popular choice as body-motion sensors: the reason lies partly in their capability of extracting information that is useful for automatically inferring the physical activity in which the human subject is involved, besides their role in feeding biomechanical parameter estimators. Automatic classification of human physical activities is highly attractive for pervasive computing systems, where contextual awareness may ease the human-machine interaction, and in biomedicine, where wearable sensor systems are proposed for long-term monitoring. This paper is concerned with the machine learning algorithms needed to perform the classification task. Hidden Markov Model (HMM) classifiers are studied by contrasting them with Gaussian Mixture Model (GMM) classifiers. HMMs incorporate the statistical information available on movement dynamics into the classification process, without discarding the time history of previous outcomes as GMMs do. An example of the benefits of the obtained statistical leverage is illustrated and discussed by analyzing two datasets of accelerometer time series.
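As a minimal sketch of the GMM side of this comparison, the code below fits a single-component diagonal Gaussian per activity class and classifies feature windows by maximum likelihood; a full GMM would mix several components per class. The feature values are invented for illustration, not taken from the paper's datasets.

```python
import numpy as np

def fit_gaussian(features):
    """Fit a single-component diagonal Gaussian (a 1-component GMM)."""
    return features.mean(axis=0), features.var(axis=0) + 1e-6

def log_likelihood(x, mu, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Invented 2-D features per window, e.g. (mean, std) of acceleration
# magnitude in units of g, for two hypothetical activities.
rng = np.random.default_rng(0)
train = {
    "walking": rng.normal([1.2, 0.60], 0.10, size=(200, 2)),
    "sitting": rng.normal([1.0, 0.05], 0.02, size=(200, 2)),
}
models = {label: fit_gaussian(data) for label, data in train.items()}

def classify(x):
    """Maximum-likelihood label under the per-class Gaussian models."""
    return max(models, key=lambda label: log_likelihood(x, *models[label]))
```

An HMM-based classifier, as contrasted in the paper, would keep per-class emission models like these but add a transition matrix over activity labels, so the label assigned to one window also depends on the labels of preceding windows.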
Moving in the Anthropocene: Global reductions in terrestrial mammalian movements
Tucker, Marlee A.; Böhning-Gaese, Katrin; Fagan, William F.; Fryxell, John; Van Moorter, Bram; Alberts, Susan C; Ali, Abdullahi H.; Allen, Andrew M.; Attias, Nina; Avgar, Tal; Bartlam-Brooks, Hattie; Bayarbaatar, Buuveibaatar; Belant, Jerrold L.; Bertassoni, Alessandra; Beyer, Dean; Bidner, Laura; M. van Beest, Floris; Blake, Stephen; Blaum, Niels; Bracis, Chloe; Brown, Danielle; Nico de Bruyn, P. J.; Cagnacci, Francesca; Calabrese, J.M.; Camilo-Alves, Constança; Chamaillé-Jammes, Simon; Chiaradia, Andre; Davidson, Sarah C.; Dennis, Todd; DeStefano, Stephen; Diefenbach, Duane R.; Douglas-Hamilton, Iain; Fennessy, Julian; Fichtel, Claudia; Fiedler, Wolfgang; Fischer, Christina; Fischhoff, Ilya; Fleming, Christen H.; Ford, Adam T.; Fritz, Susanne A.; Gehr, Benedikt; Goheen, Jacob R.; Gurarie, Eliezer; Hebblewhite, Mark; Heurich, Marco; Mark Hewison, A.J.; Hof, Christian; Hurme, Edward; Isbell, Lynne A.; Janssen, René; Jeltsch, Florian; Kaczensky, Petra; Kane, Adam; Kappeler, Peter M.; Kauffman, Matthew J.; Kays, Roland; Kimuyu, Duncan; Koch, Flavia; Kranstauber, Bart; LaPoint, Scott; Leimgruber, Peter; Linnell, John D. C.; López-López, Pascual; Markham, A. Catherine; Mattisson, Jenny; Medici, Emilia Patricia; Mellone, Ugo; Merrill, E.; de Miranda Mourão, Guilherme; Morato, Ronaldo G.; Morellet, Nicolas; Morrison, Thomas A.; Díaz-Muñoz, Samuel L.; Mysterud, Atle; Nandintsetseg, Dejid; Nathan, Ran; Niamir, Aidin; Odden, John; O'Hara, Robert B.; Oliveira-Santos, Luiz G. R.;
Olson, Kirk A.; Patterson, Bruce D.; Cunha de Paula, Rogerio; Pedrotti, Luca; Reineking, Björn; Rimmler, Martin; Rogers, T.L.; Rolandsen, Christer Moe; Rosenberry, Christopher S.; Rubenstein, Daniel I.; Safi, Kamran; Saïd, Sonia; Sapir, Nir; Sawyer, Hall; Schmidt, Niels Martin; Selva, Nuria; Sergiel, Agnieszka; Shiilegdamba, Enkhtuvshin; Silva, João Paulo; Singh, N.; Solberg, Erling J.; Spiegel, Orr; Strand, Olav; Sundaresan, S.R.; Ullmann, Wiebke; Voigt, Ulrich; Wall, J.; Wattles, David W.; Wikelski, Martin; Wilmers, Christopher C.; Wilson, Jon W.; Wittemyer, George; Zięba, Filip; Zwijacz-Kozica, Tomasz; Mueller, Thomas
2018-01-01
Animal movement is fundamental for ecosystem functioning and species survival, yet the effects of the anthropogenic footprint on animal movements have not been estimated across species. Using a unique GPS-tracking database of 803 individuals across 57 species, we found that movements of mammals in areas with a comparatively high human footprint were on average one-half to one-third the extent of their movements in areas with a low human footprint. We attribute this reduction to behavioral changes of individual animals and to the exclusion of species with long-range movements from areas with higher human impact. Global loss of vagility alters a key ecological trait of animals that affects not only population persistence but also ecosystem processes such as predator-prey interactions, nutrient cycling, and disease transmission.
Eye movement-invariant representations in the human visual system.
Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L
2017-01-01
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
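The similarity-ratio logic described in this abstract can be sketched numerically. The function and variable names below are hypothetical, the inputs are simplified voxel-by-time response matrices, and the real analysis involves many more repetitions, voxels, and statistical controls.

```python
import numpy as np

def invariance_ratio(fix_rep1, fix_rep2, free_view):
    """Per-voxel ratio of fixation-vs-free-viewing similarity to
    fixation-vs-fixation similarity.  Ratios near 1 suggest responses
    invariant to eye movements; low ratios suggest sensitivity.
    Each input: array of shape (n_voxels, n_timepoints)."""
    def corr_rows(a, b):
        a = a - a.mean(axis=1, keepdims=True)
        b = b - b.mean(axis=1, keepdims=True)
        return (a * b).sum(axis=1) / np.sqrt(
            (a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
    within = corr_rows(fix_rep1, fix_rep2)    # repeat reliability
    between = corr_rows(fix_rep1, free_view)  # cross-condition similarity
    return between / within

# Two synthetic voxels: voxel 0 responds identically in both conditions,
# voxel 1 responds differently under free viewing.
rng = np.random.default_rng(1)
sig, other = rng.normal(size=500), rng.normal(size=500)
noise = lambda: 0.3 * rng.normal(size=500)
rep1 = np.vstack([sig + noise(), sig + noise()])
rep2 = np.vstack([sig + noise(), sig + noise()])
free = np.vstack([sig + noise(), other + noise()])
ratio = invariance_ratio(rep1, rep2, free)
```

In this toy example the "invariant" voxel's ratio is close to 1 while the "eye-movement-sensitive" voxel's ratio is close to 0, which is the contrast the study uses to separate ventral temporal from early visual areas.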
House-to-house human movement drives dengue virus transmission
Stoddard, Steven T.; Forshey, Brett M.; Morrison, Amy C.; Paz-Soldan, Valerie A.; Vazquez-Prokopec, Gonzalo M.; Astete, Helvio; Reiner, Robert C.; Vilcarromero, Stalin; Elder, John P.; Halsey, Eric S.; Kochel, Tadeusz J.; Kitron, Uriel; Scott, Thomas W.
2013-01-01
Dengue is a mosquito-borne disease of growing global health importance. Prevention efforts focus on mosquito control, with limited success. New insights into the spatiotemporal drivers of dengue dynamics are needed to design improved disease-prevention strategies. Given the restricted range of movement of the primary mosquito vector, Aedes aegypti, local human movements may be an important driver of dengue virus (DENV) amplification and spread. Using contact-site cluster investigations in a case-control design, we demonstrate that, at an individual level, risk for human infection is defined by visits to places where contact with infected mosquitoes is likely, independent of distance from the home. Our data indicate that house-to-house human movements underlie spatial patterns of DENV incidence, causing marked heterogeneity in transmission rates. At a collective level, transmission appears to be shaped by social connections because routine movements among the same places, such as the homes of family and friends, are often similar for the infected individual and their contacts. Thus, routine, house-to-house human movements do play a key role in spread of this vector-borne pathogen at fine spatial scales. This finding has important implications for dengue prevention, challenging the appropriateness of current approaches to vector control. We argue that reexamination of existing paradigms regarding the spatiotemporal dynamics of DENV and other vector-borne pathogens, especially the importance of human movement, will lead to improvements in disease prevention. PMID:23277539
Towards a user-friendly brain-computer interface: initial tests in ALS and PLS patients.
Bai, Ou; Lin, Peter; Huang, Dandan; Fei, Ding-Yu; Floeter, Mary Kay
2010-08-01
Patients usually require long-term training for effective EEG-based brain-computer interface (BCI) control due to fatigue caused by the demands for focused attention during prolonged BCI operation. We intended to develop a user-friendly BCI requiring minimal training and less mental load. Testing of BCI performance was investigated in three patients with amyotrophic lateral sclerosis (ALS) and three patients with primary lateral sclerosis (PLS), who had no previous BCI experience. All patients performed binary control of cursor movement. One ALS patient and one PLS patient performed four-directional cursor control in a two-dimensional domain under a BCI paradigm associated with human natural motor behavior using motor execution and motor imagery. Subjects practiced for 5-10 min and then participated in a multi-session study of either binary control or four-directional control including an online BCI game over 1.5-2 h in a single visit. Event-related desynchronization and event-related synchronization in the beta band were observed in all patients during the production of voluntary movement either by motor execution or motor imagery. The online binary control of cursor movement was achieved with an average accuracy of about 82.1 ± 8.2% with motor execution and about 80% with motor imagery, whereas offline accuracy reached 91.4 ± 3.4% with motor execution and 83.3 ± 8.9% with motor imagery after optimization. In addition, four-directional cursor control was achieved with an accuracy of 50-60% with motor execution and motor imagery. Patients with ALS or PLS may achieve BCI control without extended training, and fatigue might be reduced during operation of a BCI associated with human natural motor behavior. The development of a user-friendly BCI will promote practical BCI applications in paralyzed patients. Copyright 2010 International Federation of Clinical Neurophysiology. All rights reserved.
Wang, Qi; Taylor, John E.
2016-01-01
Natural disasters pose serious threats to large urban areas, therefore understanding and predicting human movements is critical for evaluating a population’s vulnerability and resilience and developing plans for disaster evacuation, response and relief. However, only limited research has been conducted into the effect of natural disasters on human mobility. This study examines how natural disasters influence human mobility patterns in urban populations using individuals’ movement data collected from Twitter. We selected fifteen destructive cases across five types of natural disaster and analyzed the human movement data before, during, and after each event, comparing the perturbed and steady state movement data. The results suggest that the power-law can describe human mobility in most cases and that human mobility patterns observed in steady states are often correlated with those in perturbed states, highlighting their inherent resilience. However, the quantitative analysis shows that this resilience has its limits and can fail in more powerful natural disasters. The findings from this study will deepen our understanding of the interaction between urban dwellers and civil infrastructure, improve our ability to predict human movement patterns during natural disasters, and facilitate contingency planning by policymakers. PMID:26820404
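The power-law description of displacement distributions mentioned here is commonly fitted with the continuous maximum-likelihood estimator of the exponent (the Clauset-Shalizi-Newman treatment); the abstract does not state which fitting procedure the authors used, so the sketch below is a generic illustration on synthetic displacement data.

```python
import numpy as np

def powerlaw_alpha(displacements, xmin):
    """Continuous maximum-likelihood estimate of the exponent alpha in
    p(x) ~ x**(-alpha) for x >= xmin."""
    x = np.asarray(displacements, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.log(x / xmin).sum()

# Synthetic displacements drawn from a power law with alpha = 2.5 by
# inverse-transform sampling: P(X > x) = (x / xmin)**(1 - alpha).
rng = np.random.default_rng(0)
xmin, alpha_true = 1.0, 2.5
u = rng.uniform(size=50_000)
samples = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
alpha_hat = powerlaw_alpha(samples, xmin)
```

Comparing the fitted exponents before, during, and after an event is one way to quantify the "perturbed versus steady state" contrast the study draws.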
Velocity-curvature patterns limit human-robot physical interaction
Maurice, Pauline; Huber, Meghan E.; Hogan, Neville; Sternad, Dagmar
2018-01-01
Physical human-robot collaboration is becoming more common, both in industrial and service robotics. Cooperative execution of a task requires intuitive and efficient interaction between both actors. For humans, this means being able to predict and adapt to robot movements. Given that natural human movement exhibits several robust features, we examined whether human-robot physical interaction is facilitated when these features are considered in robot control. The present study investigated how humans adapt to biological and non-biological velocity patterns in robot movements. Participants held the end-effector of a robot that traced an elliptic path with either biological (two-thirds power law) or non-biological velocity profiles. Participants were instructed to minimize the force applied on the robot end-effector. Results showed that the applied force was significantly lower when the robot moved with a biological velocity pattern. With extensive practice and enhanced feedback, participants were able to decrease their force when following a non-biological velocity pattern, but never reached forces below those obtained with the 2/3 power law profile. These results suggest that some robust features observed in natural human movements are also a strong preference in guided movements. Therefore, such features should be considered in human-robot physical collaboration. PMID:29744380
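The two-thirds power law mentioned above relates movement speed to path curvature as v = K·κ^(-1/3). An ellipse traced at a constant parameter rate satisfies it exactly, which the following numerical sketch verifies; the semi-axis values are illustrative, not the robot trajectory from the study.

```python
import numpy as np

# Parametrize an ellipse at constant parameter rate; such a trajectory
# obeys the two-thirds power law exactly: speed = K * curvature**(-1/3).
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
a, b = 0.20, 0.10                      # semi-axes (illustrative)
x, y = a * np.cos(t), b * np.sin(t)

dt = t[1] - t[0]
dx, dy = np.gradient(x, dt), np.gradient(y, dt)
ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)

speed = np.hypot(dx, dy)
curvature = np.abs(dx * ddy - dy * ddx) / speed ** 3

# Log-log regression recovers the power-law exponent beta ~ -1/3.
beta = np.polyfit(np.log(curvature), np.log(speed), 1)[0]
```

A "non-biological" velocity profile in the study's sense is any speed pattern along the same path whose log-log slope against curvature departs from -1/3 (e.g. constant speed, for which beta = 0).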
Efficient Decoding With Steady-State Kalman Filter in Neural Interface Systems
Malik, Wasim Q.; Truccolo, Wilson; Brown, Emery N.; Hochberg, Leigh R.
2011-01-01
The Kalman filter is commonly used in neural interface systems to decode neural activity and estimate the desired movement kinematics. We analyze a low-complexity Kalman filter implementation in which the filter gain is approximated by its steady-state form, computed offline before real-time decoding commences. We evaluate its performance using human motor cortical spike train data obtained from an intracortical recording array as part of an ongoing pilot clinical trial. We demonstrate that the standard Kalman filter gain converges to within 95% of the steady-state filter gain in 1.5 ± 0.5 s (mean ± s.d.). The difference in the intended movement velocity decoded by the two filters vanishes within 5 s, with a correlation coefficient of 0.99 between the two decoded velocities over the session length. We also find that the steady-state Kalman filter reduces the computational load (algorithm execution time) for decoding the firing rates of 25 ± 3 single units by a factor of 7.0 ± 0.9. We expect that the gain in computational efficiency will be much higher in systems with larger neural ensembles. The steady-state filter can thus provide substantial runtime efficiency at little cost in terms of estimation accuracy. This far more efficient neural decoding approach will facilitate the practical implementation of future large-dimensional, multisignal neural interface systems. PMID:21078582
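The steady-state gain described here can be computed offline by iterating the discrete Riccati recursion until the Kalman gain stops changing. The sketch below uses a toy constant-velocity tracking model with noisy position observations, not the neural observation model or parameters from the study.

```python
import numpy as np

def steady_state_gain(A, C, Q, R, iters=1000):
    """Iterate the discrete Riccati recursion offline; once the error
    covariance converges, the gain K is fixed and can be reused at every
    decoding step instead of being recomputed."""
    P = Q.copy()
    K = None
    for _ in range(iters):
        P_pred = A @ P @ A.T + Q                     # predict covariance
        S = C @ P_pred @ C.T + R                     # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)          # Kalman gain
        P = (np.eye(A.shape[0]) - K @ C) @ P_pred    # update covariance
    return K

# Toy constant-velocity state [position, velocity] observed through
# noisy position measurements (illustrative values only).
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2)
R = np.array([[0.1]])
K_ss = steady_state_gain(A, C, Q, R)
```

Because the gain no longer depends on the data, the per-step decode reduces to a prediction plus one fixed-gain correction, which is the source of the runtime savings the paper reports.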
Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander R; Taraldsen, Gunnar; Grunewaldt, Kristine H; Støen, Ragnhild
2010-08-01
The aim of this study was to investigate the value of computer-based video analysis for predicting the development of cerebral palsy (CP) in young infants. A prospective study of general movements used recordings from 30 high-risk infants (13 males, 17 females; mean gestational age 31 wks, SD 6 wks; range 23-42 wks) between 10 and 15 weeks post term when fidgety movements should be present. Recordings were analysed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analyses. CP status was reported at 5 years. Thirteen infants developed CP (eight hemiparetic, four quadriparetic, one dyskinetic; seven ambulatory, three non-ambulatory, and three unknown function), of whom one had fidgety movements. Variability of the centroid of motion had a sensitivity of 85% and a specificity of 71% in identifying CP. By combining this with variables reflecting the amount of motion, specificity increased to 88%. Nine out of 10 children with CP, and for whom information about functional level was available, were correctly predicted with regard to ambulatory and non-ambulatory function. Prediction of CP can be provided by computer-based video analysis in young infants. The method may serve as an objective and feasible tool for early prediction of CP in high-risk infants.
Computer analysis of the leaf movements of pinto beans.
Hoshizaki, T; Hamner, K C
1969-07-01
Computer analysis was used for the detection of rhythmic components and the estimation of period length in leaf movement records. The results of this study indicated that spectral analysis can be profitably used to determine rhythmic components in leaf movements. In Pinto bean plants (Phaseolus vulgaris L.) grown for 28 days under continuous light of 750 ft-c and at a constant temperature of 28 degrees, there was only one highly significant rhythmic component in the leaf movements. The period of this rhythm was 27.3 hr. In plants grown at 20 degrees, there were two highly significant rhythmic components: one of 13.8 hr and a much stronger one of 27.3 hr. At 15 degrees, the highly significant rhythmic components were also 27.3 and 13.8 hr in length but were of equal intensity. Random movements less than 9 hr in length became very pronounced at this temperature. At 10 degrees, no significant rhythm was found in the leaf movements. At 5 degrees, the leaf movements ceased within 1 day.
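Period estimation of this kind can be sketched with a simple periodogram. The synthetic record below mixes 27.3 hr and 13.8 hr components with noise; the paper's actual spectral-analysis method and data are more involved.

```python
import numpy as np

# Synthetic 28-day leaf-movement record sampled every 30 min, mixing a
# strong 27.3 hr component with a weaker 13.8 hr one (illustrative only).
hours = np.arange(0, 28 * 24, 0.5)
rng = np.random.default_rng(0)
record = (np.sin(2 * np.pi * hours / 27.3)
          + 0.4 * np.sin(2 * np.pi * hours / 13.8)
          + 0.2 * rng.normal(size=hours.size))

# Periodogram: the strongest spectral peak gives the dominant period.
power = np.abs(np.fft.rfft(record - record.mean())) ** 2
freqs = np.fft.rfftfreq(hours.size, d=0.5)       # cycles per hour
dominant_period = 1.0 / freqs[power.argmax()]    # in hours
```

With a 28-day record the frequency resolution is 1/672 cycles per hour, so the recovered dominant period lands within roughly half an hour of the true 27.3 hr component; the 13.8 hr component appears as a secondary peak.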
Least-cost transportation networks predict spatial interaction of invasion vectors.
Drake, D Andrew R; Mandrak, Nicholas E
2010-12-01
Human-mediated dispersal among aquatic ecosystems often results in biotic transfer between drainage basins. Such activities may circumvent biogeographic factors, with considerable ecological, evolutionary, and economic implications. However, the efficacy of predictions concerning community changes following inter-basin movements is limited, often because the dispersal mechanism is poorly understood (e.g., quantified only partially). To date, spatial-interaction models that predict the movement of humans as vectors of biotic transfer have not incorporated patterns of human movement through transportation networks. As a necessary first step to determine the role of anglers as invasion vectors across a land-lake ecosystem, we investigate their movement potential within Ontario, Canada. To determine possible model improvements resulting from inclusion of network travel, spatial-interaction models were constructed using standard Euclidean (e.g., straight-line) distance measures and also with distances derived from least-cost routing of human transportation networks. Model comparisons determined that least-cost routing both provided the most parsimonious model and also excelled at forecasting spatial interactions, with a proportion of 0.477 total movement deviance explained. The distribution of movements was characterized by many relatively short to medium travel distances (median = 292.6 km) with fewer lengthier distances (75th percentile = 484.6 km, 95th percentile = 775.2 km); however, even the shortest movements were sufficient to overcome drainage-basin boundaries. Ranking of variables in order of their contribution within the most parsimonious model determined that distance traveled, origin outflow, lake attractiveness, and sportfish richness significantly influence movement patterns.
Model improvements associated with least-cost routing of human transportation networks imply that patterns of human-mediated invasion are fundamentally linked to the spatial configuration and relative impedance of human transportation networks, placing increased importance on understanding their contribution to the invasion process.
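The contrast between Euclidean and least-cost network distance can be illustrated with a toy road network and Dijkstra's algorithm; the sites, coordinates, and costs below are invented for illustration.

```python
import heapq
import math

# Invented toy network: four sites at (x, y) positions; roads carry
# travel costs.  The straight-line distance from A to D understates the
# cost of actually travelling there along the roads.
coords = {"A": (0, 0), "B": (4, 0), "C": (4, 3), "D": (0, 3)}
roads = {
    "A": {"B": 4.0},
    "B": {"A": 4.0, "C": 3.0},
    "C": {"B": 3.0, "D": 4.0},
    "D": {"C": 4.0},
}

def least_cost(graph, start, goal):
    """Dijkstra's algorithm: cheapest route cost through the network."""
    best = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            return cost
        if cost > best.get(node, math.inf):
            continue  # stale queue entry
        for nbr, w in graph[node].items():
            nd = cost + w
            if nd < best.get(nbr, math.inf):
                best[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return math.inf

network_dist = least_cost(roads, "A", "D")           # cost along the roads
euclid_dist = math.dist(coords["A"], coords["D"])    # straight-line distance
```

Feeding network distances rather than straight-line distances into a spatial-interaction model is precisely the substitution the study evaluates.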
Workstations for people with disabilities: an example of a virtual reality approach
Budziszewski, Paweł; Grabowski, Andrzej; Milanowicz, Marcin; Jankowski, Jarosław
2016-01-01
This article describes a method of adapting workstations for workers with motion disability using computer simulation and virtual reality (VR) techniques. A workstation for grinding spring faces was used as an example. It was adjusted for two people with a disabled right upper extremity. The study had two stages. In the first, a computer human model with a visualization of maximal arm reach and preferred workspace was used to develop a preliminary modification of a virtual workstation. In the second stage, an immersive VR environment was used to assess the virtual workstation and to add further modifications. All modifications were assessed by measuring the efficiency of work and the number of movements involved. The results of the study showed that a computer simulation could be used to determine whether a worker with a disability could access all important areas of a workstation and to propose necessary modifications. PMID:26651540
Learning to Interact with a Computer by Gaze
ERIC Educational Resources Information Center
Aoki, Hirotaka; Hansen, John Paulin; Itoh, Kenji
2008-01-01
The aim of this paper is to examine the learning processes that subjects undertake when they start using gaze as computer input. A 7-day experiment with eight Japanese students was carried out to record novice users' eye movement data during typing of 110 sentences. The experiment revealed that inefficient eye movements were dramatically reduced…
MOVANAID: An Interactive Aid for Analysis of Movement Capabilities.
ERIC Educational Resources Information Center
Cooper, George E.; And Others
A computer-driven interactive aid for movement analysis, called MOVANAID, has been developed to assist in the performance of certain Army intelligence processing tasks in a tactical environment. It can compute fastest travel times and paths through road networks for military units of various types, as well as fastest times in which…
Aguirre-Ollinger, G; Colgate, J E; Peshkin, M A; Goswami, A
2011-03-01
Many of the current implementations of exoskeletons for the lower extremities are conceived to either augment the user's load-carrying capabilities or reduce muscle activation during walking. Comparatively little research has been conducted on enabling an exoskeleton to increase the agility of lower-limb movements. One obstacle in this regard is the inertia of the exoskeleton's mechanism, which tends to reduce the natural frequency of the human limbs. A control method is presented that produces an approximate compensation of the inertia of an exoskeleton's mechanism. The controller was tested on a statically mounted, single-degree-of-freedom (DOF) exoskeleton that assists knee flexion and extension. Test subjects performed multiple series of leg-swing movements in the context of a computer-based, sprint-like task. A large initial acceleration of the leg was needed for the subjects to track a virtual target on a computer screen. The uncompensated inertia of the exoskeleton mechanism slowed down the transient response of the subjects' limb, in comparison with trials performed without the exoskeleton. The subsequent use of emulated inertia compensation on the exoskeleton allowed the subjects to improve their transient response for the same task.
Exploring the Relationship Between Eye Movements and Electrocardiogram Interpretation Accuracy
NASA Astrophysics Data System (ADS)
Davies, Alan; Brown, Gavin; Vigo, Markel; Harper, Simon; Horseman, Laura; Splendiani, Bruno; Hill, Elspeth; Jay, Caroline
2016-12-01
Interpretation of electrocardiograms (ECGs) is a complex task involving visual inspection. This paper aims to improve understanding of how practitioners perceive ECGs, and to determine whether visual behaviour can indicate differences in interpretation accuracy. A group of healthcare practitioners (n = 31) who interpret ECGs as part of their clinical role were shown 11 commonly encountered ECGs on a computer screen. The participants’ eye movement data were recorded as they viewed the ECGs and attempted interpretation. The Jensen-Shannon distance was computed between two Markov chains, constructed from the transition matrices (visual shifts from and to ECG leads) of the correct and incorrect interpretation groups for each ECG. A permutation test was then used to compare this distance against 10,000 randomly shuffled groups made up of the same participants. The results demonstrated a statistically significant (α = 0.05) result in 5 of the 11 stimuli, showing that the gaze shift between the ECG leads differs between the groups making correct and incorrect interpretations and is therefore a factor in interpretation accuracy. The results shed further light on the relationship between visual behaviour and ECG interpretation accuracy, providing information that can be used to improve both human and automated interpretation approaches.
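The analysis described above can be sketched as: build row-normalised transition matrices over gaze shifts between leads for each group, take the Jensen-Shannon distance between them, and compare it against distances obtained from randomly reshuffled group assignments. This is an illustrative reconstruction, not the authors' code; the integer state encoding and the pooling scheme are assumptions:

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance between two (flattened) distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))  # KL divergence in bits
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def transition_dist(sequences, n_states):
    """Row-normalised gaze-shift transition matrix, flattened to a vector."""
    T = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            T[a, b] += 1
    T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)
    return T.ravel()

def permutation_test(correct, incorrect, n_states, n_perm=10000, seed=0):
    """Observed JS distance between groups, and its permutation p-value."""
    rng = np.random.default_rng(seed)
    observed = js_distance(transition_dist(correct, n_states),
                           transition_dist(incorrect, n_states))
    pooled = correct + incorrect
    k = len(correct)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        g1 = [pooled[i] for i in idx[:k]]
        g2 = [pooled[i] for i in idx[k:]]
        if js_distance(transition_dist(g1, n_states),
                       transition_dist(g2, n_states)) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)
```

Each sequence here stands for one participant's ordered list of fixated leads; a small observed p-value indicates that gaze-shift patterns genuinely differ between the correct and incorrect groups.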
A Computational Clonal Analysis of the Developing Mouse Limb Bud
Marcon, Luciano; Arqués, Carlos G.; Torres, Miguel S.; Sharpe, James
2011-01-01
A comprehensive spatio-temporal description of the tissue movements underlying organogenesis would be an extremely useful resource to developmental biology. Clonal analysis and fate mappings are popular experiments to study tissue movement during morphogenesis. Such experiments allow cell populations to be labeled at an early stage of development and to follow their spatial evolution over time. However, disentangling the cumulative effects of the multiple events responsible for the expansion of the labeled cell population is not always straightforward. To overcome this problem, we develop a novel computational method that combines accurate quantification of 2D limb bud morphologies and growth modeling to analyze mouse clonal data of early limb development. Firstly, we explore various tissue movements that match experimental limb bud shape changes. Secondly, by comparing computational clones with newly generated mouse clonal data, we are able to choose and characterize the tissue movement map that best matches the experimental data. Our computational analysis produces, for the first time, a two-dimensional model of limb growth based on experimental data that can be used to better characterize limb tissue movement in space and time. The model shows that the distribution and shapes of clones can be described as a combination of anisotropic growth with isotropic cell mixing, without the need for lineage compartmentalization along the AP and PD axes. Lastly, we show that this comprehensive description can be used to reassess spatio-temporal gene regulation taking tissue movement into account and to investigate PD patterning hypotheses. PMID:21347315
On-line confidence monitoring during decision making.
Dotan, Dror; Meyniel, Florent; Dehaene, Stanislas
2018-02-01
Humans can readily assess their degree of confidence in their decisions. Two models of confidence computation have been proposed: post hoc computation using post-decision variables and heuristics, versus online computation using continuous assessment of evidence throughout the decision-making process. Here, we arbitrate between these theories by continuously monitoring finger movements during a manual sequential decision-making task. Analysis of finger kinematics indicated that subjects kept separate online records of evidence and confidence: finger deviation continuously reflected the ongoing accumulation of evidence, whereas finger speed continuously reflected the momentary degree of confidence. Furthermore, end-of-trial finger speed predicted the post-decisional subjective confidence rating. These data indicate that confidence is computed on-line, throughout the decision process. Speed-confidence correlations were previously interpreted as a post-decision heuristic, whereby slow decisions decrease subjective confidence, but our results suggest an adaptive mechanism that involves the opposite causality: by slowing down when unconfident, participants gain time to improve their decisions. Copyright © 2017 Elsevier B.V. All rights reserved.
Infant and Adult Perceptions of Possible and Impossible Body Movements: An Eye-Tracking Study
ERIC Educational Resources Information Center
Morita, Tomoyo; Slaughter, Virginia; Katayama, Nobuko; Kitazaki, Michiteru; Kakigi, Ryusuke; Itakura, Shoji
2012-01-01
This study investigated how infants perceive and interpret human body movement. We recorded the eye movements and pupil sizes of 9- and 12-month-old infants and of adults (N = 14 per group) as they observed animation clips of biomechanically possible and impossible arm movements performed by a human and by a humanoid robot. Both 12-month-old…
NASA Astrophysics Data System (ADS)
Gao, Pei-pei; Liu, Feng
2016-10-01
With the development of information technology and artificial intelligence, speech synthesis plays a significant role in the field of human-computer interaction. However, the main problem of current speech synthesis techniques is a lack of naturalness and expressiveness, so that synthesized speech does not yet approach the standard of natural language. A further problem is that human-computer interaction based on speech synthesis is too monotonous to support a mechanism of user-driven control. This thesis introduces the historical development of speech synthesis and summarizes the general process of the technique, pointing out that prosody generation is an important module in the speech synthesis pipeline. On this basis, a new human-computer interaction method is introduced that uses the eye-activity patterns of reading to control and drive prosody generation, enriching the synthetic output. The present situation of speech synthesis technology is reviewed in detail, and a synthesis method is proposed that expresses the real speech rhythm of the speaker by extracting gaze data and using the eye-movement signal as a real-time driver. That is, while the reader silently reads a corpus, the system captures reading information such as gaze duration per prosodic unit and establishes a hierarchical prosodic duration model to determine the duration parameters of the synthesized speech. A final analysis verifies the feasibility of the proposed method.
Buchin, Kevin; Sijben, Stef; van Loon, E Emiel; Sapir, Nir; Mercier, Stéphanie; Marie Arseneau, T Jean; Willems, Erik P
2015-01-01
The Brownian bridge movement model (BBMM) provides a biologically sound approximation of the movement path of an animal based on discrete location data, and is a powerful method to quantify utilization distributions. Computing the utilization distribution based on the BBMM while calculating movement parameters directly from the location data, may result in inconsistent and misleading results. We show how the BBMM can be extended to also calculate derived movement parameters. Furthermore we demonstrate how to integrate environmental context into a BBMM-based analysis. We develop a computational framework to analyze animal movement based on the BBMM. In particular, we demonstrate how a derived movement parameter (relative speed) and its spatial distribution can be calculated in the BBMM. We show how to integrate our framework with the conceptual framework of the movement ecology paradigm in two related but acutely different ways, focusing on the influence that the environment has on animal movement. First, we demonstrate an a posteriori approach, in which the spatial distribution of average relative movement speed as obtained from a "contextually naïve" model is related to the local vegetation structure within the monthly ranging area of a group of wild vervet monkeys. Without a model like the BBMM it would not be possible to estimate such a spatial distribution of a parameter in a sound way. Second, we introduce an a priori approach in which atmospheric information is used to calculate a crucial parameter of the BBMM to investigate flight properties of migrating bee-eaters. This analysis shows significant differences in the characteristics of flight modes, which would have not been detected without using the BBMM. 
Our algorithm is the first of its kind to allow BBMM-based computation of movement parameters beyond the utilization distribution, and we present two case studies that demonstrate two fundamentally different ways in which our algorithm can be applied to estimate the spatial distribution of average relative movement speed, while interpreting it in a biologically meaningful manner, across a wide range of environmental scenarios and ecological contexts. Therefore movement parameters derived from the BBMM can provide a powerful method for movement ecology research.
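At the core of the BBMM is a conditional Gaussian: between two location fixes, the animal's position at an intermediate time is normally distributed around the linear interpolation, with a variance that peaks midway and combines the Brownian motion variance with the telemetry error of each fix. A one-dimensional sketch of that density (the real model is two-dimensional; the symbols follow the standard BBMM formulation, and all numbers are illustrative):

```python
import numpy as np

def bbmm_bridge(a, b, T, t, sigma_m, delta=0.0):
    """Mean and variance of the BBMM position density at time t (0 <= t <= T)
    between fixes a (at t=0) and b (at t=T), in one dimension.
    sigma_m: Brownian motion variance parameter; delta: location error SD."""
    alpha = t / T
    mu = a + alpha * (b - a)                        # straight-line interpolation
    var = (T * alpha * (1 - alpha) * sigma_m**2     # bridge (movement) variance
           + (1 - alpha)**2 * delta**2              # error of the first fix
           + alpha**2 * delta**2)                   # error of the second fix
    return mu, var

def bbmm_density(a, b, T, t, sigma_m, grid, delta=0.0):
    """Evaluate the bridge's Gaussian density on a spatial grid."""
    mu, var = bbmm_bridge(a, b, T, t, sigma_m, delta)
    return np.exp(-(grid - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
```

Averaging such densities over many time steps and over all consecutive fix pairs yields the utilization distribution; the extension described in the abstract derives additional movement parameters, such as relative speed, from these same conditional distributions.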
Biomimetics of human movement: functional or aesthetic?
Harris, Christopher M
2009-09-01
How should robotic or prosthetic arms be programmed to move? Copying human smooth movements is popular in synthetic systems, but what does this really achieve? We cannot address these biomimetic issues without a deep understanding of why natural movements are so stereotyped. In this article, we distinguish between 'functional' and 'aesthetic' biomimetics. Functional biomimetics requires insight into the problem that nature has solved and recognition that a similar problem exists in the synthetic system. In aesthetic biomimetics, nature is copied for its own sake and no insight is needed. We examine the popular minimum jerk (MJ) model that has often been used to generate smooth human-like point-to-point movements in synthetic arms. The MJ model was originally justified as maximizing 'smoothness'; however, it is also the limiting optimal trajectory for a wide range of cost functions for brief movements, including the minimum variance (MV) model, where smoothness is a by-product of optimizing the speed-accuracy trade-off imposed by proportional noise (PN: signal-dependent noise with the standard deviation proportional to mean). PN is unlikely to be dominant in synthetic systems, and the control objectives of natural movements (speed and accuracy) would not be optimized in synthetic systems by human-like movements. Thus, employing MJ or MV controllers in robotic arms is just aesthetic biomimetics. For prosthetic arms, the goal is aesthetic by definition, but it is still crucial to recognize that MV trajectories and PN are deeply embedded in the human motor system. Thus, PN arises at the neural level, as a recruitment strategy of motor units and probably optimizes motor neuron noise. Human reaching is under continuous adaptive control. For prosthetic devices that do not have this natural architecture, natural plasticity would drive the system towards unnatural movements. 
We propose that a truly neuromorphic system with parallel force generators (muscle fibres) and noisy drivers (motor neurons) would permit plasticity to adapt the control of a prosthetic limb towards human-like movement.
Computational approaches to motor learning by imitation.
Schaal, Stefan; Ijspeert, Auke; Billard, Aude
2003-01-01
Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking-indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions. PMID:12689379
Multi-axis control based on movement control cards in NC systems
NASA Astrophysics Data System (ADS)
Jiang, Tingbiao; Wei, Yunquan
2005-12-01
Today most movement control cards require special control software on the host computer and are only suitable for fixed-axis control; consequently, the number of axes that can be controlled is limited. Advanced manufacturing technology is developing very rapidly, and that development brings new requirements for movement control in mechanics and electronics. This paper introduces a fifth-generation movement control card, the PMAC 2A-PC/104 made by the Delta Tau Company in the USA. Based on an analysis of the PMAC 2A-PC/104, the paper first describes two relevant aspects: the hardware structure of movement control cards and the associated host-computer software. Two methods are then presented for solving the fixed-axis limitation. The first method is to set limit switches on the movement control cards, all of which can be used to control each moving axis. The second method is to develop application software in an existing programming language (for example, VC++, Visual Basic, or Delphi); such a program is much easier for users to operate and extend. By using a limit switch, users can choose different axes on the movement control cards, and by changing parameters in the host-computer control software they can realize different control axes. Combining these two methods proves convenient for realizing multi-axis control in numerical control systems.
A Production System Model of Capturing Reactive Moving Targets. M.S. Thesis
NASA Technical Reports Server (NTRS)
Jagacinski, R. J.; Plamondon, B. D.; Miller, R. A.
1984-01-01
Subjects manipulated a control stick to position a cursor over a moving target that reacted with a computer-generated escape strategy. The cursor movements were described at two levels of abstraction. At the upper level, a production system described transitions among four modes of activity: rapid acquisition, close following, a predictive mode, and herding. Within each mode, differential equations described trajectory-generating mechanisms. A simulation of this two-level model captures the targets in a manner resembling the episodic time histories of human subjects.
Research on wheelchair robot control system based on EOG
NASA Astrophysics Data System (ADS)
Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo
2018-04-01
The paper describes an intelligent wheelchair control system based on EOG. It can help disabled people improve their living ability. The system can acquire EOG signal from the user, detect the number of blink and the direction of glancing, and then send commands to the wheelchair robot via RS-232 to achieve the control of wheelchair robot. Wheelchair robot control system based on EOG is composed of processing EOG signal and human-computer interactive technology, which achieves a purpose of using conscious eye movement to control wheelchair robot.
1993-07-09
real-time simulation capabilities, highly non-linear control devices, work space path planning, active control of machine flexibilities and reliability...P.M., "The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement," Journal of Experimental Psychology, Vol 47, No...driven many research groups in the challenging problem of flexible systems with an increasing interaction with finite element methodologies. Basic
NASA Technical Reports Server (NTRS)
Young, L. R.
1975-01-01
Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, acceleration stimuli effects on the semicircular canals, neurons affecting eye movements, and vestibular tests.
Effects of External Loads on Human Head Movement Control Systems
NASA Technical Reports Server (NTRS)
Nam, M. H.; Choi, O. M.
1984-01-01
The central and reflexive control strategies underlying movements were elucidated by studying the effects of external loads on human head movement control systems. Some experimental results are presented on the dynamic changes that accompany the addition of an aviation helmet (SPH-4) and lead weights (6 kg). Time-optimal intended movements, their dynamics, and the electromyographic activity of neck muscles were measured in normal movements and in movements made with external weights applied to the head. It was observed that, when the external loads were added, the subject went through complex adapting processes, and the head movement trajectory and its derivatives reached steady conditions only after a transient adapting period. The steady adapted state was reached after 15 to 20 seconds (i.e., 5 to 6 movements).
Arjunan, Sridhar Poosapadi; Kumar, Dinesh Kant
2010-10-21
Identifying finger and wrist flexion based actions using a single channel surface electromyogram (sEMG) can lead to a number of applications such as sEMG based controllers for near elbow amputees, human computer interface (HCI) devices for elderly and for defence personnel. These are currently infeasible because classification of sEMG is unreliable when the level of muscle contraction is low and there are multiple active muscles. The presence of noise and cross-talk from closely located and simultaneously active muscles is exaggerated when muscles are weakly active such as during sustained wrist and finger flexion. This paper reports the use of fractal properties of sEMG to reliably identify individual wrist and finger flexion, overcoming the earlier shortcomings. The sEMG signal was recorded when the participant maintained pre-specified wrist and finger flexion movements for a period of time. Various established sEMG signal parameters such as root mean square (RMS), mean absolute value (MAV), variance (VAR) and waveform length (WL) and the proposed fractal features, fractal dimension (FD) and maximum fractal length (MFL), were computed. Multivariate analysis of variance (MANOVA) was conducted to determine the p value, indicative of the significance of the relationships between each of these parameters and the wrist and finger flexions. Classification accuracy was also computed using a trained artificial neural network (ANN) classifier to decode the desired subtle movements. The results indicate that the p value for the proposed feature set consisting of FD and MFL of single channel sEMG was 0.0001, while that of various combinations of the five established features ranged between 0.009 and 0.0172. From the accuracy of classification by the ANN, the average accuracy in identifying the wrist and finger flexions using the proposed feature set of single channel sEMG was 90%, while the average accuracy when using a combination of other features ranged between 58% and 73%.
The results show that the MFL and FD of a single channel sEMG recorded from the forearm can be used to accurately identify a set of finger and wrist flexions even when the muscle activity is very weak. A comparison with other features demonstrates that this feature set offers a dramatic improvement in the accuracy of identification of the wrist and finger movements. It is proposed that such a system could be used to control a prosthetic hand or for a human-computer interface. PMID:20964863
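The feature set described can be reproduced in outline. The exact maximum-fractal-length definition used by the authors is not reproduced in the abstract, so the MFL line below (log of the curve length at the finest scale) is one published formulation and should be treated as an assumption:

```python
import numpy as np

def semg_features(x):
    """Established sEMG features plus an illustrative maximum fractal length."""
    x = np.asarray(x, float)
    d = np.diff(x)  # successive sample differences
    return {
        "RMS": np.sqrt(np.mean(x**2)),           # root mean square
        "MAV": np.mean(np.abs(x)),               # mean absolute value
        "VAR": np.var(x),                        # variance
        "WL": np.sum(np.abs(d)),                 # waveform length
        "MFL": np.log10(np.sqrt(np.sum(d**2))),  # max fractal length (assumed form)
    }
```

Per-channel features computed this way would then feed the MANOVA significance analysis and the ANN classifier described above.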
Human Purposive Movement Theory
2012-03-01
theory and provides examples of developmental and operational technologies that could use this theory in common settings. Subject terms: human activity, prediction of behavior, purposive movement theory.
Human observers are biased in judging the angular approach of a projectile.
Welchman, Andrew E; Tuck, Val L; Harris, Julie M
2004-01-01
How do we decide whether an object approaching us will hit us? The optic array provides information sufficient for us to determine the approaching trajectory of a projectile. However, when using binocular information, observers report that trajectories near the mid-sagittal plane are wider than they actually are. Here we extend this work to consider stimuli containing additional depth cues. We measure observers' estimates of trajectory direction first for computer rendered, stereoscopically presented, rich-cue objects, and then for real objects moving in the world. We find that, under both rich cue conditions and with real moving objects, observers show positive bias, overestimating the angle of approach when movement is near the mid-sagittal plane. The findings question whether the visual system, using both binocular and monocular cues to depth, can make explicit estimates of the 3-D location and movement of objects in depth.
Acquisition and reacquisition of motor coordination in musicians.
Furuya, Shinichi; Altenmüller, Eckart
2015-03-01
Precise control of movement timing plays a key role in musical performance. This motor skill requires coordination across multiple joints and muscles, which is acquired through extensive musical training from childhood. However, extensive training has a potential risk of causing neurological disorders that impair fine motor control, such as task-specific tremor and focal dystonia. Recent technological advances in measurement and analysis of biological data, as well as noninvasive manipulation of neuronal activities, have promoted the understanding of computational and neurophysiological mechanisms underlying acquisition, loss, and reacquisition of dexterous movements through musical practice and rehabilitation. This paper aims to provide an overview of the behavioral and neurophysiological basis of motor virtuosity and disorder in musicians, representative extremes of human motor skill. We also report novel evidence of effects of noninvasive neurorehabilitation that combined transcranial direct-current stimulation and motor rehabilitation over multiple days on musician's dystonia, which offers a promising therapeutic means. © 2015 New York Academy of Sciences.
NASA Astrophysics Data System (ADS)
Tramo, Mark Jude
2004-05-01
The acquisition and maintenance of fine-motor skills underlying musical instrument performance rely on the development, integration, and plasticity of neural systems localized within specific subregions of the cerebral cortex. Cortical representations of a motor sequence, such as a sequence of finger movements along the keys of a saxophone, take shape before the finger sequence occurs. The temporal pattern and spatial coordinates are computed by networks of neurons before and during the movements. When a finger sequence is practiced over and over, performance gets faster and more accurate, probably because cortical neurons generating the sequence increase in spatial extent, their electrical discharges become more synchronous, or both. By combining experimental methods such as single- and multi-neuron recordings, focal stimulation, microanatomical tracers, gross morphometry, evoked potentials, and functional imaging in humans and nonhuman primates, neuroscientists are gaining insights into the cortical physiology, anatomy, and plasticity of musical instrument performance.
Development and application of virtual reality for man/systems integration
NASA Technical Reports Server (NTRS)
Brown, Marcus
1991-01-01
While the graphical presentation of computer models signified a quantum leap over presentations limited to text and numbers, it still presents an interface barrier between the human user and the computer model. The user must learn a command language in order to orient themselves in the model. For example, to move left from the current viewpoint of the model, they might be required to type 'LEFT' at a keyboard. This command is fairly intuitive, but if the viewpoint moves far enough that there are no visual cues overlapping with the first view, the user does not know whether the viewpoint has moved inches, feet, or miles to the left, or has perhaps remained in the same position but rotated to the left. Until the user becomes quite familiar with the interface language of the computer model presentation, they will be prone to losing their bearings frequently; even a highly skilled user will occasionally get lost in the model. A new approach to presenting this type of information is to directly interpret the user's body motions as the input language for determining what view to present. When the user's head turns 45 degrees to the left, the viewpoint should be rotated 45 degrees to the left. Since the head moves through several intermediate angles between the original view and the final one, several intermediate views should be presented, providing the user with a sense of continuity between the original view and the final one. Since the hands are the primary way a human physically interacts with their environment, the system should monitor the movements of the user's hands and alter objects in the virtual model in a way consistent with how an actual object would move when manipulated with the same hand movements. Because this approach to the man-computer interface closely models the interface humans have with the physical world, it is often called virtual reality, and the model is referred to as a virtual world.
The task of this summer fellowship was to set up a virtual reality system at MSFC and begin applying it to some of the questions which concern scientists and engineers involved in space flight. A brief discussion of this work is presented.
Human hippocampal theta power indicates movement onset and distance travelled
Bird, Chris M.; Gollwitzer, Stephanie; Rodionov, Roman; Diehl, Beate; McEvoy, Andrew W.; Walker, Matthew C.; Burgess, Neil
2017-01-01
Theta frequency oscillations in the 6- to 10-Hz range dominate the rodent hippocampal local field potential during translational movement, suggesting that theta encodes self-motion. Increases in theta power have also been identified in the human hippocampus during both real and virtual movement but appear as transient bursts in distinct high- and low-frequency bands, and it is not yet clear how these bursts relate to the sustained oscillation observed in rodents. Here, we examine depth electrode recordings from the temporal lobe of 13 presurgical epilepsy patients performing a self-paced spatial memory task in a virtual environment. In contrast to previous studies, we focus on movement-onset periods that incorporate both initial acceleration and an immediately preceding stationary interval associated with prominent theta oscillations in the rodent hippocampal formation. We demonstrate that movement-onset periods are associated with a significant increase in both low (2–5 Hz)- and high (6–9 Hz)-frequency theta power in the human hippocampus. Similar increases in low- and high-frequency theta power are seen across lateral temporal lobe recording sites and persist throughout the remainder of movement in both regions. In addition, we show that movement-related theta power is greater both before and during longer paths, directly implicating human hippocampal theta in the encoding of translational movement. These findings strengthen the connection between studies of theta-band activity in rodents and humans and offer additional insight into the neural mechanisms of spatial navigation. PMID:29078334
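The band-limited theta power reported here (low, 2–5 Hz; high, 6–9 Hz) can be illustrated with a direct DFT over an epoch. This is a hedged sketch, not the authors' analysis pipeline; the sampling rate and test signal are invented for illustration:

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Power of signal x (a list of samples) within [f_lo, f_hi] Hz,
    computed from a direct DFT of the whole epoch."""
    n = len(x)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / (n * n)
    return total

fs = 100.0                                  # assumed sampling rate (Hz)
sig = [math.sin(2 * math.pi * 7.0 * i / fs) for i in range(200)]  # 7 Hz "theta"
high = band_power(sig, fs, 6.0, 9.0)        # high-theta band
low = band_power(sig, fs, 2.0, 5.0)         # low-theta band
```

A 7 Hz oscillation concentrates its power in the 6–9 Hz band, mirroring how the two theta bands are separated in the study.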
Zavala, Baltazar; Pogosyan, Alek; Ashkan, Keyoumars; Zrinzo, Ludvic; Foltynie, Thomas; Limousin, Patricia; Brown, Peter
2014-01-01
Monitoring and evaluating movement errors to guide subsequent movements is a critical feature of normal motor control. Previously, we showed that the postmovement increase in electroencephalographic (EEG) beta power over the sensorimotor cortex reflects neural processes that evaluate motor errors consistent with Bayesian inference (Tan et al., 2014). Whether such neural processes are limited to this cortical region or involve the basal ganglia is unclear. Here, we recorded EEG over the cortex and local field potential (LFP) activity in the subthalamic nucleus (STN) from electrodes implanted in patients with Parkinson's disease, while they moved a joystick-controlled cursor to visual targets displayed on a computer screen. After movement offsets, we found increased beta activity in both local STN LFP and sensorimotor cortical EEG and in the coupling between the two, which was affected by both error magnitude and its contextual saliency. The postmovement increase in the coupling between STN and cortex was dominated by information flow from sensorimotor cortex to STN. However, an information drive appeared from STN to sensorimotor cortex in the first phase of the adaptation, when a constant rotation was applied between joystick inputs and cursor outputs. The strength of the STN to cortex drive correlated with the degree of adaptation achieved across subjects. These results suggest that oscillatory activity in the beta band may dynamically couple the sensorimotor cortex and basal ganglia after movements. In particular, beta activity driven from the STN to cortex indicates task-relevant movement errors, information that may be important in modifying subsequent motor responses. PMID:25505327
The virtual craniofacial patient: 3D jaw modeling and animation.
Enciso, Reyes; Memon, Ahmed; Fidaleo, Douglas A; Neumann, Ulrich; Mah, James
2003-01-01
In this paper, we present new developments in the area of 3D human jaw modeling and animation. CT (Computed Tomography) scans have traditionally been used to evaluate patients with dental implants, assess tumors, cysts, fractures and surgical procedures. More recently this data has been utilized to generate models. Researchers have reported semi-automatic techniques to segment and model the human jaw from CT images and manually segment the jaw from MRI images. Recently opto-electronic and ultrasonic-based systems (JMA from Zebris) have been developed to record mandibular position and movement. In this research project we introduce: (1) automatic patient-specific three-dimensional jaw modeling from CT data and (2) three-dimensional jaw motion simulation using jaw tracking data from the JMA system (Zebris).
Sensing Passive Eye Response to Impact Induced Head Acceleration Using MEMS IMUs.
Meng, Yuan; Bottenfield, Brent; Bolding, Mark; Liu, Lei; Adams, Mark L
2018-02-01
The eye may act as a surrogate for the brain in response to head acceleration during an impact. Passive eye movements in a dynamic system are sensed by microelectromechanical systems (MEMS) inertial measurement units (IMUs) in this paper. The technique is validated using a three-dimensional printed, scaled human skull model and on human volunteers by performing drop-and-impact experiments with ribbon-style flexible printed circuit board IMUs inserted in the eyes and reference IMUs on the heads. Data are captured by a microcontroller unit and processed using data fusion. Displacements are thus estimated and match the measured parameters. Relative accelerations and displacements of the eye with respect to the head are computed, indicating the influence of concussion-causing impacts.
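Estimating displacement from IMU acceleration samples, as described, reduces to double numerical integration of the eye-minus-head acceleration. A minimal sketch under the assumption of a constant relative acceleration (a toy input, not the experimental data):

```python
def double_integrate(accel, dt):
    """Estimate displacement from acceleration samples by integrating twice
    with the trapezoidal rule (initial velocity and displacement are zero)."""
    vel, disp = [0.0], [0.0]
    for i in range(1, len(accel)):
        vel.append(vel[-1] + 0.5 * (accel[i] + accel[i - 1]) * dt)
        disp.append(disp[-1] + 0.5 * (vel[i] + vel[i - 1]) * dt)
    return disp

dt = 0.001                       # assumed 1 kHz sampling
rel_acc = [2.0] * 1001           # constant 2 m/s^2 relative acceleration for 1 s
disp = double_integrate(rel_acc, dt)
# closed form: s = 0.5 * a * t^2 = 1.0 m after 1 s
```

Real IMU processing additionally needs bias removal and orientation compensation (the "data fusion" step), which this sketch omits.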
Bazzini, Ariel A; Johnstone, Timothy G; Christiano, Romain; Mackowiak, Sebastian D; Obermayer, Benedikt; Fleming, Elizabeth S; Vejnar, Charles E; Lee, Miler T; Rajewsky, Nikolaus; Walther, Tobias C; Giraldez, Antonio J
2014-01-01
Identification of the coding elements in the genome is a fundamental step to understanding the building blocks of living systems. Short peptides (< 100 aa) have emerged as important regulators of development and physiology, but their identification has been limited by their size. We have leveraged the periodicity of ribosome movement on the mRNA to define actively translated ORFs by ribosome footprinting. This approach identifies several hundred translated small ORFs in zebrafish and human. Computational prediction of small ORFs from codon conservation patterns corroborates and extends these findings and identifies conserved sequences in zebrafish and human, suggesting functional peptide products (micropeptides). These results identify micropeptide-encoding genes in vertebrates, providing an entry point to define their function in vivo. PMID:24705786
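The ribosome-footprint periodicity used to define actively translated ORFs can be illustrated by tallying footprint 5' ends per codon frame; translation shows a strong bias toward one of the three frames. The positions below are invented toy data, not from the study:

```python
def frame_periodicity(footprint_positions):
    """Fraction of ribosome-footprint 5' ends falling in each codon frame
    (0, 1, 2) relative to the annotated start of an ORF."""
    counts = [0, 0, 0]
    for p in footprint_positions:
        counts[p % 3] += 1
    total = sum(counts)
    return [c / total for c in counts]

# toy footprints: most 5' ends land in frame 0, as expected for translation
positions = [0, 3, 6, 9, 12, 4, 15, 18]
fracs = frame_periodicity(positions)
```

A near-uniform distribution across frames would instead suggest a non-translated region.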
Dance choreography is coordinated with song repertoire in a complex avian display.
Dalziell, Anastasia H; Peters, Richard A; Cockburn, Andrew; Dorland, Alexandra D; Maisey, Alex C; Magrath, Robert D
2013-06-17
All human cultures have music and dance, and the two activities are so closely integrated that many languages use just one word to describe both. Recent research points to a deep cognitive connection between music and dance-like movements in humans, fueling speculation that music and dance have coevolved and prompting the need for studies of audiovisual displays in other animals. However, little is known about how nonhuman animals integrate acoustic and movement display components. One striking property of human displays is that performers coordinate dance with music by matching types of dance movements with types of music, as when dancers waltz to waltz music. Here, we show that a bird also temporally coordinates a repertoire of song types with a repertoire of dance-like movements. During displays, male superb lyrebirds (Menura novaehollandiae) sing four different song types, matching each with a unique set of movements and delivering song and dance types in a predictable sequence. Crucially, display movements are both unnecessary for the production of sound and voluntary, because males sometimes sing without dancing. Thus, the coordination of independently produced repertoires of acoustic and movement signals is not a uniquely human trait. Copyright © 2013 Elsevier Ltd. All rights reserved.
Sparse Coding of Natural Human Motion Yields Eigenmotions Consistent Across People
NASA Astrophysics Data System (ADS)
Thomik, Andreas; Faisal, A. Aldo
2015-03-01
Providing a precise mathematical description of the structure of natural human movement is a challenging problem. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of spatial and temporal structure of behaviour observed in daily life. In perception, the analysis of natural scenes has shown that sparse codes of such scenes are information theoretic efficient descriptors with direct neuronal correlates. Translating from perception to action, we identify a generative model of movement generation by the human motor system. Using wearable full-hand motion capture, we measure the digit movement of the human hand in daily life. We learn a dictionary of ``eigenmotions'' which we use for sparse encoding of the movement data. We show that the dictionaries are generally well preserved across subjects with small deviations accounting for individuality of the person and variability in tasks. Further, the dictionary elements represent motions which can naturally describe hand movements. Our findings suggest the motor system can compose complex movement behaviours out of the spatially and temporally sparse activation of ``eigenmotion'' neurons, and is consistent with data on grasp-type specificity of specialised neurons in the premotor cortex. Andreas is supported by the Luxemburg Research Fund (1229297).
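Sparse encoding of movement data against an "eigenmotion" dictionary, as described above, can be sketched with greedy matching pursuit. The two-atom orthonormal dictionary below is a toy stand-in for the learned dictionary, not the authors' method:

```python
def matching_pursuit(x, dictionary, n_atoms):
    """Greedy sparse coding: repeatedly pick the atom most correlated with
    the residual and subtract its contribution (atoms assumed unit-norm)."""
    residual = list(x)
    code = [0.0] * len(dictionary)
    for _ in range(n_atoms):
        scores = [sum(r * a for r, a in zip(residual, atom)) for atom in dictionary]
        k = max(range(len(dictionary)), key=lambda i: abs(scores[i]))
        code[k] += scores[k]
        residual = [r - scores[k] * a for r, a in zip(residual, dictionary[k])]
    return code, residual

# toy "eigenmotion" dictionary: two orthonormal atoms in a 3-sample space
D = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
x = [3.0, -2.0, 0.0]
code, res = matching_pursuit(x, D, 2)
```

With an orthonormal dictionary the two selected coefficients reconstruct the signal exactly; real motion dictionaries are overcomplete and the code is only approximately sparse.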
Demšar, Urška; Çöltekin, Arzu
2017-01-01
Eye movements provide insights into what people pay attention to, and therefore are commonly included in a variety of human-computer interaction studies. Eye movement recording devices (eye trackers) produce gaze trajectories, that is, sequences of gaze locations on the screen. Despite recent technological developments that have enabled more affordable hardware, gaze data are still costly and time consuming to collect, so some propose using mouse movements instead. These are easy to collect automatically and on a large scale. If and how these two movement types are linked, however, is less clear and highly debated. We address this problem in two ways. First, we introduce a new movement analytics methodology to quantify the level of dynamic interaction between the gaze and the mouse pointer on the screen. Our method uses a volumetric representation of movement, the space-time densities, which allows us to calculate interaction levels between two physically different types of movement. We describe the method and compare the results with existing dynamic interaction methods from movement ecology. The sensitivity to method parameters is evaluated on simulated trajectories where we can control interaction levels. Second, we perform an experiment with eye and mouse tracking to generate real data with real levels of interaction, to apply and test our new methodology on a real case. Further, as our experiment task mimics route-tracing when using a map, it is more than a data collection exercise and simultaneously allows us to investigate the actual connection between the eye and the mouse. We find that there seems to be a natural coupling when the eyes are not under conscious control, but that this coupling breaks down when they are moved intentionally. Based on these observations, we tentatively suggest that for natural tracing tasks, mouse tracking could potentially provide similar information as eye tracking and therefore be used as a proxy for attention. However, more research is needed to confirm this. PMID:28777822
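One crude stand-in for a gaze-mouse interaction measure is the fraction of synchronized samples where the two pointers fall within a given radius of each other. This is only an illustration of the idea of quantifying dynamic interaction, far simpler than the space-time-density method the study introduces; the trajectories and threshold are invented:

```python
def proximity_interaction(traj_a, traj_b, radius):
    """Fraction of time-synchronized samples where two screen trajectories
    are within `radius` pixels of each other."""
    close = sum(
        1 for (xa, ya), (xb, yb) in zip(traj_a, traj_b)
        if (xa - xb) ** 2 + (ya - yb) ** 2 <= radius ** 2
    )
    return close / len(traj_a)

gaze = [(0, 0), (1, 1), (2, 2), (10, 10)]
mouse = [(0, 1), (1, 1), (5, 5), (10, 9)]
level = proximity_interaction(gaze, mouse, 2.0)  # 3 of 4 samples are close
```

A value near 1 would indicate tight gaze-mouse coupling; near 0, independent movement.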
Gentili, Rodolphe J; Oh, Hyuk; Kregling, Alissa V; Reggia, James A
2016-05-19
The human hand's versatility allows for robust and flexible grasping. To obtain such efficiency, many robotic hands include human biomechanical features such as fingers having their two last joints mechanically coupled. Although such coupling enables human-like grasping, controlling the inverse kinematics of such mechanical systems is challenging. Here we propose a cortical model for fine motor control of a humanoid finger, having its two last joints coupled, that learns the inverse kinematics of the effector. This neural model functionally mimics the population vector coding as well as sensorimotor prediction processes of the brain's motor/premotor and parietal regions, respectively. After learning, this neural architecture could both overtly (actual execution) and covertly (mental execution or motor imagery) perform accurate, robust and flexible finger movements while reproducing the main human finger kinematic states. This work contributes to developing neuro-mimetic controllers for dexterous humanoid robotic/prosthetic upper-extremities, and has the potential to promote human-robot interactions.
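The population vector coding the model mimics decodes a movement direction as the rate-weighted sum of each unit's preferred-direction vector. A minimal 2-D sketch (tuning directions and firing rates are toy values, not from the model):

```python
import math

def population_vector(preferred_dirs_deg, firing_rates):
    """Decode a 2-D movement direction (degrees) as the firing-rate-weighted
    vector sum of the units' preferred directions."""
    x = sum(r * math.cos(math.radians(d))
            for d, r in zip(preferred_dirs_deg, firing_rates))
    y = sum(r * math.sin(math.radians(d))
            for d, r in zip(preferred_dirs_deg, firing_rates))
    return math.degrees(math.atan2(y, x)) % 360

dirs = [0, 90, 180, 270]        # four units with orthogonal tuning
rates = [1.0, 1.0, 0.0, 0.0]    # 0- and 90-degree units equally active
decoded = population_vector(dirs, rates)
```

With the two active units tuned to 0 and 90 degrees, the decoded direction lands between them, at 45 degrees.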
Wilming, Niklas; Kietzmann, Tim C; Jutras, Megan; Xue, Cheng; Treue, Stefan; Buffalo, Elizabeth A; König, Peter
2017-01-01
Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. © The Author 2017. Published by Oxford University Press. PMID:28077512
The Structure of Borders in a Small World
Thiemann, Christian; Theis, Fabian; Grady, Daniel; Brune, Rafael; Brockmann, Dirk
2010-01-01
Territorial subdivisions and geographic borders are essential for understanding phenomena in sociology, political science, history, and economics. They influence the interregional flow of information and cross-border trade and affect the diffusion of innovation and technology. However, it is unclear if existing administrative subdivisions that typically evolved decades ago still reflect the most plausible organizational structure of today. The complexity of modern human communication, the ease of long-distance movement, and increased interaction across political borders complicate the operational definition and assessment of geographic borders that optimally reflect the multi-scale nature of today's human connectivity patterns. What border structures emerge directly from the interplay of scales in human interactions is an open question. Based on a massive proxy dataset, we analyze a multi-scale human mobility network and compute effective geographic borders inherent to human mobility patterns in the United States. We propose two computational techniques for extracting these borders and for quantifying their strength. We find that effective borders only partially overlap with existing administrative borders, and show that some of the strongest mobility borders exist in unexpected regions. We show that the observed structures cannot be generated by gravity models for human traffic. Finally, we introduce the concept of link significance that clarifies the observed structure of effective borders. Our approach represents a novel type of quantitative, comparative analysis framework for spatially embedded multi-scale interaction networks in general and may yield important insight into a multitude of spatiotemporal phenomena generated by human activity. PMID:21124970
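The gravity models that the authors show cannot generate the observed border structure predict interregional traffic from population sizes and distance. A minimal sketch of such a model (the parameters are illustrative defaults, not fitted values from the study):

```python
def gravity_flow(pop_i, pop_j, distance, g=1.0, exponent=2.0):
    """Predicted traffic between two regions under a simple gravity model:
    flow ~ g * P_i * P_j / d^exponent."""
    return g * pop_i * pop_j / distance ** exponent

# with exponent 2, doubling the distance quarters the predicted flow
near = gravity_flow(1e6, 2e6, 100.0)
far = gravity_flow(1e6, 2e6, 200.0)
```

Because such models depend only on populations and pairwise distance, they cannot reproduce sharp mobility borders that cut between nearby, populous regions, which is the study's point.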
Ethical Considerations in Human Movement Research.
ERIC Educational Resources Information Center
Olivier, Steve
1995-01-01
Highlights ethical issues for human subject research, identifying principles that form the construct of a code of research ethics and evaluating against this construct past human experimentation and current research in human movement studies. The efficacy of legislation and self-regulation is examined. Particular attention is given to the context…
Suzuki, Naoki; Hattori, Asaki; Hashizume, Makoto
2016-01-01
We constructed a four dimensional human model that is able to visualize the structure of a whole human body, including the inner structures, in real-time to allow us to analyze human dynamic changes in the temporal, spatial and quantitative domains. To verify whether our model was generating changes according to real human body dynamics, we measured a participant's skin expansion and compared it to that of the model conducted under the same body movement. We also made a contribution to the field of orthopedics, as we were able to devise a display method that enables the observer to more easily observe the changes made in the complex skeletal muscle system during body movements, which in the past were difficult to visualize.
Dumuid, Dorothea; Maher, Carol; Lewis, Lucy K; Stanford, Tyman E; Martín Fernández, Josep Antoni; Ratcliffe, Julie; Katzmarzyk, Peter T; Barreira, Tiago V; Chaput, Jean-Philippe; Fogelholm, Mikael; Hu, Gang; Maia, José; Sarmiento, Olga L; Standage, Martyn; Tremblay, Mark S; Tudor-Locke, Catrine; Olds, Timothy
2018-06-01
Health-related quality of life has been related to physical activity, sedentary behavior, and sleep among children from developed nations. These relationships have rarely been assessed in developing nations, nor have behaviors been considered in their true context, as mutually exclusive and exhaustive parts of the movement behavior composition. This study aimed to explore whether children's health-related quality of life is related to their movement behavior composition and whether the relationship differs according to human development index. Children aged 9-11 years (n = 5855), from the 12-nation cross-sectional observational International Study of Childhood Obesity, Lifestyle and the Environment 2011-2013, self-reported their health-related quality of life (KIDSCREEN-10). Daily movement behaviors were derived from 24-h, 7-day accelerometry. Isometric log-ratio mixed-effect linear models were used to calculate estimates for difference in health-related quality of life for the reallocation of time between daily movement behaviors. Children from countries of higher human development index reported stronger positive relationships between health-related quality of life and moderate-to-vigorous physical activity, relative to the remaining behaviors (r = 0.75, p = 0.005) than those from lower human development index countries. In the very high human development index strata alone, health-related quality of life was significantly related to the movement behavior composition (p = 0.005), with moderate-to-vigorous physical activity (relative to remaining behaviors) being positively associated with health-related quality of life. The relationship between children's health-related quality of life and their movement behaviors is moderated by their country's human development index. This should be considered when 24-h movement behavior guidelines are developed for children around the world.
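The isometric log-ratio treatment of a movement behavior composition can be illustrated with a simplified two-part composition (the study used the full set of daily behaviors; the times below are invented):

```python
import math

def ilr_2part(a, b):
    """Isometric log-ratio coordinate of a two-part composition,
    e.g., time in moderate-to-vigorous physical activity vs. the rest of the day."""
    return math.log(a / b) / math.sqrt(2)

# reallocating 30 min from the remaining behaviors to MVPA keeps the
# 24-h total fixed, which is the point of compositional analysis
mvpa, rest = 1.0, 23.0
z_before = ilr_2part(mvpa, rest)
z_after = ilr_2part(mvpa + 0.5, rest - 0.5)
```

The ilr coordinate, rather than raw hours, is what enters the mixed-effect linear models, so every estimate is interpreted as a reallocation of time between behaviors.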
Anatomy of emotion: a 3D study of facial mimicry.
Ferrario, V F; Sforza, C
2007-01-01
Alterations in facial motion severely impair the quality of life and social interaction of patients, and an objective grading of facial function is necessary. A method for the non-invasive detection of 3D facial movements was developed. Sequences of six standardized facial movements (maximum smile; free smile; surprise with closed mouth; surprise with open mouth; right side eye closure; left side eye closure) were recorded in 20 healthy young adults (10 men, 10 women) using an optoelectronic motion analyzer. For each subject, 21 cutaneous landmarks were identified by 2-mm reflective markers, and their 3D movements during each facial animation were computed. Three repetitions of each expression were recorded (within-session error), and four separate sessions were used (between-session error). To assess the within-session error, the technical error of the measurement (random error, TEM) was computed separately for each sex, movement and landmark. To assess the between-session repeatability, the standard deviation among the mean displacements of each landmark (four independent sessions) was computed for each movement. TEM for the single landmarks ranged between 0.3 and 9.42 mm (intrasession error). The sex- and movement-related differences were statistically significant (two-way analysis of variance, p=0.003 for sex comparison, p=0.009 for the six movements, p<0.001 for the sex x movement interaction). Among four different (independent) sessions, the left eye closure had the worst repeatability, the right eye closure had the best one; the differences among various movements were statistically significant (one-way analysis of variance, p=0.041). In conclusion, the current protocol demonstrated a sufficient repeatability for a future clinical application. Great care should be taken to assure a consistent marker positioning in all the subjects.
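The technical error of measurement (TEM) used above for repeated landmark displacements follows the standard formula for paired repeats, sqrt(Σd²/2n). A minimal sketch with invented measurement values:

```python
import math

def technical_error(meas1, meas2):
    """Technical error of measurement (TEM) for paired repeat measurements:
    sqrt(sum of squared pairwise differences / (2 * n))."""
    diffs = [a - b for a, b in zip(meas1, meas2)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

# two repeated sessions of the same three landmark displacements (mm)
rep1 = [10.0, 12.0, 11.0]
rep2 = [10.2, 11.8, 11.4]
tem = technical_error(rep1, rep2)
```

TEM is expressed in the same units as the measurement (here millimeters), matching the 0.3-9.42 mm range reported in the study.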
Charbonnier, Caecilia; Kolo, Frank C; Duthon, Victoria B; Magnenat-Thalmann, Nadia; Becker, Christoph D; Hoffmeyer, Pierre; Menetrey, Jacques
2011-03-01
Early hip osteoarthritis in dancers could be explained by femoroacetabular impingements. However, there is a lack of validated noninvasive methods and dynamic studies to ascertain impingement during motion. Moreover, it is unknown whether the femoral head and acetabulum are congruent in typical dancing positions. The practice of some dancing movements could cause a loss of hip joint congruence and recurrent impingements, which could lead to early osteoarthritis. Descriptive laboratory study. Eleven pairs of female dancer's hips were motion captured with an optical tracking system while performing 6 different dancing movements. The resulting computed motions were applied to patient-specific hip joint 3-dimensional models based on magnetic resonance images. While visualizing the dancer's hip in motion, the authors detected impingements using computer-assisted techniques. The range of motion and congruence of the hip joint were also quantified in those 6 recorded dancing movements. The frequency of impingement and subluxation varied with the type of movement. Four dancing movements (développé à la seconde, grand écart facial, grand écart latéral, and grand plié) seem to induce significant stress in the hip joint, according to the observed high frequency of impingement and amount of subluxation. The femoroacetabular translations were high (range, 0.93 to 6.35 mm). For almost all movements, the computed zones of impingement were mainly located in the superior or posterosuperior quadrant of the acetabulum, which was relevant with respect to radiologically diagnosed damaged zones in the labrum. All dancers' hips were morphologically normal. Impingements and subluxations are frequently observed in typical ballet movements, causing cartilage hypercompression. These movements should be limited in frequency. The present study indicates that some dancing movements could damage the hip joint, which could lead to early osteoarthritis.
The future of computer-aided sperm analysis
Mortimer, Sharon T; van der Horst, Gerhard; Mortimer, David
2015-01-01
Computer-aided sperm analysis (CASA) technology was developed in the late 1980s for analyzing sperm movement characteristics or kinematics and has been highly successful in enabling this field of research. CASA has also been used with great success for measuring semen characteristics such as sperm concentration and proportions of progressive motility in many animal species, including wide application in domesticated animal production laboratories and reproductive toxicology. However, attempts to use CASA for human clinical semen analysis have largely met with poor success due to the inherent difficulties presented by many human semen samples caused by sperm clumping and heavy background debris that, until now, have precluded accurate digital image analysis. The authors review the improved capabilities of two modern CASA platforms (Hamilton Thorne CASA-II and Microptic SCA6) and consider their current and future applications with particular reference to directing our focus towards using this technology to assess functional rather than simple descriptive characteristics of spermatozoa. Specific requirements for validating CASA technology as a semi-automated system for human semen analysis are also provided, with particular reference to the accuracy and uncertainty of measurement expected of a robust medical laboratory test for implementation in clinical laboratories operating according to modern accreditation standards. PMID:25926614
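The sperm kinematic measures that CASA systems report can be computed from a tracked head-centroid path. A minimal sketch of three standard measures, VCL, VSL, and LIN (the track is a toy zig-zag, not real data):

```python
import math

def casa_kinematics(track, dt):
    """Basic CASA kinematics from a sperm head track [(x, y), ...] sampled
    every dt seconds: VCL (curvilinear velocity along the track), VSL
    (straight-line velocity start-to-end), and LIN = VSL / VCL."""
    path = sum(math.dist(track[i], track[i + 1]) for i in range(len(track) - 1))
    duration = dt * (len(track) - 1)
    vcl = path / duration
    vsl = math.dist(track[0], track[-1]) / duration
    return vcl, vsl, vsl / vcl

# zig-zag track: the curvilinear path exceeds the straight-line displacement
track = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
vcl, vsl, lin = casa_kinematics(track, 0.5)
```

LIN near 1 indicates progressive, nearly straight swimming; lower values indicate a more convoluted path.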
Moving in the Anthropocene: Global reductions in terrestrial mammalian movements.
Tucker, Marlee A; Böhning-Gaese, Katrin; Fagan, William F; Fryxell, John M; Van Moorter, Bram; Alberts, Susan C; Ali, Abdullahi H; Allen, Andrew M; Attias, Nina; Avgar, Tal; Bartlam-Brooks, Hattie; Bayarbaatar, Buuveibaatar; Belant, Jerrold L; Bertassoni, Alessandra; Beyer, Dean; Bidner, Laura; van Beest, Floris M; Blake, Stephen; Blaum, Niels; Bracis, Chloe; Brown, Danielle; de Bruyn, P J Nico; Cagnacci, Francesca; Calabrese, Justin M; Camilo-Alves, Constança; Chamaillé-Jammes, Simon; Chiaradia, Andre; Davidson, Sarah C; Dennis, Todd; DeStefano, Stephen; Diefenbach, Duane; Douglas-Hamilton, Iain; Fennessy, Julian; Fichtel, Claudia; Fiedler, Wolfgang; Fischer, Christina; Fischhoff, Ilya; Fleming, Christen H; Ford, Adam T; Fritz, Susanne A; Gehr, Benedikt; Goheen, Jacob R; Gurarie, Eliezer; Hebblewhite, Mark; Heurich, Marco; Hewison, A J Mark; Hof, Christian; Hurme, Edward; Isbell, Lynne A; Janssen, René; Jeltsch, Florian; Kaczensky, Petra; Kane, Adam; Kappeler, Peter M; Kauffman, Matthew; Kays, Roland; Kimuyu, Duncan; Koch, Flavia; Kranstauber, Bart; LaPoint, Scott; Leimgruber, Peter; Linnell, John D C; López-López, Pascual; Markham, A Catherine; Mattisson, Jenny; Medici, Emilia Patricia; Mellone, Ugo; Merrill, Evelyn; de Miranda Mourão, Guilherme; Morato, Ronaldo G; Morellet, Nicolas; Morrison, Thomas A; Díaz-Muñoz, Samuel L; Mysterud, Atle; Nandintsetseg, Dejid; Nathan, Ran; Niamir, Aidin; Odden, John; O'Hara, Robert B; Oliveira-Santos, Luiz Gustavo R; Olson, Kirk A; Patterson, Bruce D; Cunha de Paula, Rogerio; Pedrotti, Luca; Reineking, Björn; Rimmler, Martin; Rogers, Tracey L; Rolandsen, Christer Moe; Rosenberry, Christopher S; Rubenstein, Daniel I; Safi, Kamran; Saïd, Sonia; Sapir, Nir; Sawyer, Hall; Schmidt, Niels Martin; Selva, Nuria; Sergiel, Agnieszka; Shiilegdamba, Enkhtuvshin; Silva, João Paulo; Singh, Navinder; Solberg, Erling J; Spiegel, Orr; Strand, Olav; Sundaresan, Siva; Ullmann, Wiebke; Voigt, Ulrich; Wall, Jake; Wattles, David; Wikelski, Martin; 
Wilmers, Christopher C; Wilson, John W; Wittemyer, George; Zięba, Filip; Zwijacz-Kozica, Tomasz; Mueller, Thomas
2018-01-26
Animal movement is fundamental for ecosystem functioning and species survival, yet the effects of the anthropogenic footprint on animal movements have not been estimated across species. Using a unique GPS-tracking database of 803 individuals across 57 species, we found that movements of mammals in areas with a comparatively high human footprint were on average one-half to one-third the extent of their movements in areas with a low human footprint. We attribute this reduction to behavioral changes of individual animals and to the exclusion of species with long-range movements from areas with higher human impact. Global loss of vagility alters a key ecological trait of animals that affects not only population persistence but also ecosystem processes such as predator-prey interactions, nutrient cycling, and disease transmission. Copyright © 2018, The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
[The P300-based brain-computer interface: presentation of the complex "flash + movement" stimuli].
Ganin, I P; Kaplan, A Ia
2014-01-01
The P300-based brain-computer interface (BCI) requires the detection of the P300 wave of brain event-related potentials. Most users learn BCI control in several minutes, and after short classifier training they can type text on the computer screen or assemble an image from separate fragments in simple BCI-based video games. Nevertheless, insufficient attractiveness for users and conservative stimulus organization may restrict this BCI's integration into real information-control processes. At the same time, the initial movement of an object (a motion-onset stimulus) may be an independent factor that induces the P300 wave. In the current work we tested the hypothesis that complex "flash + movement" stimuli, together with a striking and compact stimulus layout on the computer screen, may be much more attractive to users operating a P300 BCI. In a study of 20 subjects we showed the effectiveness of our interface. Both accuracy and P300 amplitude were higher for flashing stimuli and complex "flash + movement" stimuli than for motion-onset stimuli. N200 amplitude was maximal for flashing stimuli, while for "flash + movement" and motion-onset stimuli it was only half as large. Similar BCIs with complex stimuli may be embedded into compact control systems that require a high level of user attention under negative external conditions obstructing BCI control.
Evaluation of the leap motion controller as a new contact-free pointing device.
Bachmann, Daniel; Weichert, Frank; Rinkenauer, Gerhard
2014-12-24
This paper presents a Fitts' law-based analysis of users' performance in selection tasks with the Leap Motion Controller compared with a standard mouse device. The Leap Motion Controller (LMC) is a new contact-free input system for gesture-based human-computer interaction with declared sub-millimeter accuracy. Up to this point, there has hardly been any systematic evaluation of this new system available. With an error rate of 7.8% for the LMC versus 2.8% for the mouse, movement times twice as long as with the mouse, and high overall effort ratings, the Leap Motion Controller's performance as an input device for everyday generic computer pointing tasks is rather limited, at least with regard to the selection recognition provided by the LMC.
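The Fitts' law analysis reported here rests on two standard quantities, the index of difficulty and throughput. A minimal sketch of their computation, where the target distance, width, and movement times below are hypothetical illustrations, not the paper's experimental conditions:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty over movement time (s)."""
    return index_of_difficulty(distance, width) / movement_time

# Hypothetical pointing task: 512 px target distance, 32 px target width.
ID = index_of_difficulty(512, 32)      # log2(17), about 4.09 bits
tp_mouse = throughput(512, 32, 0.8)    # mouse: 0.8 s per selection (assumed)
tp_lmc = throughput(512, 32, 1.6)      # LMC: movement times about 2x longer
```

With movement times twice as long at the same task geometry, throughput for the contact-free device is exactly half that of the mouse, which is one way to summarize the performance gap the authors report.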
Human sperm pattern of movement during chemotactic re-orientation towards a progesterone source
Blengini, Cecilia Soledad; Teves, Maria Eugenia; Uñates, Diego Rafael; Guidobaldi, Héctor Alejandro; Gatica, Laura Virginia; Giojalas, Laura Cecilia
2011-01-01
Human spermatozoa may locate the egg chemotactically by following an increasing gradient of attractant molecules. Although human spermatozoa have been observed to show several of the physiological characteristics of chemotaxis, the chemotactic pattern of movement has not been easy to describe. However, it is apparent that chemotactic cells may be identified while returning to the attractant source. This study characterizes the pattern of movement of human spermatozoa during chemotactic re-orientation towards a source of progesterone, a candidate physiological attractant. By means of videomicroscopy and image analysis, a chemotactic pattern of movement was identified as the spermatozoon returned towards the source of a chemotactic concentration of progesterone (10 pmol l−1). First, as a continuation of its original path, the spermatozoon swims away from the progesterone source with linear movement and then turns back with a transitional movement characterized by increased velocity and decreased linearity. This behaviour may help the spermatozoon re-orient itself towards a progesterone source and may be used to identify the few cells that are undergoing chemotaxis at a given time. PMID:21765441
Goostrey, Sonya; Treleaven, Julia; Johnston, Venerina
2014-05-01
This study evaluated the impact on neck movement and muscle activity of placing documents in three commonly used locations: in-line, flat on the desktop left of the keyboard, and laterally placed level with the computer screen. Neck excursion during three standard head movements between the computer monitor and each document location, and neck extensor and upper trapezius muscle activity during a 5 min typing task for each document location, were measured in 20 healthy participants. Results indicated that muscle activity and neck flexion were least when documents were placed laterally, suggesting this may be the optimal location. The desktop option produced both the greatest neck movement and the greatest muscle activity in all muscle groups. The in-line document location required significantly more neck flexion but less lateral flexion and rotation than the laterally placed document. Evaluation of other document holders is needed to guide decision making for this commonly used piece of office equipment. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Lawton, Teri B.
1989-01-01
A cortical neural network that computes the visibility of shifts in the direction of movement is proposed. The network computes: (1) the magnitude of the position difference between the test and background patterns; (2) localized contrast differences at different spatial scales, analyzed by computing temporal gradients of the difference and sum of the outputs of paired even- and odd-symmetric bandpass filters convolved with the input pattern; and (3) the direction a test pattern moved relative to a textured background, using global processes that pool the output from paired even- and odd-symmetric simple and complex cells across the spatial extent of the background frame of reference. Evidence that magnocellular pathways are used to discriminate the direction of movement is presented. Because magnocellular pathways are used for this discrimination, the task is not affected by small pattern changes such as jitter, short presentations, blurring, or the differences in background contrast that result when the veiling illumination in a scene changes.
Comparison of a Conceptual Groundwater Model and a Physically Based Groundwater Model
NASA Astrophysics Data System (ADS)
Yang, J.; Zammit, C.; Griffiths, J.; Moore, C.; Woods, R. A.
2017-12-01
Groundwater is a vital resource for human activities, including agricultural practice and urban water demand. Hydrologic modelling is an important way to study groundwater recharge, movement and discharge, and their response to both human activity and climate change. To understand groundwater hydrologic processes nationally in New Zealand, we have developed a conceptually based groundwater flow model that is fully integrated into a national surface-water model (TopNet) and is able to simulate groundwater recharge, movement, and interaction with surface water. To demonstrate the capability of this groundwater model (TopNet-GW), we applied it to an irrigated area with water-shortage and pollution problems in the upper Ruamahanga catchment in the Greater Wellington Region, New Zealand, and compared its performance with a physically based groundwater model (MODFLOW). The comparison includes river flow at flow-gauging sites and the interaction between groundwater and the river. Results showed that TopNet-GW produced flow and groundwater-interaction patterns similar to those of the MODFLOW model, but took less computation time. This shows the conceptually based groundwater model has the potential to simulate groundwater processes nationally, and could be used as a surrogate for the more physically based model.
Quantifying the quality of hand movement in stroke patients through three-dimensional curvature.
Osu, Rieko; Ota, Kazuko; Fujiwara, Toshiyuki; Otaka, Yohei; Kawato, Mitsuo; Liu, Meigen
2011-10-31
To more accurately evaluate rehabilitation outcomes in stroke patients, movement irregularities should be quantified. Previous work in stroke patients has revealed a reduction in trajectory smoothness and a segmentation of continuous movements. Clinically, the Stroke Impairment Assessment Set (SIAS) evaluates the clumsiness of arm movements using an ordinal scale based on the examiner's observations. In this study, we focused on the three-dimensional curvature of the hand trajectory to quantify movement, and aimed to establish a novel measurement that is independent of movement duration. We compared the proposed measurement with the SIAS score and with the jerk measure representing temporal smoothness. Sixteen stroke patients with SIAS upper limb proximal motor function (Knee-Mouth test) scores ranging from 2 (incomplete performance) to 4 (mild clumsiness) were recruited. Nine healthy participants with a SIAS score of 5 (normal) also participated. Participants were asked to grasp a plastic glass and repetitively move it from the lap to the mouth and back at a comfortable speed for 30 s, during which the hand movement was measured using OPTOTRAK. The position data were numerically differentiated and the three-dimensional curvature was computed. To compare against a previously proposed measure, the mean squared jerk normalized by its minimum value was computed. Age-matched healthy participants were instructed to move the glass at three different movement speeds. There was an inverse relationship between the curvature of the movement trajectory and the patient's SIAS score. The median of the -log of curvature (MedianLC) correlated well with the SIAS score, the upper extremity subsection of the Fugl-Meyer Assessment, and the jerk measure in the paretic arm. When the healthy participants moved slowly, the increase in the jerk measure was comparable to that of paretic movements with a SIAS score of 2 to 4, while the MedianLC remained distinguishable from paretic movements.
Measurement based on curvature was able to quantify movement irregularities and matched well with the examiner's observations. The results suggest that the quality of paretic movements is well characterized using spatial smoothness represented by curvature. The smaller computational costs associated with this measurement suggest that this method has potential clinical utility. © 2011 Osu et al; licensee BioMed Central Ltd.
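The curvature-based measure described above can be sketched directly from sampled positions: velocity and acceleration via numerical differentiation, curvature as |v × a| / |v|³, and MedianLC as the median of its negative log. A minimal illustration on a synthetic helix, whose trajectory and sampling are assumptions for demonstration, not the OPTOTRAK recordings:

```python
import numpy as np

def median_log_curvature(positions, dt):
    """MedianLC: median of -log of 3D curvature along a trajectory.

    positions: (N, 3) array of positions sampled at interval dt.
    """
    v = np.gradient(positions, dt, axis=0)   # velocity (numerical derivative)
    a = np.gradient(v, dt, axis=0)           # acceleration
    speed = np.linalg.norm(v, axis=1)
    cross = np.cross(v, a)
    curvature = np.linalg.norm(cross, axis=1) / np.maximum(speed, 1e-12) ** 3
    curvature = curvature[curvature > 0]     # -log is undefined at zero curvature
    return np.median(-np.log(curvature))

# Synthetic trajectory: a helix has constant curvature 1/(1 + c^2) for
# x = cos(t), y = sin(t), z = c*t, so MedianLC should recover -log of that.
t = np.linspace(0, 2 * np.pi, 1000)
helix = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
mlc = median_log_curvature(helix, t[1] - t[0])  # approx log(1.01)
```

Because the median is taken over the whole trajectory, the measure is robust to endpoint differentiation error and, unlike the jerk measure, does not rescale with movement duration.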
Simulating closed- and open-loop voluntary movement: a nonlinear control-systems approach.
Davidson, Paul R; Jones, Richard D; Andreae, John H; Sirisena, Harsha R
2002-11-01
In many recent human motor control models, including feedback-error learning and adaptive model theory (AMT), feedback control is used to correct errors while an inverse model is simultaneously tuned to provide accurate feedforward control. This popular and appealing hypothesis, based on a combination of psychophysical observations and engineering considerations, predicts that once the tuning of the inverse model is complete the role of feedback control is limited to the correction of disturbances. This hypothesis was tested by looking at the open-loop behavior of the human motor system during adaptation. An experiment was carried out involving 20 normal adult subjects who learned a novel visuomotor relationship on a pursuit tracking task with a steering wheel for input. During learning, the response cursor was periodically blanked, removing all feedback about the external system (i.e., about the relationship between hand motion and response cursor motion). Open-loop behavior was not consistent with a progressive transfer from closed- to open-loop control. Our recently developed computational model of the brain--a novel nonlinear implementation of AMT--was able to reproduce the observed closed- and open-loop results. In contrast, other control-systems models exhibited only minimal feedback control following adaptation, leading to incorrect open-loop behavior. This is because our model continues to use feedback to control slow movements after adaptation is complete. This behavior enhances the internal stability of the inverse model. In summary, our computational model is currently the only motor control model able to accurately simulate the closed- and open-loop characteristics of the experimental response trajectories.
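The feedback-error-learning idea the abstract refers to, in which a feedback controller corrects errors while the same error signal tunes an inverse model used for feedforward control, can be sketched on a toy scalar plant. All quantities below (the plant gain, feedback gain, and learning rate) are illustrative assumptions, not the AMT implementation:

```python
import numpy as np

# Feedback-error learning on a hypothetical scalar plant y = w_true * u.
w_true = 2.0   # unknown plant gain
w_inv = 0.1    # inverse-model estimate of 1/w_true (starts poorly tuned)
Kp = 0.5       # feedback gain
lr = 0.05      # inverse-model learning rate

rng = np.random.default_rng(0)
for _ in range(2000):
    target = rng.uniform(-1, 1)
    u_ff = w_inv * target             # feedforward command from inverse model
    y_ff = w_true * u_ff
    u_fb = Kp * (target - y_ff)       # feedback corrects the residual error
    u = u_ff + u_fb
    # Key idea: the feedback command itself is the training signal
    # for the inverse model.
    w_inv += lr * u_fb * target

# After adaptation w_inv approaches 1/w_true = 0.5 and the feedback
# contribution shrinks toward zero.
```

In this caricature, feedback control vanishes once the inverse model is tuned; the paper's point is that the human data (and their nonlinear AMT model) do not show this complete handover for slow movements.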
NASA Astrophysics Data System (ADS)
Handford, Matthew L.; Srinivasan, Manoj
2016-02-01
Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here we explicitly model the human-prosthesis interaction to produce a prediction of the user's walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost than a non-amputee, even when the non-amputee's ankle torques are assumed to be cost-free.
The Use of Census Migration Data to Approximate Human Movement Patterns across Temporal Scales
Wesolowski, Amy; Buckee, Caroline O.; Pindolia, Deepa K.; Eagle, Nathan; Smith, David L.; Garcia, Andres J.; Tatem, Andrew J.
2013-01-01
Human movement plays a key role in economies and development, the delivery of services, and the spread of infectious diseases. However, it remains poorly quantified, partly because reliable data are often lacking, particularly for low-income countries. The most widely available data are migration data from human population censuses, which provide valuable information on relatively long-timescale relocations across countries but do not capture the shorter-scale patterns, trips of less than a year, that make up the bulk of human movement. Census-derived migration data may nevertheless provide valuable proxies for shorter-term movements, as substantial migration between regions can be indicative of well-connected places exhibiting high levels of movement at finer time scales, but this has never been examined in detail. Here, an extensive mobile phone usage data set for Kenya was processed to extract movements between counties in 2009 on weekly, monthly, and annual time scales and compared to data on change of residence from the national census conducted during the same time period. We find that the relative ordering across Kenyan counties for incoming, outgoing and between-county movements shows strong correlations. Moreover, the distributions of trip durations from both sources of data are similar, and a spatial interaction model fit to the data reveals the relationships of different parameters over a range of movement time scales. Significant relationships between census migration data and fine temporal scale movement patterns exist, and results suggest that census data can be used to approximate certain features of movement patterns across multiple temporal scales, extending the utility of census-derived migration data. PMID:23326367
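Spatial interaction models of the kind mentioned above are often gravity-type models, in which flow between two places grows with their populations and decays with distance. A minimal sketch under that assumption; the functional form, exponents, populations, and distances below are illustrative, not the fitted model from the study:

```python
import numpy as np

def gravity_flows(pop, dist, k=1.0, alpha=1.0, beta=1.0, gamma=2.0):
    """Predicted flows T_ij = k * P_i**alpha * P_j**beta / d_ij**gamma."""
    P = np.asarray(pop, dtype=float)
    T = k * np.outer(P ** alpha, P ** beta) / np.asarray(dist, float) ** gamma
    np.fill_diagonal(T, 0.0)   # no within-region flow
    return T

# Hypothetical three-region example (populations in thousands, distances in km;
# diagonal distances are placeholders since those flows are zeroed anyway).
pop = [1000, 500, 200]
dist = np.array([[1.0, 100.0, 200.0],
                 [100.0, 1.0, 150.0],
                 [200.0, 150.0, 1.0]])
T = gravity_flows(pop, dist)   # T[0, 1] = 1000*500/100**2 = 50
```

Fitting such a model separately to weekly, monthly, and annual flow matrices lets one compare how the distance-decay and population exponents change across movement time scales, which is the comparison the abstract describes.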
Impact of elicited mood on movement expressivity during a fitness task.
Giraud, Tom; Focone, Florian; Isableu, Brice; Martin, Jean-Claude; Demulier, Virginie
2016-10-01
The purpose of the present study was to evaluate the impact of four mood conditions (control, positive, negative, aroused) on movement expressivity during a fitness task. Motion capture data from twenty individuals were recorded as they performed a predefined motion sequence. Moods were elicited using task-specific scenarios to maintain a valid context. Movement qualities inspired by the Effort-Shape framework (Laban & Ullmann, 1971) were computed (i.e., Impulsiveness, Energy, Directness, Jerkiness and Expansiveness). A reduced number of computed features from each movement quality was selected via Principal Component Analyses. Analyses of variance and Generalized Linear Mixed Models were used to identify movement characteristics discriminating the four mood conditions. The aroused mood condition was strongly associated with increased mean Energy compared to the three other conditions. The positive and negative mood conditions showed more subtle differences, interpreted as a result of their moderate activation level. Positive mood was associated with more impulsive movements and negative mood with more tense movements (i.e., reduced variability and increased Jerkiness). The findings evidence the key role of movement qualities in capturing the motion signatures of moods and highlight the importance of task context in their interpretation. Copyright © 2016 Elsevier B.V. All rights reserved.
Spontaneous Movements of a Computer Mouse Reveal Egoism and In-group Favoritism.
Maliszewski, Norbert; Wojciechowski, Łukasz; Suszek, Hubert
2017-01-01
The purpose of the project was to assess whether the first spontaneous movements of a computer mouse, made while responding on a scale presented on the screen, may express a respondent's implicit attitudes. In Study 1, the altruistic behaviors of 66 students were assessed. The students were led to believe that the task they were performing was also being performed by another person, and they were asked to distribute earnings between themselves and this partner. The participants performed the tasks under conditions with and without distractors. With distractors, spontaneous mouse movements on the scale expressed a selfish distribution of money in the first few seconds, while later the movements gravitated toward more altruism. In Study 2, 77 Polish students evaluated a painting by a Polish/Jewish painter on a scale, under conditions of full or distracted cognitive capacity. Spontaneous movements of the mouse on the scale were analyzed. In addition, implicit attitudes toward both Poles and Jews were measured with the Implicit Association Test (IAT). A significant association between implicit attitudes (IAT) and spontaneous evaluation of the images using a computer mouse was observed in the group with the distractor. Participants with strong implicit in-group favoritism toward Poles revealed a stronger preference for the Polish painter's work in the first few seconds of mouse movement. Taken together, these results suggest that spontaneous mouse movements may reveal egoism and in-group favoritism, i.e., processes that were not observed in the participants' final decisions (clicking on the scale).
Comparison of visual sensitivity to human and object motion in autism spectrum disorder.
Kaiser, Martha D; Delmolino, Lara; Tanaka, James W; Shiffrar, Maggie
2010-08-01
Successful social behavior requires the accurate detection of other people's movements. Consistent with this, typical observers demonstrate enhanced visual sensitivity to human movement relative to equally complex, nonhuman movement [e.g., Pinto & Shiffrar, 2009]. A psychophysical study investigated visual sensitivity to human motion relative to object motion in observers with autism spectrum disorder (ASD). Participants viewed point-light depictions of a moving person and, for comparison, a moving tractor and discriminated between coherent and scrambled versions of these stimuli in unmasked and masked displays. There were three groups of participants: young adults with ASD, typically developing young adults, and typically developing children. Across masking conditions, typical observers showed enhanced visual sensitivity to human movement while observers in the ASD group did not. Because the human body is an inherently social stimulus, this result is consistent with social brain theories [e.g., Pelphrey & Carter, 2008; Schultz, 2005] and suggests that the visual systems of individuals with ASD may not be tuned for the detection of socially relevant information such as the presence of another person. Reduced visual sensitivity to human movements could compromise important social behaviors including, for example, gesture comprehension.
Computational Models Reveal a Passive Mechanism for Cell Migration in the Crypt
Dunn, Sara-Jane; Näthke, Inke S.; Osborne, James M.
2013-01-01
Cell migration in the intestinal crypt is essential for the regular renewal of the epithelium, and the continued upward movement of cells is a key characteristic of healthy crypt dynamics. However, the driving force behind this migration is unknown. Possibilities include mitotic pressure, active movement driven by motility cues, or negative pressure arising from cell loss at the crypt collar. It is possible that a combination of factors together coordinate migration. Here, three different computational models are used to provide insight into the mechanisms that underpin cell movement in the crypt, by examining the consequence of eliminating cell division on cell movement. Computational simulations agree with existing experimental results, confirming that migration can continue in the absence of mitosis. Importantly, however, simulations allow us to infer mechanisms that are sufficient to generate cell movement, which is not possible through experimental observation alone. The results produced by the three models agree and suggest that cell loss due to apoptosis and extrusion at the crypt collar relieves cell compression below, allowing cells to expand and move upwards. This finding suggests that future experiments should focus on the role of apoptosis and cell extrusion in controlling cell migration in the crypt. PMID:24260407
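The passive, decompression-driven mechanism that the three models converge on can be caricatured in one dimension: cells stacked in a compressed column move upward when cell loss at the collar lets the stack re-expand, with no division required. A deliberately minimal sketch, in which the cell counts and heights are arbitrary assumptions:

```python
def cell_centers(n, column_height):
    """Centers of n equal-height cells stacked in a column of given height."""
    h = column_height / n
    return [h * (i + 0.5) for i in range(n)]

# 20 cells compressed into a crypt column of height 10; each cell's rest
# height is 1, so the stack sits at half its relaxed height.
before = cell_centers(20, 10.0)

# Apoptosis/extrusion removes the top 5 cells; the remaining 15 expand back
# toward rest height and now occupy a column of height 15.
after = cell_centers(15, 15.0)

# Every surviving cell's center moves upward without any cell division.
moved_up = all(b < a for b, a in zip(before, after))
```

The point of the caricature matches the simulation finding: relieving compression from above is by itself sufficient to generate upward movement of the cells below.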
Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars
2015-10-01
A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice on the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed; Qmean and QSD reflect the amount of motion, while CSD reflects the variability of the spatial center of the infant's motion. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively, and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs (p<0.05). This study showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between our computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
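The ICC(3,1) figures reported above come from a standard two-way ANOVA decomposition (Shrout and Fleiss's two-way mixed, single-measures form). A minimal sketch of that computation; the test-retest values below are made up for illustration, not the study's recordings:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed, single measures (Shrout & Fleiss).

    ratings: (n_subjects, k_sessions) array, one row per subject.
    """
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-session means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical test-retest data: two same-day recordings of one variable.
data = np.array([[3.1, 3.0], [2.4, 2.6], [4.0, 3.9], [3.3, 3.5], [2.8, 2.7]])
icc = icc_3_1(data)
```

ICC(1,1) differs only in pooling the session effect into the error term; values near 0.8-0.9, as reported here, indicate that between-infant differences dominate session-to-session noise.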
Han, Zhuyang; To, Gin Nam Sze; Fu, Sau Chung; Chao, Christopher Yu-Hang; Weng, Wenguo; Huang, Quanyi
2014-08-06
Airborne transmission of respiratory infectious disease in indoor environments (e.g. airplane cabins, conference rooms, hospitals, isolation rooms and inpatient wards) may cause outbreaks of infectious diseases, which may lead to many infection cases and have a significant influence on public health. This issue has received increasing attention from researchers. This work investigates the influence of human movement on the airborne transmission of respiratory infectious diseases in an airplane cabin, using an accurate human model in numerical simulation and comparing the influences of different human movement behaviors on disease transmission. The Eulerian-Lagrangian approach is adopted to simulate the dispersion and deposition of the expiratory aerosols. A dose-response model is used to assess the infection risks of the occupants. A likelihood analysis is performed as a hypothesis test on the input parameters and the different assumptions about human movement patterns. An in-flight SARS outbreak case is used for investigation. A moving person with different moving speeds is simulated to represent the movement behaviors. A digital human model was used to represent the detailed profile of the occupants, obtained by scanning a real thermal manikin with a 3D laser scanning system. The analysis indicates that human movement can strengthen the downward transport of the aerosols, significantly reduce the overall deposition and removal rate of the suspended aerosols, and increase the average infection risk in the cabin. The likelihood estimation shows that the risk assessment results better fit the outcome of the outbreak case when the movements of the seated passengers are considered. The intake fraction of the moving person is significantly higher than that of most of the seated passengers. The infection risk distribution in the airplane cabin depends strongly on the movement behaviors of the passengers and the index patient.
The walking activities of the crew members and the seated passengers can significantly increase their personal infection risks. Taking the movement of the seated passengers and the index patient into consideration is therefore necessary and important. For future studies, investigation of the behavioral characteristics of passengers during flight will be useful for infection control.
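A common form of the dose-response model used in such risk assessments is the exponential model, P = 1 - exp(-r * dose), with the inhaled dose accumulated from local concentration, breathing rate, and exposure time. A minimal sketch under that assumption; all parameter values below are illustrative, not those of the SARS outbreak analysis:

```python
import math

def infection_risk(dose, r):
    """Exponential dose-response model: P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose)

def intake_dose(conc, breathing_rate, exposure_time, intake_fraction=1.0):
    """Inhaled pathogen dose accumulated over an exposure.

    conc: local airborne concentration (e.g. quanta per m^3, hypothetical)
    breathing_rate: inhalation rate (m^3 per hour, hypothetical)
    exposure_time: duration in the same units' time base
    """
    return conc * breathing_rate * exposure_time * intake_fraction

# Hypothetical cabin exposure: 0.01 quanta/m^3, 0.5 m^3/h breathing rate,
# 180 min = 3 h flight expressed as 180 with consistent rescaling omitted
# for simplicity; the numbers only illustrate the functional form.
dose = intake_dose(conc=0.01, breathing_rate=0.5, exposure_time=180)
risk = infection_risk(dose, r=1.0)   # about 0.59 for dose = 0.9
```

Because the model is nonlinear in dose, a moving person's higher intake fraction raises their risk more than proportionally at low doses, which is consistent with the elevated risk reported for walking crew and passengers.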
Emotor control: computations underlying bodily resource allocation, emotions, and confidence.
Kepecs, Adam; Mensh, Brett D
2015-12-01
Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience, approaching subjective behavior as the result of mental computations instantiated in the brain, to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This "emotor" control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on "confidence." Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior.
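The statistical reading of confidence described above, an estimate of the probability that a hypothesis is correct, can be sketched with a two-alternative Gaussian evidence model. The signal-detection setup and parameters are illustrative assumptions, not the authors' model:

```python
import math

def confidence(evidence, mu=1.0, sigma=1.0):
    """Posterior probability that the chosen hypothesis is correct.

    Two equiprobable hypotheses generate Gaussian evidence centered at
    +mu and -mu with standard deviation sigma; the observer chooses the
    sign of the evidence. Bayes' rule then gives
    P(correct | evidence) = 1 / (1 + exp(-2*mu*|evidence|/sigma**2)).
    """
    llr = 2.0 * mu * abs(evidence) / sigma ** 2   # log-likelihood ratio
    return 1.0 / (1.0 + math.exp(-llr))

# Confidence is exactly 0.5 for uninformative evidence and approaches 1
# as the evidence grows stronger.
low, high = confidence(0.0), confidence(2.0)
```

This is the sense in which confidence is "objective": it is a deterministic function of the evidence under the generative model, which is what makes its neural correlates experimentally tractable.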
Chaminade, Thierry; Ishiguro, Hiroshi; Driver, Jon; Frith, Chris
2012-01-01
Using functional magnetic resonance imaging (fMRI) repetition suppression, we explored the selectivity of the human action perception system (APS), which consists of temporal, parietal and frontal areas, for the appearance and/or motion of the perceived agent. Participants watched body movements of a human (biological appearance and movement), a robot (mechanical appearance and movement) or an android (biological appearance, mechanical movement). With the exception of extrastriate body area, which showed more suppression for human-like appearance, the APS was not selective for appearance or motion per se. Instead, distinctive responses were found to the mismatch between appearance and motion: whereas suppression effects for the human and robot were similar to each other, they were stronger for the android, notably in bilateral anterior intraparietal sulcus, a key node in the APS. These results could reflect increased prediction error as the brain negotiates an agent that appears human, but does not move biologically, and help explain the ‘uncanny valley’ phenomenon. PMID:21515639
On the Characterization of Revisitation Patterns in Complex Human Dynamics - A Data Science Approach
NASA Astrophysics Data System (ADS)
Barbosa Filho, Hugo Serrano
When it comes to visitation patterns, human beings are extremely regular and predictable, with recurrent activities responsible for most of our movements. In recent years, scientists have attempted to model and explain human dynamics, and in particular human movement. Akin to other human behaviors, traveling patterns evolve from the convolution of internal and external factors. A better understanding of the mechanisms responsible for transforming and incorporating individual events into regular patterns is of fundamental importance. Many aspects of our complex lives are affected by human movements, such as disease spread and epidemic modeling, city planning, wireless network development, and disaster relief, to name a few. Given the myriad of applications, it is clear that a complete understanding of how people move in space can lead to considerable benefits for our society. In most recent works, scientists have focused on the idea that people's movements are biased towards frequently-visited locations. According to them, human movement is based on an exploration/exploitation dichotomy in which individuals either choose new locations (exploration) or return to frequently-visited locations (exploitation). In this dissertation we present some of our contributions to the field, such as the presence of a recency effect in human mobility and Web browsing behaviors, as well as the returners vs. explorers dichotomy in Web browsing trajectories.
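The exploration/exploitation dichotomy described in this abstract is commonly formalized as an "exploration and preferential return" walk (in the style of Song et al.). The following sketch is illustrative only, not code from the dissertation; the parameter values rho and gamma are conventional choices from that literature, assumed here for demonstration:

```python
import random

def epr_step(visits, rng, rho=0.6, gamma=0.21):
    """One step of an exploration / preferential-return walk.

    With probability rho * S**(-gamma) the walker explores a new
    location; otherwise it returns to an already-visited location,
    chosen in proportion to its past visit frequency.
    visits maps location id -> visit count and is updated in place.
    """
    S = len(visits)  # number of distinct locations seen so far
    if S == 0 or rng.random() < rho * S ** (-gamma):
        loc = max(visits, default=-1) + 1  # a brand-new location id
    else:
        locs = list(visits)
        loc = rng.choices(locs, weights=[visits[l] for l in locs])[0]
    visits[loc] = visits.get(loc, 0) + 1
    return loc

rng = random.Random(0)
visits = {}
trajectory = [epr_step(visits, rng) for _ in range(2000)]
# Recurrent locations dominate: far fewer distinct places than steps.
```

Because returns are weighted by visit frequency, a handful of locations quickly accumulate most of the visits, mirroring the returners-versus-explorers picture the dissertation discusses.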
Kniep, Rüdiger; Zahn, Dirk; Wulfes, Jana
2017-01-01
We explored the functional role of individual otoconia within the mammalian otolith system, which is responsible for the detection of linear accelerations and head tilts in relation to the gravity vector. Details of the inner structure and the shape of intact human and artificial otoconia were studied using environmental scanning electron microscopy (ESEM), including decalcification by ethylenediaminetetraacetic acid (EDTA) to discriminate local calcium carbonate density. Considerable differences between the rhombohedral faces of human and artificial otoconia already indicate that the inner architecture of otoconia is not consistent with the point group -3m. This is clearly confirmed by decalcified otoconia specimens, which are characterized by a non-centrosymmetric volume distribution of the compact 3+3 branches. This structural evidence for asymmetric mass distribution was further supported by light microscopy in combination with a high-speed camera showing the movement of single (artificial) otoconia specimens under gravitational influence within a viscous medium (artificial endolymph). Moreover, the response of otoconia to linear acceleration forces was investigated by particle dynamics simulations. Both time-resolved microscopy and computer simulations of otoconia acceleration show that the dislocation of otoconia includes significant rotational movement stemming from density asymmetry. Based on these findings, we suggest an otolith membrane expansion/stiffening mechanism for enhanced response to linear acceleration transmitted to the vestibular hair cells. PMID:28406968
Infrared dim and small target detecting and tracking method inspired by Human Visual System
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian
2014-01-01
Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, the HVS involves at least three mechanisms: the contrast mechanism, visual attention and eye movement. However, most existing algorithms simulate only one of these mechanisms, resulting in many drawbacks. A novel method combining all three mechanisms is proposed in this paper. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is added at a point near the target in order to further enhance it; this point is named the attention point. Finally, the Proportional-Integral-Derivative (PID) algorithm is introduced, for the first time in this context, to predict the attention point of the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
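The three HVS-inspired ingredients named in this abstract can be sketched in a deliberately simplified form. The kernel sizes, sigmas, and PID gains below are invented for illustration and are not the authors' implementation; the 1-D demo only shows a PID loop locking onto a fixed target position:

```python
import numpy as np

def dog_kernel(size=9, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians band-pass kernel (contrast mechanism)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = lambda s: np.exp(-(xx ** 2 + yy ** 2) / (2 * s ** 2))
    k1, k2 = g(sigma1), g(sigma2)
    return k1 / k1.sum() - k2 / k2.sum()  # sums to ~0: removes flat background

def attention_window(shape, point, sigma=5.0):
    """Gaussian weighting centred on the attention point (visual attention)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((yy - point[0]) ** 2 + (xx - point[1]) ** 2)
                  / (2 * sigma ** 2))

class PID:
    """Discrete PID controller, used here to track the attention point
    from frame to frame (the eye-movement mechanism)."""
    def __init__(self, kp=0.4, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = self.prev_err = 0.0

    def update(self, err):
        self.integral += err
        out = (self.kp * err + self.ki * self.integral
               + self.kd * (err - self.prev_err))
        self.prev_err = err
        return out

# Toy 1-D demo: the attention point converges onto a stationary target.
pid, attention, target = PID(), 0.0, 10.0
for _ in range(300):
    attention += pid.update(target - attention)
```

In a full pipeline of this kind, each frame would be convolved with the DoG kernel, multiplied by the attention window around the predicted point, and the detected target position would feed the PID error for the next frame's prediction.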
Implementing Artificial Intelligence Behaviors in a Virtual World
NASA Technical Reports Server (NTRS)
Krisler, Brian; Thome, Michael
2012-01-01
In this paper, we present a look at the current state of the art in human-computer interface technologies, including intelligent interactive agents, natural speech interaction and gesture-based interfaces. We describe our use of these technologies to implement a cost-effective, immersive experience on a public region in Second Life. We provision our artificial agents as a German Shepherd Dog avatar with an external rules engine controlling its behavior and movement. To interact with the avatar, we implemented a natural language and gesture system allowing the human avatars to use speech and physical gestures rather than interacting via a keyboard and mouse. The result is a system that allows multiple humans to interact naturally with AI avatars by playing games such as fetch with a flying disk and even practicing obedience exercises using voice and gesture: a natural-seeming day in the park.
Control of a visual keyboard using an electrocorticographic brain-computer interface.
Krusienski, Dean J; Shih, Jerry J
2011-05-01
Brain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG. A total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECOG) signals. ECOG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters. The classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECOG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard. This is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. These results demonstrate that ECOG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.
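The classification step described in this abstract can be illustrated with a generic regularized linear classifier on synthetic "epochs". The feature count, offset size, and ridge penalty below are invented for the demo and this is not the study's actual preprocessing or classifier pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "epochs": 300 trials x 48 features (e.g. downsampled
# post-stimulus samples); attended-target epochs carry a small added
# deflection standing in for the P300 response.
n, d = 300, 48
y = rng.integers(0, 2, n)          # 1 = attended (target) stimulus
X = rng.normal(size=(n, d))
X[y == 1] += 0.5                   # evoked-response offset on targets

# Regularized least-squares (ridge) linear classifier, targets +/-1.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ (2 * y - 1))

scores = X @ w
acc = float(np.mean((scores > 0) == (y == 1)))
```

In a P300 speller, scores like these would be averaged over repeated stimulation sequences for each row/column before picking the most likely target letter, which is why fewer sequences (as reported here) translate directly into faster spelling.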
Performance monitoring for brain-computer-interface actions.
Schurger, Aaron; Gale, Steven; Gozel, Olivia; Blanke, Olaf
2017-02-01
When presented with a difficult perceptual decision, human observers are able to make metacognitive judgements of subjective certainty. Such judgements can be made independently of and prior to any overt response to a sensory stimulus, presumably via internal monitoring. Retrospective judgements about one's own task performance, on the other hand, require first that the subject perform a task and thus could potentially be made based on motor processes, proprioceptive, and other sensory feedback rather than internal monitoring. With this dichotomy in mind, we set out to study performance monitoring using a brain-computer interface (BCI), with which subjects could voluntarily perform an action (moving a cursor on a computer screen) without any movement of the body, and thus without somatosensory feedback. Real-time visual feedback was available to subjects during training, but not during the experiment, where the true final position of the cursor was only revealed after the subject had estimated where s/he thought it had ended up after 6 s of BCI-based cursor control. During the first half of the experiment subjects based their assessments primarily on the prior probability of the end position of the cursor on previous trials. However, during the second half of the experiment subjects' judgements moved significantly closer to the true end position of the cursor, and away from the prior. This suggests that subjects can monitor task performance when the task is performed without overt movement of the body.
Bashford, Luke; Mehring, Carsten
2016-01-01
To study body ownership and control, illusions that elicit these feelings in non-body objects are widely used. Classically introduced with the Rubber Hand Illusion, these illusions have been replicated more recently in virtual reality and by using brain-computer interfaces. Traditionally these illusions investigate the replacement of a body part by an artificial counterpart, but as brain-computer interface research develops it offers us the possibility to explore the case where non-body objects are controlled in addition to movements of our own limbs. We therefore propose a new illusion designed to test the feeling of ownership and control of an independent supernumerary hand. Subjects are under the impression that they control a virtual reality hand via a brain-computer interface, but in reality there is no causal connection between brain activity and virtual hand movement; instead, correct movements are shown with 80% probability. These imitation brain-computer interface trials are interspersed with movements of both of the subjects' real hands, which are in view throughout the experiment. We show that subjects develop strong feelings of ownership and control over the third hand, despite only receiving visual feedback with no causal link to the actual brain signals. Our illusion is crucially different from previously reported studies because we demonstrate independent ownership and control of the third hand without loss of ownership in the real hands.
Multipulse control of saccadic eye movements
NASA Technical Reports Server (NTRS)
Lehman, S. L.; Stark, L.
1981-01-01
We present three conclusions regarding the neural control of saccadic eye movements, resulting from comparisons between recorded movements and computer simulations. First, the controller signal to the muscles is probably a multipulse-step. Second, this kind of signal drives the fastest model trajectories. Finally, multipulse signals explain differences between model and electrophysiological results.
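The pulse-step idea behind this abstract can be illustrated with a deliberately simplified first-order plant; the actual oculomotor models in this literature are higher-order, and the time constants and amplitudes below are arbitrary. A brief high-amplitude pulse followed by a sustained step reaches the target level much sooner than the step alone:

```python
def simulate(u, tau=0.15, dt=0.001, t_end=0.6):
    """Euler-integrate a toy first-order plant  x' = (u(t) - x) / tau."""
    x, t, traj = 0.0, 0.0, []
    while t < t_end:
        x += dt * (u(t) - x) / tau
        t += dt
        traj.append((t, x))
    return traj

def time_to_reach(traj, level):
    """First time the trajectory crosses the given level."""
    return next(t for t, x in traj if x >= level)

step_only = lambda t: 10.0                         # step command alone
pulse_step = lambda t: 30.0 if t < 0.04 else 10.0  # brief pulse, then step

t_step = time_to_reach(simulate(step_only), 9.0)
t_pulse = time_to_reach(simulate(pulse_step), 9.0)
```

A multipulse-step controller extends this by shaping several pulses before the step, which is what lets the model reproduce the speed of recorded saccades.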
Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals’ Behaviour
Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola
2016-01-01
Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs’ behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals’ quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog’s shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. 
The computer vision technique applied to this software is innovative in non-human animal behaviour science. Further improvements and validation are needed, and future applications and limitations are discussed. PMID:27415814
Movement Contributes to Infants' Recognition of the Human Form
ERIC Educational Resources Information Center
Christie, Tamara; Slaughter, Virginia
2010-01-01
Three experiments demonstrate that biological movement facilitates young infants' recognition of the whole human form. A body discrimination task was used in which 6-, 9-, and 12-month-old infants were habituated to typical human bodies and then shown scrambled human bodies at the test. Recovery of interest to the scrambled bodies was observed in…
Multiscale model for pedestrian and infection dynamics during air travel
NASA Astrophysics Data System (ADS)
Namilae, Sirish; Derjany, Pierrot; Mubayi, Anuj; Scotch, Mathew; Srinivasan, Ashok
2017-05-01
In this paper we develop a multiscale model combining social-force-based pedestrian movement with a population level stochastic infection transmission dynamics framework. The model is then applied to study the infection transmission within airplanes and the transmission of the Ebola virus through casual contacts. Drastic limitations on air-travel during epidemics, such as during the 2014 Ebola outbreak in West Africa, carry considerable economic and human costs. We use the computational model to evaluate the effects of passenger movement within airplanes and air-travel policies on the geospatial spread of infectious diseases. We find that boarding policy by an airline is more critical for infection propagation compared to deplaning policy. Enplaning in two sections resulted in fewer infections than the currently followed strategy with multiple zones. In addition, we found that small commercial airplanes are better than larger ones at reducing the number of new infections in a flight. Aggregated results indicate that passenger movement strategies and airplane size predicted through these network models can have significant impact on an event like the 2014 Ebola epidemic. The methodology developed here is generic and can be readily modified to incorporate the impact from the outbreak of other directly transmitted infectious diseases.
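The pedestrian-movement layer of such multiscale models is social-force-based. The following minimal Helbing-style sketch uses invented parameter values and is far simpler than the paper's model; it only shows the two basic ingredients, relaxation toward a desired velocity and pairwise exponential repulsion:

```python
import numpy as np

def social_force_step(pos, vel, goals, dt=0.05, v0=1.3, tau=0.5,
                      A=2.0, B=0.3):
    """One Euler step of a minimal social-force model (unit masses):
    drive toward each agent's goal plus exponential pairwise repulsion."""
    desired = goals - pos
    desired /= np.linalg.norm(desired, axis=1, keepdims=True)
    force = (v0 * desired - vel) / tau          # relax toward desired velocity
    for i in range(len(pos)):                   # repulsion from other agents
        diff = pos[i] - pos
        dist = np.linalg.norm(diff, axis=1)
        dist[i] = np.inf                        # no self-repulsion
        force[i] += (A * np.exp(-dist / B) / dist) @ diff
    vel = vel + dt * force
    pos = pos + dt * vel
    return pos, vel

# Two passengers heading in opposite directions along an aisle.
pos = np.array([[0.0, 0.0], [10.0, 0.1]])
vel = np.zeros_like(pos)
goals = np.array([[10.0, 0.0], [0.0, 0.1]])
start = pos.copy()
for _ in range(100):                            # 5 simulated seconds
    pos, vel = social_force_step(pos, vel, goals)
```

Coupling a model like this to an infection layer then amounts to logging, at each step, which pairs of agents are within a contact radius and for how long.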
Using eye tracking to identify faking attempts during penile plethysmography assessment.
Trottier, Dominique; Rouleau, Joanne-Lucine; Renaud, Patrice; Goyette, Mathieu
2014-01-01
Penile plethysmography (PPG) is considered the most rigorous method for sexual interest assessment. Nevertheless, it is subject to faking attempts by participants, which compromises the internal validity of the instrument. To date, various attempts have been made to limit voluntary control of sexual response during PPG assessments, without satisfactory results. This exploratory research examined eye-tracking technologies' ability to identify the presence of cognitive strategies responsible for erectile inhibition during PPG assessment. Eye movements and penile responses for 20 subjects were recorded while exploring animated human-like computer-generated stimuli in a virtual environment under three distinct viewing conditions: (a) the free visual exploration of a preferred sexual stimulus without erectile inhibition; (b) the viewing of a preferred sexual stimulus with erectile inhibition; and (c) the free visual exploration of a non-preferred sexual stimulus. Results suggest that attempts to control erectile responses generate specific eye-movement variations, characterized by a general deceleration of the exploration process and limited exploration of the erogenous zone. Findings indicate that recording eye movements can provide significant information on the presence of competing covert processes responsible for erectile inhibition. The use of eye-tracking technologies during PPG could therefore lead to improved internal validity of the plethysmographic procedure.
Understanding Human Mobility from Twitter
Jurdak, Raja; Zhao, Kun; Liu, Jiajun; AbouJaoude, Maurice; Cameron, Mark; Newth, David
2015-01-01
Understanding human mobility is crucial for a broad range of applications from disease prediction to communication networks. Most efforts on studying human mobility have so far used private and low resolution data, such as call data records. Here, we propose Twitter as a proxy for human mobility, as it relies on publicly available data and provides high resolution positioning when users opt to geotag their tweets with their current location. We analyse a Twitter dataset with more than six million geotagged tweets posted in Australia, and we demonstrate that Twitter can be a reliable source for studying human mobility patterns. Our analysis shows that geotagged tweets can capture rich features of human mobility, such as the diversity of movement orbits among individuals and of movements within and between cities. We also find that short- and long-distance movers both spend most of their time in large metropolitan areas, in contrast with intermediate-distance movers’ movements, reflecting the impact of different modes of travel. Our study provides solid evidence that Twitter can indeed be a useful proxy for tracking and predicting human movement. PMID:26154597
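The per-user summaries such an analysis rests on are great-circle displacements and the radius of gyration of a set of geotags. A small self-contained sketch follows; the coordinates are made-up points near two Australian cities, not data from the study, and the centroid is a naive lat/lon average, which is adequate at city scales:

```python
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def radius_of_gyration_km(points):
    """RMS distance of a user's positions from their centroid -- a
    standard summary of how far an individual's movements spread."""
    lat_c = sum(p[0] for p in points) / len(points)
    lon_c = sum(p[1] for p in points) / len(points)
    centre = (lat_c, lon_c)
    return math.sqrt(sum(haversine_km(p, centre) ** 2 for p in points)
                     / len(points))

# Hypothetical geotags for one user: mostly around Sydney, with one
# trip to Melbourne.
tags = [(-33.87, 151.21)] * 8 + [(-37.81, 144.96)] * 2
```

Computing this quantity per user, and then looking at its distribution across users, is what separates the short-, intermediate-, and long-distance movers discussed in the abstract.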
The destination defines the journey: an examination of the kinematics of hand-to-mouth movements
Gonzalez, Claudia L. R.
2016-01-01
Long-train electrical stimulation of the motor and premotor cortices of nonhuman primates can produce either hand-to-mouth or grasp-to-inspect movements, depending on the precise location of stimulation. Furthermore, single-neuron recording studies identify discrete neuronal populations in the inferior parietal and ventral premotor cortices that respond uniquely to either grasp-to-eat or grasp-to-place movements, despite their identical mechanistic requirements. These studies demonstrate that the macaque motor cortex is organized around producing functional, goal-oriented movements, rather than simply fulfilling muscular prerequisites of action. In humans, right-handed hand-to-mouth movements have a unique kinematic signature; smaller maximum grip apertures are produced when grasping to eat than when grasping to place identical targets. This is evidence that the motor cortex in humans is also organized around producing functional movements. However, in both macaques and humans, grasp-to-eat/hand-to-mouth movements have always been elicited using edible targets and have (necessarily) been paired with mouth movement. It is therefore unknown whether the kinematic distinction is a natural result of grasping food and/or is simply attributable to concurrent opening of the mouth while grasping. In experiment 1, we used goal-differentiated grasping tasks, directed toward edible and inedible targets, to show that the unique kinematic signature is present even with inedible targets. In experiment 2, we used the same goal-differentiated grasping tasks, either coupled with or divorced from an open-mouth movement, to show that the signature is not attributable merely to a planned opening of the mouth during the grasp. These results are discussed in relation to the role of hand-to-mouth movements in human development, independently of grasp-to-eat behavior. PMID:27512020
Continuous-time discrete-space models for animal movement
Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.
2015-01-01
The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.
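The CTDS likelihood has a simple form: with log-linear rates lambda_j = exp(x_j . beta) to each neighbouring cell, each observed move contributes the log-rate of the chosen cell minus the residence time multiplied by the total rate out of the current cell. The toy sketch below uses one covariate and fabricated decisions, and fits beta by plain gradient ascent; in practice the model is fit with standard GLM software, as the abstract notes:

```python
import numpy as np

def ctds_loglik(beta, stays, moves):
    """Log-likelihood of a continuous-time discrete-space path with
    log-linear motility: rate to neighbour j is exp(x_j . beta).

    stays: list of (X_neighbours, tau) pairs -- covariate rows for the
           neighbouring cells and the residence time in the current cell.
    moves: covariate row of the cell actually moved to, per decision.
    """
    ll = 0.0
    for (Xn, tau), x_to in zip(stays, moves):
        ll += x_to @ beta - tau * np.exp(Xn @ beta).sum()
    return float(ll)

# Toy data, one covariate: at each of 50 decisions the animal waited
# tau = 0.5 time units, then moved to the neighbour with x = +1
# rather than the one with x = -1.
stays = [(np.array([[1.0], [-1.0]]), 0.5)] * 50
moves = [np.array([1.0])] * 50

# Fit beta by gradient ascent on the log-likelihood.
beta, lr = np.zeros(1), 0.02
for _ in range(500):
    grad = np.zeros(1)
    for (Xn, tau), x_to in zip(stays, moves):
        rates = np.exp(Xn @ beta)
        grad += x_to - tau * (Xn * rates[:, None]).sum(axis=0)
    beta += lr * grad / len(moves)
```

For this contrived data the maximizer has a closed form, sinh(beta) = 1/(2 * tau), i.e. beta = asinh(1) ≈ 0.881, so the gradient ascent can be checked directly.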
Extending self-organizing particle systems to problem solving.
Rodríguez, Alejandro; Reggia, James A
2004-01-01
Self-organizing particle systems consist of numerous autonomous, purely reflexive agents ("particles") whose collective movements through space are determined primarily by local influences they exert upon one another. Inspired by biological phenomena (bird flocking, fish schooling, etc.), particle systems have been used not only for biological modeling, but also increasingly for applications requiring the simulation of collective movements such as computer-generated animation. In this research, we take some first steps in extending particle systems so that they not only move collectively, but also solve simple problems. This is done by giving the individual particles (agents) a rudimentary intelligence in the form of a very limited memory and a top-down, goal-directed control mechanism that, triggered by appropriate conditions, switches them between different behavioral states and thus different movement dynamics. Such enhanced particle systems are shown to be able to function effectively in performing simulated search-and-collect tasks. Further, computational experiments show that collectively moving agent teams are more effective than similar but independently moving ones in carrying out such tasks, and that agent teams of either type that split off members of the collective to protect previously acquired resources are most effective. This work shows that the reflexive agents of contemporary particle systems can readily be extended to support goal-directed problem solving while retaining their collective movement behaviors. These results may prove useful not only for future modeling of animal behavior, but also in computer animation, coordinated movement control in robotic teams, particle swarm optimization, and computer games.
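In the spirit of the description above, here is a minimal sketch of reflexive particles with a two-state, goal-directed controller. This is not the authors' system; the geometry, switching threshold, and state names are invented for illustration:

```python
class Particle:
    """A reflexive agent with a two-state controller: it moves toward
    the swarm centroid while searching, and heads home once its state
    switches to collecting."""
    def __init__(self, x, y):
        self.x, self.y, self.state = x, y, "search"

def step(particles, resource, home, speed=0.5):
    cx = sum(p.x for p in particles) / len(particles)
    cy = sum(p.y for p in particles) / len(particles)
    for p in particles:
        # Top-down trigger: switch behavioural state near the resource.
        if (p.state == "search"
                and abs(p.x - resource[0]) + abs(p.y - resource[1]) < 1.0):
            p.state = "collect"
        # Movement dynamics depend on the current behavioural state.
        tx, ty = (cx, cy) if p.state == "search" else home
        dx, dy = tx - p.x, ty - p.y
        d = (dx * dx + dy * dy) ** 0.5 or 1.0  # avoid divide-by-zero
        p.x += speed * dx / d
        p.y += speed * dy / d

particles = [Particle(0, 0), Particle(10, 0), Particle(0, 10), Particle(10, 10)]
resource, home = (5.0, 5.0), (0.0, 0.0)
for _ in range(40):
    step(particles, resource, home)
```

The key design point mirrors the paper: the particles remain purely reflexive within each state, and goal-directedness enters only through the rule that switches the state and thereby the movement dynamics.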
Odden, Morten; Athreya, Vidya; Rattan, Sandeep; Linnell, John D C
2014-01-01
Understanding the nature of the interactions between humans and wildlife is of vital importance for conflict mitigation. We equipped five leopards with GPS collars in Maharashtra (4) and Himachal Pradesh (1), India, to study movement patterns in human-dominated landscapes outside protected areas. An adult male and an adult female were both translocated 52 km and exhibited extensive, directional post-release movements (straight-line movements: male = 89 km in 37 days, female = 45 km in 5 months), until they settled in home ranges of 42 km2 (male) and 65 km2 (female). The three other leopards, two adult females and a young male, were released close to their capture sites and used small home ranges of 8 km2 (male), 11 km2 and 15 km2 (females). Movement patterns were markedly nocturnal, with hourly step lengths averaging 339±9.5 m (SE) at night and 60±4.1 m during the day, and night locations were significantly closer to human settlements than day locations. More nocturnal movement was observed among the three leopards living in areas with high human population density; these visited houses regularly at night (20% of locations <25 m from houses) but rarely during the day (<1%). One leopard living in a sparsely populated area avoided human settlements both day and night. The small home ranges of the leopards indicate that anthropogenic food resources may be plentiful even where wild prey is absent. The study provides clear insights into the ability of leopards to live and move in landscapes that are extremely modified by human activity.
Two Archetypes of Motor Control Research.
Latash, Mark L
2010-07-01
This reply to the Commentaries is focused on two archetypes of motor control research, one based on physics and physiology and the other based on control theory and ideas of neural computations. The former approach, represented by the equilibrium-point hypothesis, strives to discover the physical laws and salient physiological variables that make purposeful coordinated movements possible. The latter approach, represented by the ideas of internal models and optimal control, tries to apply methods of control developed for man-made inanimate systems to the human body. Specific issues related to control with subthreshold membrane depolarization, motor redundancy, and the idea of synergies are briefly discussed.
Alexander, C J
1994-01-01
OBJECTIVE--To determine whether an arboreal lifestyle required full use of movement ranges underutilised in nine joint groups in humans, because under-utilisation of available movement range may be associated with susceptibility to primary osteoarthritis. METHODS--Utilisation of the nine joint groups was studied in two species of primate exercising in a simulated arboreal environment, using 'focal animal' observation techniques supplemented by telephoto photography and by review of archival material from other sources. Fifteen apes were observed over a total observation period of 20.2 man-hours and 152 films were analysed for utilisation of movement range. RESULTS--With one exception, all the movement ranges reported to be under-utilised in humans were fully utilised by the apes in climbing activities. The exception, metacarpophalangeal extension, was an essential component of the chimpanzee ground progression mode of knuckle walking. CONCLUSIONS--The underused movement range in several human joints is explicable as residual capacity from a semiarboreal lifestyle. If the correlation with primary osteoarthritis is confirmed, it suggests that the disease may reflect a disparity between inherited capacity and current need. The significance of the result lies in its implication that primary osteoarthritis may be preventable. PMID:7826133
NASA Astrophysics Data System (ADS)
Schieber, Marc H.
2016-07-01
Control of the human hand has been both difficult to understand scientifically and difficult to emulate technologically. The article by Santello and colleagues in the current issue of Physics of Life Reviews[1] highlights the accelerating pace of interaction between the neuroscience of controlling body movement and the engineering of robotic hands that can be used either autonomously or as part of a motor neuroprosthesis, an artificial body part that moves under control from a human subject's own nervous system. Motor neuroprostheses typically involve a brain-computer interface (BCI) that takes signals from the subject's nervous system or muscles, interprets those signals through a decoding algorithm, and then applies the resulting output to control the artificial device.
Whole-Body Human Inverse Dynamics with Distributed Micro-Accelerometers, Gyros and Force Sensing †
Latella, Claudia; Kuppuswamy, Naveen; Romano, Francesco; Traversaro, Silvio; Nori, Francesco
2016-01-01
Human motion tracking is a powerful tool used in a large range of applications that require human movement analysis. Although it is a well-established technique, its main limitation is the lack of real-time estimation of kinetic information such as forces and torques during motion capture. In this paper, we present a novel approach to human soft wearable force tracking for the simultaneous estimation of whole-body forces along with the motion. The early stage of our framework encompasses traditional passive-marker-based methods, inertial and contact force sensor modalities, and harnesses a probabilistic computational technique for estimating dynamic quantities, originally proposed in the domain of humanoid robot control. We present experimental analysis on subjects performing a two degrees-of-freedom bowing task, and we estimate the motion and kinetics quantities. The results demonstrate the validity of the proposed method. We discuss the possible use of this technique in the design of a novel soft wearable force tracking device and its potential applications. PMID:27213394
Niekerk, Sjan-Mari van; Louw, Quinette Abigail; Grimmer-Sommers, Karen
2014-01-01
Dynamic movement whilst sitting is advocated as a way to reduce musculoskeletal symptoms from seated activities. Conventionally, in ergonomics research, only a 'snapshot' of static sitting posture is captured, which does not provide information on the number or type of movements over a period of time. A novel approach to analysing the number of postural changes whilst sitting was employed in order to describe the sitting behaviour of adolescents whilst undertaking computing activities. A repeated-measures observational study was conducted. A total of 12 high school students were randomly selected from a conveniently selected school. Fifteen minutes of 3D posture measurements were recorded to determine the number of postural changes whilst using computers. Data from 11 students were able to be analysed. Large intra-subject variation of the median and IQR was observed, indicating frequent postural changes whilst sitting. Better understanding of usual dynamic postural movements whilst sitting will provide new insights into causes of musculoskeletal symptoms experienced by computer users.
Bayesian exploration for intelligent identification of textures.
Fishel, Jeremy A; Loeb, Gerald E
2012-01-01
In order to endow robots with human-like abilities to characterize and identify objects, they must be provided with tactile sensors and intelligent algorithms to select, control, and interpret data from useful exploratory movements. Humans make informed decisions on the sequence of exploratory movements that would yield the most information for the task, depending on what the object may be and prior knowledge of what to expect from possible exploratory movements. This study is focused on texture discrimination, a subset of a much larger group of exploratory movements and percepts that humans use to discriminate, characterize, and identify objects. Using a testbed equipped with a biologically inspired tactile sensor (the BioTac), we produced sliding movements similar to those that humans make when exploring textures. Measurement of tactile vibrations and reaction forces when exploring textures were used to extract measures of textural properties inspired from psychophysical literature (traction, roughness, and fineness). Different combinations of normal force and velocity were identified to be useful for each of these three properties. A total of 117 textures were explored with these three movements to create a database of prior experience to use for identifying these same textures in future encounters. When exploring a texture, the discrimination algorithm adaptively selects the optimal movement to make and property to measure based on previous experience to differentiate the texture from a set of plausible candidates, a process we call Bayesian exploration. Performance of 99.6% in correctly discriminating pairs of similar textures was found to exceed human capabilities. Absolute classification from the entire set of 117 textures generally required a small number of well-chosen exploratory movements (median = 5) and yielded a 95.4% success rate. The method of Bayesian exploration developed and tested in this paper may generalize well to other cognitive problems.
PMID:22783186
Joint Drumming: Social Context Facilitates Synchronization in Preschool Children
ERIC Educational Resources Information Center
Kirschner, Sebastian; Tomasello, Michael
2009-01-01
The human capacity to synchronize body movements to an external acoustic beat enables uniquely human behaviors such as music making and dancing. By hypothesis, these first evolved in human cultures as fundamentally social activities. We therefore hypothesized that children would spontaneously synchronize their body movements to an external beat at…
Augmented reality for biomedical wellness sensor systems
NASA Astrophysics Data System (ADS)
Jenkins, Jeffrey; Szu, Harold
2013-05-01
Due to the commercial movie and gaming industries, Augmented Reality (AR) technology has matured. By definition of AR, both artificial and real humans can be simultaneously present and realistically interact with one another. With the help of physics and physiology, we can build the AR tool together with real human day-night webcam inputs through simple interactions of heat transfer (getting hot), action and reaction (walking or falling), as well as physiology (sweating due to activity). Knowing the person's age, weight, and the 3D coordinates of joints in the body, we deduce the force, the torque, and the energy expenditure during real human movements and apply them to an AR human model. We wish to support the physics-physiology AR version, PP-AR, as a BMW surveillance tool for seniors home alone (SHA). The functionality is to record senior walking and hand movements inside a home environment. Besides the fringe benefit of enabling more visits from grandchildren through AR video games, the PP-AR surveillance tool may serve as a means to screen patients in the home for potential falls at points around the house. Moreover, we anticipate PP-AR may help analyze the behavior history of SHA, e.g. enhancing the Smartphone SHA Ubiquitous Care Program, by discovering early symptoms such as candidate Alzheimer-like midnight excursions or Parkinson-like trembling motion when performing challenging muscular joint movements. Using a set of coordinates corresponding to a set of 3D positions representing human joint locations, we compute the Kinetic Energy (KE) generated by each body segment over time. The Work is then calculated and converted into calories. Using common graphics rendering pipelines, one could invoke AR technology to provide more information about patients to caretakers.
Alerts to caretakers can be prompted by a patient's departure from their personal baseline, and the patient's time-ordered joint information can be loaded into a graphics viewer allowing for high-definition digital reconstruction. An entire scene can then be viewed from any position in virtual space, and AR can display the measurement values which either constituted an alert or otherwise indicate signs of the transition from wellness to illness.
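The KE-and-Work computation described in the abstract above can be sketched in a few lines. This is a hedged, minimal illustration only: the point-mass segment model, the segment mass, the frame interval, and the positive-work-to-kcal convention are my assumptions, not details taken from the paper.

```python
import math

# Point-mass sketch: per-frame kinetic energy of one body segment from 3D
# joint positions, then positive work summed and converted to kilocalories.

def segment_kinetic_energy(positions, mass_kg, dt):
    """Per-frame kinetic energy (J) of a point-mass segment.

    positions: list of (x, y, z) joint coordinates in metres, one per frame.
    dt: time between frames in seconds.
    """
    energies = []
    for p0, p1 in zip(positions, positions[1:]):
        v = math.dist(p0, p1) / dt          # finite-difference speed (m/s)
        energies.append(0.5 * mass_kg * v * v)
    return energies

def positive_work_kcal(energies):
    """Sum positive changes in KE (a crude work estimate) and convert to kcal."""
    work_j = sum(max(e1 - e0, 0.0) for e0, e1 in zip(energies, energies[1:]))
    return work_j / 4184.0                  # 1 kcal = 4184 J
```

In practice one would repeat this per segment (with anthropometric mass fractions based on the person's age and weight, as the abstract suggests) and sum the results.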
Spontaneous Movements of a Computer Mouse Reveal Egoism and In-group Favoritism
Maliszewski, Norbert; Wojciechowski, Łukasz; Suszek, Hubert
2017-01-01
The purpose of the project was to assess whether the first spontaneous movements of a computer mouse, when making an assessment on a scale presented on the screen, may express a respondent’s implicit attitudes. In Study 1, the altruistic behaviors of 66 students were assessed. The students were led to believe that the task they were performing was also being performed by another person and they were asked to distribute earnings between themselves and the partner. The participants performed the tasks under conditions with and without distractors. With the distractors, in the first few seconds spontaneous mouse movements on the scale expressed a selfish distribution of money, while later the movements gravitated toward more altruism. In Study 2, 77 Polish students evaluated a painting by a Polish/Jewish painter on a scale. They evaluated it under conditions of full or distracted cognitive abilities. Spontaneous movements of the mouse on the scale were analyzed. In addition, implicit attitudes toward both Poles and Jews were measured with the Implicit Association Test (IAT). A significant association between implicit attitudes (IAT) and spontaneous evaluation of images using a computer mouse was observed in the group with the distractor. The participants with strong implicit in-group favoritism of Poles revealed stronger preference for the Polish painter’s work in the first few seconds of mouse movement. Taken together, these results suggest that spontaneous mouse movements may reveal egoism (in-group favoritism), i.e., processes that were not observed in the participants’ final decisions (clicking on the scale). PMID:28163689
Chan, W H; Chan, Alan H S
2003-01-01
This experiment studied strength and reversibility of direction-of-motion stereotypes and response times for different configurations of circular displays and rotary knobs. The effect of pointer position, instruction of turn direction, and control plane on movement compatibility was analyzed with precise quantitative measures of strength and reversibility index of stereotype. A comparison of results was made between a Computer Simulated Test and a Hardware Test with real rotary controls. There was consensus in the results of the two tests that strong and significantly reversible clockwise-for-clockwise (CC) and anticlockwise-for-anticlockwise (AA) stereotypes were obtained at the 12 o'clock position. Subjects' response times were found to be generally longer when there were no clear movement stereotypes. Nevertheless, differences were observed: while the CC and AA preferences were found to be dominant and reversible at all planes and pointer positions in the Hardware Test, there was variation in the strength and reversibility of the two stereotypes amongst different testing configurations in the Simulated Test. This phenomenon was explained by the operation of the clockwise-for-right and anticlockwise-for-left principles, as shown in the analysis of contributions of component principles to the overall stereotype. The differences between the two tests were discussed with regard to simulation fidelity, and it was suggested that a real Hardware Test should be used whenever possible for determination of design parameters of control panels in consideration of movement compatibility. Based on the Hardware Test, a pointer is recommended to be positioned at the 12 o'clock position for check reading or resetting purposes, and the frontal plane is the best plane for positioning a rotary control with a circular display.
The results of this study provided significant implications for the industrial design of control panels used in man-machine interfaces for improved human performance.
Measuring sperm movement within the female reproductive tract using Fourier analysis.
Nicovich, Philip R; Macartney, Erin L; Whan, Renee M; Crean, Angela J
2015-02-01
The adaptive significance of variation in sperm phenotype is still largely unknown, in part due to the difficulties of observing and measuring sperm movement in its natural, selective environment (i.e., within the female reproductive tract). Computer-assisted sperm analysis systems allow objective and accurate measurement of sperm velocity, but rely on being able to track individual sperm, and are therefore unable to measure sperm movement in species where sperm move in trains or bundles. Here we describe a newly developed computational method for measuring sperm movement using Fourier analysis to estimate sperm tail beat frequency. High-speed time-lapse videos of sperm movement within the female tract of the neriid fly Telostylinus angusticollis were recorded, and a map of beat frequencies generated by converting the periodic signal of an intensity versus time trace at each pixel to the frequency domain using the Fourier transform. We were able to detect small decreases in sperm tail beat frequency over time, indicating the method is sensitive enough to identify consistent differences in sperm movement. Fourier analysis can be applied to a wide range of species and contexts, and should therefore facilitate novel exploration of the causes and consequences of variation in sperm movement.
Review-Research on the physical training model of human body based on HQ.
Junjie, Liu
2016-11-01
Health quotient (HQ) is the newest health culture and concept of the 21st century, yet analysis of human body sports models is not mature at present; the purpose of this paper is therefore to integrate the two subjects, the health quotient and the sports model. The paper concludes that physical training and education in colleges and universities can improve the health quotient, giving students a healthier body and mind. A new rigid-body sports model is then used to simulate human physical exercise, and an in-depth study of the dynamic model of human body movement is carried out on the basis of the established matrices and equations. Simulations of bicycle riding and pole throwing show that human joint movement can be simulated and that the approach is operable. From these simulated calculations we conclude that the simulated motion laws of the ankle, knee, and hip joints basically match the real motions, further verifying the accuracy of the motion model and laying a foundation for research on other movement models; the study of such movement models is an important method for the study of human health in the future.
Perge, János A.; Zhang, Shaomin; Malik, Wasim Q.; Homer, Mark L.; Cash, Sydney; Friehs, Gerhard; Eskandar, Emad N.; Donoghue, John P.; Hochberg, Leigh R.
2014-01-01
Objective Action potentials and local field potentials (LFPs) recorded in primary motor cortex contain information about the direction of movement. LFPs are assumed to be more robust to signal instabilities than action potentials, which makes LFPs along with action potentials a promising signal source for brain-computer interface applications. Still, relatively little research has directly compared the utility of LFPs to action potentials in decoding movement direction in human motor cortex. Approach We conducted intracortical multielectrode recordings in motor cortex of two persons (T2 and [S3]) as they performed a motor imagery task. We then compared the offline decoding performance of LFPs and spiking extracted from the same data recorded across a one-year period in each participant. Main results We obtained offline prediction accuracy of movement direction and endpoint velocity in multiple LFP bands, with the best performance in the highest (200–400Hz) LFP frequency band, presumably also containing low-pass filtered action potentials. Cross-frequency correlations of preferred directions and directional modulation index showed high similarity of directional information between action potential firing rates (spiking) and high frequency LFPs (70–400Hz), and increasing disparity with lower frequency bands (0–7, 10–40 and 50–65Hz). Spikes predicted the direction of intended movement more accurately than any individual LFP band, however combined decoding of all LFPs was statistically indistinguishable from spike based performance. As the quality of spiking signals (i.e. signal amplitude) and the number of significantly modulated spiking units decreased, the offline decoding performance decreased 3.6[5.65]%/month (for T2 and [S3] respectively). The decrease in the number of significantly modulated LFP signals and their decoding accuracy followed a similar trend (2.4[2.85]%/month, ANCOVA, p=0.27[0.03]). 
Significance Field potentials provided comparable offline decoding performance to unsorted spikes. Thus, LFPs may provide useful external device control using current human intracortical recording technology. (Clinical trial registration number: NCT00912041) PMID:24921388
Latash, M L; Gutman, S R
1994-01-01
Until now, the equilibrium-point hypothesis (lambda model) of motor control has assumed nonintersecting force-length characteristics of the tonic stretch reflex for individual muscles. Limited data from animal experiments suggest, however, that such intersections may occur. We have assumed the possibility of intersection of the characteristics of the tonic stretch reflex and performed a computer simulation of movement trajectories and electromyographic patterns. The simulation has demonstrated, in particular, that a transient change in the slope of the characteristic of an agonist muscle may lead to temporary movement reversals, hesitations, oscillations, and multiple electromyographic bursts that are typical of movements of patients with dystonia. The movement patterns of three patients with idiopathic dystonia during attempts at fast single-joint movements (in the elbow, wrist, and ankle) were recorded and compared with the results of the computer simulation. This approach considers that motor disorders in dystonia result from faulty control patterns that may not correlate with any morphological or neurophysiological changes. It provides a basis for the high variability of dystonic movements. The uniqueness of abnormal motor patterns in dystonia, that precludes statistical analysis across patients, may result from subtle differences in the patterns of intersecting characteristics of the tonic stretch reflex. The applicability of our analysis to disordered multijoint movement patterns is discussed.
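The equilibrium-point (lambda-model) idea underlying the simulation above can be sketched as a single joint whose torque is proportional to the difference between its angle and a centrally set threshold lambda, so that shifting lambda produces movement. This is an illustrative toy only: the inertia, stiffness, damping, and ramp values below are invented and are not the authors' simulation parameters.

```python
# Minimal single-joint lambda-model sketch: semi-implicit Euler integration of
# I*theta'' = -k*(theta - lambda(t)) - b*theta', with a ramp shift of lambda.

def simulate_lambda_model(lam_of_t, steps, dt=0.001,
                          inertia=0.1, stiffness=30.0, damping=1.5):
    """Integrate a single joint driven by a shifting threshold lambda(t)."""
    theta, omega, trajectory = 0.0, 0.0, []
    for i in range(steps):
        torque = -stiffness * (theta - lam_of_t(i * dt)) - damping * omega
        omega += (torque / inertia) * dt    # update velocity first
        theta += omega * dt                 # then position (semi-implicit)
        trajectory.append(theta)
    return trajectory

# A ramp of lambda from 0 to 1 rad over 0.2 s yields a smooth movement whose
# endpoint settles near the new equilibrium.
traj = simulate_lambda_model(lambda t: min(t / 0.2, 1.0), steps=3000)
```

In the paper's framework, pathological patterns such as reversals and oscillations would arise from altering the slope (here, `stiffness`) of an agonist's characteristic mid-movement.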
Jerky spontaneous movements at term age in preterm infants who later developed cerebral palsy.
Kanemaru, Nao; Watanabe, Hama; Kihara, Hideki; Nakano, Hisako; Nakamura, Tomohiko; Nakano, Junji; Taga, Gentaro; Konishi, Yukuo
2014-08-01
Assessment of spontaneous movements in infants has been a powerful predictor of cerebral palsy (CP). Recent advancements in computer-based video analysis can provide detailed information about the properties of spontaneous movements. The aim of this study was to investigate the relationship between spontaneous movements of the 4 limbs at term age and the development of CP at 3 years of age by using a computer-based video analysis system. We analyzed video recordings of spontaneous movements at 36-44 weeks postmenstrual age (PMA) for 145 infants who were born preterm (22-36 weeks PMA with birthweights of 460-1498g). Sixteen of the infants developed CP by 3 years of age, while 129 developed normally. We compared 6 movement indices calculated from 2-dimensional trajectories of all limbs between the 2 groups. We found that the indices of jerkiness were higher in the CP group than in the normal group (p<0.1 for arms and p<0.01 for legs). No decline was observed in the average velocity or the number of movement units in the CP group compared with the normal group. Jerkiness of spontaneous movements at term age provides additional information for predicting CP in infants born preterm.
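A jerkiness index of the kind compared above can be sketched as the mean magnitude of jerk, the third finite difference of limb position over time. The exact index definitions used by the authors are not given in the abstract, so the formula below is a common, hedged stand-in.

```python
import math

# Mean jerk magnitude of a sampled 2-D limb trajectory: differentiate the
# position three times by finite differences and average the vector norm.

def mean_jerk(xs, ys, dt):
    """Mean jerk magnitude (units of length/s^3) of a 2-D trajectory."""
    def third_diff(vals):
        d = [(b - a) / dt for a, b in zip(vals, vals[1:])]   # velocity
        d = [(b - a) / dt for a, b in zip(d, d[1:])]         # acceleration
        return [(b - a) / dt for a, b in zip(d, d[1:])]      # jerk
    jx, jy = third_diff(xs), third_diff(ys)
    return sum(math.hypot(a, b) for a, b in zip(jx, jy)) / len(jx)
```

A smooth, constant-acceleration movement scores zero, while abrupt velocity changes inflate the index, which matches the intuition that jerky spontaneous movements are distinguishable from smooth ones.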
Destabilizing effects of visual environment motions simulating eye movements or head movements
NASA Technical Reports Server (NTRS)
White, Keith D.; Shuman, D.; Krantz, J. H.; Woods, C. B.; Kuntz, L. A.
1991-01-01
In the present paper, we explore effects on the human of exposure to a visual virtual environment which has been enslaved to simulate the human user's head movements or eye movements. Specifically, we have studied the capacity of our experimental subjects to maintain stable spatial orientation in the context of moving their entire visible surroundings by using the parameters of the subjects' natural movements. Our index of the subjects' spatial orientation was the extent of involuntary sways of the body while attempting to stand still, as measured by translations and rotations of the head. We also observed, informally, their symptoms of motion sickness.
Respiratory effort energy estimation using Doppler radar.
Shahhaidar, Ehsaneh; Yavari, Ehsan; Young, Jared; Boric-Lubecke, Olga; Stickley, Cris
2012-01-01
Human respiratory effort can be harvested to power wearable biosensors and mobile electronic devices. The very first step toward designing a harvester is to estimate available energy and power. This paper describes an estimation of the available power and energy due to the movements of the torso during breathing, using Doppler radar by detecting breathing rate, torso displacement, torso movement velocity and acceleration along the sagittal movement of the torso. The accuracy of the detected variables is verified by two reference methods. The experimental result obtained from a healthy female human subject shows that the available power from circumferential movement can be higher than the power from the sagittal movement.
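A first-pass estimate of the available power described above can be sketched by modeling the torso surface as a sinusoidal displacement driving an effective mass, then averaging the magnitude of mechanical power over a breathing cycle. All numbers and the effective-mass simplification are mine, not values from the paper.

```python
import math

# Average |F*v| = m*|a(t)*v(t)| for x(t) = A*sin(2*pi*f*t), sampled over one
# breathing period; this bounds what a torso-movement harvester could extract.

def average_available_power(amplitude_m, freq_hz, mass_kg, samples=10000):
    """Average magnitude of mechanical power (W) of sinusoidal torso motion."""
    w = 2 * math.pi * freq_hz
    total = 0.0
    for i in range(samples):
        t = i / (samples * freq_hz)                  # spans one period
        v = amplitude_m * w * math.cos(w * t)        # velocity
        a = -amplitude_m * w * w * math.sin(w * t)   # acceleration
        total += abs(mass_kg * a * v)
    return total / samples
```

Analytically this average equals m·A²·ω³/π, so the numerical sum doubles as a sanity check on the closed form.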
Di Dio, Cinzia; Ardizzi, Martina; Massaro, Davide; Di Cesare, Giuseppe; Gilli, Gabriella; Marchetti, Antonella; Gallese, Vittorio
2016-01-01
Movement perception and its role in aesthetic experience have been often studied, within empirical aesthetics, in relation to the human body. No such specificity has been defined in neuroimaging studies with respect to contents lacking a human form. The aim of this work was to explore, through functional magnetic resonance imaging (fMRI), how perceived movement is processed during the aesthetic judgment of paintings using two types of content: human subjects and scenes of nature. Participants, untutored in the arts, were shown the stimuli and asked to make aesthetic judgments. Additionally, they were instructed to observe the paintings and to rate their perceived movement in separate blocks. Observation highlighted spontaneous processes associated with aesthetic experience, whereas movement judgment outlined activations specifically related to movement processing. The ratings recorded during aesthetic judgment revealed that nature scenes received higher scores than human content paintings. The imaging data showed similar activation, relative to baseline, for all stimuli in the three tasks, including activation of occipito-temporal areas, posterior parietal, and premotor cortices. Contrast analyses within the aesthetic judgment task showed that human content activated, relative to nature, precuneus, fusiform gyrus, and posterior temporal areas, whose activation was prominent for dynamic human paintings. In contrast, nature scenes activated, relative to human stimuli, occipital and posterior parietal cortex/precuneus, involved in visuospatial exploration and pragmatic coding of movement, as well as central insula. Static nature paintings further activated, relative to dynamic nature stimuli, central and posterior insula.
These findings support the idea that the aesthetic evaluation of artworks depicting both human subjects and nature scenes involves a motor component, and that the associated neural processes occur quite spontaneously in the viewer. Furthermore, considering the functional roles of posterior and central insula, we suggest that nature paintings may evoke aesthetic processes requiring an additional proprioceptive and sensori-motor component implemented by “motor accessibility” to the represented scenario, which is needed to judge the aesthetic value of the observed painting. PMID:26793087
Do Curved Reaching Movements Emerge from Competing Perceptions? A Reply to van der Wel et al. (2009)
ERIC Educational Resources Information Center
Spivey, Michael J.; Dale, Rick; Knoblich, Guenther; Grosjean, Marc
2010-01-01
Spivey, Grosjean, and Knoblich (2005) reported smoothly curved reaching movements, via computer-mouse tracking, which suggested a continuously evolving flow of distributed lexical activation patterns into motor movement during a phonological competitor task. For example, when instructed to click the "candy," participants' mouse-cursor trajectories…
Data Movement Dominates: Advanced Memory Technology to Address the Real Exascale Power Problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergman, Keren
Energy is the fundamental barrier to Exascale supercomputing and is dominated by the cost of moving data from one point to another, not computation. Similarly, performance is dominated by data movement, not computation. The solution to this problem requires three critical technologies: 3D integration, optical chip-to-chip communication, and a new communication model. The central goal of the Sandia-led "Data Movement Dominates" project was to develop memory systems and new architectures based on these technologies that have the potential to lower the cost of local memory accesses by orders of magnitude and provide substantially more bandwidth. Only through such transformational advances can future systems reach the goals of Exascale computing within manageable power budgets. The Sandia-led team included co-PIs from Columbia University, Lawrence Berkeley Lab, and the University of Maryland. The Columbia effort of Data Movement Dominates focused on developing a physically accurate simulation environment and experimental verification for optically-connected memory (OCM) systems that can enable continued performance scaling through high-bandwidth capacity, energy-efficient bit-rate transparency, and time-of-flight latency. With OCM, memory device parallelism and total capacity can scale to match future high-performance computing requirements without sacrificing data-movement efficiency. When we consider systems with integrated photonics, links to memory can be seamlessly integrated with the interconnection network; in a sense, memory becomes a primary aspect of the interconnection network. At the core of the Columbia effort, toward expanding our understanding of OCM-enabled computing, we have created an integrated modeling and simulation environment that uniquely incorporates the physical behavior of the optical layer.
The PhoenixSim suite of design and software tools developed under this effort has enabled the co-design and performance evaluation of photonics-enabled OCM architectures on Exascale computing systems.
Adde, Lars; Helbostad, Jorunn; Jensenius, Alexander R; Langaas, Mette; Støen, Ragnhild
2013-08-01
This study evaluates the role of postterm age at assessment and the use of one or two video recordings for the detection of fidgety movements (FMs) and prediction of cerebral palsy (CP) using computer vision software. Recordings between 9 and 17 weeks postterm age from 52 preterm and term infants (24 boys, 28 girls; 26 born preterm) were used. Recordings were analyzed using computer vision software. Movement variables, derived from differences between subsequent video frames, were used for quantitative analysis. Sensitivities, specificities, and area under the curve were estimated for the first and second recording, or a mean of both. FMs were classified based on the Prechtl approach of general movement assessment. CP status was reported at 2 years. Nine children developed CP, and all of their recordings showed absent FMs. The mean variability of the centroid of motion (CSD) from two recordings was more accurate than using only one recording, and identified all children who were diagnosed with CP at 2 years. Age at assessment did not influence the detection of FMs or prediction of CP. The accuracy of computer vision techniques in identifying FMs and predicting CP based on two recordings should be confirmed in future studies.
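The frame-difference movement variables mentioned above start from a simple primitive: threshold the absolute difference between consecutive grayscale frames and track the centroid of the changed pixels; the frame-to-frame variability of that centroid gives a quantity like the CSD. The function name, threshold, and list-of-lists frame format below are illustrative assumptions.

```python
# Centroid of motion between two grayscale frames (lists of pixel rows):
# pixels whose intensity changed by more than `threshold` are averaged.

def motion_centroid(prev_frame, frame, threshold=10):
    """Centroid (row, col) of changed pixels, or None if nothing moved."""
    rows = cols = count = 0
    for r, (row0, row1) in enumerate(zip(prev_frame, frame)):
        for c, (p0, p1) in enumerate(zip(row0, row1)):
            if abs(p1 - p0) > threshold:
                rows, cols, count = rows + r, cols + c, count + 1
    if count == 0:
        return None
    return rows / count, cols / count
```

Computing this centroid for every frame pair of a recording, then taking the standard deviation of the resulting series, yields a variability measure of the kind used to flag absent fidgety movements.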
Significance of vestibular and proprioceptive afferentation in the regulation of human posture
NASA Technical Reports Server (NTRS)
Gurfinkel, V. S.
1980-01-01
Viewpoints on the vertical human posture and the relation between postural adaptation during voluntary movements and the guarantee of stable locomotor movements are examined. Various complex sensory systems are discussed.
Gugliellmelli, Eugenio; Micera, Silvestro; Migliavacca, Francesco; Pedotti, Antonio
2015-01-01
In Italy, biomechanics research and the analysis of human and animal movement have had a very long history, beginning with the exceptional pioneering work of Leonardo da Vinci. In 1489, da Vinci began investigating human anatomy, including an examination of human tendons, muscles, and the skeletal system. He continued this line of inquiry later in life, identifying what he called "the four powers--movement, weight, force, and percussion"--and how he thought they worked in the human body. His approach, by the way, was very modern--analyzing nature through anatomy, developing models for interpretation, and transferring this knowledge to bio-inspired machines.
Minho Won; Albalawi, Hassan; Xin Li; Thomas, Donald E
2014-01-01
This paper describes a low-power hardware implementation of movement decoding for a brain-computer interface. Our proposed hardware design is facilitated by two novel ideas: (i) an efficient feature extraction method based on a reduced-resolution discrete cosine transform (DCT), and (ii) a new dual look-up table hardware architecture that performs the discrete cosine transform without explicit multiplication. The proposed hardware implementation has been validated for movement decoding of electrocorticography (ECoG) signals using a Xilinx FPGA Zynq-7000 board. It achieves more than 56× energy reduction over a reference design using band-pass filters for feature extraction.
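In software terms, reduced-resolution DCT feature extraction amounts to keeping only the first few DCT-II coefficients of each signal window. The sketch below is a plain floating-point version; the paper's contribution is performing this in hardware with dual look-up tables instead of explicit multiplies, which this sketch does not attempt, and `n_coeffs=4` is an illustrative choice.

```python
import numpy as np

def dct_features(window, n_coeffs=4):
    """Return the first n_coeffs DCT-II coefficients of a signal window
    as a low-dimensional feature vector."""
    window = np.asarray(window, dtype=float)
    N = window.size
    k = np.arange(n_coeffs)[:, None]
    n = np.arange(N)[None, :]
    # DCT-II basis: cos(pi * k * (2n + 1) / (2N))
    basis = np.cos(np.pi * k * (2 * n + 1) / (2 * N))
    return basis @ window
```

Truncating to a handful of coefficients is what makes the hardware cheap: only a few rows of the basis ever need to be evaluated.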
NASA Technical Reports Server (NTRS)
Lindsey, Patricia F.
1994-01-01
In microgravity conditions mobility is greatly enhanced and body stability is difficult to achieve. Because of these difficulties, optimum placement and accessibility of objects and controls can be critical to required tasks on board shuttle flights or on the proposed space station. Anthropometric measurement of the maximum reach of occupants of a microgravity environment provides knowledge about maximum functional placement for tasking situations. Calculations for a full-body functional reach envelope for microgravity environments are therefore imperative. To this end, three-dimensional computer-modeled human figures, providing a method of anthropometric measurement, were used to locate the data points that define the full-body functional reach envelope. Virtual reality technology was utilized to enable an occupant of the microgravity environment to experience movement within the reach envelope while immersed in a simulated microgravity environment.
Estimating the mutual information of an EEG-based Brain-Computer Interface.
Schlögl, A; Neuper, C; Pfurtscheller, G
2002-01-01
An EEG-based Brain-Computer Interface (BCI) could be used as an additional communication channel between human thoughts and the environment. The efficacy of such a BCI depends mainly on the transmitted information rate. Shannon's communication theory was used to quantify the information rate of BCI data. For this purpose, experimental EEG data from four BCI experiments were analyzed off-line. Subjects imagined left and right hand movements during EEG recording from the sensorimotor area. Adaptive autoregressive (AAR) parameters were used as features of single-trial EEG and classified with linear discriminant analysis. The intra-trial variation as well as the inter-trial variability, the signal-to-noise ratio, the entropy of information, and the information rate were estimated. The entropy difference was used as a measure of the separability of two classes of EEG patterns.
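The SNR-based information estimate mentioned here can be sketched with Shannon's Gaussian-channel formula; the exact estimator in the paper may differ, and the one-classification-per-trial scaling below is a simplification.

```python
import math

def bits_per_trial(snr):
    """Shannon capacity of a Gaussian channel:
    0.5 * log2(1 + SNR) bits per independent sample."""
    return 0.5 * math.log2(1.0 + snr)

def info_rate(snr, trials_per_second):
    """Information rate in bits/s, assuming one independent
    classification per trial (an illustrative simplification)."""
    return bits_per_trial(snr) * trials_per_second
```

For example, an SNR of 3 yields 1 bit per trial, so two trials per second would transmit 2 bits/s.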
Using distributed partial memories to improve self-organizing collective movements.
Winder, Ransom; Reggia, James A
2004-08-01
Past self-organizing models of collectively moving "particles" (simulated bird flocks, fish schools, etc.) have typically been based on purely reflexive agents that have no significant memory of past movements. We hypothesized that giving such individual particles a limited distributed memory of past obstacles they encountered could lead to significantly faster travel between goal destinations. Systematic computational experiments using six terrains that had different arrangements of obstacles demonstrated that, at least in some domains, this conjecture is true. Furthermore, these experiments demonstrated that improved performance over time came not only from the avoidance of previously seen obstacles, but also (surprisingly) immediately after first encountering obstacles due to decreased delays in circumventing those obstacles. Simulations also showed that, of the four strategies we tested for removal of remembered obstacles when memory was full and a new obstacle was to be saved, none was better than random selection. These results may be useful in interpreting future experimental research on group movements in biological populations, and in improving existing methodologies for control of collective movements in computer graphics, robotic teams, particle swarm optimization, and computer games.
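A minimal sketch of the bounded, per-particle obstacle memory with random eviction, the removal strategy the experiments found to be as good as any other tested. The class and method names are hypothetical, not taken from the paper.

```python
import random

class ObstacleMemory:
    """Bounded memory of obstacle positions for one particle.
    When full, a stored obstacle is evicted at random before
    the new one is saved."""

    def __init__(self, capacity, rng=None):
        self.capacity = capacity
        self.items = []
        self.rng = rng or random.Random(0)

    def remember(self, obstacle):
        if obstacle in self.items:
            return  # already known; nothing to do
        if len(self.items) >= self.capacity:
            # random-eviction strategy
            self.items.pop(self.rng.randrange(len(self.items)))
        self.items.append(obstacle)
```

A particle's steering rule would then avoid not only currently sensed obstacles but also everything in `items`.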
Guger, C; Schlögl, A; Walterspacher, D; Pfurtscheller, G
1999-01-01
An EEG-based brain-computer interface (BCI) is a direct connection between the human brain and the computer. Such a communication system is needed by patients with severe motor impairments (e.g. late-stage amyotrophic lateral sclerosis) and has to operate in real time. This paper describes the selection of the appropriate components to construct such a BCI and focuses also on the selection of a suitable programming language and operating system. The multichannel system runs under Windows 95, equipped with a real-time kernel expansion to obtain reasonable real-time operation on a standard PC. Matlab controls the data acquisition and the presentation of the experimental paradigm, while Simulink is used to calculate the recursive least square (RLS) algorithm that describes the current state of the EEG in real time. First results of the new low-cost BCI show that the accuracy of differentiating imagination of left and right hand movement is around 95%.
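The RLS recursion at the core of such an adaptive-AR system can be sketched in a few lines. The model order, forgetting factor `lam`, and initialization `delta` below are illustrative choices, not the paper's settings.

```python
import numpy as np

def rls_ar(signal, order=2, lam=0.99, delta=100.0):
    """Recursive least squares estimate of time-varying AR coefficients.
    Returns one coefficient vector per sample (after the first `order`)."""
    w = np.zeros(order)
    P = delta * np.eye(order)
    coeffs = []
    for t in range(order, len(signal)):
        x = signal[t - order:t][::-1]      # most recent sample first
        y = signal[t]
        k = P @ x / (lam + x @ P @ x)      # gain vector
        e = y - w @ x                      # one-step prediction error
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
        coeffs.append(w.copy())
    return np.array(coeffs)
```

On a pure sinusoid, which obeys an exact second-order recursion, the estimates converge to the known AR(2) coefficients.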
Adaptive estimation of hand movement trajectory in an EEG based brain-computer interface system
NASA Astrophysics Data System (ADS)
Robinson, Neethu; Guan, Cuntai; Vinod, A. P.
2015-12-01
Objective. The various parameters that define a hand movement, such as its trajectory and speed, are encoded in distinct brain activities. Decoding this information from neurophysiological recordings is a less explored area of brain-computer interface (BCI) research. Applying non-invasive recordings such as electroencephalography (EEG) for decoding makes the problem more challenging, as the encoding is assumed to be deep within the brain and not easily accessible by scalp recordings. Approach. EEG-based BCI systems can be developed to identify the neural features underlying movement parameters, which can be further utilized to provide a detailed and well-defined control command set to a BCI output device. Real-time continuous control is better suited for practical BCI systems, and can be achieved by continuous adaptive reconstruction of the movement trajectory rather than by discrete classification of brain activity. In this work, we adaptively reconstruct/estimate the parameters of two-dimensional hand movement trajectory, namely movement speed and position, from multi-channel EEG recordings. The data for analysis were collected in an experiment involving center-out right-hand movement tasks in four different directions at two different speeds in random order. We estimate movement trajectory using a Kalman filter that models the relation between brain activity and recorded parameters based on a set of defined predictors. We propose a method to define these predictor variables that includes spatially, spectrally and temporally localized neural information and to select optimally informative variables. Main results. The proposed method yielded a correlation of 0.60 ± 0.07 between recorded and estimated data.
Further, incorporating the proposed predictor subset selection, the correlation achieved is 0.57 ± 0.07 (p < 0.004), with a significant gain in system stability as well as a dramatic (76%) reduction in the number of predictors, saving computational time. Significance. The proposed system provides real-time movement control using an EEG-based BCI, with control over movement speed and position. These correlations are higher than those of existing EEG-based techniques, and the differences are statistically significant, suggesting the applicability of the proposed method for efficient estimation of movement parameters and for continuous motor control.
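The Kalman-filter recursion used for trajectory estimation can be sketched as follows. Matrix names follow textbook convention; the paper's predictor definition and fitting procedure are not reproduced here, and all matrices are assumed to have been fit on training data.

```python
import numpy as np

def kalman_decode(Z, A, W, H, Q, x0):
    """Map a sequence of neural feature vectors Z to movement states.
    A: state transition, W: process noise covariance,
    H: observation model (state -> features), Q: observation noise."""
    x = x0.copy()
    P = W.copy()
    states = []
    for z in Z:
        # predict
        x = A @ x
        P = A @ P @ A.T + W
        # update with the neural observation
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        states.append(x.copy())
    return np.array(states)
```

In a decoding application the state vector would hold 2D position and speed, and H would be the learned regression from state to EEG-derived predictors.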
Advances in graphonomics: studies on fine motor control, its development and disorders.
Van Gemmert, Arend W A; Teulings, Hans-Leo
2006-10-01
During the past 20 years graphonomic research has become a major contributor to the understanding of human movement science. Graphonomic research investigates the relationship between the planning and generation of fine motor tasks, in particular, handwriting and drawing. Scientists in this field are at the forefront of using new paradigms to investigate human movement. The 16 articles in this special issue of Human Movement Science show that the field of graphonomics makes an important contribution to the understanding of fine motor control, motor development, and movement disorders. Topics discussed include writer's cramp, multiple sclerosis, Parkinson's disease, schizophrenia, drug-induced parkinsonism, dopamine depletion, dysgraphia, motor development, developmental coordination disorder, caffeine, alertness, arousal, sleep deprivation, visual feedback transformation and suppression, eye-hand coordination, pen grip, pen pressure, movement fluency, bimanual interference, dominant versus non-dominant hand, tracing, freehand drawing, spiral drawing, reading, typewriting, and automatic segmentation.
Modeling of human movement monitoring using Bluetooth Low Energy technology.
Mokhtari, G; Zhang, Q; Karunanithi, M
2015-01-01
Bluetooth Low Energy (BLE) is a wireless communication technology which can be used to monitor human movements. In this monitoring system, a BLE signal scanner scans the signal strength of BLE tags carried by people to infer human movement patterns within its monitoring zone. However, to the best of our knowledge, one main aspect of such a monitoring system has not yet been thoroughly investigated in the literature: how to build a sound theoretical model, based on tunable BLE communication parameters such as the scanning interval and the advertising interval, to enable the study and design of effective and efficient movement monitoring systems. In this paper, we propose and develop a statistical model based on Monte-Carlo simulation, which can be utilized to assess the impact of BLE technology parameters on the latency and efficiency of a movement monitoring system, and can thus support a more efficient system design.
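A minimal Monte-Carlo sketch of the kind of model described: estimating the mean time until a tag's advertisement first lands inside the scanner's scan window. It assumes a single advertising channel, no packet loss, and millisecond units, all simplifications of the actual BLE protocol and of the paper's model.

```python
import random

def discovery_latency(adv_interval, scan_interval, scan_window,
                      n_trials=10000, rng=None):
    """Monte-Carlo estimate of mean discovery latency (ms)."""
    rng = rng or random.Random(1)
    total = 0.0
    for _ in range(n_trials):
        offset = rng.uniform(0, scan_interval)  # random scanner phase
        t = rng.uniform(0, adv_interval)        # first advertisement time
        # retry until an advertisement falls inside the scan window
        while ((t + offset) % scan_interval) > scan_window:
            # next advertisement, plus a spec-style random delay
            t += adv_interval + rng.uniform(0, 10)
        total += t
    return total / n_trials
```

With the scan window equal to the scan interval (continuous scanning), the latency reduces to the mean wait for the first advertisement, about half the advertising interval; shrinking the window trades latency for scanner energy.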
McClure, Meredith L; Dickson, Brett G; Nicholson, Kerry L
2017-06-01
This study sought to identify critical areas for puma ( Puma concolor ) movement across the state of Arizona in the American Southwest, and to identify those most likely to be impacted by current and future human land uses, particularly expanding urban development and associated increases in traffic volume. Human populations in this region are expanding rapidly, and urban centers and busy roads may increasingly act as barriers to the demographic and genetic connectivity of large-bodied, wide-ranging carnivores such as pumas. Pumas' long-distance movements are likely to bring them into contact with human land uses, and their low tolerance both for and from humans may put them at risk unless opportunities for safe passage through or around human-modified landscapes are present. Brownian bridge movement models based on global positioning system collar data collected during bouts of active movement, together with linear mixed models, were used to model habitat quality for puma movement; a wall-to-wall application of circuit theory models was then used to produce a continuous statewide estimate of connectivity for puma movement and to identify pinch points, or bottlenecks, that may be most at risk from current and future traffic volume and expanding development. Rugged, shrub- and scrub-dominated regions offered high quality movement habitat for pumas, and pinch points with the greatest potential impacts from expanding development and traffic, although widely distributed, were particularly prominent to the north and east of the city of Phoenix and along interstate highways in the western portion of the state. These pinch points likely constitute important conservation opportunities: barriers to movement there may cause disproportionate loss of connectivity, but actions such as placement of wildlife crossing structures or conservation easements could enhance connectivity and prevent detrimental impacts before they occur.
Stochl, Jan; Croudace, Tim
2013-01-01
Why some humans prefer to rotate clockwise rather than anticlockwise is not well understood. This study aims to identify the predictors of the preferred rotation direction in humans. The variables hypothesised to influence rotation preference include handedness, footedness, sex, brain hemisphere lateralisation, and the Coriolis effect (which results from geospatial location on the Earth). An online questionnaire allowed us to analyse data from 1526 respondents in 97 countries. Factor analysis showed that the direction of rotation should be studied separately for local and global movements. Handedness, footedness, and the item hypothesised to measure brain hemisphere lateralisation are predictors of rotation direction for both global and local movements. Sex is a predictor of the direction of global rotation movements but not local ones, and both sexes tend to rotate clockwise. Geospatial location does not predict the preferred direction of rotation. Our study confirms previous findings concerning the influence of handedness, footedness, and sex on human rotation; our study also provides new insight into the underlying structure of human rotation movements and excludes the Coriolis effect as a predictor of rotation.
A Tale of Many Cities: Universal Patterns in Human Urban Mobility
Noulas, Anastasios; Scellato, Salvatore; Lambiotte, Renaud; Pontil, Massimiliano; Mascolo, Cecilia
2012-01-01
The advent of geographic online social networks such as Foursquare, where users voluntarily signal their current location, opens the door to powerful studies on human movement. In particular the fine granularity of the location data, with GPS accuracy down to 10 meters, and the worldwide scale of Foursquare adoption are unprecedented. In this paper we study urban mobility patterns of people in several metropolitan cities around the globe by analyzing a large set of Foursquare users. Surprisingly, while there are variations in human movement in different cities, our analysis shows that those are predominantly due to different distributions of places across different urban environments. Moreover, a universal law for human mobility is identified, which isolates as a key component the rank-distance, factoring in the number of places between origin and destination, rather than pure physical distance, as considered in some previous works. Building on our findings, we also show how a rank-based movement model accurately captures real human movements in different cities. PMID:22666339
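The rank-distance idea can be sketched as a movement model in a few lines: the probability of moving to a place depends on how many places are closer, not on raw distance. The inverse-rank weighting below is an assumed illustrative form, not the exact law fitted in the paper.

```python
import random

def rank_based_choice(current, places, rng=None):
    """Pick the next destination with probability proportional to 1/rank,
    where rank(u) orders candidate places by distance from `current`."""
    rng = rng or random.Random(0)

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    candidates = [p for p in places if p != current]
    ranked = sorted(candidates, key=lambda p: dist(current, p))
    weights = [1.0 / (i + 1) for i in range(len(ranked))]
    return rng.choices(ranked, weights=weights)[0]
```

Because only the ordering of distances matters, the same rule produces similar mobility statistics in a dense city center and a sparse suburb, which is the paper's central observation.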
Human cortical activity related to unilateral movements. A high resolution EEG study.
Urbano, A; Babiloni, C; Onorati, P; Babiloni, F
1996-12-20
In the present study a modern high resolution electroencephalography (EEG) technique was used to investigate the dynamic functional topography of human cortical activity related to simple unilateral internally triggered finger movements. The sensorimotor area (M1-S1) contralateral to the movement as well as the supplementary motor area (SMA) and to a lesser extent the ipsilateral M1-S1 were active during the preparation and execution of these movements. These findings suggest that both hemispheres may cooperate in both planning and production of simple unilateral volitional acts.
The effect of model uncertainty on cooperation in sensorimotor interactions
Grau-Moya, J.; Hez, E.; Pezzulo, G.; Braun, D. A.
2013-01-01
Decision-makers have been shown to rely on probabilistic models for perception and action. However, these models can be incorrect or partially wrong in which case the decision-maker has to cope with model uncertainty. Model uncertainty has recently also been shown to be an important determinant of sensorimotor behaviour in humans that can lead to risk-sensitive deviations from Bayes optimal behaviour towards worst-case or best-case outcomes. Here, we investigate the effect of model uncertainty on cooperation in sensorimotor interactions similar to the stag-hunt game, where players develop models about the other player and decide between a pay-off-dominant cooperative solution and a risk-dominant, non-cooperative solution. In simulations, we show that players who allow for optimistic deviations from their opponent model are much more likely to converge to cooperative outcomes. We also implemented this agent model in a virtual reality environment, and let human subjects play against a virtual player. In this game, subjects' pay-offs were experienced as forces opposing their movements. During the experiment, we manipulated the risk sensitivity of the computer player and observed human responses. We found not only that humans adaptively changed their level of cooperation depending on the risk sensitivity of the computer player but also that their initial play exhibited characteristic risk-sensitive biases. Our results suggest that model uncertainty is an important determinant of cooperation in two-player sensorimotor interactions. PMID:23945266
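The stag-hunt structure and the effect of an optimistic opponent model can be sketched with a simple expected-utility rule. The payoff values and the additive `optimism` bias below are illustrative, not those used in the study.

```python
def stag_hunt_choice(p_partner_cooperates, optimism=0.0, payoff=None):
    """Expected-utility choice in a stag-hunt game: cooperating (hunting
    the stag) pays off only if the partner also cooperates. `optimism`
    biases the belief about the partner upward, the kind of deviation
    the simulations found to promote cooperation."""
    if payoff is None:
        # rows: my action (C/D), columns: partner's action (C/D)
        payoff = {"CC": 4, "CD": 0, "DC": 3, "DD": 3}
    p = min(1.0, p_partner_cooperates + optimism)
    eu_cooperate = p * payoff["CC"] + (1 - p) * payoff["CD"]
    eu_defect = p * payoff["DC"] + (1 - p) * payoff["DD"]
    return "cooperate" if eu_cooperate >= eu_defect else "defect"
```

With these payoffs an unbiased agent defects at a 50% belief, while a modestly optimistic one cooperates, mirroring the convergence-to-cooperation result in the simulations.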
Connors, Brenda L.; Rende, Richard; Colton, Timothy J.
2014-01-01
The unique yield of collecting observational data on human movement has received increasing attention in a number of domains, including the study of decision-making style. As such, interest has grown in the nuances of core methodological issues, including the best ways of assessing inter-rater reliability. In this paper we focus on one key topic – the distinction between establishing reliability for the patterning of behaviors as opposed to the computation of raw counts – and suggest that reliability for each be compared empirically rather than determined a priori. We illustrate by assessing inter-rater reliability for key outcome measures derived from movement pattern analysis (MPA), an observational methodology that records body movements as indicators of decision-making style with demonstrated predictive validity. While reliability ranged from moderate to good for raw counts of behaviors reflecting each of two Overall Factors generated within MPA (Assertion and Perspective), inter-rater reliability for patterning (proportional indicators of each factor) was significantly higher and excellent (ICC = 0.89). Furthermore, patterning, as compared to raw counts, provided better prediction of observable decision-making process assessed in the laboratory. These analyses support the utility of an empirical approach when choosing between patterning and discrete counts of behaviors in determining inter-rater reliability of observable behavior. They also speak to the substantial reliability that may be achieved via application of theoretically grounded observational systems such as MPA that reveal thinking and action motivations via visible movement patterns. PMID:24999336
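The patterning transformation, proportional indicators rather than raw counts, is simple to state in code. This is a minimal sketch; the factor names are taken from the abstract, and the function name is illustrative.

```python
def patterning(raw_counts):
    """Convert raw behavior counts per factor into proportional
    'patterning' indicators (each factor's share of all observed
    behaviors), the measure found to be more reliable than raw counts."""
    total = sum(raw_counts.values())
    if total == 0:
        raise ValueError("no observed behaviors")
    return {factor: count / total for factor, count in raw_counts.items()}
```

Because proportions cancel out rater-specific differences in overall counting rate, two raters who segment behavior differently can still agree closely on patterning.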
Dynamic Neural Correlates of Motor Error Monitoring and Adaptation during Trial-to-Trial Learning
Tan, Huiling; Jenkinson, Ned
2014-01-01
A basic EEG feature upon voluntary movements in healthy human subjects is a β (13–30 Hz) band desynchronization followed by a postmovement event-related synchronization (ERS) over contralateral sensorimotor cortex. The functional implications of these changes remain unclear. We hypothesized that, because β ERS follows movement, it may reflect the degree of error in that movement, and the salience of that error to the task at hand. As such, the signal might underpin trial-to-trial modifications of the internal model that informs future movements. To test this hypothesis, EEG was recorded in healthy subjects while they moved a joystick-controlled cursor to visual targets on a computer screen, with different rotational perturbations applied between the joystick and cursor. We observed consistently lower β ERS in trials with large error, even when other possible motor confounds, such as reaction time, movement duration, and path length, were controlled, regardless of whether the perturbation was random or constant. There was a negative trial-to-trial correlation between the size of the absolute initial angular error and the amplitude of the β ERS, and this negative correlation was enhanced when other contextual information about the behavioral salience of the angular error, namely, the bias and variance of errors in previous trials, was additionally considered. These same features also had an impact on the behavioral performance. The findings suggest that the β ERS reflects neural processes that evaluate motor error and do so in the context of the prior history of errors. PMID:24741058
Pursuit tracks chase: exploring the role of eye movements in the detection of chasing
Träuble, Birgit
2015-01-01
We explore the role of eye movements in a chase detection task. Unlike previous studies, which focused on overall performance as indicated by response speed and chase detection accuracy, we decompose the search process into gaze events such as smooth eye movements and use a data-driven approach to separately describe these gaze events. We measured eye movements of four human subjects engaged in a chase detection task displayed on a computer screen. The subjects were asked to detect two chasing rings among twelve other randomly moving rings. Using principal component analysis and support vector machines, we examined the template and classification images that describe various stages of the detection process. We showed that the subjects mostly search for pairs of rings that move one after another in the same direction at a distance of 3.5–3.8 degrees. To find such pairs, the subjects first looked for regions with a high ring density and then pursued the rings in this region. Most of these groups consisted of two rings. Three subjects preferred to pursue the pair as a single object, while the remaining subject pursued the group by alternating gaze between the two individual rings. In the discussion, we argue that subjects do not compare the movement of the pursued pair to a single preformed template describing chasing motion. Rather, subjects bring certain hypotheses about what motion may qualify as a chase and then, through feedback, learn to look for a motion pattern that maximizes their performance. PMID:26401454
Correlations in state space can cause sub-optimal adaptation of optimal feedback control models.
Aprasoff, Jonathan; Donchin, Opher
2012-04-01
Control of our movements is apparently facilitated by an adaptive internal model in the cerebellum. It was long thought that this internal model implemented an adaptive inverse model and generated motor commands, but recently many have rejected that idea in favor of a forward model hypothesis. In theory, the forward model predicts the upcoming state during reaching movements so the motor cortex can generate appropriate motor commands. Recent computational models of this process rely on the optimal feedback control (OFC) framework of control theory. While OFC is a powerful tool for describing motor control, it does not describe adaptation. Some assume that adaptation of the forward model alone could explain motor adaptation, but this is widely understood to be overly simplistic. However, an adaptive optimal controller is difficult to implement. A reasonable alternative is to allow forward model adaptation to 're-tune' the controller. Our simulations show that, as expected, forward model adaptation alone does not produce optimal trajectories during reaching movements perturbed by force fields. However, they also show that re-optimizing the controller from the forward model can be sub-optimal. This is because, in a system with state correlations or redundancies, accurate prediction requires different information than optimal control. We find that adding noise to the movements that matches noise found in human data is enough to overcome this problem. However, since the state space for control of real movements is far more complex than in our simple simulations, the effects of correlations on re-adaptation of the controller from the forward model cannot be overlooked.
Reaction Time Correlations during Eye–Hand Coordination: Behavior and Modeling
Dean, Heather L.; Martí, Daniel; Tsui, Eva; Rinzel, John; Pesaran, Bijan
2011-01-01
During coordinated eye–hand movements, saccade reaction times (SRTs) and reach reaction times (RRTs) are correlated in humans and monkeys. Reaction times (RTs) measure the degree of movement preparation and can correlate with movement speed and accuracy. However, RTs can also reflect effector-nonspecific influences, such as motivation and arousal. We use a combination of behavioral psychophysics and computational modeling to identify plausible mechanisms for correlations in SRTs and RRTs. To disambiguate nonspecific mechanisms from mechanisms specific to movement coordination, we introduce a dual-task paradigm in which a reach and a saccade are cued with a stimulus onset asynchrony (SOA). We then develop several variants of integrate-to-threshold models of RT, which postulate that responses are initiated when the neural activity encoding effector-specific movement preparation reaches a threshold. The integrator models formalize hypotheses about RT correlations and make predictions for how each RT should vary with SOA. To test these hypotheses, we trained three monkeys to perform the eye–hand SOA task and analyzed their SRTs and RRTs. In all three subjects, RT correlations decreased with increasing SOA duration. Additionally, mean SRT decreased with decreasing SOA, revealing facilitation of saccades with simultaneous reaches, as predicted by the model. These results are not consistent with the predictions of the models with common modulation or common input but are compatible with the predictions of a model with mutual excitation between two effector-specific integrators. We propose that RT correlations are not simply attributable to motivation and arousal and are a signature of coordination. PMID:21325507
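A minimal sketch of the mutual-excitation variant: two noisy integrate-to-threshold units whose activities feed each other, producing correlated crossing times. All parameters are illustrative, not fitted to the monkey data.

```python
import random

def coupled_rts(drift_eye=3.0, drift_hand=2.5, coupling=0.5,
                threshold=1.0, noise=0.5, dt=0.001, rng=None):
    """Simulate one trial of two mutually excitatory integrators and
    return their threshold-crossing times (saccade RT, reach RT)."""
    rng = rng or random.Random(0)
    e = h = 0.0
    t = 0.0
    rt_e = rt_h = None
    while rt_e is None or rt_h is None:
        t += dt
        # Euler step with mutual excitation and Gaussian noise
        e += (drift_eye + coupling * h) * dt + noise * rng.gauss(0, dt ** 0.5)
        h += (drift_hand + coupling * e) * dt + noise * rng.gauss(0, dt ** 0.5)
        if rt_e is None and e >= threshold:
            rt_e = t
        if rt_h is None and h >= threshold:
            rt_h = t
    return rt_e, rt_h
```

Running many trials and correlating the two returned RTs would reproduce the qualitative prediction tested in the paper: coupling above zero yields correlated SRTs and RRTs, and a head start for one integrator (an SOA) weakens the correlation.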
Professionalizing a Global Social Movement: Universities and Human Rights
ERIC Educational Resources Information Center
Suarez, David; Bromley, Patricia
2012-01-01
Research on the human rights movement emphasizes direct changes in nation-states, focusing on the efficacy of treaties and the role of advocacy in mitigating immediate violations. However, more than 140 universities in 59 countries established academic chairs, research centers, and programs for human rights from 1968-2000, a development that…
Altered corticospinal function during movement preparation in humans with spinal cord injury.
Federico, Paolo; Perez, Monica A
2017-01-01
In uninjured humans, transmission in the corticospinal pathway changes in a task-dependent manner during movement preparation. We investigated whether this ability is preserved in humans with incomplete chronic cervical spinal cord injury (SCI). Our results show that corticospinal excitability is altered in the preparatory phase of an upcoming movement when there is a need to suppress but not to execute rapid index finger voluntary contractions in individuals with SCI compared with controls. This is probably related to impaired transmission at a cortical and spinal level after SCI. Overall our findings indicate that deficits in corticospinal transmission in humans with chronic incomplete SCI are also present in the preparatory phase of upcoming movements. Corticospinal output is modulated in a task-dependent manner during the preparatory phase of upcoming movements in humans. Whether this ability is preserved after spinal cord injury (SCI) is unknown. In this study, we examined motor evoked potentials elicited by cortical (MEPs) and subcortical (CMEPs) stimulation of corticospinal axons and short-interval intracortical inhibition in the first dorsal interosseous muscle in the preparatory phase of a reaction time task where individuals with chronic incomplete cervical SCI and age-matched controls needed to suppress (NOGO) or initiate (GO) ballistic index finger isometric voluntary contractions. Reaction times were prolonged in SCI participants compared with control subjects and stimulation was provided ∼90 ms prior to movement onset in each group. During NOGO trials, both MEPs and CMEPs remained unchanged compared to baseline in SCI participants but were suppressed in control subjects. Notably, during GO trials, MEPs increased to a similar extent in both groups but CMEPs increased only in controls. The magnitude of short-interval intracortical inhibition increased in controls but not in SCI subjects during NOGO trials and decreased in both groups in GO trials. 
These novel observations reveal that humans with incomplete cervical SCI have an altered ability to modulate corticospinal excitability during movement preparation when there is a need to suppress but not to execute upcoming rapid finger movements, which is probably related to impaired transmission at a cortical and spinal level. Thus, deficits in corticospinal transmission after human SCI extend to the preparatory phase of upcoming movements. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Binocular eye movement control and motion perception: what is being tracked?
van der Steen, Johannes; Dits, Joyce
2012-10-19
We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes is generally attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions. To maintain binocular retinal correspondence, independent slow phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion for the independence of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion and resulted in a conjugate oblique motion of the eyes. We propose that the ability to make independent slow phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information is used and whether independent slow phase eye movements of each eye are produced during binocular tracking.
Binocular Eye Movement Control and Motion Perception: What Is Being Tracked?
van der Steen, Johannes; Dits, Joyce
2012-01-01
Purpose. We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes generally is attributed to few specialized lateral eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions. To maintain binocular retinal correspondence independent slow phase movements of each eye are produced. Methods. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal direction. Results. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion on independency of the movements of the two eyes was investigated with anti-correlated stimuli. The perceived global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion, as well as resulted in a conjugate oblique motion of the eyes. Conclusions. We propose that the ability to make independent slow phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information and independent slow phase eye movements of each eye are produced during binocular tracking. PMID:22997286
Inducing any virtual two-dimensional movement in humans by applying muscle tendon vibration.
Roll, Jean-Pierre; Albert, Frédéric; Thyrion, Chloé; Ribot-Ciscar, Edith; Bergenheim, Mikael; Mattei, Benjamin
2009-02-01
In humans, tendon vibration evokes an illusory sensation of movement. We developed a model mimicking the muscle afferent patterns corresponding to any two-dimensional movement and checked its validity by inducing illusory writing movements through specific sets of muscle vibrators. Three kinds of illusory movements were compared. The first was induced by vibration patterns copying the responses of muscle spindle afferents previously recorded by microneurography during imposed ankle movements. The other two were generated by the model. Sixteen different vibratory patterns were applied to 20 motionless volunteers in the absence of vision. After each vibration sequence, the participants were asked to name the corresponding graphic symbol and then to reproduce the illusory movement perceived. Results showed that the afferent patterns generated by the model were very similar to those recorded microneurographically during actual ankle movements (r=0.82). The model was also very efficient for generating afferent response patterns at the wrist level, provided the preferred sensory directions of the wrist muscle groups were first specified. Using recorded and modeled proprioceptive patterns to pilot sets of vibrators placed at the ankle or wrist evoked similar illusory movements, which were correctly identified by the participants in three quarters of the trials. Our proprioceptive model, based on neurosensory data recorded in behaving humans, should therefore be a useful tool in fields of research such as sensorimotor learning, rehabilitation, and virtual reality.
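The abstract does not give the model's equations, but a common simplification of muscle spindle behavior is a firing rate proportional to the half-wave-rectified projection of movement velocity onto each muscle's preferred sensory direction. A minimal sketch under that assumption (the directions, gain, and baseline below are hypothetical, not the authors' fitted values):

```python
# Four muscle groups with orthogonal preferred directions (hypothetical).
PREFERRED = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def afferent_rates(velocities, baseline=10.0, gain=40.0):
    """Model each muscle group's spindle firing rate as a baseline plus
    the half-wave-rectified projection of 2-D movement velocity onto the
    group's preferred sensory direction (lengthening only)."""
    rates = []
    for vx, vy in velocities:
        rates.append([baseline + gain * max(vx * px + vy * py, 0.0)
                      for px, py in PREFERRED])
    return rates

# Velocity samples tracing a rightward stroke, then an upward stroke:
pattern = afferent_rates([(1.0, 0.0), (0.0, 1.0)])
print(pattern[0])  # only the rightward-preferring group fires above baseline
```

Driving vibrators with such rate profiles, one per muscle group, is the gist of using modeled afferent patterns to evoke illusory trajectories.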
Turnbull, David
2011-01-01
Bacteria, pigs, rats, pots, plants, words, bones, stones, earrings, diseases, and genetic indicators of all varieties are markers and proxies for the complexity of interweaving trails and stories integral to understanding human movement and knowledge assemblage in Southeast Asia and around the world. Understanding human movement and knowledge assemblage is central to comprehending the genetic basis of disease, especially of a cancer like nasopharyngeal carcinoma. The problem is that the markers and trails, taken in isolation, do not all tell the same story. Human movement and knowledge assemblage are in constant interaction in an adaptive process of co-production with genes, terrain, climate, sea level changes, kinship relations, diet, materials, food and transport technologies, social and cognitive technologies, and knowledge strategies and transmission. Nasopharyngeal carcinoma is the outcome of an adaptive process involving physical, social, and genetic components. PMID:21272440
Searching for Survivors through Random Human-Body Movement Outdoors by Continuous-Wave Radar Array
Liu, Miao; Li, Zhao; Liang, Fulai; Jing, Xijing; Lu, Guohua; Wang, Jianqi
2016-01-01
It is a major challenge to search for survivors after chemical or nuclear leakage or explosions. At present, biological radar can be used to achieve this goal by detecting a survivor's respiration signal. However, owing to the random posture of an injured person at a rescue site, the radar wave may directly irradiate the person's head or feet, in which case it is difficult to detect the respiration signal. This paper describes a multichannel antenna-array technology, based on 24-GHz Doppler biological radar, that forms an omnidirectional detection system to address the random positioning of the detected person relative to the antenna. Furthermore, since survivors often make random body movements such as struggling and twitching, the slight movements of the body caused by breathing are obscured by these movements. Therefore, a method is proposed to identify random human-body movement by utilizing multichannel information to calculate the background variance of the environment in combination with a constant-false-alarm-rate detector. The outdoor experiments conducted indicate that the system can realize omnidirectional detection of random human-body movement and distinguish body movement from environmental interference such as the movement of leaves and grass. The methods proposed in this paper offer a promising way to search for survivors outdoors. PMID:27073860
Searching for Survivors through Random Human-Body Movement Outdoors by Continuous-Wave Radar Array.
Li, Chuantao; Chen, Fuming; Qi, Fugui; Liu, Miao; Li, Zhao; Liang, Fulai; Jing, Xijing; Lu, Guohua; Wang, Jianqi
2016-01-01
It is a major challenge to search for survivors after chemical or nuclear leakage or explosions. At present, biological radar can be used to achieve this goal by detecting a survivor's respiration signal. However, owing to the random posture of an injured person at a rescue site, the radar wave may directly irradiate the person's head or feet, in which case it is difficult to detect the respiration signal. This paper describes a multichannel antenna-array technology, based on 24-GHz Doppler biological radar, that forms an omnidirectional detection system to address the random positioning of the detected person relative to the antenna. Furthermore, since survivors often make random body movements such as struggling and twitching, the slight movements of the body caused by breathing are obscured by these movements. Therefore, a method is proposed to identify random human-body movement by utilizing multichannel information to calculate the background variance of the environment in combination with a constant-false-alarm-rate detector. The outdoor experiments conducted indicate that the system can realize omnidirectional detection of random human-body movement and distinguish body movement from environmental interference such as the movement of leaves and grass. The methods proposed in this paper offer a promising way to search for survivors outdoors.
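The paper's detector parameters are not given here, but a cell-averaging CFAR is the textbook form of a constant-false-alarm-rate detector: each cell is compared against a threshold scaled from the average power of surrounding training cells, with guard cells excluded. A sketch (window sizes and threshold scale are illustrative, not the authors' values):

```python
def ca_cfar(signal, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR: for each cell, estimate the local noise
    level from training cells around it (excluding guard cells) and
    flag the cell if it exceeds scale * noise estimate."""
    n = len(signal)
    hits = []
    for i in range(n):
        cells = [signal[j]
                 for j in range(i - guard - train, i + guard + train + 1)
                 if 0 <= j < n and abs(j - i) > guard]
        noise = sum(cells) / len(cells)
        if signal[i] > scale * noise:
            hits.append(i)
    return hits

# Flat noise floor of 1.0 with a strong return at index 20:
sig = [1.0] * 40
sig[20] = 12.0
print(ca_cfar(sig))  # -> [20]
```

Because the threshold adapts to the local background, the false-alarm rate stays roughly constant even when clutter (e.g. moving leaves and grass) raises the noise level.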
Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator
NASA Astrophysics Data System (ADS)
Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi
Human motor control is achieved by appropriate motor commands generated by the central nervous system. Visual target tracking tests are an effective method for analyzing human motor function. We previously examined, in a simulation study, the possibility of improving hand movement on visual target tracking with an additional assistant force. In this study, a method for compensating human hand movement on visual target tracking by adding an assistant force was proposed. The effectiveness of the compensation method was investigated through experiments with four healthy adults. The proposed compensator precisely improved the reaction time, the position error and the variability of the velocity of the human hand. The model-based compensator proposed in this study is constructed using measurement data on visual target tracking for each subject, so the properties of the hand movement of different subjects can be reflected in the structure of the compensator. The proposed method therefore has the potential to accommodate the individual properties of patients with various movement disorders caused by brain dysfunction.
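The paper's compensator is built from per-subject measurement data, which is not reproduced here. As a much-simplified illustration of the underlying idea that an error-proportional assistant force reduces tracking error, consider this first-order sketch (the plant model and gains are hypothetical):

```python
def track(target, assist_gain=0.0, human_gain=2.0, dt=0.01):
    """First-order tracking sketch: the hand closes a fraction of the
    target error each step; the compensator contributes an extra
    assistant force proportional to the same error."""
    pos, errs = 0.0, []
    for t in target:
        err = t - pos
        pos += (human_gain + assist_gain) * err * dt
        errs.append(abs(err))
    return sum(errs) / len(errs)          # mean absolute tracking error

step_target = [1.0] * 300                 # step change in target position
mean_err_plain = track(step_target)
mean_err_assisted = track(step_target, assist_gain=4.0)
print(mean_err_assisted < mean_err_plain)  # assistance lowers mean error
```

The actual compensator additionally shapes the assistant force from each subject's measured tracking dynamics rather than using a fixed gain.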
Odden, Morten; Athreya, Vidya; Rattan, Sandeep; Linnell, John D. C.
2014-01-01
Understanding the nature of the interactions between humans and wildlife is of vital importance for conflict mitigation. We equipped five leopards with GPS-collars in Maharashtra (4) and Himachal Pradesh (1), India, to study movement patterns in human-dominated landscapes outside protected areas. An adult male and an adult female were both translocated 52 km, and exhibited extensive, and directional, post release movements (straight line movements: male = 89 km in 37 days, female = 45 km in 5 months), until they settled in home ranges of 42 km2 (male) and 65 km2 (female). The three other leopards, two adult females and a young male were released close to their capture sites and used small home ranges of 8 km2 (male), 11 km2 and 15 km2 (females). Movement patterns were markedly nocturnal, with hourly step lengths averaging 339±9.5 m (SE) during night and 60±4.1 m during day, and night locations were significantly closer to human settlements than day locations. However, more nocturnal movements were observed among those three living in the areas with high human population densities. These visited houses regularly at nighttime (20% of locations <25 m from houses), but rarely during day (<1%). One leopard living in a sparsely populated area avoided human settlements both day and night. The small home ranges of the leopards indicate that anthropogenic food resources may be plentiful although wild prey is absent. The study provides clear insights into the ability of leopards to live and move in landscapes that are extremely modified by human activity. PMID:25390067
Matveev, V M; Dzaraev, Ch R; Persin, L S
2007-01-01
Sixty children aged 7-15 years with different types of occlusion (normal, distal, mesial and transverse) were selected. Using custom computer software and a MicroScribe-G2 3D digitizer (Immersion, USA) on an articulator with mounted casts, measurements were taken for different movements of the mandible: habitual occlusion, maximum forward movement and maximum lateral movements. Scores were calculated and the results interpreted.
Foot placement relies on state estimation during visually guided walking.
Maeda, Rodrigo S; O'Connor, Shawn M; Donelan, J Maxwell; Marigold, Daniel S
2017-02-01
As we walk, we must accurately place our feet to stabilize our motion and to navigate our environment. We must also achieve this accuracy despite imperfect sensory feedback and unexpected disturbances. In this study we tested whether the nervous system uses state estimation to beneficially combine sensory feedback with forward model predictions to compensate for these challenges. Specifically, subjects wore prism lenses during a visually guided walking task, and we used trial-by-trial variation in prism lenses to add uncertainty to visual feedback and induce a reweighting of this input. To expose altered weighting, we added a consistent prism shift that required subjects to adapt their estimate of the visuomotor mapping relationship between a perceived target location and the motor command necessary to step to that position. With added prism noise, subjects responded to the consistent prism shift with smaller initial foot placement error but took longer to adapt, compatible with our mathematical model of the walking task that leverages state estimation to compensate for noise. Much like when we perform voluntary and discrete movements with our arms, it appears our nervous system uses state estimation during walking to accurately guide our foot to the ground. Accurate foot placement is essential for safe walking. We used computational models and human walking experiments to test how our nervous system achieves this accuracy. We find that our control of foot placement beneficially combines sensory feedback with internal forward model predictions to accurately estimate the body's state. Our results match recent computational neuroscience findings for reaching movements, suggesting that state estimation is a general mechanism of human motor control. Copyright © 2017 the American Physiological Society.
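State estimation of the kind tested here is often formalized as a scalar Kalman update: the forward-model prediction and the sensory observation are combined with weights inversely proportional to their variances, so noisier vision (the prism-noise condition) shifts weight toward the forward model. A minimal sketch (the variance values are illustrative, not fitted to the study):

```python
def fuse(pred, pred_var, obs, obs_var):
    """Minimum-variance fusion of a forward-model prediction with a
    noisy sensory observation (the scalar Kalman update)."""
    k = pred_var / (pred_var + obs_var)   # Kalman gain
    est = pred + k * (obs - pred)
    var = (1 - k) * pred_var
    return est, var

# Reliable vision: the estimate is pulled strongly toward the observation.
e1, _ = fuse(pred=0.0, pred_var=4.0, obs=1.0, obs_var=1.0)
# Noisy (prism-perturbed) vision: the forward model dominates instead.
e2, _ = fuse(pred=0.0, pred_var=4.0, obs=1.0, obs_var=16.0)
print(e1, e2)  # -> 0.8 0.2
```

Down-weighting noisy vision predicts exactly the observed trade-off: smaller initial response to a visual shift, but slower adaptation to it.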
Monkeys and Humans Share a Common Computation for Face/Voice Integration
Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.
2011-01-01
Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576
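The race and superposition models can be contrasted with a deliberately simplified deterministic accumulator sketch: in a race, each unisensory process runs alone and the faster one triggers the response, while under superposition the auditory and visual activity sums linearly before threshold. (Drift rates and threshold below are hypothetical; the paper's models are fit to full response-time distributions.)

```python
def race_rt(drift_a, drift_v, threshold=1.0):
    """Race model: the response is triggered by whichever unisensory
    accumulator reaches threshold first."""
    return min(threshold / drift_a, threshold / drift_v)

def superposition_rt(drift_a, drift_v, threshold=1.0):
    """Superposition model: auditory and visual activity sum linearly
    before reaching the same threshold."""
    return threshold / (drift_a + drift_v)

# With weak, comparable unisensory evidence, linear summation predicts a
# larger multisensory speed-up than any race between the two components:
print(race_rt(0.5, 0.5), superposition_rt(0.5, 0.5))  # -> 2.0 1.0
```

This is the qualitative signature the authors report: observed multisensory benefits exceeded the race-model bound but matched linear summation.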
Impact of scaling and body movement on contaminant transport in airliner cabins
NASA Astrophysics Data System (ADS)
Mazumdar, Sagnik; Poussou, Stephane B.; Lin, Chao-Hsin; Isukapalli, Sastry S.; Plesniak, Michael W.; Chen, Qingyan
2011-10-01
Studies of contaminant transport have been conducted using small-scale models. This investigation used validated Computational Fluid Dynamics (CFD) to examine whether a small-scale water model could reveal the same contaminant transport characteristics as a full-scale airliner cabin. However, due to similarity problems and the difficulty of scaling the geometry, a perfect scale-up from a small water model to an actual air model was found to be impossible. The study also found that the seats and passengers tended to obstruct the lateral transport of the contaminants and confine their spread to the aisle of the cabin. The movement of a crew member or a passenger could carry a contaminant in its wake to as many rows as the crew member or passenger passed. This could be the reason why a SARS-infected passenger could infect fellow passengers seated seven rows away. To accurately simulate the contaminant transport, the shape of the moving body should be a human-like model.
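One way to see the similarity problem the authors describe is to ask what velocity a small water model must run at to match the full-scale Reynolds number in air; matching Re still leaves other dimensionless groups (e.g. buoyancy-related ones for thermal plumes) unmatched, which is why a perfect scale-up fails. A sketch with approximate room-temperature viscosities and an assumed 1:10 scale:

```python
NU_AIR, NU_WATER = 1.5e-5, 1.0e-6   # approx. kinematic viscosities, m^2/s

def matched_velocity(v_full, length_ratio):
    """Model velocity required to match the full-scale Reynolds number
    Re = v * L / nu, for a model smaller by length_ratio = L_full / L_model."""
    return v_full * length_ratio * NU_WATER / NU_AIR

# A 1:10 water model of a 1 m/s cabin air flow:
print(round(matched_velocity(1.0, 10.0), 3))  # -> 0.667 (m/s)
```

Reynolds similarity is attainable here, but enforcing it fixes the model velocity, leaving no freedom to also match buoyancy or transient wake effects.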
Hamas between Violence and Pragmatism
2009-06-01
Islamic Palestinian state. Nonetheless, as a movement, it has another far more existential objective. Once established, a movement needs to sustain...34 (related by al- Bukhari, Moslem, Abu-Dawood and al-Tarmadhi). F. Followers of Other Religions: The Islamic Resistance Movement Is A Humanistic ...Movement: Article Thirty-One: The Islamic Resistance Movement is a humanistic movement. It takes care of human rights and is guided by Islamic
Monie, A P; Price, R I; Lind, C R P; Singer, K P
2015-07-01
The aim of this study is to report the development and validation of a low back computer-aided combined movement examination protocol in normal individuals and to record treatment outcomes in cases of symptomatic degenerative lumbar spondylosis. The design was test-retest, following intervention. Self-report assessments and combined movement examination were used to record composite spinal motion before and following neurosurgical and pain medicine interventions. 151 normal individuals aged 20 to 69 years were assessed using combined movement examination between the L1 and S1 spinal levels to establish a reference range. Cases with degenerative low back pain and sciatica were assessed before and after therapeutic interventions with combined movement examination and a battery of self-report pain and disability questionnaires. Change scores for combined movement examination and all outcome measures were derived. Validation of the computer-aided combined movement examination, intraclass correlation coefficients with 95% confidence intervals, and least significant change scores indicated acceptable reliability of combined movement examination for recording lumbar movement in normal subjects. In both clinical cases, lumbar spine movement restrictions corresponded with self-report scores for pain and disability. Post-intervention outcomes all showed significant improvement, particularly in the most restricted combined movement examination direction. This study provides normative reference data for combined movement examination that may inform future clinical studies of the technique as a convenient objective surrogate for important clinical outcomes in lumbar degenerative spondylosis. It can be used with good reliability, may be well tolerated by individuals in pain, and appears to change in concert with validated measures of lumbar spinal pain, functional limitation and quality of life. Copyright © 2015 Elsevier Ltd. All rights reserved.
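The least significant change reported alongside the ICC is commonly derived from the standard error of measurement: SEM = SD * sqrt(1 - ICC), and LSC = z * sqrt(2) * SEM at the chosen confidence level. A sketch using those standard formulas (the SD and ICC values below are illustrative, not the study's):

```python
import math

def least_significant_change(sd, icc, z=1.96):
    """Smallest test-retest change exceeding measurement error at ~95%
    confidence: LSC = z * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC)."""
    sem = sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# E.g. a between-subject SD of 5 degrees with ICC = 0.90:
print(round(least_significant_change(5.0, 0.90), 2))  # -> 4.38 (degrees)
```

Any observed change smaller than the LSC cannot be distinguished from measurement noise, which is why the metric matters when interpreting post-intervention movement gains.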
Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys
Liu, Bing
2017-01-01
Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
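The core idea of estimating a linear filter from stimulus-response correlations can be illustrated in one temporal dimension with reverse correlation: correlating a white-noise input with the output at each lag recovers the filter taps up to the stimulus variance. This is a toy stand-in for the paper's 3D space-time estimation (the two-tap filter and signal lengths are invented):

```python
import random

def reverse_correlation(stimulus, response, max_lag):
    """Estimate a linear temporal filter as the mean stimulus-response
    product at each lag."""
    n = len(response)
    return [sum(stimulus[t - lag] * response[t] for t in range(lag, n))
            / (n - lag)
            for lag in range(max_lag)]

random.seed(0)
stim = [random.uniform(-1, 1) for _ in range(5000)]
# Response produced by a known two-tap filter [1.0, 0.5]:
resp = [0.0] + [stim[t] + 0.5 * stim[t - 1] for t in range(1, 5000)]
filt = reverse_correlation(stim, resp, 3)
# filt recovers the taps scaled by the stimulus variance (~1/3):
print([round(f, 2) for f in filt])
```

The study's filter is the same construction extended over two spatial dimensions and time, with smooth-pursuit eye velocity serving as the response signal.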
Individual Movement Strategies Revealed through Novel Clustering of Emergent Movement Patterns
NASA Astrophysics Data System (ADS)
Valle, Denis; Cvetojevic, Sreten; Robertson, Ellen P.; Reichert, Brian E.; Hochmair, Hartwig H.; Fletcher, Robert J.
2017-03-01
Understanding movement is critical in several disciplines but analysis methods often neglect key information by adopting each location as sampling unit, rather than each individual. We introduce a novel statistical method that, by focusing on individuals, enables better identification of temporal dynamics of connectivity, traits of individuals that explain emergent movement patterns, and sites that play a critical role in connecting subpopulations. We apply this method to two examples that span movement networks that vary considerably in size and questions: movements of an endangered raptor, the snail kite (Rostrhamus sociabilis plumbeus), and human movement in Florida inferred from Twitter. For snail kites, our method reveals substantial differences in movement strategies for different bird cohorts and temporal changes in connectivity driven by the invasion of an exotic food resource, illustrating the challenge of identifying critical connectivity sites for conservation in the presence of global change. For human movement, our method is able to reliably determine the origin of Florida visitors and identify distinct movement patterns within Florida for visitors from different places, providing near real-time information on the spatial and temporal patterns of tourists. These results emphasize the need to integrate individual variation to generate new insights when modeling movement data.
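The paper's statistical model is not reproduced here, but its core move, making the individual rather than the location the sampling unit, can be sketched by summarizing each track as a normalized site-to-site transition vector and then comparing or clustering individuals directly (the sites and tracks below are invented):

```python
from collections import Counter

def movement_profile(visits, sites):
    """Summarize one individual's track as a normalized vector of
    site-to-site transition frequencies, so the individual (not each
    location fix) is the unit of analysis."""
    pairs = Counter(zip(visits, visits[1:]))
    total = sum(pairs.values())
    return [pairs[(a, b)] / total for a in sites for b in sites]

def dist(u, v):
    """Euclidean distance between two movement profiles."""
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

sites = ["A", "B", "C"]
p1 = movement_profile(["A", "B", "A", "B", "A"], sites)  # commuter
p2 = movement_profile(["A", "B", "A", "B"], sites)       # commuter
p3 = movement_profile(["C", "C", "C", "C"], sites)       # resident
# The two commuters resemble each other more than the resident:
print(dist(p1, p2) < dist(p1, p3))  # -> True
```

Clustering such per-individual vectors is what exposes distinct movement strategies (e.g. bird cohorts, or visitor origins in the Twitter data) that location-level summaries average away.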
Kostrubiec, Viviane; Dumas, Guillaume; Zanone, Pier-Giorgio; Kelso, J A Scott
2015-01-01
The Virtual Teacher paradigm, a version of the Human Dynamic Clamp (HDC), is introduced into studies of learning patterns of inter-personal coordination. Combining mathematical modeling and experimentation, we investigate how the HDC may be used as a Virtual Teacher (VT) to help humans co-produce and internalize new inter-personal coordination patterns. Human learners produced rhythmic finger movements whilst observing a computer-driven avatar, animated by dynamic equations stemming from the well-established Haken-Kelso-Bunz (1985) and Schöner-Kelso (1988) models of coordination. We demonstrate that the VT is successful in shifting the pattern co-produced by the VT-human system toward any value (Experiment 1) and that the VT can help humans learn unstable relative phasing patterns (Experiment 2). Using transfer entropy, we find that information flow from one partner to the other increases when VT-human coordination loses stability. This suggests that variable joint performance may actually facilitate interaction and, in the long run, learning. The VT appears to be a promising tool for exploring basic learning processes involved in social interaction, unraveling the dynamics of information flow between interacting partners, and providing possible rehabilitation opportunities. PMID:26569608
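The avatar in these experiments is driven by the HKB equations; the relative-phase form of that model is compact enough to sketch directly. With no frequency detuning, the relative phase φ relaxes to a stable attractor such as in-phase coordination (the parameter values below are illustrative, not those used for the VT):

```python
import math

def hkb_relative_phase(phi0, a=1.0, b=0.5, d_omega=0.0, dt=0.01, steps=2000):
    """Euler-integrate the HKB relative-phase equation
    dphi/dt = d_omega - a*sin(phi) - 2*b*sin(2*phi)."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (d_omega - a * math.sin(phi) - 2 * b * math.sin(2 * phi))
    return phi

# Starting away from in-phase, the system relaxes to phi = 0:
print(round(hkb_relative_phase(1.0), 3))  # -> 0.0
```

In the VT paradigm the clamp modifies such dynamics on-line, which is how the co-produced phase can be steered toward patterns (e.g. 90 degrees) that are unstable for unassisted human pairs.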
Rationality in Human Movement.
O'Brien, Megan K; Ahmed, Alaa A
2016-01-01
It has long been appreciated that humans behave irrationally in economic decisions under risk: they fail to objectively consider uncertainty, costs, and rewards and instead exhibit risk-seeking or risk-averse behavior. We hypothesize that poor estimates of motor variability (influenced by motor task) and distorted probability weighting (influenced by relevant emotional processes) contribute to characteristic irrationality in human movement decisions.
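The distorted probability weighting the authors invoke is commonly modeled with the Tversky-Kahneman one-parameter weighting function, which overweights small probabilities and underweights large ones. A sketch (γ = 0.61 is a commonly cited median estimate from the decision-making literature, not a value from this study):

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

# Small probabilities are overweighted, large ones underweighted:
print(round(weight(0.1), 3), round(weight(0.9), 3))  # -> 0.186 0.712
```

Applied to movement, the same distortion would make a rare but costly motor outcome (e.g. an occasional large endpoint error) loom larger than its objective frequency warrants.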
Automatic Gait Recognition for Human ID at a Distance
2004-11-01
at the modeling and understanding of human movement through image sequences. The ongoing interest in gait in a biometric is in a large part the wider...2.2 Model -Based Approaches...with Canonical Analysis (CA) [11]. At that stage, only one approach had used a model to analyze leg movement [12] as opposed to using human body shape
Posthuman Literacies: Young Children Moving in Time, Place and More-Than-Human Worlds
ERIC Educational Resources Information Center
Hackett, Abigail; Somerville, Margaret
2017-01-01
This paper examines the potential of posthumanism to enable a reconceptualisation of young children's literacies from the starting point of movement and sound in the more-than-human world. We propose movement as a communicative practice that always occurs as a more complex entanglement of relations within more-than-human worlds. Through our…
Complete low-cost implementation of a teleoperated control system for a humanoid robot.
Cela, Andrés; Yebes, J Javier; Arroyo, Roberto; Bergasa, Luis M; Barea, Rafael; López, Elena
2013-01-24
Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors are wirelessly transmitted via two ZigBee RF configurable modules installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling down while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system.
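The abstract says the back-accelerometer measurement is filtered with a Kalman filter without giving details; a scalar constant-state Kalman filter is the simplest version of that idea, trading responsiveness for noise rejection via the process/measurement noise ratio (the q and r values below are made up for illustration):

```python
import statistics

def kalman_1d(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter with a constant-state model: q is the
    process-noise variance, r the measurement-noise variance."""
    x, p = measurements[0], 1.0
    smoothed = []
    for z in measurements:
        p += q                # predict: uncertainty grows
        k = p / (p + r)       # Kalman gain
        x += k * (z - x)      # update toward the measurement
        p *= 1 - k
        smoothed.append(x)
    return smoothed

noisy = [0.0, 2.0, -1.5, 1.8, -0.2, 0.4, -0.6, 0.3]  # raw tilt readings
smooth = kalman_1d(noisy)
print(statistics.variance(smooth) < statistics.variance(noisy))  # -> True
```

The filtered tilt estimate is then a usable input for the fuzzy balance controller, whereas the raw accelerometer signal would jitter the servos.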
Hybrid BCI approach to control an artificial tibio-femoral joint.
Mercado, Luis; Rodriguez-Linan, Angel; Torres-Trevino, Luis M; Quiroz, G
2016-08-01
Brain-Computer Interfaces (BCIs) for disabled people should allow them to use their remaining functionalities as control possibilities. BCIs connect the brain with external devices to carry out the volition or intent of movement, even if the individual is unable to perform the task due to body impairments. In this work we fuse electromyographic (EMG) and electroencephalographic (EEG) activity in a framework called a "Hybrid BCI" (hBCI) to control the movement of a simulated tibio-femoral joint. Two mathematical models of a tibio-femoral joint are used to emulate the kinematic and dynamic behavior of the knee. The aim is to reproduce different velocities of the human gait cycle. The EEG signals are used to classify the user's intent, namely the velocity changes, while the surface EMG signals are used to estimate the amplitude of that intent. A multi-level controller is used to solve the trajectory-tracking problem involved. The lower level consists of an individual controller for each model; it solves tracking of the desired trajectory even under different velocities of the human gait cycle. The mid-level uses a combination of a logical operator and a finite state machine for switching between models. Finally, the highest level consists of a support vector machine to classify the desired activity.
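The mid-level finite state machine is not specified in the abstract; one plausible sketch treats gait velocities as states and classified EEG intents as events, ignoring any transition that would skip a velocity level, which acts as a cheap guard against misclassified EEG events (the state and event names are invented):

```python
# Allowed transitions: (current state, intent) -> next state.
TRANSITIONS = {
    ("slow", "speed_up"): "normal",
    ("normal", "speed_up"): "fast",
    ("fast", "slow_down"): "normal",
    ("normal", "slow_down"): "slow",
}

def next_state(state, intent):
    """Advance the gait-velocity state machine; invalid or level-skipping
    intents leave the state unchanged."""
    return TRANSITIONS.get((state, intent), state)

state = "slow"
for intent in ["speed_up", "speed_up", "slow_down", "speed_up"]:
    state = next_state(state, intent)
print(state)  # -> fast
```

Each state would then select the corresponding joint model and low-level tracking controller.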
Decoding Saccadic Directions Using Epidural ECoG in Non-Human Primates
2017-01-01
A brain-computer interface (BCI) can be used to restore some communication as an alternative interface for patients suffering from locked-in syndrome. However, most BCI systems are based on SSVEP, P300, or motor imagery, and a diversity of BCI protocols would be needed for the various types of patients. In this paper, we trained 2 non-human primates on the choice saccade (CS) task and recorded their brain signals using an epidural electrocorticogram (eECoG) to predict eye movement direction. We successfully predicted the direction of the upcoming eye movement using a support vector machine (SVM) with the brain signals recorded after the directional cue onset and before the saccade execution. The mean accuracies were 80% for 2 directions and 43% for 4 directions. We also quantified the spatial-spectro-temporal contribution ratio using SVM recursive feature elimination (RFE). The channels over the frontal eye field (FEF), supplementary eye field (SEF), and superior parietal lobule (SPL) were dominantly used for classification. The α-band in the spectral domain, and the time bins just after the directional cue onset and just before the saccade execution, were most useful for prediction. A saccade-based BCI paradigm can be projected into 2D space and will hopefully provide an intuitive and convenient communication platform for users. PMID:28665058
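SVM-RFE ranks features by repeatedly retraining a linear classifier and discarding the feature with the smallest weight magnitude. The sketch below keeps that elimination loop but substitutes a plain perceptron for the SVM so it stays self-contained (the data and feature layout are invented; in the paper the features are channel-band-time bins):

```python
def train_linear(X, y, epochs=20, lr=0.1):
    """Plain perceptron as a stand-in for the linear SVM in SVM-RFE."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * sum(wi * xij for wi, xij in zip(w, xi)) <= 0:
                w = [wi + lr * yi * xij for wi, xij in zip(w, xi)]
    return w

def rfe_ranking(X, y):
    """Recursive feature elimination: retrain on the remaining features
    and drop the one with the smallest |weight|; returns feature indices
    from least to most informative."""
    active = list(range(len(X[0])))
    dropped = []
    while len(active) > 1:
        Xa = [[row[j] for j in active] for row in X]
        w = train_linear(Xa, y)
        worst = min(range(len(active)), key=lambda j: abs(w[j]))
        dropped.append(active.pop(worst))
    return dropped + active

# Feature 0 is near-constant noise; features 1 and 2 carry the class signal.
X = [[0.1, 1.0, 0.9], [0.2, -1.0, -1.1], [0.1, 1.1, 1.0], [0.2, -0.9, -1.0]]
y = [1, -1, 1, -1]
ranking = rfe_ranking(X, y)
print(ranking)  # the noise feature (0) is eliminated first
```

The order of elimination is what yields the contribution ranking over channels, bands, and time bins reported in the paper.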
Integrating optical finger motion tracking with surface touch events.
MacRitchie, Jennifer; McPherson, Andrew P
2015-01-01
This paper presents a method of integrating two contrasting sensor systems for studying human interaction with a mechanical system, using piano performance as the case study. Piano technique requires both precise small-scale motion of fingers on the key surfaces and planned large-scale movement of the hands and arms. Where studies of performance often focus on one of these scales in isolation, this paper investigates the relationship between them. Two sensor systems were installed on an acoustic grand piano: a monocular high-speed camera tracking the position of painted markers on the hands, and capacitive touch sensors attach to the key surfaces which measure the location of finger-key contacts. This paper highlights a method of fusing the data from these systems, including temporal and spatial alignment, segmentation into notes and automatic fingering annotation. Three case studies demonstrate the utility of the multi-sensor data: analysis of finger flexion or extension based on touch and camera marker location, timing analysis of finger-key contact preceding and following key presses, and characterization of individual finger movements in the transitions between successive key presses. Piano performance is the focus of this paper, but the sensor method could equally apply to other fine motor control scenarios, with applications to human-computer interaction.
Integrating optical finger motion tracking with surface touch events
MacRitchie, Jennifer; McPherson, Andrew P.
2015-01-01
This paper presents a method of integrating two contrasting sensor systems for studying human interaction with a mechanical system, using piano performance as the case study. Piano technique requires both precise small-scale motion of fingers on the key surfaces and planned large-scale movement of the hands and arms. Where studies of performance often focus on one of these scales in isolation, this paper investigates the relationship between them. Two sensor systems were installed on an acoustic grand piano: a monocular high-speed camera tracking the position of painted markers on the hands, and capacitive touch sensors attached to the key surfaces, which measure the location of finger-key contacts. This paper highlights a method of fusing the data from these systems, including temporal and spatial alignment, segmentation into notes, and automatic fingering annotation. Three case studies demonstrate the utility of the multi-sensor data: analysis of finger flexion or extension based on touch and camera marker location, timing analysis of finger-key contact preceding and following key presses, and characterization of individual finger movements in the transitions between successive key presses. Piano performance is the focus of this paper, but the sensor method could equally apply to other fine motor control scenarios, with applications to human-computer interaction. PMID:26082732
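The temporal-alignment step mentioned above can be sketched minimally. The paper's actual fusion pipeline is not reproduced here; this sketch assumes the two streams detect the same key-press events one-to-one and estimates a constant clock offset as the median timestamp difference. All timestamps are hypothetical.

```python
import statistics

def estimate_offset(camera_events, touch_events):
    """Estimate the constant clock offset between two sensor streams.

    Assumes the i-th event in each list records the same physical key press;
    the median difference is robust to a few noisy detections.
    """
    diffs = [t - c for c, t in zip(camera_events, touch_events)]
    return statistics.median(diffs)

# Hypothetical timestamps (seconds): the touch-sensor clock runs ~0.12 s ahead.
camera = [1.002, 1.498, 2.010, 2.505, 3.001]
touch = [1.121, 1.619, 2.131, 2.624, 3.122]
offset = estimate_offset(camera, touch)
aligned_touch = [t - offset for t in touch]
```

After subtracting the offset, events from both streams share one timeline, which is the precondition for the note segmentation and fingering annotation described in the abstract.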
Complete Low-Cost Implementation of a Teleoperated Control System for a Humanoid Robot
Cela, Andrés; Yebes, J. Javier; Arroyo, Roberto; Bergasa, Luis M.; Barea, Rafael; López, Elena
2013-01-01
Humanoid robotics is a field of great research interest nowadays. This work implements a low-cost teleoperated system to control a humanoid robot, as a first step for further development and study of human motion and walking. A human suit is built, consisting of 8 sensors: 6 resistive linear potentiometers on the lower extremities and 2 digital accelerometers for the arms. The goal is to replicate the suit movements in a small humanoid robot. The data from the sensors are wirelessly transmitted via two ZigBee RF configurable modules, one installed on each device: the robot and the suit. Replicating the suit movements requires a robot stability control module to prevent the robot from falling while executing different actions involving knee flexion. This is carried out via a feedback control system with an accelerometer placed on the robot's back. The measurement from this sensor is filtered using a Kalman filter. In addition, a two-input fuzzy algorithm controlling five servo motors regulates the robot's balance. The humanoid robot is controlled by a medium-capacity processor, and a low computational cost is achieved for executing the different algorithms. Both the hardware and software of the system are based on open platforms. The successful experiments carried out validate the implementation of the proposed teleoperated system. PMID:23348029
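The abstract above mentions filtering the back-mounted accelerometer with a Kalman filter. As a hedged sketch (the paper's actual state model and noise parameters are not given), here is a minimal scalar Kalman filter smoothing a synthetic noisy tilt reading; the values of q, r, and the 5° tilt are illustrative.

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant signal.

    q: process noise variance, r: measurement noise variance.
    Returns the sequence of filtered estimates.
    """
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q              # predict: state assumed constant, uncertainty grows
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # correct with measurement z
        p = (1 - k) * p
        out.append(x)
    return out

rng = random.Random(1)
true_tilt = 5.0  # degrees; a hypothetical back-mounted accelerometer reading
noisy = [true_tilt + rng.gauss(0, 0.2) for _ in range(200)]
est = kalman_1d(noisy)
```

A small q relative to r makes the filter trust its prediction and average heavily over measurements, which suits a slowly varying tilt signal like the robot's torso inclination.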
NASA Astrophysics Data System (ADS)
Kuroki, Hayato; Ino, Shuichi; Nakano, Satoko; Hori, Kotaro; Ifukube, Tohru
The authors of this paper have been studying a real-time speech-to-caption system that uses speech recognition technology with a “repeat-speaking” method: a “repeat-speaker” listens to a lecturer's voice and speaks the lecturer's utterances back into a speech recognition computer. With this system, the accuracy of the captions was about 97% for Japanese-to-Japanese conversion, and the conversion time from voice to caption was about 4 seconds for English-to-English conversion at some international conferences, although achieving this performance was costly. In human communication, speech understanding depends not only on verbal information but also on non-verbal information such as the speaker's gestures and face and mouth movements. The authors therefore proposed briefly storing the information in a computer and then displaying the captions and the speaker's face movement images in a sequence chosen to improve comprehension. In this paper, we investigate the relationship between the display sequence and display timing of captions that contain speech recognition errors and the speaker's face movement images. The results show that the sequence “display the caption before the speaker's face image” improves comprehension of the captions, the sequence “display both simultaneously” yields an improvement of only a few percent, and the sequence “display the speaker's face image before the caption” yields almost no change. The sequence “display the caption 1 second before the speaker's face image” shows the most significant improvement of all the conditions.
Control of the seven-degree-of-freedom upper limb exoskeleton for an improved human-robot interface
NASA Astrophysics Data System (ADS)
Kim, Hyunchul; Kim, Jungsuk
2017-04-01
This study analyzes a practical scheme for controlling an exoskeleton robot with seven degrees of freedom (DOFs) that supports natural movements of the human arm. A redundant upper limb exoskeleton robot with seven DOFs is mechanically coupled to the human body such that it becomes a natural extension of the body. If the exoskeleton robot follows the movement of the human body synchronously, the energy exchange between the human and the robot will be reduced significantly. In order to achieve this, the redundancy of the human arm, which is represented by the swivel angle, should be resolved using appropriate constraints and applied to the robot. In a redundant 7-DOF upper limb exoskeleton, the pseudoinverse of the Jacobian with secondary objective functions is widely used to resolve the redundancy that defines the desired joint angles. A secondary objective function requires the desired joint angles for the movement of the human arm, and the angles are estimated by maximizing the projection of the longest principal axis of the manipulability ellipsoid of the human arm onto the virtual destination toward the head region. They are then fed into a muscle model with relative damping to achieve more realistic robot-arm movements. Various natural arm movements are recorded using a motion capture system, and the actual swivel angle is compared with that estimated by the proposed swivel-angle estimation algorithm. The results indicate that the proposed algorithm provides a precise reference for estimating the desired joint angles, with an error of less than 5°.
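The swivel angle that parameterizes the arm's redundancy can be computed directly from marker positions. The construction below is one common convention (rotation of the elbow about the shoulder-wrist axis, measured against a "down" reference direction), not necessarily the exact formulation used in this study; the posture coordinates are hypothetical.

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]]
def norm(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

def swivel_angle(shoulder, elbow, wrist, ref=(0.0, 0.0, -1.0)):
    """Swivel angle of the elbow about the shoulder-wrist axis (radians).

    ref is a reference direction (here: 'down'); the angle is measured in the
    plane normal to the shoulder-wrist axis. Conventions differ between
    papers; this is one common construction.
    """
    axis = norm(sub(wrist, shoulder))
    # Project the shoulder->elbow vector onto the plane normal to the axis.
    se = sub(elbow, shoulder)
    se_perp = norm(sub(se, [dot(se, axis) * a for a in axis]))
    ref_perp = norm(sub(list(ref), [dot(ref, axis) * a for a in axis]))
    # Signed angle between the two in-plane vectors.
    return math.atan2(dot(cross(ref_perp, se_perp), axis),
                      dot(ref_perp, se_perp))

# Hypothetical posture: shoulder-wrist axis along +x, elbow hanging straight down.
s, e, w = [0, 0, 0], [0.25, 0, -0.15], [0.5, 0, 0]
ang = swivel_angle(s, e, w)
```

For this posture the elbow lies along the "down" reference, so the swivel angle is zero; raising the elbow sideways rotates it about the shoulder-wrist axis and the angle grows accordingly.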
Emotor control: computations underlying bodily resource allocation, emotions, and confidence
Kepecs, Adam; Mensh, Brett D.
2015-01-01
Emotional processes are central to behavior, yet their deeply subjective nature has been a challenge for neuroscientific study as well as for psychiatric diagnosis. Here we explore the relationships between subjective feelings and their underlying brain circuits from a computational perspective. We apply recent insights from systems neuroscience—approaching subjective behavior as the result of mental computations instantiated in the brain—to the study of emotions. We develop the hypothesis that emotions are the product of neural computations whose motor role is to reallocate bodily resources mostly gated by smooth muscles. This “emotor” control system is analogous to the more familiar motor control computations that coordinate skeletal muscle movements. To illustrate this framework, we review recent research on “confidence.” Although familiar as a feeling, confidence is also an objective statistical quantity: an estimate of the probability that a hypothesis is correct. This model-based approach helped reveal the neural basis of decision confidence in mammals and provides a bridge to the subjective feeling of confidence in humans. These results have important implications for psychiatry, since disorders of confidence computations appear to contribute to a number of psychopathologies. More broadly, this computational approach to emotions resonates with the emerging view that psychiatric nosology may be best parameterized in terms of disorders of the cognitive computations underlying complex behavior. PMID:26869840
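The abstract above treats confidence as an objective statistical quantity: the probability that a choice is correct given the evidence. A minimal sketch under assumed equal-prior Gaussian evidence distributions (the means and sigma are illustrative, not taken from the paper):

```python
import math

def confidence(x, mu0=-1.0, mu1=1.0, sigma=1.0):
    """Statistical decision confidence for a two-choice task.

    Given evidence x drawn from one of two equal-prior Gaussians, choose the
    more likely hypothesis and return the posterior probability that the
    choice is correct.
    """
    def lik(mu):
        # Gaussian likelihood up to a constant factor (cancels in the ratio).
        return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    l0, l1 = lik(mu0), lik(mu1)
    choice = 1 if l1 > l0 else 0
    conf = max(l0, l1) / (l0 + l1)  # posterior probability of the chosen option
    return choice, conf
```

Ambiguous evidence (x near 0) yields confidence near 0.5, while strong evidence pushes it toward 1, matching the intuition that confidence tracks the reliability of the decision.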
Donnarumma, Francesco; Maisto, Domenico; Pezzulo, Giovanni
2016-01-01
How do humans and other animals face novel problems for which predefined solutions are not available? Human problem solving links to flexible reasoning and inference rather than to slow trial-and-error learning. It has received considerable attention since the early days of cognitive science, giving rise to well known cognitive architectures such as SOAR and ACT-R, but its computational and brain mechanisms remain incompletely known. Furthermore, it is still unclear whether problem solving is a “specialized” domain or module of cognition, in the sense that it requires computations that are fundamentally different from those supporting perception and action systems. Here we advance a novel view of human problem solving as probabilistic inference with subgoaling. In this perspective, key insights from cognitive architectures are retained, such as the importance of using subgoals to split problems into subproblems. However, here the underlying computations use probabilistic inference methods analogous to those that are increasingly popular in the study of perception and action systems. To test our model we focus on the widely used Tower of Hanoi (ToH) task, and show that our proposed method can reproduce characteristic idiosyncrasies of human problem solvers: their sensitivity to the “community structure” of the ToH and their difficulties in executing so-called “counterintuitive” movements. Our analysis reveals that subgoals have two key roles in probabilistic inference and problem solving. First, prior beliefs on (likely) useful subgoals carve the problem space and define an implicit metric for the problem at hand—a metric to which humans are sensitive. Second, subgoals are used as waypoints in the probabilistic problem solving inference and permit finding effective solutions; when they are unavailable, problem solving deficits arise. Our study thus suggests that a probabilistic inference scheme enhanced with subgoals provides a comprehensive framework to study problem solving and its deficits. PMID:27074140
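The role of subgoals as waypoints can be illustrated on the Tower of Hanoi. This toy sketch is not the authors' probabilistic inference model: it plans through a single hand-picked subgoal (the largest disk on its goal peg) with breadth-first search, merely to show how subgoal decomposition structures a plan.

```python
from collections import deque

def moves(state):
    # state: tuple of peg index (0-2) per disk, smallest disk first.
    for src in range(3):
        disks = [d for d, p in enumerate(state) if p == src]
        if not disks:
            continue
        top = disks[0]  # smallest (and only movable) disk on src
        for dst in range(3):
            if dst != src and all(state[d] != dst for d in range(top)):
                # Legal: no smaller disk already sits on dst.
                yield tuple(dst if d == top else p for d, p in enumerate(state))

def bfs(start, goal_test):
    # Shortest path in the state graph to any state satisfying goal_test.
    seen, frontier = {start: None}, deque([start])
    while frontier:
        s = frontier.popleft()
        if goal_test(s):
            path = []
            while s is not None:
                path.append(s)
                s = seen[s]
            return path[::-1]
        for n in moves(s):
            if n not in seen:
                seen[n] = s
                frontier.append(n)

def solve_with_subgoal(start, goal):
    """Plan via a subgoal (largest disk on its goal peg), then to the goal.

    The concatenated plan is not guaranteed shortest overall, mirroring how
    subgoal priors can shape, and bias, human solutions.
    """
    big = len(start) - 1
    leg1 = bfs(start, lambda s: s[big] == goal[big])
    leg2 = bfs(leg1[-1], lambda s: s == goal)
    return leg1[:-1] + leg2

start, goal = (0, 0, 0), (2, 2, 2)
plan = solve_with_subgoal(start, goal)
```

For the 3-disk problem this particular subgoal happens to lie on the optimal 7-move path, so the concatenated plan is optimal; a poorly chosen subgoal would lengthen it, which is one way subgoal priors can produce the "counterintuitive move" difficulties the abstract describes.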