Sample records for motion cueing algorithm

  1. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

    The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e. the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA-Langley, the Visual Motion Simulator (VMS). Proposed future developments by the authors in cueing algorithms are revealed. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
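
    The vestibular models referred to above are usually expressed as transfer functions from platform motion to sensed motion. The sketch below uses a deliberately simplified first-order high-pass stand-in for a semicircular-canal model; the model form and time constant are illustrative assumptions, not the models used in the NASA work. It only shows why a sustained rotation "washes out" of the sensed response.

```python
import numpy as np
from scipy import signal

# Toy stand-in for a semicircular-canal model: sensed angular velocity is the
# platform angular velocity passed through a high-pass (washout-like) dynamic.
# The time constant is illustrative only, not a value from the cited work.
TAU = 5.7  # s, assumed adaptation/canal time constant

canal = signal.TransferFunction([TAU, 0.0], [TAU, 1.0])  # H(s) = tau*s / (tau*s + 1)

t = np.linspace(0.0, 20.0, 2001)
omega_platform = np.where(t < 10.0, 5.0, 0.0)      # deg/s step held for 10 s
_, omega_sensed, _ = signal.lsim(canal, omega_platform, t)

# A sustained rotation "washes out" of the sensed response, which is why
# cueing algorithms compare sensed (not raw) motion between aircraft and simulator.
print(f"sensed rate at t = 9 s: {omega_sensed[900]:.2f} deg/s (input is 5.0)")
```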

  2. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm", is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Because the control law is time-varying, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real time requirement without degrading the quality of the motion cues.
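
    The "optimal algorithm" mentioned above derives its cueing filters from an optimal-control formulation whose solution involves a matrix Riccati equation. The sketch below shows only the time-invariant case, on a toy double-integrator platform model with assumed weights and no vestibular dynamics; it is not the NASA formulation, just the Riccati/LQR mechanics. A time-varying, iteratively updated variant is sketched under record 3 below.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy platform model: states [position, velocity], input = commanded acceleration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Assumed weights: penalize displacement/velocity excursions and control effort.
Q = np.diag([4.0, 1.0])
R = np.array([[0.5]])

# Solve A'P + PA - P B R^-1 B' P + Q = 0 and form the LQR gain K = R^-1 B' P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print("state-feedback gain K =", K)

# Closed-loop dynamics x_dot = (A - BK)x define a (time-invariant) cueing filter
# that washes the platform back toward neutral while tracking short-term cues.
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```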

  3. A Nonlinear, Human-Centered Approach to Motion Cueing with a Neurocomputing Solver

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Cardullo, Frank M.; Houck, Jacob A.

    2002-01-01

    This paper discusses the continuation of research into the development of new motion cueing algorithms first reported in 1999. In this earlier work, two viable approaches to motion cueing were identified: the coordinated adaptive washout algorithm or 'adaptive algorithm', and the 'optimal algorithm'. In this study, a novel approach to motion cueing is discussed that would combine features of both algorithms. The new algorithm is formulated as a linear optimal control problem, incorporating improved vestibular models and an integrated visual-vestibular motion perception model previously reported. A control law is generated from the motion platform states, resulting in a set of nonlinear cueing filters. The time-varying control law requires the matrix Riccati equation to be solved in real time. Therefore, in order to meet the real time requirement, a neurocomputing approach is used to solve this computationally challenging problem. Single degree-of-freedom responses for the nonlinear algorithm were generated and compared to the adaptive and optimal algorithms. Results for the heave mode show the nonlinear algorithm producing a motion cue with a time-varying washout, sustaining small cues for a longer duration and washing out larger cues more quickly. The addition of the optokinetic influence from the integrated perception model was shown to improve the response to a surge input, producing a specific force response with no steady-state washout. Improved cues are also observed for responses to a sway input. Yaw mode responses reveal that the nonlinear algorithm improves the motion cues by reducing the magnitude of negative cues. The effectiveness of the nonlinear algorithm as compared to the adaptive and linear optimal algorithms will be evaluated on a motion platform, the NASA Langley Research Center Visual Motion Simulator (VMS), and ultimately the Cockpit Motion Facility (CMF) with a series of pilot controlled maneuvers. A proposed experimental procedure is discussed. The results of this evaluation will be used to assess motion cueing performance.
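
    As a rough illustration of how a Riccati solution can be refined iteratively rather than re-solved from scratch every frame, the sketch below integrates the Riccati differential equation with explicit Euler steps until it settles at the steady-state solution. This is a simple stand-in for the neurocomputing solver described above, not the published network; the plant, weights, and step size are assumptions.

```python
import numpy as np

def riccati_step(P, A, B, Q, R_inv, dt):
    """One explicit-Euler step of dP/dt = A'P + PA - P B R^-1 B' P + Q."""
    residual = A.T @ P + P @ A - P @ B @ R_inv @ B.T @ P + Q
    return P + dt * residual

# Toy problem: in a time-varying setting the weights (or plant) may change every
# frame, and the solution is refined incrementally instead of re-solved.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([4.0, 1.0])
R_inv = np.array([[2.0]])          # R = 0.5

P = np.zeros((2, 2))               # start from zero and iterate toward steady state
for _ in range(20000):
    P = riccati_step(P, A, B, Q, R_inv, dt=1e-3)

K = R_inv @ B.T @ P
print("iterated gain K =", K)      # approaches the algebraic-Riccati LQR gain
```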

  4. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and also descriptions and flowcharts for subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms have a nonlinear gain algorithm implemented that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree-of-freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
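
    A minimal sketch of a third-order polynomial gain of the kind described above: the cubic coefficients are chosen so that small inputs see a chosen small-signal gain while the largest expected input lands exactly on an assumed platform limit. All numbers are illustrative, not the tuned NASA coefficient sets.

```python
import numpy as np

def cubic_gain(x, g0, x_max, y_max):
    """Odd third-order polynomial p(x) = a1*x + a3*x^3 with
    p'(0) = g0 (small-signal gain) and p(x_max) = y_max (platform limit).
    Choosing g0 <= 1.5 * y_max / x_max keeps p monotonic on [0, x_max]."""
    a1 = g0
    a3 = (y_max - g0 * x_max) / x_max**3
    return a1 * x + a3 * x**3

# Assumed numbers: aircraft surge acceleration up to 10 m/s^2 must fit a
# platform that can usefully render about 3 m/s^2.
aircraft_acc = np.linspace(-10.0, 10.0, 9)
platform_cmd = cubic_gain(aircraft_acc, g0=0.45, x_max=10.0, y_max=3.0)

for a, p in zip(aircraft_acc, platform_cmd):
    print(f"aircraft {a:6.1f} m/s^2  ->  platform {p:6.2f} m/s^2")
```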

  5. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm, which allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for such a modification to the cueing algorithms is that the location of the pilot's vestibular system must be taken into account, as opposed to accounting only for the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.
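
    Filtering at the pilot station rather than the platform centroid amounts to transferring the translational acceleration to the head location with rigid-body kinematics. A minimal sketch, with an assumed head-offset vector:

```python
import numpy as np

def acceleration_at_pilot(a_centroid, omega, alpha, r_pilot):
    """Rigid-body transfer of translational acceleration from the platform
    centroid to the pilot station:
        a_p = a_c + alpha x r + omega x (omega x r)
    a_centroid : acceleration of the centroid [m/s^2]
    omega      : angular velocity [rad/s]
    alpha      : angular acceleration [rad/s^2]
    r_pilot    : pilot-station position relative to the centroid [m]
    """
    return (a_centroid
            + np.cross(alpha, r_pilot)
            + np.cross(omega, np.cross(omega, r_pilot)))

# Assumed offset of the pilot's head from the platform centroid (metres).
r_head = np.array([1.2, -0.5, -0.9])

a_c   = np.array([0.0, 0.0, 0.3])     # centroid heave cue
omega = np.array([0.0, 0.05, 0.0])    # small pitch rate
alpha = np.array([0.0, 0.20, 0.0])    # pitch acceleration

print("acceleration at pilot station:", acceleration_at_pilot(a_c, omega, alpha, r_head))
```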

  6. Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.

    2012-01-01

    Atmospheric turbulence produces high-frequency accelerations in aircraft, typically greater than the response to pilot input. Motion-system-equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. To date, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high-bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response when the augmented channel is in place.
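
    A toy sketch of the parallel-channel idea described above: a primary washout path that scales and attenuates the aircraft response, plus an augmented turbulence channel that passes the turbulence-induced vertical acceleration through a higher-bandwidth filter. Frame rate, filter orders, break frequencies, and gains are all assumptions, not the NASA filter designs, and in practice the two channels would be tuned together.

```python
import numpy as np
from scipy import signal

fs = 60.0                                  # assumed simulation frame rate, Hz
t = np.arange(0.0, 30.0, 1.0 / fs)
rng = np.random.default_rng(0)

# Assumed vertical accelerations (m/s^2): low-frequency manoeuvre response plus
# band-limited high-frequency turbulence response from a parallel aircraft model.
a_manoeuvre  = 0.5 * np.sin(2 * np.pi * 0.1 * t)
a_turbulence = signal.lfilter(*signal.butter(2, [1.0, 8.0], "bandpass", fs=fs),
                              rng.standard_normal(t.size))

# Primary channel: scaled (assumed 0.4 gain) and washed out, which attenuates turbulence.
b_w, a_w = signal.butter(2, 0.2, "highpass", fs=fs)   # assumed washout break frequency
primary = signal.lfilter(b_w, a_w, 0.4 * (a_manoeuvre + a_turbulence))

# Augmented turbulence channel: the turbulence response bypasses the primary
# scaling and passes through a high-bandwidth filter at (near) unity gain.
b_t, a_t = signal.butter(1, 0.5, "highpass", fs=fs)
augmented = primary + signal.lfilter(b_t, a_t, a_turbulence)

band = slice(int(2 * fs), None)            # ignore the start-up transient
print("cue variance, primary channel only :", round(float(np.var(primary[band])), 3))
print("cue variance, with augmented channel:", round(float(np.var(augmented[band])), 3))
```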

  7. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    NASA Astrophysics Data System (ADS)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.

  8. Motion Cueing Algorithm Development: Initial Investigation and Redesign of the Algorithms

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Wu, Weimin; Cardullo, Frank M.; Houck, Jacob A. (Technical Monitor)

    2000-01-01

    In this project four motion cueing algorithms were initially investigated. The classical algorithm generated results with large distortion and delay and low magnitude. The NASA adaptive algorithm proved to be well tuned with satisfactory performance, while the UTIAS adaptive algorithm produced less desirable results. Modifications were made to the adaptive algorithms to reduce the magnitude of undesirable spikes. The optimal algorithm was found to have the potential for improved performance with further redesign. The center of simulator rotation was redefined. More terms were added to the cost function to enable more tuning flexibility. A new design approach using a Fortran/Matlab/Simulink setup was employed. A new semicircular canals model was incorporated in the algorithm. With these changes results show the optimal algorithm has some advantages over the NASA adaptive algorithm. Two general problems observed in the initial investigation required solutions. A nonlinear gain algorithm was developed that scales the aircraft inputs by a third-order polynomial, maximizing the motion cues while remaining within the operational limits of the motion system. A braking algorithm was developed to bring the simulator to a full stop at its motion limit and later release the brake to follow the cueing algorithm output.
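
    The braking algorithm mentioned above can be thought of as a supervisor that overrides the cueing command when the platform can no longer stop within its remaining stroke, then releases once the command is again compatible with the envelope. The one-axis sketch below uses assumed limits, a constant braking deceleration, and a simple trigger rule; it is illustrative only, not the implemented NASA logic.

```python
def braking_supervisor(pos, vel, acc_cmd, pos_limit, acc_brake, dt):
    """One-axis limit protection in the spirit of a braking algorithm: if the
    platform can no longer stop inside the remaining stroke, override the cueing
    command with a constant braking deceleration; the brake is released as soon
    as the command is again compatible with the envelope. The logic and all
    numbers here are illustrative assumptions."""
    margin = pos_limit - abs(pos)                 # stroke remaining toward the limit, m
    stop_dist = vel * vel / (2.0 * acc_brake)     # distance needed to stop, m
    moving_out = vel * pos >= 0.0                 # heading toward the nearer limit

    if moving_out and vel != 0.0 and stop_dist >= 0.8 * margin:   # 0.8: assumed safety factor
        acc_cmd = -acc_brake if vel > 0.0 else acc_brake

    vel += acc_cmd * dt
    pos += vel * dt
    return pos, vel, acc_cmd

# Tiny demo: a sustained 2 m/s^2 cue toward an assumed 0.5 m stroke limit.
pos, vel, dt = 0.0, 0.0, 0.01
for _ in range(300):
    pos, vel, _ = braking_supervisor(pos, vel, 2.0, pos_limit=0.5,
                                     acc_brake=4.0, dt=dt)
print(f"position after 3 s: {pos:.2f} m (limit 0.5 m), velocity {vel:.2f} m/s")
```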

  9. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. Control input analysis shows pilot-induced oscillations on a straight-in approach were less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
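
    One of the two workload measures above is power spectral density analysis of pilot control inputs. A minimal sketch using Welch's method on a synthetic stand-in for a recorded column input; the sampling rate, the signal itself, and the band threshold are assumptions.

```python
import numpy as np
from scipy import signal

fs = 50.0                                   # assumed control logging rate, Hz
t = np.arange(0.0, 120.0, 1.0 / fs)
rng = np.random.default_rng(1)

# Synthetic stand-in for a recorded pilot column input: low-frequency tracking
# activity plus a narrowband component suggestive of pilot-induced oscillation.
column = (0.3 * np.sin(2 * np.pi * 0.15 * t)
          + 0.1 * np.sin(2 * np.pi * 1.2 * t)
          + 0.02 * rng.standard_normal(t.size))

f, pxx = signal.welch(column, fs=fs, nperseg=1024)

# Band power above ~0.5 Hz serves here as a crude proxy for control workload /
# oscillatory activity; the threshold is an assumption.
hi = f >= 0.5
print("fraction of control power above 0.5 Hz: %.2f" % (pxx[hi].sum() / pxx.sum()))
```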

  10. Two cloud-based cues for estimating scene structure and camera calibration.

    PubMed

    Jacobs, Nathan; Abrams, Austin; Pless, Robert

    2013-10-01

    We describe algorithms that use cloud shadows as a form of stochastically structured light to support 3D scene geometry estimation. Taking video captured from a static outdoor camera as input, we use the relationship of the time series of intensity values between pairs of pixels as the primary input to our algorithms. We describe two cues that relate the 3D distance between a pair of points to the pair of intensity time series. The first cue results from the fact that two pixels that are nearby in the world are more likely to be under a cloud at the same time than two distant points. We describe methods for using this cue to estimate focal length and scene structure. The second cue is based on the motion of cloud shadows across the scene; this cue results in a set of linear constraints on scene structure. These constraints have an inherent ambiguity, which we show how to overcome by combining the cloud motion cue with the spatial cue. We evaluate our method on several time lapses of real outdoor scenes.
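
    The first cue can be illustrated with nothing more than the correlation between two pixels' intensity time series: points shadowed by the same clouds co-vary strongly, distant points do not. A toy sketch on synthetic data; the shadow model and numbers are assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000                                     # number of video frames

# Synthetic cloud-shadow signals: two nearby points share most of their shadow
# events, a distant point shares few of them.
base = (rng.random(T) < 0.3).astype(float)   # cloud over the local patch
near_a = 1.0 - 0.5 * base + 0.05 * rng.standard_normal(T)
near_b = 1.0 - 0.5 * base + 0.05 * rng.standard_normal(T)
far_c  = 1.0 - 0.5 * (rng.random(T) < 0.3) + 0.05 * rng.standard_normal(T)

def proximity_cue(x, y):
    """Correlation of two pixel intensity time series; higher values suggest the
    two scene points are more likely to lie under the same clouds (i.e. nearby)."""
    return np.corrcoef(x, y)[0, 1]

print("near pair correlation:", round(proximity_cue(near_a, near_b), 2))
print("far pair correlation: ", round(proximity_cue(near_a, far_c), 2))
```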

  11. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

    NASA Technical Reports Server (NTRS)

    Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

    2003-01-01

    A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal-control-based algorithm. Also, the study included the effects of transport delays and the compensation thereof. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports on the analysis of experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests which will be conducted during the spring of 2003. Therefore, only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers, the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.

  12. Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator

    NASA Astrophysics Data System (ADS)

    Rehmatullah, Faizan

    In this research, we develop a model-predictive-control-based motion drive algorithm for the driving simulator at the Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to formulate a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform and, by using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of the simulator workspace for every maneuver while reproducing the perceived vehicle motion. With its ability to handle nonlinear constraints, model predictive control allows us to incorporate workspace limitations.
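
    A minimal receding-horizon sketch of the MPC idea described above, using cvxpy with a toy one-axis platform model, an assumed reference cue, and assumed workspace limits; the actual algorithm also includes perception/vestibular modelling and the full simulator kinematics, which are omitted here.

```python
import numpy as np
import cvxpy as cp

dt, N = 0.05, 40                       # assumed step size and prediction horizon
A = np.array([[1.0, dt], [0.0, 1.0]])  # platform surge: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])    # input: commanded acceleration

a_ref = 1.5 * np.ones(N)               # desired specific-force cue over the horizon, m/s^2
pos_lim, vel_lim, acc_lim = 0.4, 0.6, 3.0   # assumed workspace limits

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

# Track the cue while keeping the platform near the centre of its workspace.
cost = cp.sum_squares(u[0, :] - a_ref) + 0.5 * cp.sum_squares(x[0, :])
constraints = [x[:, 0] == np.zeros(2)]
for k in range(N):
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(x[0, k + 1]) <= pos_lim,
                    cp.abs(x[1, k + 1]) <= vel_lim,
                    cp.abs(u[0, k]) <= acc_lim]

cp.Problem(cp.Minimize(cost), constraints).solve()
# In a receding-horizon scheme only the first command is applied each frame.
print("first acceleration command applied this frame:", round(float(u.value[0, 0]), 3))
```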

  13. A fast implementation of MPC-based motion cueing algorithms for mid-size road vehicle motion simulators

    NASA Astrophysics Data System (ADS)

    Bruschetta, M.; Maran, F.; Beghi, A.

    2017-06-01

    The use of dynamic driving simulators is constantly increasing in the automotive community, with applications ranging from vehicle development to rehab and driver training. The effectiveness of such devices is related to their capability to faithfully reproduce driving sensations; hence, it is crucial that the motion control strategies generate both realistic and feasible inputs to the platform. Such strategies are called motion cueing algorithms (MCAs). In recent years, several MCAs based on model predictive control (MPC) techniques have been proposed. The main drawback associated with the use of MPC is its computational burden, which may limit its application to high-performance dynamic simulators. In the paper, a fast, real-time implementation of an MPC-based MCA for a 9-DOF, high-performance platform is proposed. Effectiveness of the approach in managing the available working area is illustrated by presenting experimental results from an implementation on a real device with a 200 Hz control frequency.

  14. Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M.

    Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that involves identifying multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, resulting in partial trajectories that span a coherent 3D region in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach in the algorithm is to identify all possible coherent motion regions, then extract a subset of motion regions based on an innovative measure to automatically locate moving objects in crowded environments. The software reports a snapshot of the object, count, and derived statistics (count over time) from input video streams. The software can directly process videos streamed over the internet or directly from a hardware device (camera).

  15. Refinement of Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.

    2017-01-01

    The objective of this paper is to refine objective motion cueing criteria for commercial transport simulators based on pilots' performance in three flying tasks. Actuator hardware and software algorithms determine motion cues. Today, during a simulator qualification, engineers objectively evaluate only the hardware. Pilot inspectors subjectively assess the overall motion cueing system (i.e., hardware plus software); however, it is acknowledged that pinpointing any deficiencies that might arise to either hardware or software is challenging. ICAO 9625 has an Objective Motion Cueing Test (OMCT), which is now a required test in the FAA's part 60 regulations for new devices, evaluating the software and hardware together; however, it lacks accompanying fidelity criteria. Hosman has documented OMCT results for a statistical sample of eight simulators which is useful, but having validated criteria would be an improvement. In a previous experiment, we developed initial objective motion cueing criteria that this paper is trying to refine. Sinacori suggested simple criteria which are in reasonable agreement with much of the literature. These criteria often necessitate motion displacements greater than most training simulators can provide. While some of the previous work has used transport aircraft in their studies, the majority used fighter aircraft or helicopters. Those that used transport aircraft considered degraded flight characteristics. As a result, earlier criteria lean more towards being sufficient, rather than necessary, criteria for typical transport aircraft training applications. Considering the prevalence of 60-inch, six-legged hexapod training simulators, a relevant question is "what are the necessary criteria that can be used with the ICAO 9625 diagnostic?" This study adds to the literature as follows. First, it examines well-behaved transport aircraft characteristics, but in three challenging tasks. The tasks are equivalent to the ones used in our previous experiment, allowing us to directly compare the results and add to the previous data. Second, it uses the Vertical Motion Simulator (VMS), the world's largest vertical displacement simulator. This allows inclusion of relatively large motion conditions, much larger than a typical training simulator can provide. Six new motion configurations were used that explore the motion responses between the initial objective motion cueing boundaries found in a previous experiment and what current hexapod simulators typically provide. Finally, a sufficiently large pilot pool added statistical reliability to the results.

  16. Vertical Motion Simulator Experiment on Stall Recovery Guidance

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan; Lombaerts, Thomas; Stepanyan, Vahram; Kaneshige, John; Shish, Kimberlee; Robinson, Peter; Hardy, Gordon H.

    2017-01-01

    A stall recovery guidance system was designed to help pilots improve their stall recovery performance when the current aircraft state may be unrecognized under various complicating operational factors. Candidate guidance algorithms were connected to the split-cue pitch and roll flight directors that are standard on large transport commercial aircraft. A new thrust guidance algorithm and cue was also developed to help pilots prevent the combination of excessive thrust and nose-up stabilizer trim. The overall system was designed to reinforce the current FAA recommended stall recovery procedure. A general transport aircraft model, similar to a Boeing 757, with an extended aerodynamic database for improved stall dynamics simulation fidelity was integrated into the Vertical Motion Simulator at NASA Ames Research Center. A detailed study of the guidance system was then conducted across four stall scenarios with 30 commercial and 10 research test pilots, and the results are reported.

  17. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.

  18. The role of optical flow in automated quality assessment of full-motion video

    NASA Astrophysics Data System (ADS)

    Harguess, Josh; Shafer, Scott; Marez, Diego

    2017-09-01

    In real-world video data, such as full-motion video (FMV) taken from unmanned vehicles, surveillance systems, and other sources, various corruptions to the raw data are inevitable. This can be due to the image acquisition process, noise, distortion, and compression artifacts, among other sources of error. However, we desire methods to analyze the quality of the video to determine whether the underlying content of the corrupted video can be analyzed by humans or machines and to what extent. Previous approaches have shown that motion estimation, or optical flow, can be an important cue in automating this video quality assessment. However, there are many different optical flow algorithms in the literature, each with its own advantages and disadvantages. We examine the effect of the choice of optical flow algorithm (including baseline and state-of-the-art) on motion-based automated video quality assessment algorithms.
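
    A minimal sketch of extracting optical-flow-based features for quality assessment, using OpenCV's Farneback flow between consecutive frames. The particular statistics computed (mean and standard deviation of flow magnitude) and the file name are assumptions, not the paper's feature set.

```python
import cv2
import numpy as np

def flow_quality_features(video_path, max_pairs=50):
    """Per-frame-pair optical flow statistics (mean / std of flow magnitude)
    that a downstream quality model could consume. The statistics chosen here
    are an assumption, not the feature set used in the paper."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"could not read {video_path}")
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    feats = []
    while len(feats) < max_pairs:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)        # per-pixel flow magnitude
        feats.append((float(mag.mean()), float(mag.std())))
        prev_gray = gray
    cap.release()
    return feats

# Example usage with a hypothetical clip name:
# print(flow_quality_features("fmv_clip.mp4")[:3])
```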

  19. Evaluating Effectiveness of Modeling Motion System Feedback in the Enhanced Hess Structural Model of the Human Operator

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill; Cardullo, Frank; George, Gary; Kelly, Lon C.

    2009-01-01

    In order to use the Hess Structural Model to predict the need for certain cueing systems, George and Cardullo significantly expanded it by adding motion feedback to the model and incorporating models of the motion system dynamics, the motion cueing algorithm, and the vestibular system. This paper proposes a methodology to evaluate the effectiveness of these innovations by performing a comparison analysis of the model performance with and without the expanded motion feedback. The proposed methodology is composed of two stages. The first stage involves fine-tuning parameters of the original Hess structural model in order to match the actual control behavior recorded during the experiments at the NASA Visual Motion Simulator (VMS) facility. The parameter tuning procedure utilizes a new automated parameter identification technique, which was developed at the Man-Machine Systems Lab at SUNY Binghamton. In the second stage of the proposed methodology, the expanded motion feedback is added to the structural model. The resulting performance of the model is then compared to that of the original one. As proposed by Hess, metrics to evaluate the performance of the models include comparison against the crossover model's standards for the crossover frequency and phase margin of the overall man-machine system. Preliminary results indicate the advantage of having the model of the motion system and motion cueing incorporated into the model of the human operator. It is also demonstrated that the crossover frequency and the phase margin of the expanded model are well within the limits imposed by the crossover model.
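
    The crossover-model comparison above reduces to reading off the gain-crossover frequency and phase margin of the open-loop man-machine frequency response. A minimal sketch for an assumed crossover-model open loop of the form omega_c * exp(-tau*s) / s; the gain and delay values are illustrative assumptions.

```python
import numpy as np

# Assumed crossover-model open loop: Y_p * Y_c ~ (omega_c / s) * exp(-tau * s).
omega_c = 4.0     # rad/s, assumed gain-crossover target
tau = 0.25        # s, assumed effective pilot time delay

w = np.linspace(0.1, 20.0, 20000)                 # rad/s
L = omega_c / (1j * w) * np.exp(-1j * w * tau)    # open-loop frequency response

mag = np.abs(L)
idx = np.argmin(np.abs(mag - 1.0))                # gain-crossover frequency index
wc = w[idx]
phase_margin = 180.0 + np.degrees(np.angle(L[idx]))

print(f"crossover frequency: {wc:.2f} rad/s")
print(f"phase margin:        {phase_margin:.1f} deg")
```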

  20. Driver hand activity analysis in naturalistic driving studies: challenges, algorithms, and experimental studies

    NASA Astrophysics Data System (ADS)

    Ohn-Bar, Eshed; Martin, Sujitha; Trivedi, Mohan Manubhai

    2013-10-01

    We focus on vision-based hand activity analysis in the vehicular domain. The study is motivated by the overarching goal of understanding driver behavior, in particular as it relates to attentiveness and risk. First, the unique advantages and challenges for a nonintrusive, vision-based solution are reviewed. Next, two approaches for hand activity analysis, one relying on static (appearance only) cues and another on dynamic (motion) cues, are compared. The motion-cue-based hand detection uses temporally accumulated edges in order to maintain the most reliable and relevant motion information. The accumulated image is fitted with ellipses in order to produce the location of the hands. The method is used to identify three hand activity classes: (1) two hands on the wheel, (2) hand on the instrument panel, (3) hand on the gear shift. The static-cue-based method extracts features in each frame in order to learn a hand presence model for each of the three regions. A second-stage classifier (linear support vector machine) produces the final activity classification. Experimental evaluation with different users and environmental variations under real-world driving shows the promise of applying the proposed systems for both postanalysis of captured driving data as well as for real-time driver assistance.

  1. Initial Evaluations of LoC Prediction Algorithms Using the NASA Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Krishnakumar, Kalmanje; Stepanyan, Vahram; Barlow, Jonathan; Hardy, Gordon; Dorais, Greg; Poolla, Chaitanya; Reardon, Scott; Soloway, Donald

    2014-01-01

    Flying near the edge of the safe operating envelope is an inherently unsafe proposition. Edge of the envelope here implies that small changes or disturbances in system state or system dynamics can take the system out of the safe envelope in a short time and could result in loss-of-control events. This study evaluated approaches to predicting loss-of-control safety margins as the aircraft gets closer to the edge of the safe operating envelope. The goal of the approach is to provide the pilot aural, visual, and tactile cues focused on maintaining the pilot's control action within predicted loss-of-control boundaries. Our predictive architecture combines quantitative loss-of-control boundaries, an adaptive prediction method to estimate in real-time Markov model parameters and associated stability margins, and a real-time data-based predictive control margins estimation algorithm. The combined architecture is applied to a nonlinear transport class aircraft. Evaluations of various feedback cues using both test and commercial pilots in the NASA Ames Vertical Motion-base Simulator (VMS) were conducted in the summer of 2013. The paper presents results of this evaluation focused on effectiveness of these approaches and the cues in preventing the pilots from entering a loss-of-control event.

  2. Estimation of contour motion and deformation for nonrigid object tracking

    NASA Astrophysics Data System (ADS)

    Shao, Jie; Porikli, Fatih; Chellappa, Rama

    2007-08-01

    We present an algorithm for nonrigid contour tracking in heavily cluttered background scenes. Based on the properties of nonrigid contour movements, a sequential framework for estimating contour motion and deformation is proposed. We solve the nonrigid contour tracking problem by decomposing it into three subproblems: motion estimation, deformation estimation, and shape regulation. First, we employ a particle filter to estimate the global motion parameters of the affine transform between successive frames. Then we generate a probabilistic deformation map to deform the contour. To improve robustness, multiple cues are used for deformation probability estimation. Finally, we use a shape prior model to constrain the deformed contour. This enables us to retrieve the occluded parts of the contours and accurately track them while allowing shape changes specific to the given object types. Our experiments show that the proposed algorithm significantly improves the tracker performance.

  3. Helicopter flight simulation motion platform requirements

    NASA Astrophysics Data System (ADS)

    Schroeder, Jeffery Allyn

    Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.

  4. The effects of motion and g-seat cues on pilot simulator performance of three piloting tasks

    NASA Technical Reports Server (NTRS)

    Showalter, T. W.; Parris, B. L.

    1980-01-01

    Data are presented that show the effects of motion system cues, g-seat cues, and pilot experience on pilot performance during takeoffs with engine failures, during in-flight precision turns, and during landings with wind shear. Eight groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The basic cueing system was a fixed-base type (no-motion cueing) with visual cueing. The other three systems were produced by the presence of either a motion system or a g-seat, or both. Extensive statistical analysis of the data was performed and representative performance means were examined. These data show that the addition of motion system cueing results in significant improvement in pilot performance for all three tasks; however, the use of g-seat cueing, either alone or in conjunction with the motion system, provides little if any performance improvement for these tasks and for this aircraft type.

  5. A model for the pilot's use of motion cues in roll-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.

  6. Global motion compensated visual attention-based video watermarking

    NASA Astrophysics Data System (ADS)

    Oakes, Matthew; Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions resulting in poor visual quality in host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using motion compensated wavelet-based visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients while the temporal modeling accounts for both local and global motion to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps using the saliency model and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher strength watermarking in the visually attentive region, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity performance. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and a robustness similar to those of high-strength watermarking.

  7. Effect of motion cues during complex curved approach and landing tasks: A piloted simulation study

    NASA Technical Reports Server (NTRS)

    Scanlon, Charles H.

    1987-01-01

    A piloted simulation study was conducted to examine the effect of motion cues using a high fidelity simulation of commercial aircraft during the performance of complex approach and landing tasks in the Microwave Landing System (MLS) signal environment. The data from these tests indicate that in a high complexity MLS approach task with moderate turbulence and wind, the pilot uses motion cues to improve path tracking performance. No significant differences in tracking accuracy were noted for the low and medium complexity tasks, regardless of the presence of motion cues. Higher control input rates were measured for all tasks when motion was used. Pilot eye scan, as measured by instrument dwell time, was faster when motion cues were used regardless of the complexity of the approach tasks. Pilot comments indicated a preference for motion. With motion cues, pilots appeared to work harder in all levels of task complexity and to improve tracking performance in the most complex approach task.

  8. A magnetorheological haptic cue accelerator for manual transmission vehicles

    NASA Astrophysics Data System (ADS)

    Han, Young-Min; Noh, Kyung-Wook; Lee, Yang-Sub; Choi, Seung-Bok

    2010-07-01

    This paper proposes a new haptic cue function for manual transmission vehicles to achieve optimal gear shifting. This function is implemented on the accelerator pedal by utilizing a magnetorheological (MR) brake mechanism. By combining the haptic cue function with the accelerator pedal, the proposed haptic cue device can transmit the optimal moment of gear shifting for manual transmission to a driver without requiring the driver's visual attention. As a first step to achieve this goal, a MR fluid-based haptic device is devised to enable rotary motion of the accelerator pedal. Taking into account spatial limitations, the design parameters are optimally determined using finite element analysis to maximize the relative control torque. The proposed haptic cue device is then manufactured and its field-dependent torque and time response are experimentally evaluated. Then the manufactured MR haptic cue device is integrated with the accelerator pedal. A simple virtual vehicle emulating the operation of the engine of a passenger vehicle is constructed and put into communication with the haptic cue device. A feed-forward torque control algorithm for the haptic cue is formulated and control performances are experimentally evaluated and presented in the time domain.

  9. An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel and fully unsupervised approach for foreground object co-localization and segmentation of unconstrained videos. We first compute both the actual edges and motion boundaries of the video frames, and then align them by their HOG feature maps. Then, by filling the occlusions generated by the aligned edges, we obtain more precise masks of the foreground object. These motion-based masks serve as the motion-based likelihood. Moreover, the color-based likelihood is adopted for the segmentation process. Experimental results show that our approach outperforms most of the state-of-the-art algorithms.

  10. Unsupervised motion-based object segmentation refined by color

    NASA Astrophysics Data System (ADS)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low complexity solution. For still images, several approaches exist based on colour, but these lack in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation with many segments covering each single physical object. Other colour segmentation approaches exist which somehow limit the number of segments to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real world object segmentation, because real world objects can display complex combinations of colours.

    For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because of the lack of sufficient resolution of efficient motion estimators, like the 3DRS block matcher, the resulting segmentation is not at pixel resolution, but at block resolution. Existing pixel resolution motion estimators are more sensitive to noise, suffer more from aperture problems or have less correspondence to the true motion of objects when compared to block-based approaches or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance a block is unique and thus decrease the chance of the wrong position producing a good match.

    Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge few methods exist which adopt this approach. One example is [meshrefine]. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, the method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects.

    NEW METHOD

    As mentioned above we start with motion segmentation and refine the edges of this segmentation with a pixel resolution colour segmentation method afterwards. There are several reasons for this approach:
    + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous.
    + This approach restricts the computationally expensive pixel resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
    + The motion cue alone is often enough to reliably distinguish objects from one another and the background.
    To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known for its ability to estimate motion vectors which closely resemble the true motion.

    BLOCK-BASED MOTION SEGMENTATION

    As mentioned above we start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known K-means segmentation method [K-means]. Several other methods (e.g. [kmeansc]) adapt K-means for connectedness by adding a weighted shape-error. This adds the additional difficulty of finding the correct weights for the shape-parameters. Also, these methods often bias one particular pre-defined shape. The presented method, which we call K-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation method to such a degree that it allows the method to use least squares for the robust fitting of affine motion models for each segment. Contrary to [parmkm], the segmentation step still operates on vectors instead of model parameters. To make sure the segmentation is temporally consistent, the segmentation of the previous frame will be used as initialisation for every new frame. We also present a scheme which makes the algorithm independent of the initially chosen number of segments.

    COLOUR-BASED INTRA-BLOCK SEGMENTATION

    The block resolution motion-based segmentation forms the starting point for the pixel resolution segmentation. The pixel resolution segmentation is obtained from the block resolution segmentation by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in either one of two neighbouring blocks that belong to different clusters. This assumption allows us to do the pixel resolution segmentation on each pair of such neighbouring blocks separately. Because of the local nature of the segmentation, it largely avoids problems with heterogeneously coloured areas. Because no new segments are introduced in this step, it also does not suffer from oversegmentation problems. The presented method has no problems with bifurcations. For the pixel resolution segmentation itself we reclassify pixels such that we optimize an error norm which favours similarly coloured regions and straight edges.

    SEGMENTATION MEASURE

    To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we decided to define a ground truth output which we find desirable for a given input. We define the measure for the segmentation quality as being how different the segmentation is from the ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately. Also, it allows us to evaluate which parts of a frame suffer from oversegmentation or undersegmentation. The proposed algorithm has been tested on several typical sequences.

    CONCLUSIONS

    In this abstract we presented a new video segmentation method which performs well in the segmentation of multiple independently moving foreground objects from each other and the background. It combines the strong points of both colour and motion segmentation in the way we expected. One of the weak points is that the segmentation method suffers from undersegmentation when adjacent objects display similar motion. In sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.
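
    A much-simplified stand-in for the block-resolution starting point described above: plain k-means clustering of per-block motion vectors on a synthetic motion field. The published K-regions method additionally enforces connectedness (only edge blocks may change segment) and fits affine motion models per segment; both refinements are omitted here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 16x16 block-resolution motion field: one moving "object" over a
# static background (a stand-in for a 3DRS block-matcher output).
H, W = 16, 16
motion = np.zeros((H, W, 2))
motion[4:10, 5:12] = (3.0, -1.0)
motion += 0.2 * rng.standard_normal(motion.shape)

def kmeans_motion_segmentation(mv, iters=20):
    """Plain 2-cluster k-means on per-block motion vectors (toy version only)."""
    pts = mv.reshape(-1, 2)
    mean = pts.mean(axis=0)
    far = pts[np.argmax(((pts - mean) ** 2).sum(axis=1))]
    centres = np.stack([mean, far])                      # simple deterministic init
    for _ in range(iters):
        d = ((pts[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centres[j] = pts[labels == j].mean(axis=0)
    return labels.reshape(mv.shape[:2])

segments = kmeans_motion_segmentation(motion)
print(np.array2string(segments, max_line_width=120))    # block-resolution label map
```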

  11. Helicopter Flight Simulation Motion Platform Requirements

    NASA Technical Reports Server (NTRS)

    Schroeder, Jeffery Allyn

    1999-01-01

    To determine motion fidelity requirements, a series of piloted simulations was performed. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositioning. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.

  12. An Analytical Comparison of the Fidelity of "Large Motion" Versus "Small Motion" Flight Simulators in a Rotorcraft Side-Step Task

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1999-01-01

    This paper presents an analytical and experimental methodology for studying flight simulator fidelity. In earlier work, the task was a rotorcraft bob-up/down maneuver in which vertical acceleration constituted the motion cue. The task considered here is a side-step maneuver that differs from the bob-up in one important way: both roll and lateral acceleration cues are available to the pilot. It has been communicated to the author that in some Vertical Motion Simulator (VMS) studies, the lateral acceleration cue has been found to be the most important. It is of some interest to hypothesize how this motion cue associated with "outer-loop" lateral translation fits into the modeling procedure where only "inner-loop" motion cues were considered. This Note is an attempt at formulating such a hypothesis and analytically comparing a large-motion simulator, e.g., the VMS, with a small-motion simulator, e.g., a hexapod.

  13. Neural Representation of Motion-In-Depth in Area MT

    PubMed Central

    Sanada, Takahisa M.

    2014-01-01

    Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481

  14. Visual/motion cue mismatch in a coordinated roll maneuver

    NASA Technical Reports Server (NTRS)

    Shirachi, D. K.; Shirley, R. S.

    1981-01-01

    The effects of bandwidth differences between visual and motion cueing systems on pilot performance for a coordinated roll task were investigated. Acceptable visual and motion cue configurations and the effects of reduced motion cue scaling on pilot performance were studied to determine the scale reduction threshold for which pilot performance was significantly different from full scale pilot performance. It is concluded that: (1) the presence or absence of high frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) attenuating the motion scaling, while holding other display dynamic characteristics constant, affects pilot performance.

  15. Visible propagation from invisible exogenous cueing.

    PubMed

    Lin, Zhicheng; Murray, Scott O

    2013-09-20

    Perception and performance are affected not just by what we see but also by what we do not see: inputs that escape our awareness. While conscious processing and unconscious processing have been assumed to be separate and independent, here we report the propagation of unconscious exogenous cueing as determined by conscious motion perception. In a paradigm combining masked exogenous cueing and apparent motion, we show that, when an onset cue was rendered invisible, the unconscious exogenous cueing effect traveled, manifesting at uncued locations (4° apart) in accordance with conscious perception of visual motion; the effect diminished when the cue-to-target distance was 8° apart. In contrast, conscious exogenous cueing manifested at both distances. Further evidence reveals that the unconscious and conscious nonretinotopic effects could not be explained by an attentional gradient, nor by bottom-up, energy-based motion mechanisms, but rather they were subserved by top-down, tracking-based motion mechanisms. We thus term these effects mobile cueing. Taken together, unconscious mobile cueing effects (a) demonstrate a previously unknown degree of flexibility of unconscious exogenous attention; (b) embody a simultaneous dissociation and association of attention and consciousness, in which exogenous attention can occur without cue awareness ("dissociation"), yet at the same time its effect is contingent on conscious motion tracking ("association"); and (c) underscore the interaction of conscious and unconscious processing, providing evidence for an unconscious effect that is not automatic but controlled.

  16. Effects of visual and motion simulation cueing systems on pilot performance during takeoffs with engine failures

    NASA Technical Reports Server (NTRS)

    Parris, B. L.; Cook, A. M.

    1978-01-01

    Data are presented that show the effects of visual and motion cueing on pilot performance during takeoffs with engine failures. Four groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The most basic of these systems was of the instrument-only type. Visual scene simulation and/or motion simulation was added to produce the other systems. Learning curves, mean performance, and subjective data are examined. The results show that the addition of visual cueing results in significant improvement in pilot performance, but the combined use of visual and motion cueing results in far better performance.

  17. Motion cue effects on human pilot dynamics in manual control

    NASA Technical Reports Server (NTRS)

    Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.

    1977-01-01

    Two experiments were conducted to study the motion cue effects on human pilots during tracking tasks. The moving-base simulator of National Aerospace Laboratory was employed as the motion cue device, and the attitude director indicator or the projected visual field was employed as the visual cue device. The chosen controlled elements were second-order unstable systems. It was confirmed that with the aid of motion cues the pilot workload was lessened and consequently the human controllability limits were enlarged. In order to clarify the mechanism of these effects, the describing functions of the human pilots were identified by making use of the spectral and the time domain analyses. The results of these analyses suggest that the sensory system for motion cues can effectively yield differential information about the signal, which is consistent with existing knowledge in the physiological literature.
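
    The identification details are not given in this record; one standard spectral route to a pilot describing function is to divide the cross-spectral density between the displayed error and the pilot's control output by the error auto-spectrum. The sketch below illustrates that calculation on synthetic signals; the signal names, sample rate, and dynamics are all assumptions, not the study's data.

    ```python
    import numpy as np
    from scipy import signal

    fs = 100.0                        # sampling rate [Hz], assumed
    t = np.arange(0, 120, 1 / fs)     # two minutes of synthetic tracking data

    # Hypothetical recorded signals: tracking error e(t) seen by the pilot and
    # stick output c(t). Synthesized here purely for illustration.
    rng = np.random.default_rng(0)
    e = rng.standard_normal(t.size)
    c = np.convolve(e, np.exp(-np.arange(0, 1, 1 / fs) * 5.0), mode="same") / fs

    # Describing-function estimate: Y_p(jw) ~= S_ec(jw) / S_ee(jw)
    f, S_ee = signal.welch(e, fs=fs, nperseg=4096)
    _, S_ec = signal.csd(e, c, fs=fs, nperseg=4096)
    Y_p = S_ec / S_ee

    gain_db = 20 * np.log10(np.abs(Y_p) + 1e-12)
    phase_deg = np.degrees(np.angle(Y_p))
    print(gain_db[:5], phase_deg[:5])
    ```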

  18. Biologically inspired computation and learning in Sensorimotor Systems

    NASA Astrophysics Data System (ADS)

    Lee, Daniel D.; Seung, H. S.

    2001-11-01

    Networking systems presently lack the ability to intelligently process the rich multimedia content of the data traffic they carry. Endowing artificial systems with the ability to adapt to changing conditions requires algorithms that can rapidly learn from examples. We demonstrate the application of such learning algorithms on an inexpensive quadruped robot constructed to perform simple sensorimotor tasks. The robot learns to track a particular object by discovering the salient visual and auditory cues unique to that object. The system uses a convolutional neural network that automatically combines color, luminance, motion, and auditory information. The weights of the networks are adjusted using feedback from a teacher to reflect the reliability of the various input channels in the surrounding environment. Additionally, the robot is able to compensate for its own motion by adapting the parameters of a vestibular ocular reflex system.

  19. A study of the comparative effects of various means of motion cueing during a simulated compensatory tracking task

    NASA Technical Reports Server (NTRS)

    Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.; Martin, D. J., Jr.

    1980-01-01

    NASA's Langley Research Center conducted a simulation experiment to ascertain the comparative effects of motion cues (combinations of platform motion and g-seat normal acceleration cues) on compensatory tracking performance. In the experiment, a full six-degree-of-freedom YF-16 model was used as the simulated pursuit aircraft. The Langley Visual Motion Simulator (with in-house developed wash-out), and a Langley developed g-seat were principal components of the simulation. The results of the experiment were examined utilizing univariate and multivariate techniques. The statistical analyses demonstrate that the platform motion and g-seat cues provide additional information to the pilot that allows substantial reduction of lateral tracking error. Also, the analyses show that the g-seat cue helps reduce vertical error.

  20. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193
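
    The paper's actual conversion from eye-tracker and motion-capture data to gaze is not spelled out in this record; geometrically, the step amounts to rotating an eye-in-head gaze direction by the measured head orientation. The sketch below assumes a quaternion head pose and azimuth/elevation eye angles; the frame conventions and names are hypothetical.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def gaze_in_world(head_quat_wxyz, eye_azimuth_deg, eye_elevation_deg):
        """Rotate an eye-in-head gaze direction into the world frame.

        head_quat_wxyz : head orientation quaternion (w, x, y, z), e.g. from a
                         motion-capture system (convention assumed).
        eye angles     : eye-in-head azimuth/elevation from the eye tracker,
                         measured from the straight-ahead axis (assumed).
        """
        az = np.radians(eye_azimuth_deg)
        el = np.radians(eye_elevation_deg)
        # Unit gaze vector in the head frame, +x straight ahead (assumed).
        g_head = np.array([np.cos(el) * np.cos(az),
                           np.cos(el) * np.sin(az),
                           np.sin(el)])
        w, x, y, z = head_quat_wxyz
        r_head = R.from_quat([x, y, z, w])   # scipy expects (x, y, z, w)
        return r_head.apply(g_head)

    # Example: head yawed 30 deg left, eyes 10 deg right of straight ahead.
    q = R.from_euler("z", 30, degrees=True).as_quat()   # returns (x, y, z, w)
    print(gaze_in_world([q[3], q[0], q[1], q[2]], -10, 0))
    ```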

  1. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  2. Visual Cues of Motion That Trigger Animacy Perception at Birth: The Case of Self-Propulsion

    ERIC Educational Resources Information Center

    Di Giorgio, Elisa; Lunghi, Marco; Simion, Francesca; Vallortigara, Giorgio

    2017-01-01

    Self-propelled motion is a powerful cue that conveys information that an object is animate. In this case, animate refers to an entity's capacity to initiate motion without an applied external force. Sensitivity to this motion cue is present in infants that are a few months old, but whether this sensitivity is experience-dependent or is already…

  3. Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals.

    PubMed

    Joo, Sung Jun; Czuba, Thaddeus B; Cormack, Lawrence K; Huk, Alexander C

    2016-10-19

    Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no "cross-cue" adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how-or indeed if-these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms. Copyright © 2016 the authors 0270-6474/16/3610791-12$15.00/0.

  4. Orientation selectivity sharpens motion detection in Drosophila

    PubMed Central

    Fisher, Yvette E.; Silies, Marion; Clandinin, Thomas R.

    2015-01-01

    Detecting the orientation and movement of edges in a scene is critical to visually guided behaviors of many animals. What are the circuit algorithms that allow the brain to extract such behaviorally vital visual cues? Using in vivo two-photon calcium imaging in Drosophila, we describe direction selective signals in the dendrites of T4 and T5 neurons, detectors of local motion. We demonstrate that this circuit performs selective amplification of local light inputs, an observation that constrains motion detection models and confirms a core prediction of the Hassenstein-Reichardt Correlator (HRC). These neurons are also orientation selective, responding strongly to static features that are orthogonal to their preferred axis of motion, a tuning property not predicted by the HRC. This coincident extraction of orientation and direction sharpens directional tuning through surround inhibition and reveals a striking parallel between visual processing in flies and vertebrate cortex, suggesting a universal strategy for motion processing. PMID:26456048
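
    For readers unfamiliar with the Hassenstein-Reichardt Correlator referenced above, a minimal discrete-time sketch is shown below: each half-correlator multiplies a low-pass-delayed copy of one input with the undelayed neighboring input, and the two mirror-symmetric products are subtracted. Time constants and stimuli are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def hrc_response(s1, s2, dt=0.01, tau=0.05):
        """Hassenstein-Reichardt correlator output for two photoreceptor signals.

        s1, s2 : luminance time series at two neighboring spatial locations.
        tau    : time constant of the first-order low-pass 'delay' stage [s].
        Positive output indicates motion from location 1 toward location 2.
        """
        alpha = dt / (tau + dt)
        d1 = np.zeros_like(s1)
        d2 = np.zeros_like(s2)
        for k in range(1, len(s1)):      # first-order low-pass acts as the delay
            d1[k] = d1[k - 1] + alpha * (s1[k] - d1[k - 1])
            d2[k] = d2[k - 1] + alpha * (s2[k] - d2[k - 1])
        return d1 * s2 - d2 * s1         # opponent subtraction of the two half-correlators

    # A drifting sinusoid reaching location 2 slightly after location 1.
    t = np.arange(0, 2, 0.01)
    s1 = np.sin(2 * np.pi * 2 * t)
    s2 = np.sin(2 * np.pi * 2 * (t - 0.05))
    print(np.mean(hrc_response(s1, s2)))  # > 0: preferred-direction motion
    ```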

  5. Effects of Visual Proprioceptive Cue Conflicts on Human Tracking Performance

    DTIC Science & Technology

    1977-06-01

    maintain adequate ... it was necessary for the subjects to disregard sensations of motion. The results revealed that the conditions of... where no motion cues are provided or when motion cues are inappropriate to actual flight conditions. The latter (i.e., inappropriate motion) has

  6. Davida Teller Award Lecture 2013: the importance of prediction and anticipation in the control of smooth pursuit eye movements.

    PubMed

    Kowler, Eileen; Aitkin, Cordelia D; Ross, Nicholas M; Santos, Elio M; Zhao, Min

    2014-05-16

    The ability of smooth pursuit eye movements to anticipate the future motion of targets has been known since the pioneering work of Dodge, Travis, and Fox (1930) and Westheimer (1954). This article reviews aspects of anticipatory smooth eye movements, focusing on the roles of the different internal or external cues that initiate anticipatory pursuit. We present new results showing that the anticipatory smooth eye movements evoked by different cues differ substantially, even when the cues are equivalent in the information conveyed about the direction of future target motion. Cues that convey an easily interpretable visualization of the motion path produce faster anticipatory smooth eye movements than the other cues tested, including symbols associated arbitrarily with the path, and the same target motion tested repeatedly over a block of trials. The differences among the cues may be understood within a common predictive framework in which the cues differ in the level of subjective certainty they provide about the future path. Pursuit may be driven by a combined signal in which immediate sensory motion, and the predictions about future motion generated by sets of cues, are weighted according to their respective levels of certainty. Anticipatory smooth eye movements, an overt indicator of expectations and predictions, may not be operating in isolation, but may be part of a global process in which the brain analyzes available cues, formulates predictions, and uses them to control perceptual, motor, and cognitive processes. © 2014 ARVO.
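
    The review states the weighting idea verbally; one common way to formalize "weighted according to their respective levels of certainty" is inverse-variance (reliability-weighted) combination, sketched below with made-up numbers for an immediate sensory estimate and a cue-based prediction.

    ```python
    import numpy as np

    def combine_cues(estimates, variances):
        """Reliability-weighted combination of motion estimates.

        Each cue contributes in proportion to its reliability (1/variance),
        the standard optimal-combination rule; the numbers below are illustrative.
        """
        variances = np.asarray(variances, dtype=float)
        w = 1.0 / variances
        w /= w.sum()
        combined = np.dot(w, estimates)
        combined_var = 1.0 / np.sum(1.0 / variances)
        return combined, combined_var

    # Immediate sensory motion (deg/s) vs. a cue-based prediction of future motion.
    sensory, sensory_var = 2.0, 1.0        # weak retinal slip early in the trial
    prediction, pred_var = 8.0, 4.0        # visualized-path cue, less certain
    print(combine_cues([sensory, prediction], [sensory_var, pred_var]))
    ```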

  7. Integration of visual and motion cues for simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Practical tools which can extend the state of the art of moving base flight simulation for research and training are developed. Main approaches to this research effort include: (1) application of the vestibular model for perception of orientation based on motion cues; (2) optimum simulator motion controls; and (3) visual cues in landing.

  8. Effects of grade and conditions of motion on children's interpretation of implied motion in pictures.

    PubMed

    Downs, E; Jenkins, S J

    1996-12-01

    Interpretation of motion under three levels of motion cues for 36 kindergarten and 36 third-grade children was examined. Analysis indicated that third-grade children were more skillful at identifying motion than kindergartners and postural cues were more effective than flow lines.

  9. Evaluation of a linear washout for simulator motion cue presentation during landing approach

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Martin, D. J., Jr.

    1975-01-01

    The comparison of a fixed-base versus a five-degree-of-freedom motion base simulation of a 737 conventional take-off and landing (CTOL) aircraft performing instrument landing system (ILS) landing approaches was used to evaluate a linear motion washout technique. The fact that the pilots felt the addition of motion increased their workload, while this increase was not reflected in the objective data, indicates that motion cues, as presented, are not a contributing factor to root-mean-square (rms) performance during the landing approach task. Subjective results from standard maneuvering about straight-and-level flight for specific motion cue evaluation revealed that the longitudinal channels (pitch and surge) and possibly the yaw channel produce acceptable motions. The roll cue representation, involving both roll and sway channels, was found to be inadequate for large roll inputs, as used, for example, in turn entries.
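
    The particular washout evaluated in this study is not reproduced in the record; as background for what a linear washout does, the sketch below scales a commanded surge acceleration and high-pass filters it so that the onset is passed to the platform while the sustained part washes out. The filter order and all parameters are assumptions.

    ```python
    import numpy as np
    from scipy import signal

    # Classical linear translational washout channel (illustrative parameters).
    K = 0.5                      # motion scale factor (assumed)
    wn, zeta = 1.0, 0.7          # washout natural frequency [rad/s] and damping (assumed)

    # Second-order high-pass: s^2 / (s^2 + 2*zeta*wn*s + wn^2)
    hp = signal.TransferFunction([1, 0, 0], [1, 2 * zeta * wn, wn ** 2])

    t = np.arange(0, 20, 0.01)
    a_aircraft = np.where(t < 5, 1.0, 0.0)       # 1 m/s^2 surge step lasting 5 s
    _, a_platform, _ = signal.lsim(hp, K * a_aircraft, t)

    # Onset cue is passed; the sustained portion washes back toward zero.
    print(a_platform[:3], a_platform[-3:])
    ```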

  10. Effects of Spatial Cueing on Representational Momentum

    ERIC Educational Resources Information Center

    Hubbard, Timothy L.; Kumar, Anuradha Mohan; Carp, Charlotte L.

    2009-01-01

    Effects of a spatial cue on representational momentum were examined. If a cue was present during or after target motion and indicated the location at which the target would vanish or had vanished, forward displacement of that target decreased. The decrease in forward displacement was larger when cues were present after target motion than when cues…

  11. A Pilot/Vehicle Model Analysis of the Effects of Motion Cues on Harrier Control Tasks.

    DTIC Science & Technology

    1983-09-01

    ...provided by well-designed platform motion systems, the actual improvement of performance or training effectiveness that results from incorporating these... for the Harrier AV-8B. The effects of providing motion cues via an idealized platform motion system or a g-seat device are predicted with the model, and

  12. Empirical comparison of a fixed-base and a moving-base simulation of a helicopter engaged in visually conducted slalom runs

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Houck, J. A.; Martin, D. J., Jr.

    1977-01-01

    Combined visual, motion, and aural cues for a helicopter engaged in visually conducted slalom runs at low altitude were studied. The evaluation of the visual and aural cues was subjective, whereas the motion cues were evaluated both subjectively and objectively. Subjective and objective results coincided in the area of control activity. Generally, less control activity is present under motion conditions than under fixed-base conditions, a fact attributed subjectively to the feeling of realistic limitations of a machine (helicopter) given by the addition of motion cues. The objective data also revealed that the slalom runs were conducted at significantly higher altitudes under motion conditions than under fixed-base conditions.

  13. Effects of motion base and g-seat cueing on simulator pilot performance

    NASA Technical Reports Server (NTRS)

    Ashworth, B. R.; Mckissick, B. T.; Parrish, R. V.

    1984-01-01

    In order to measure and analyze the effects of a motion plus g-seat cueing system, a manned-flight-simulation experiment was conducted utilizing a pursuit tracking task and an F-16 simulation model in the NASA Langley visual/motion simulator. This experiment provided the information necessary to determine whether motion and g-seat cues have an additive effect on the performance of this task. With respect to the lateral tracking error and roll-control stick force, the answer is affirmative. It is shown that presenting the two cues simultaneously caused significant reductions in lateral tracking error and that using the g-seat and motion base separately provided essentially equal reductions in the pilot's lateral tracking error.

  14. The Effectiveness of Simulator Motion in the Transfer of Performance on a Tracking Task Is Influenced by Vision and Motion Disturbance Cues.

    PubMed

    Grundy, John G; Nazar, Stefan; O'Malley, Shannon; Mohrenshildt, Martin V; Shedden, Judith M

    2016-06-01

    To examine the importance of platform motion to the transfer of performance in motion simulators. The importance of platform motion in simulators for pilot training is strongly debated. We hypothesized that the type of motion (e.g., disturbance) contributes significantly to performance differences. Participants used a joystick to perform a target tracking task in a pod on top of a MOOG Stewart motion platform. Five conditions compared training without motion, with correlated motion, with disturbance motion, with disturbance motion isolated to the visual display, and with both correlated and disturbance motion. The test condition involved the full motion model with both correlated and disturbance motion. We analyzed speed and accuracy across training and test as well as strategic differences in joystick control. Training with disturbance cues produced critical behavioral differences compared to training without disturbance; motion itself was less important. Incorporation of disturbance cues is a potentially important source of variance between studies that do or do not show a benefit of motion platforms in the transfer of performance in simulators. Potential applications of this research include the assessment of the importance of motion platforms in flight simulators, with a focus on the efficacy of incorporating disturbance cues during training. © 2016, Human Factors and Ergonomics Society.

  15. Replicating and extending Bourdon's (1902) experiment on motion parallax.

    PubMed

    Ono, Hiroshi; Lillakas, Linda; Kapoor, Anjani; Wong, Irene

    2013-01-01

    Bourdon conducted the first laboratory experiment on observer-produced motion parallax as a cue to depth. In three experiments, we replicated and extended Bourdon's experiment. In experiment 1, we reproduced his finding: when the two cues, motion parallax and relative height, were combined, accuracy of depth perception was high, and when the two cues were in conflict, accuracy was lower. In experiment 2, the relative height cue was replaced with relative retinal image size. As in experiment 1, when the two cues (motion parallax and relative retinal image size) were combined, accuracy was high, but when they were in conflict, it was lower. In experiment 3, the stimuli from experiments 1 and 2 were viewed monocularly with head movement and binocularly without head movement. In the binocular conditions, accuracy, certainty, and the extent of perceived depth were higher than in the monocular condition. In the conflict conditions, accuracy, certainty, and the extent of perceived depth were lower than in the no-conflict condition, but the extent of perceived motion was larger. These results are discussed in terms of recent findings about the effectiveness of motion parallax as a cue for depth.

  16. Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals

    PubMed Central

    Czuba, Thaddeus B.; Cormack, Lawrence K.; Huk, Alexander C.

    2016-01-01

    Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no “cross-cue” adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. SIGNIFICANCE STATEMENT Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how—or indeed if—these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms. PMID:27798134

  17. Effects of set-size and selective spatial attention on motion processing.

    PubMed

    Dobkins, K R; Bosworth, R G

    2001-05-01

    In order to investigate the effects of divided attention and selective spatial attention on motion processing, we obtained direction-of-motion thresholds using a stochastic motion display under various attentional manipulations and stimulus durations (100-600 ms). To investigate divided attention, we compared motion thresholds obtained when a single motion stimulus was presented in the visual field (set-size=1) to those obtained when the motion stimulus was presented amongst three confusable noise distractors (set-size=4). The magnitude of the observed detriment in performance with an increase in set-size from 1 to 4 could be accounted for by a simple decision model based on signal detection theory, which assumes that attentional resources are not limited in capacity. To investigate selective attention, we compared motion thresholds obtained when a valid pre-cue alerted the subject to the location of the to-be-presented motion stimulus to those obtained when no pre-cue was provided. As expected, the effect of pre-cueing was large when the visual field contained noise distractors, an effect we attribute to "noise reduction" (i.e. the pre-cue allows subjects to exclude irrelevant distractors that would otherwise impair performance). In the single motion stimulus display, we found a significant benefit of pre-cueing only at short durations (< or =150 ms), a result that can potentially be explained by a "time-to-orient" hypothesis (i.e. the pre-cue improves performance by eliminating the time it takes to orient attention to a peripheral stimulus at its onset, thereby increasing the time spent processing the stimulus). Thus, our results suggest that the visual motion system can analyze several stimuli simultaneously without limitations on sensory processing per se, and that spatial pre-cueing serves to reduce the effects of distractors and perhaps increase the effective processing time of the stimulus.
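
    The decision model is only named in this record; a common unlimited-capacity formalization is a maximum-rule signal detection model, simulated below, in which accuracy drops with set size purely because more noisy locations compete at the decision stage. The d' value and response model are illustrative, not taken from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def percent_correct(d_prime, set_size, n_trials=200_000):
        """Unlimited-capacity SDT model of direction judgments with distractors.

        The target location gives a response with mean +d_prime (its true
        direction is taken as positive); distractor locations give zero-mean
        unit-variance noise. The observer picks the location with the largest
        absolute response and reports that response's sign.
        """
        target = rng.normal(d_prime, 1.0, n_trials)
        if set_size > 1:
            distractors = rng.normal(0.0, 1.0, (n_trials, set_size - 1))
            responses = np.column_stack([target, distractors])
        else:
            responses = target[:, None]
        chosen = responses[np.arange(n_trials), np.abs(responses).argmax(axis=1)]
        return np.mean(chosen > 0)       # correct if the reported sign is positive

    for n in (1, 4):
        print(n, percent_correct(d_prime=1.5, set_size=n))
    ```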

  18. Design Definition Study Report. Full Crew Interaction Simulator-Laboratory Model (FCIS-LM) (Device X17B7). Volume II. Requirements.

    DTIC Science & Technology

    1978-06-01

    stimulate at least three levels of crew function. At the most complex level, visual cues are used to discriminate the presence or activities of... limited to motion onset cues washed out at subliminal levels... Because of the cues they provide the driver, gunner, and commander, and the dis... motion, i.e., which physiological receptors are affected, how they function, and how they may be stimulated by a simulator motion system. Motion is

  19. Visual Features Involving Motion Seen from Airport Control Towers

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Liston, Dorion

    2010-01-01

    Visual motion cues are used by tower controllers to support both visual and anticipated separation. Some of these cues are tabulated as part of the overall set of visual features used in towers to separate aircraft. An initial analysis of one motion cue, landing deceleration, is provided as a basis for evaluating how controllers detect and use it for spacing aircraft on or near the surface. Understanding cues like it will help determine if they can be safely used in a remote/virtual tower in which their presentation may be visually degraded.

  20. Capture by colour: evidence for dimension-specific singleton capture.

    PubMed

    Harris, Anthony M; Becker, Stefanie I; Remington, Roger W

    2015-10-01

    Previous work on attentional capture has shown the attentional system to be quite flexible in the stimulus properties it can be set to respond to. Several different attentional "modes" have been identified. Feature search mode allows attention to be set for specific features of a target (e.g., red). Singleton detection mode sets attention to respond to any discrepant item ("singleton") in the display. Relational search sets attention for the relative properties of the target in relation to the distractors (e.g., redder, larger). Recently, a new attentional mode was proposed that sets attention to respond to any singleton within a particular feature dimension (e.g., colour; Folk & Anderson, 2010). We tested this proposal against the predictions of previously established attentional modes. In a spatial cueing paradigm, participants searched for a colour target that was randomly either red or green. The nature of the attentional control setting was probed by presenting an irrelevant singleton cue prior to the target display and assessing whether it attracted attention. In all experiments, the cues were red, green, blue, or a white stimulus rapidly rotated (motion cue). The results of three experiments support the existence of a "colour singleton set," finding that all colour cues captured attention strongly, while motion cues captured attention only weakly or not at all. Notably, we also found that capture by motion cues in search for colour targets was moderated by their frequency; rare motion cues captured attention (weakly), while frequent motion cues did not.

  1. Effects of False Tilt Cues on the Training of Manual Roll Control Skills

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Popovici, Alexandru; Zavala, Melinda A.

    2015-01-01

    This paper describes a transfer-of-training study performed in the NASA Ames Vertical Motion Simulator. The purpose of the study was to investigate the effect of false tilt cues on training and transfer of training of manual roll control skills. Of specific interest were the skills needed to control unstable roll dynamics of a mid-size transport aircraft close to the stall point. Nineteen general aviation pilots trained on a roll control task with one of three motion conditions: no motion, roll motion only, or reduced coordinated roll motion. All pilots transferred to full coordinated roll motion in the transfer session. A novel multimodal pilot model identification technique was successfully applied to characterize how pilots' use of visual and motion cues changed over the course of training and after transfer. Pilots who trained with uncoordinated roll motion had significantly higher performance during training and after transfer, even though they experienced the false tilt cues. Furthermore, pilot control behavior significantly changed during the two sessions, as indicated by increasing visual and motion gains, and decreasing lead time constants. Pilots training without motion showed higher learning rates after transfer to the full coordinated roll motion case.

  2. The Effects of Various Fidelity Factors on Simulated Helicopter Hover

    DTIC Science & Technology

    1981-01-01

    ...and DiCarlo, 1974), the evaluation of visual, auditory, and motion cues for helicopter simulation (Parrish, Houck, and Martin, 1977), and the... supply the cue. As the tilt should be supplied subliminally, a forward/aft translation must be used to cue the acceleration's onset. If only rotation
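
    The tilt-coordination idea alluded to in the fragment above, i.e., representing a sustained surge by tilting the cab below the rotational perception threshold while reproducing the onset with a brief translation, can be sketched as follows. The filter parameters, motion scale, and 3 deg/s threshold are all assumptions.

    ```python
    import numpy as np
    from scipy import signal

    g = 9.81
    K = 0.6                                    # motion scale (assumed)
    dt = 0.01
    t = np.arange(0, 20, dt)
    a_x = np.where((t > 2) & (t < 12), 1.5, 0.0)   # sustained 1.5 m/s^2 surge demand

    # Onset cue: high-pass the scaled acceleration into platform translation.
    hp = signal.TransferFunction([1, 0], [1, 0.5])      # first-order washout (assumed)
    _, a_onset, _ = signal.lsim(hp, K * a_x, t)

    # Sustained cue: low-pass into a pitch-tilt command, then rate-limit it so the
    # rotation stays below a nominal perception threshold (~3 deg/s, assumed).
    lp = signal.TransferFunction([0.5], [1, 0.5])
    _, a_low, _ = signal.lsim(lp, K * a_x, t)
    theta_cmd = np.arcsin(np.clip(a_low / g, -1, 1))    # tilt that mimics the low-passed surge
    rate_limit = np.radians(3.0)                        # rad/s
    theta = np.zeros_like(theta_cmd)
    for k in range(1, len(t)):
        step = np.clip(theta_cmd[k] - theta[k - 1], -rate_limit * dt, rate_limit * dt)
        theta[k] = theta[k - 1] + step

    print(np.degrees(theta).max(), a_onset.max())
    ```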

  3. Motion/visual cueing requirements for vortex encounters during simulated transport visual approach and landing

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Bowles, R. L.

    1983-01-01

    This paper addresses the issues of motion/visual cueing fidelity requirements for vortex encounters during simulated transport visual approaches and landings. Four simulator configurations were utilized to provide objective performance measures during simulated vortex penetrations, and subjective comments from pilots were collected. The configurations used were as follows: fixed base with visual degradation (delay), fixed base with no visual degradation, moving base with visual degradation (delay), and moving base with no visual degradation. The statistical comparisons of the objective measures and the subjective pilot opinions indicated that although both minimum visual delay and motion cueing are recommended for the vortex penetration task, the visual-scene delay characteristics were not as significant a fidelity factor as was the presence of motion cues. However, this indication was applicable to a restricted task, and to transport aircraft. Although they were statistically significant, the effects of visual delay and motion cueing on the touchdown-related measures were considered to be of no practical consequence.

  4. Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness

    PubMed Central

    Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.

    2014-01-01

    Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752

  5. Anticipatory smooth eye movements with random-dot kinematograms

    PubMed Central

    Santos, Elio M.; Gnang, Edinah K.; Kowler, Eileen

    2012-01-01

    Anticipatory smooth eye movements were studied in response to expectations of motion of random-dot kinematograms (RDKs). Dot lifetime was limited (52–208 ms) to prevent selection and tracking of the motion of local elements and to disrupt the perception of an object moving across space. Anticipatory smooth eye movements were found in response to cues signaling the future direction of global RDK motion, either prior to the onset of the RDK or prior to a change in its direction of motion. Cues signaling the lifetime of the dots were not effective. These results show that anticipatory smooth eye movements can be produced by expectations of global motion and do not require a sustained representation of an object or set of objects moving across space. At the same time, certain properties of global motion (direction) were more sensitive to cues than others (dot lifetime), suggesting that the rules by which prediction operates to influence pursuit may go beyond simple associations between cues and the upcoming motion of targets. PMID:23027686

  6. Roll tracking effects of G-vector tilt and various types of motion washout

    NASA Technical Reports Server (NTRS)

    Jex, H. R.; Magdaleno, R. E.; Junker, A. M.

    1978-01-01

    In a dogfight scenario, the task was to follow the target's roll angle while suppressing gust disturbances. All subjects adopted the same behavioral strategies in following the target while suppressing the gusts, and the MFP-fitted math model response was generally within one data symbol width. The results include the following: (1) comparisons of full roll motion (both with and without the spurious gravity tilt cue) with the static case. These motion cues help suppress disturbances with little net effect on the visual performance. Tilt cues were clearly used by the pilots but gave only small improvement in tracking errors. (2) The optimum washout (in terms of performance close to real world, similar behavioral parameters, significant motion attenuation (60 percent), and acceptable motion fidelity) was the combined attenuation and first-order washout. (3) Various trends in parameters across the motion conditions were apparent, and are discussed with respect to a comprehensive model for predicting adaptation to various roll motion cues.

  7. Displacement of location in illusory line motion.

    PubMed

    Hubbard, Timothy L; Ruppel, Susan E

    2013-05-01

    Six experiments examined displacement in memory for the location of the line in illusory line motion (ILM; appearance or disappearance of a stationary cue is followed by appearance of a stationary line that is presented all at once, but the stationary line is perceived to "unfold" or "be drawn" from the end closest to the cue to the end most distant from the cue). If ILM was induced by having a single cue appear, then memory for the location of the line was displaced toward the cue, and displacement was larger if the line was closer to the cue. If ILM was induced by having one of two previously visible cues vanish, then memory for the location of the line was displaced away from the cue that vanished. In general, the magnitude of displacement increased and then decreased as retention interval increased from 50 to 250 ms and from 250 to 450 ms, respectively. Displacement of the line (a) is consistent with a combination of a spatial averaging of the locations of the cue and the line with a relatively weaker dynamic in the direction of illusory motion, (b) might be implemented in a spreading activation network similar to networks previously suggested to implement displacement resulting from implied or apparent motion, and (c) provides constraints and challenges for theories of ILM.

  8. An investigation of motion base cueing and G-seat cueing on pilot performance in a simulator

    NASA Technical Reports Server (NTRS)

    Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.

    1983-01-01

    The effect of G-seat cueing (GSC) and motion-base cueing (MBC) on performance of a pursuit-tracking task is studied using the visual motion simulator (VMS) at Langley Research Center. The G-seat, the six-degree-of-freedom synergistic platform motion system, the visual display, the cockpit hardware, and the F-16 aircraft mathematical model are characterized. Each of 8 active F-15 pilots performed the 2-min-43-sec task 10 times for each experimental mode: no cue, GSC, MBC, and GSC + MBC; the results were analyzed statistically in terms of the RMS values of vertical and lateral tracking error. It is shown that lateral error is significantly reduced by either GSC or MBC, and that the combination of cues produces a further, significant decrease. Vertical error is significantly decreased by GSC with or without MBC, whereas MBC effects vary for different pilots. The pattern of these findings is roughly duplicated in measurements of stick force applied for roll and pitch correction.

  9. The perception of ego-motion change in environments with varying depth: Interaction of stereo and optic flow.

    PubMed

    Ott, Florian; Pohl, Ladina; Halfmann, Marc; Hardiess, Gregor; Mallot, Hanspeter A

    2016-07-01

    When estimating ego-motion in environments (e.g., tunnels, streets) with varying depth, human subjects confuse ego-acceleration with environment narrowing and ego-deceleration with environment widening. Festl, Recktenwald, Yuan, and Mallot (2012) demonstrated that in nonstereoscopic viewing conditions, this happens despite the fact that retinal measurements of acceleration rate (a variable related to tau-dot) should allow veridical perception. Here we address the question of whether additional depth cues (specifically binocular stereo, object occlusion, or constant average object size) help break the confusion between narrowing and acceleration. Using a forced-choice paradigm, the confusion is shown to persist even if unambiguous stereo information is provided. The confusion can also be demonstrated in an adjustment task in which subjects were asked to keep a constant speed in a tunnel with varying diameter: Subjects increased speed in widening sections and decreased speed in narrowing sections even though stereoscopic depth information was provided. If object-based depth information (stereo, occlusion, constant average object size) is added, the confusion between narrowing and acceleration still remains but may be slightly reduced. All experiments are consistent with a simple matched filter algorithm for ego-motion detection, neglecting both parallactic and stereoscopic depth information, but leave open the possibility of cue combination at a later stage.

  10. Stereomotion is processed by the third-order motion system: reply to comment on Three-systems theory of human visual motion perception: review and update

    NASA Astrophysics Data System (ADS)

    Lu, Zhong-Lin; Sperling, George

    2002-10-01

    Two theories are considered to account for the perception of motion of depth-defined objects in random-dot stereograms (stereomotion). In the Lu-Sperling three-motion-systems theory [J. Opt. Soc. Am. A 18, 2331 (2001)], stereomotion is perceived by the third-order motion system, which detects the motion of areas defined as figure (versus ground) in a salience map. Alternatively, in his comment [J. Opt. Soc. Am. A 19, 2142 (2002)], Patterson proposes a low-level motion-energy system dedicated to stereo depth. The critical difference between these theories is the preprocessing (figure-ground based on depth and other cues versus simply stereo depth) rather than the motion-detection algorithm itself (because the motion-extraction algorithm for third-order motion is undetermined). Furthermore, the ability of observers to perceive motion in alternating feature displays in which stereo depth alternates with other features such as texture orientation indicates that the third-order motion system can perceive stereomotion. This reduces the stereomotion question to "Is it third-order alone or third-order plus dedicated depth-motion processing?" Two new experiments intended to support the dedicated depth-motion processing theory are shown here to be perfectly accounted for by third-order motion, as are many older experiments that have previously been shown to be consistent with third-order motion. Cyclopean and rivalry images are shown to be a likely confound in stereomotion studies, rivalry motion being as strong as stereomotion. The phase dependence of superimposed same-direction stereomotion stimuli, rivalry stimuli, and isoluminant color stimuli indicates that these stimuli are processed in the same (third-order) motion system. The phase-dependence paradigm [Lu and Sperling, Vision Res. 35, 2697 (1995)] ultimately can resolve the question of which types of signals share a single motion detector. All the evidence accumulated so far is consistent with the three-motion-systems theory. © 2002 Optical Society of America

  11. Slushy weightings for the optimal pilot model. [considering visual tracking task

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effect of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Secondly, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
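
    The paper's observation matrix and cost weightings are not reproduced in this record; as a generic illustration of the optimal-control machinery behind such a pilot model, the sketch below solves the algebraic Riccati equation for a quadratic cost on state and control and forms the feedback gains. The plant and weights are placeholders, not the study's values.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Placeholder controlled-element dynamics x_dot = A x + B u (double integrator).
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])

    # Quadratic cost J = integral(x' Q x + u' R u) dt. In an optimal pilot model
    # these weightings would be adjusted to reflect which variables the pilot
    # treats as important; the numbers here are arbitrary.
    Q = np.diag([10.0, 1.0])
    R = np.array([[0.1]])

    P = solve_continuous_are(A, B, Q, R)      # algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain, u = -K x
    print(K)
    ```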

  12. Evaluation of Event-Based Algorithms for Optical Flow with Ground-Truth from Inertial Measurement Sensor

    PubMed Central

    Rueckauer, Bodo; Delbruck, Tobi

    2016-01-01

    In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera. PMID:27199639
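
    The exact ground-truth derivation is not restated in the abstract; for pure camera rotation the true image-plane flow follows the standard rotational flow-field expression, sketched below for normalized image coordinates with the gyro rates taken as camera-frame angular velocities. Axis and sign conventions are assumed.

    ```python
    import numpy as np

    def rotational_flow(x, y, omega):
        """Image-plane optical flow induced by pure camera rotation.

        x, y  : normalized image coordinates (pixel coordinates divided by focal length).
        omega : angular velocity (wx, wy, wz) in the camera frame [rad/s].
        Translation is assumed to be zero, the special case in which gyro data
        alone determine the true flow. Sign conventions assumed.
        """
        wx, wy, wz = omega
        u = x * y * wx - (1.0 + x ** 2) * wy + y * wz
        v = (1.0 + y ** 2) * wx - x * y * wy - x * wz
        return u, v

    # Flow at a small grid of points for a slow yaw of the sensor.
    xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 3), np.linspace(-0.5, 0.5, 3))
    u, v = rotational_flow(xs, ys, omega=(0.0, 0.2, 0.0))
    print(u)
    ```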

  13. Manual control of yaw motion with combined visual and vestibular cues

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1977-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
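
    The abstract describes a complementary visual-vestibular structure with a crossover near 0.05 Hz; the sketch below illustrates that idea with a pair of complementary first-order filters, treating the vestibular channel as reliable at high frequency and the visual channel as reliable at low frequency. The signal models and filter orders are assumptions; only the 0.05 Hz crossover comes from the abstract.

    ```python
    import numpy as np
    from scipy import signal

    fc = 0.05                          # crossover frequency [Hz], from the abstract
    wc = 2 * np.pi * fc
    dt = 0.01
    t = np.arange(0, 120, dt)

    # Hypothetical true yaw velocity and two noisy sensory channels.
    rng = np.random.default_rng(0)
    true_rate = 5.0 * np.sin(2 * np.pi * 0.02 * t)                    # slow rotation [deg/s]
    vestibular = true_rate + np.cumsum(rng.normal(0, 0.01, t.size))   # drifts at low frequency
    visual = true_rate + rng.normal(0, 1.0, t.size)                   # noisy but unbiased

    # Complementary filtering: high-pass the vestibular channel, low-pass the
    # visual channel, and sum; the two filters add to unity at every frequency.
    hp = signal.TransferFunction([1, 0], [1, wc])
    lp = signal.TransferFunction([wc], [1, wc])
    _, vest_part, _ = signal.lsim(hp, vestibular, t)
    _, vis_part, _ = signal.lsim(lp, visual, t)
    estimate = vest_part + vis_part

    print(np.sqrt(np.mean((estimate - true_rate) ** 2)))              # rms estimation error
    ```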

  14. Control of a haptic gear shifting assistance device utilizing a magnetorheological clutch

    NASA Astrophysics Data System (ADS)

    Han, Young-Min; Choi, Seung-Bok

    2014-10-01

    This paper proposes a haptic clutch driven gear shifting assistance device that can help when the driver shifts the gear of a transmission system. In order to achieve this goal, a magnetorheological (MR) fluid-based clutch is devised to be capable of the rotary motion of an accelerator pedal to which the MR clutch is integrated. The proposed MR clutch is then manufactured, and its transmission torque is experimentally evaluated according to the magnetic field intensity. The manufactured MR clutch is integrated with the accelerator pedal to transmit a haptic cue signal to the driver. The impending control issue is to cue the driver to shift the gear via the haptic force. Therefore, a gear-shifting decision algorithm is constructed by considering the vehicle engine speed concerned with engine combustion dynamics, vehicle dynamics and driving resistance. Then, the algorithm is integrated with a compensation strategy for attaining the desired haptic force. In this work, the compensator is also developed and implemented through the discrete version of the inverse hysteretic model. The control performances, such as the haptic force tracking responses and fuel consumption, are experimentally evaluated.
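
    The gear-shifting decision algorithm is described only at a high level (engine speed, vehicle dynamics, driving resistance); the sketch below is a deliberately simplified, hypothetical stand-in that issues an up- or down-shift cue from engine speed alone with hysteresis, just to make the decision-then-cue structure concrete. None of the thresholds come from the paper.

    ```python
    def shift_cue(engine_rpm, gear, upshift_rpm=2500.0, downshift_rpm=1200.0, max_gear=6):
        """Hypothetical gear-shift cue decision based on engine speed.

        Returns 'up', 'down', or None. Separate up/down thresholds provide
        hysteresis so the haptic cue does not chatter near one switching point.
        """
        if gear < max_gear and engine_rpm > upshift_rpm:
            return "up"
        if gear > 1 and engine_rpm < downshift_rpm:
            return "down"
        return None

    # Sweep engine speed up and back down in 3rd gear.
    for rpm in (1000, 1800, 2600, 2000, 1100):
        print(rpm, shift_cue(rpm, gear=3))
    ```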

  15. Attributing intentions to random motion engages the posterior superior temporal sulcus.

    PubMed

    Lee, Su Mei; Gao, Tao; McCarthy, Gregory

    2014-01-01

    The right posterior superior temporal sulcus (pSTS) is a neural region involved in assessing the goals and intentions underlying the motion of social agents. Recent research has identified visual cues, such as chasing, that trigger animacy detection and intention attribution. When readily available in a visual display, these cues reliably activate the pSTS. Here, using functional magnetic resonance imaging, we examined if attributing intentions to random motion would likewise engage the pSTS. Participants viewed displays of four moving circles and were instructed to search for chasing or mirror-correlated motion. On chasing trials, one circle chased another circle, invoking the percept of an intentional agent; while on correlated motion trials, one circle's motion was mirror reflected by another. On the remaining trials, all circles moved randomly. As expected, pSTS activation was greater when participants searched for chasing vs correlated motion when these cues were present in the displays. Of critical importance, pSTS activation was also greater when participants searched for chasing compared to mirror-correlated motion when the displays in both search conditions were statistically identical random motion. We conclude that pSTS activity associated with intention attribution can be invoked by top-down processes in the absence of reliable visual cues for intentionality.

  16. Relation of motion sickness susceptibility to vestibular and behavioral measures of orientation

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.

    1995-01-01

    The objective is to determine the relationship of motion sickness susceptibility to vestibulo-ocular reflexes (VOR), motion perception, and behavioral utilization of sensory orientation cues for the control of postural equilibrium. The work is focused on reflexes and motion perception associated with pitch and roll movements that stimulate the vertical semicircular canals and otolith organs of the inner ear. This work is relevant to the space motion sickness problem since 0 g related sensory conflicts between vertical canal and otolith motion cues are a likely cause of space motion sickness.

  17. Experimental measurements of motion cue effects on STOL approach tasks

    NASA Technical Reports Server (NTRS)

    Ringland, R. F.; Stapleford, R. L.

    1972-01-01

    An experimental program to investigate the effects of motion cues on STOL approach is presented. The simulator used was the Six-Degrees-of-Freedom Motion Simulator (S.01) at Ames Research Center of NASA which has ?2.7 m travel longitudinally and laterally and ?2.5 m travel vertically. Three major experiments, characterized as tracking tasks, were conducted under fixed and moving base conditions: (1) A simulated IFR approach of the Augmentor Wing Jet STOL Research Aircraft (AWJSRA), (2) a simulated VFR task with the same aircraft, and (3) a single-axis task having only linear acceleration as the motion cue. Tracking performance was measured in terms of the variances of several motion variables, pilot vehicle describing functions, and pilot commentary.

  18. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  19. He Throws like a Girl (but Only when He's Sad): Emotion Affects Sex-Decoding of Biological Motion Displays

    ERIC Educational Resources Information Center

    Johnson, Kerri L.; McKay, Lawrie S.; Pollick, Frank E.

    2011-01-01

    Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming…

  20. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
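
    The threshold measurements above rest on the theory of signal detectability, in which sensitivity to a motion cue is summarized by the index d'. As a hedged illustration of that framework only (a yes/no formulation with placeholder counts, not the study's forced-choice procedure or its analysis code), a minimal Python sketch might look like this:

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """Sensitivity index d' for a yes/no acceleration-detection task.

            A small count correction keeps the rates away from 0 and 1 so the
            z-transform stays finite (a common convention, assumed here rather
            than taken from the original report).
            """
            n_signal = hits + misses
            n_noise = false_alarms + correct_rejections
            hit_rate = (hits + 0.5) / (n_signal + 1.0)
            fa_rate = (false_alarms + 0.5) / (n_noise + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Hypothetical counts: 42 hits, 8 misses, 12 false alarms, 38 correct rejections
        print(d_prime(42, 8, 12, 38))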

  1. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    NASA Astrophysics Data System (ADS)

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

    This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed, and severe occlusions happen frequently among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. Automatically tracking the skaters and precisely outputting their trajectories is therefore a challenging object-tracking task. We employ the global rink information to compensate for camera motion and obtain the global spatial information of the skaters, utilize a random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to label pixels as belonging to a skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.
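
    The random-forest fusion step can be pictured with a minimal, hypothetical sketch: per-pixel feature vectors built from several cues are fed to a forest that outputs a skater-versus-background probability. The feature names and data below are placeholders for illustration, not the authors' actual descriptors or implementation:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Placeholder per-pixel cue features: [colour score, motion magnitude,
        # distance to predicted blob centre, edge response]
        X_train = rng.random((2000, 4))
        # Placeholder labels: 1 = skater pixel, 0 = background
        y_train = (0.6 * X_train[:, 1] + 0.4 * (1 - X_train[:, 2]) > 0.5).astype(int)

        forest = RandomForestClassifier(n_estimators=100, random_state=0)
        forest.fit(X_train, y_train)

        # Fusing the cues for new pixels: the forest returns a skater probability
        X_new = rng.random((5, 4))
        print(forest.predict_proba(X_new)[:, 1])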

  2. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  3. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    PubMed

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  4. Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion

    PubMed Central

    Calabro, Finnegan J.; Vaina, Lucia Maria

    2016-01-01

    Background: Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). Material/Methods: Sixteen right-handed healthy observers (ages 18–28) participated in the behavioral experiments described in this study. Using analytical behavioral methods we investigate the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. Results: Statistical analyses of performance on the test experiments in comparison to the control experiments suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and, thus, do not provide a correct discrimination of 3D object trajectories. Conclusions: These results have important implications for the type of visually guided navigation that can be done by an observer blind to optic flow. Scale change is an important alternative cue for self-motion. PMID:27231114

  5. Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion.

    PubMed

    Calabro, Finnegan J; Vaina, Lucia Maria

    2016-05-27

    BACKGROUND: Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). MATERIAL AND METHODS: Sixteen right-handed healthy observers (ages 18-28) participated in the behavioral experiments described in this study. Using analytical behavioral methods we investigate the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. RESULTS: Statistical analyses of performance on the test experiments in comparison to the control experiments suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and, thus, do not provide a correct discrimination of 3D object trajectories. CONCLUSIONS: These results have important implications for the type of visually guided navigation that can be done by an observer blind to optic flow. Scale change is an important alternative cue for self-motion.

  6. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    The sense of smell is often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known if olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  7. Near-optimal integration of facial form and motion.

    PubMed

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

    Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well shown that humans use an optimal strategy when integrating low-level cues proportional to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
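
    The optimal integration referred to here is typically formalized as reliability-weighted averaging. As a general sketch of that framework (not necessarily the authors' exact formulation), each cue estimate is weighted by its inverse variance:

        \hat{s} = w_{\mathrm{form}}\,\hat{s}_{\mathrm{form}} + w_{\mathrm{motion}}\,\hat{s}_{\mathrm{motion}},
        \qquad
        w_i = \frac{1/\sigma_i^{2}}{1/\sigma_{\mathrm{form}}^{2} + 1/\sigma_{\mathrm{motion}}^{2}}

    so the more reliable cue (smaller variance) dominates the combined identity estimate.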

  8. Human Perception of Ambiguous Inertial Motion Cues

    NASA Technical Reports Server (NTRS)

    Zhang, Guan-Lu

    2010-01-01

    Human daily activities on Earth involve motions that elicit both tilt and translation components of the head (i.e. gazing and locomotion). With otolith cues alone, tilt and translation can be ambiguous since both motions can potentially displace the otolithic membrane by the same magnitude and direction. Transitions between gravity environments (i.e. Earth, microgravity and lunar) have been demonstrated to alter the functions of the vestibular system and exacerbate the ambiguity between tilt and translational motion cues. Symptoms of motion sickness and spatial disorientation can impair human performance during critical mission phases. Specifically, Space Shuttle landing records show that particular cases of tilt-translation illusions have impaired the performance of seasoned commanders. This sensorimotor condition is one of many operational risks that may have dire implications on future human space exploration missions. The neural strategy with which the human central nervous system distinguishes ambiguous inertial motion cues remains the subject of intense research. A prevailing theory in the neuroscience field proposes that the human brain is able to formulate a neural internal model of ambiguous motion cues such that tilt and translation components can be perceptually decomposed in order to elicit the appropriate bodily response. The present work uses this theory, known as the GIF resolution hypothesis, as the framework for its experimental hypotheses. Specifically, two novel motion paradigms are employed to validate the neural capacity of ambiguous inertial motion decomposition in ground-based human subjects. The experimental setup involves the Tilt-Translation Sled at the Neuroscience Laboratory of NASA JSC. This two degree-of-freedom motion system is able to tilt subjects in the pitch plane and translate them along the fore-aft axis. Perception data will be gathered through subject verbal reports. Preliminary analysis of perceptual data does not indicate that the GIF resolution hypothesis is completely valid for non-rotational periodic motions. Additionally, human perception of translation is impaired without visual or spatial reference. The performance of ground-based subjects in estimating tilt after brief training is comparable with that of crewmembers without training.
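
    The ambiguity at issue follows from the physics of the otolith organs: they transduce only the net gravito-inertial force (specific force), so a head tilt and a linear acceleration can produce identical stimuli. As a sketch of the standard formulation behind the GIF resolution hypothesis (stated from general vestibular-modelling conventions, not quoted from this report):

        \mathbf{f} \;=\; \mathbf{g} - \mathbf{a},
        \qquad
        \hat{\mathbf{g}} - \hat{\mathbf{a}} \;=\; \mathbf{f}

    where f is the specific force sensed by the otoliths, g is gravity, and a is the linear acceleration of the head; the hypothesis holds that the central nervous system decomposes the single measured f into separate internal estimates of tilt (g) and translation (a).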

  9. Sensorimotor Adaptation Following Exposure to Ambiguous Inertial Motion Cues

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Clement, G. R.; Rupert, A. H.; Reschke, M. F.; Harm, D. L.; Guedry, F. E.

    2007-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive accurate spatial orientation awareness. Adaptive changes in how inertial cues from the otolith system are integrated with other sensory information lead to perceptual and postural disturbances upon return to Earth's gravity. The primary goals of this ground-based research investigation are to explore physiological mechanisms and operational implications of tilt-translation disturbances during and following re-entry, and to evaluate a tactile prosthesis as a countermeasure for improving control of whole-body orientation during tilt and translation motion.

  10. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  11. The role of temporal synchrony as a binding cue for visual persistence in early visual areas: an fMRI study.

    PubMed

    Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis

    2009-12-01

    We examined the role of temporal synchrony-the simultaneous appearance of visual features-in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.

  12. Visual Benefits in Apparent Motion Displays: Automatically Driven Spatial and Temporal Anticipation Are Partially Dissociated

    PubMed Central

    Ahrens, Merle-Marie; Veniero, Domenica; Gross, Joachim; Harvey, Monika; Thut, Gregor

    2015-01-01

    Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech. PMID:26623650

  13. Hybrid generative-discriminative human action recognition by combining spatiotemporal words with supervised topic models

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Wang, Cheng; Wang, Boliang

    2011-02-01

    We present a hybrid generative-discriminative learning method for human action recognition from video sequences. Our model combines a bag-of-words component with supervised latent topic models. A video sequence is represented as a collection of spatiotemporal words by extracting space-time interest points and describing these points using both shape and motion cues. The supervised latent Dirichlet allocation (sLDA) topic model, which employs discriminative learning using labeled data under a generative framework, is introduced to discover the latent topic structure that is most relevant to action categorization. The proposed algorithm retains most of the desirable properties of generative learning while increasing the classification performance through a discriminative setting. It has also been extended to exploit both labeled data and unlabeled data to learn human actions under a unified framework. We test our algorithm on three challenging data sets: the KTH human motion data set, the Weizmann human action data set, and a ballet data set. Our results are either comparable to or significantly better than previously published results on these data sets and reflect the promise of hybrid generative-discriminative learning approaches.
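
    As a loose, hedged sketch of such a pipeline (not the authors' implementation, and substituting ordinary unsupervised LDA plus a discriminative classifier for the supervised sLDA model), a clip represented as a histogram over a spatiotemporal vocabulary could be classified like this:

        import numpy as np
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Placeholder data: 200 clips, each a histogram over a 500-word
        # spatiotemporal vocabulary (counts of quantized space-time descriptors).
        word_counts = rng.poisson(1.0, size=(200, 500))
        action_labels = rng.integers(0, 6, size=200)  # e.g. six action classes

        # Unsupervised LDA stands in for the supervised sLDA topic model:
        # each clip is mapped to a low-dimensional topic mixture...
        lda = LatentDirichletAllocation(n_components=20, random_state=0)
        topic_mix = lda.fit_transform(word_counts)

        # ...and a discriminative classifier is trained on the topic mixtures.
        clf = LogisticRegression(max_iter=1000).fit(topic_mix, action_labels)
        print(clf.score(topic_mix, action_labels))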

  14. The contribution of dynamic visual cues to audiovisual speech perception.

    PubMed

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this aim, we measured word identification performance in noise using unimodal auditory stimuli, and with audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point light displays achieved via motion capture of the original talker. Point light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Analytical evaluation of two motion washout techniques

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1977-01-01

    Practical tools were developed which extend the state of the art of moving base flight simulation for research and training purposes. The use of visual and vestibular cues to minimize the actual motion of the simulator itself was a primary consideration. The investigation consisted of optimum programming of motion cues based on a physiological model of the vestibular system to yield 'ideal washout logic' for any given simulator constraints.
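
    The "ideal washout logic" referred to above belongs to the family of washout filters that pass transient onset accelerations to the motion base and attenuate sustained components. As a hedged illustration of the general idea only (a classical high-pass washout with assumed parameters, not the study's vestibular-model-based optimum), the surge channel might be sketched as:

        import numpy as np
        from scipy.signal import butter, lfilter

        fs = 100.0                     # sample rate in Hz (assumed)
        f_washout = 0.5                # washout break frequency in Hz (assumed)
        b, a = butter(2, f_washout, btype="highpass", fs=fs)

        t = np.arange(0, 20, 1 / fs)
        aircraft_accel = np.where(t > 2.0, 1.0, 0.0)   # sustained 1 m/s^2 surge step

        # The platform reproduces the onset, then washes out toward zero,
        # keeping the motion base within its travel limits.
        platform_accel = lfilter(b, a, aircraft_accel)
        print(platform_accel.max(), platform_accel[-1])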

  16. The role of eye movements in depth from motion parallax during infancy

    PubMed Central

    Nawrot, Elizabeth; Nawrot, Mark

    2013-01-01

    Motion parallax is a motion-based, monocular depth cue that uses an object's relative motion and velocity as a cue to relative depth. In adults, and in monkeys, a smooth pursuit eye movement signal is used to disambiguate the depth-sign provided by these relative motion cues. The current study investigates infants' perception of depth from motion parallax and the development of two oculomotor functions, smooth pursuit and the ocular following response (OFR) eye movements. Infants 8 to 20 weeks of age were presented with three tasks in a single session: depth from motion parallax, smooth pursuit tracking, and OFR to translation. The development of smooth pursuit was significantly related to age, as was sensitivity to motion parallax. OFR eye movements also corresponded to both age and smooth pursuit gain, with groups of infants demonstrating asymmetric function in both types of eye movements. These results suggest that the development of the eye movement system may play a crucial role in the sensitivity to depth from motion parallax in infancy. Moreover, describing the development of these oculomotor functions in relation to depth perception may aid in the understanding of certain visual dysfunctions. PMID:24353309

  17. The effect of visual-motion time delays on pilot performance in a pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1976-01-01

    A study has been made to determine the effect of visual-motion time delays on pilot performance of a simulated pursuit tracking task. Three interrelated major effects have been identified: task difficulty, motion cues, and time delays. As task difficulty, as determined by airplane handling qualities or target frequency, increases, the amount of acceptable time delay decreases. However, when relatively complete motion cues are included in the simulation, the pilot can maintain his performance for considerably longer time delays. In addition, the number of degrees of freedom of motion employed is a significant factor.

  18. Space, color, and direction of movement: how do they affect attention?

    PubMed

    Verghese, Ashika; Anderson, Andrew J; Vidyasagar, Trichur R

    2013-07-19

    Paying attention improves performance, but is this improvement regardless of what we attend to? We explored the differences in performance between attending to a location and attending to a feature when perceiving global motion. Attention was first cued to one of four locations that had coherently moving dots, while the remaining three had randomly moving distracter dots. Participants then viewed a colored display, wherein the color of the coherently moving dots was cued instead of location. In the third task, participants identified the location that had a particular cued direction of motion. Most observers reported reductions of motion threshold in all three tasks compared to when no cue was provided. However, the attentional bias generated by location cues was significantly larger than the bias resulting from feature cues of direction or color. This effect is consistent with the idea that attention is largely controlled by a fronto-parietal network where spatial relations are preferentially processed. On the other hand, color could not be used as a cue to focus attention and integrate motion. This finding suggests that color relies heavily on processing by ventral temporal cortical areas, which may have little control over the global motion areas in the dorsal part of the brain.

  19. Visual guidance of forward flight in hummingbirds reveals control based on image features instead of pattern velocity.

    PubMed

    Dakin, Roslyn; Fellows, Tyee K; Altshuler, Douglas L

    2016-08-02

    Information about self-motion and obstacles in the environment is encoded by optic flow, the movement of images on the eye. Decades of research have revealed that flying insects control speed, altitude, and trajectory by a simple strategy of maintaining or balancing the translational velocity of images on the eyes, known as pattern velocity. It has been proposed that birds may use a similar algorithm but this hypothesis has not been tested directly. We examined the influence of pattern velocity on avian flight by manipulating the motion of patterns on the walls of a tunnel traversed by Anna's hummingbirds. Contrary to prediction, we found that lateral course control is not based on regulating nasal-to-temporal pattern velocity. Instead, birds closely monitored feature height in the vertical axis, and steered away from taller features even in the absence of nasal-to-temporal pattern velocity cues. For vertical course control, we observed that birds adjusted their flight altitude in response to upward motion of the horizontal plane, which simulates vertical descent. Collectively, our results suggest that birds avoid collisions using visual cues in the vertical axis. Specifically, we propose that birds monitor the vertical extent of features in the lateral visual field to assess distances to the side, and vertical pattern velocity to avoid collisions with the ground. These distinct strategies may derive from greater need to avoid collisions in birds, compared with small insects.

  20. Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object

    PubMed Central

    Dokka, Kalpana; DeAngelis, Gregory C.

    2015-01-01

    Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in the precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
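
    The optimal cue integration prediction tested here is the standard one for discrimination thresholds; as a sketch of that general relation (not the paper's exact analysis), the predicted combined variance is

        \sigma_{\mathrm{comb}}^{2} \;=\; \frac{\sigma_{\mathrm{vis}}^{2}\,\sigma_{\mathrm{vest}}^{2}}{\sigma_{\mathrm{vis}}^{2} + \sigma_{\mathrm{vest}}^{2}} \;\le\; \min\!\left(\sigma_{\mathrm{vis}}^{2},\,\sigma_{\mathrm{vest}}^{2}\right)

    so the combined heading threshold should be no worse than the better single-cue threshold, which is the benchmark against which the measured multisensory thresholds are compared.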

  1. Form and motion make independent contributions to the response to biological motion in occipitotemporal cortex.

    PubMed

    Thompson, James C; Baccus, Wendy

    2012-01-02

    Psychophysical and computational studies have provided evidence that both form and motion cues are used in the perception of biological motion. However, neuroimaging and neurophysiological studies have suggested that the neural processing of actions in temporal cortex might rely on form cues alone. Here we examined the contribution of form and motion to the spatial pattern of response to biological motion in ventral and lateral occipitotemporal cortex, using functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA). We found that selectivity to intact versus scrambled biological motion in lateral occipitotemporal cortex was correlated with selectivity for bodies and not for motion. However, this appeared to be due to the fact that subtracting scrambled from intact biological motion removes any contribution of local motion cues. Instead, we found that form and motion made independent contributions to the spatial pattern of responses to biological motion in lateral occipitotemporal regions MT, MST, and the extrastriate body area. The motion contribution was position-dependent, and consistent with the representation of contra- and ipsilateral visual fields in MT and MST. In contrast, only form contributed to the response to biological motion in the fusiform body area, with a bias towards central versus peripheral presentation. These results indicate that the pattern of response to biological motion in ventral and lateral occipitotemporal cortex reflects the linear combination of responses to form and motion. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Relation of motion sickness susceptibility to vestibular and behavioral measures of orientation

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.

    1994-01-01

    The objective of this proposal is to determine the relationship of motion sickness susceptibility to vestibulo-ocular reflexes (VOR), motion perception, and behavioral utilization of sensory orientation cues for the control of postural equilibrium. The work is focused on reflexes and motion perception associated with pitch and roll movements that stimulate the vertical semicircular canals and otolith organs of the inner ear. This work is relevant to the space motion sickness problem since 0 g related sensory conflicts between vertical canal and otolith motion cues are a likely cause of space motion sickness. Results of experimentation are summarized and modifications to a two-axis rotation device are described. Abstracts of a number of papers generated during the reporting period are appended.

  3. Efficiency of extracting stereo-driven object motions

    PubMed Central

    Jain, Anshul; Zaidi, Qasim

    2013-01-01

    Most living things and many nonliving things deform as they move, requiring observers to separate object motions from object deformations. When the object is partially occluded, the task becomes more difficult because it is not possible to use two-dimensional (2-D) contour correlations (Cohen, Jain, & Zaidi, 2010). That leaves dynamic depth matching across the unoccluded views as the main possibility. We examined the role of stereo cues in extracting motion of partially occluded and deforming three-dimensional (3-D) objects, simulated by disk-shaped random-dot stereograms set at randomly assigned depths and placed uniformly around a circle. The stereo-disparities of the disks were temporally oscillated to simulate clockwise or counterclockwise rotation of the global shape. To dynamically deform the global shape, random disparity perturbation was added to each disk's depth on each stimulus frame. At low perturbation, observers reported rotation directions consistent with the global shape, even against local motion cues, but performance deteriorated at high perturbation. Using 3-D global shape correlations, we formulated an optimal Bayesian discriminator for rotation direction. Based on rotation discrimination thresholds, human observers were 75% as efficient as the optimal model, demonstrating that global shapes derived from stereo cues facilitate inferences of object motions. To complement reports of stereo and motion integration in extrastriate cortex, our results suggest the possibilities that disparity selectivity and feature tracking are linked, or that global motion selective neurons can be driven purely from disparity cues. PMID:23325345
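
    A crude way to picture the correlation-based discrimination described here (a hypothetical stand-in, not the authors' Bayesian model) is to compare each frame's depth profile against the previous frame circularly shifted one disk clockwise versus counterclockwise, and vote for the better-matching shift:

        import numpy as np

        def rotation_direction(depths):
            """Guess global rotation direction from per-disk depths over frames.

            depths: array of shape (n_frames, n_disks), disks listed in angular
            order around the circle.
            """
            votes = 0
            for prev, cur in zip(depths[:-1], depths[1:]):
                cw_err = np.sum((cur - np.roll(prev, 1)) ** 2)
                ccw_err = np.sum((cur - np.roll(prev, -1)) ** 2)
                votes += 1 if cw_err < ccw_err else -1
            return "clockwise" if votes > 0 else "counterclockwise"

        rng = np.random.default_rng(0)
        shape = rng.normal(size=16)                       # random global depth profile
        frames = [np.roll(shape, k) + 0.1 * rng.normal(size=16) for k in range(8)]
        print(rotation_direction(np.array(frames)))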

  4. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography.

    PubMed

    Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao

    2017-10-01

    Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently to elicit the percept of a right/leftward moving sound, or incongruently to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion-onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment peaking at about 250 ms after motion onset. In sum, this study shows that a sound which - by a combination of counter-balanced ITD and ILD cues - induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR - a component that has been proposed to be, at least partly, generated in non-primary auditory cortex. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Non-rigid, but not rigid, motion interferes with the processing of structural face information in developmental prosopagnosia.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2015-04-01

    There is growing evidence to suggest that facial motion is an important cue for face recognition. However, it is poorly understood whether motion is integrated with facial form information or whether it provides an independent cue to identity. To provide further insight into this issue, we compared the effect of motion on face perception in two developmental prosopagnosics and age-matched controls. Participants first learned faces presented dynamically (video), or in a sequence of static images, in which rigid (viewpoint) or non-rigid (expression) changes occurred. Immediately following learning, participants were required to match a static face image to the learned face. Test face images varied by viewpoint (Experiment 1) or expression (Experiment 2) and were learned or novel face images. We found similar performance across prosopagnosics and controls in matching facial identity across changes in viewpoint when the learned face was shown moving in a rigid manner. However, non-rigid motion interfered with face matching across changes in expression for both individuals with prosopagnosia, relative to the performance of control participants. In contrast, non-rigid motion did not differentially affect the matching of facial expressions across changes in identity for either prosopagnosic (Experiment 3). Our results suggest that whilst the processing of rigid motion information of a face may be preserved in developmental prosopagnosia, non-rigid motion can specifically interfere with the representation of structural face information. Taken together, these results suggest that both form and motion cues are important in face perception and that these cues are likely integrated in the representation of facial identity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Effects of Motion Cues on the Training of Multi-Axis Manual Control Skills

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Mobertz, Xander R. I.

    2017-01-01

    The study described in this paper investigated the effects of two different hexapod motion configurations on the training and transfer of training of a simultaneous roll and pitch control task. Pilots were divided into two groups that trained under either a baseline hexapod motion condition, with motion typical of current training simulators, or an optimized hexapod motion condition, with increased fidelity of the motion cues most relevant to the task. All pilots transferred to the same full-motion condition, representing motion experienced in flight. A cybernetic approach was used that gave insights into the development of pilots' use of visual and motion cues over the course of training and after transfer. Based on the current results, neither of the hexapod motion conditions can unambiguously be chosen as providing the best motion for training and transfer of training of the multi-axis control task used. However, the optimized hexapod motion condition did allow pilots to generate less visual lead, control with higher gains, and have better disturbance-rejection performance at the end of the training session compared to the baseline hexapod motion condition. Significant adaptations in control behavior still occurred in the transfer phase under the full-motion condition for both groups. Pilots behaved less linearly compared to previous single-axis control-task experiments; however, this did not result in smaller motion or learning effects. Motion and learning effects were more pronounced in pitch compared to roll. Finally, valuable lessons were learned that allow us to improve the adopted approach for future transfer-of-training studies.

  7. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  8. Three-dimensional computer graphic animations for studying social approach behaviour in medaka fish: Effects of systematic manipulation of morphological and motion cues.

    PubMed

    Nakayasu, Tomohiro; Yasugi, Masaki; Shiraishi, Soma; Uchida, Seiichi; Watanabe, Eiji

    2017-01-01

    We studied social approach behaviour in medaka fish using three-dimensional computer graphic (3DCG) animations based on the morphological features and motion characteristics obtained from real fish. This is the first study to use 3DCG animations to examine the relative effects of morphological and motion cues on social approach behaviour in medaka. Various visual stimuli, e.g., lack of motion, lack of colour, alternation in shape, lack of locomotion, lack of body motion, and normal virtual fish in which all four features (colour, shape, locomotion, and body motion) were reconstructed, were created and presented to fish using a computer display. Medaka fish presented with normal virtual fish spent a long time in proximity to the display, whereas time spent near the display was decreased in other groups when compared with the normal virtual medaka group. The results suggested that the naturalness of visual cues contributes to the induction of social approach behaviour. Differential effects between body motion and locomotion were also detected. 3DCG animations can be a useful tool to study the mechanisms of visual processing and social behaviour in medaka.

  9. Three-dimensional computer graphic animations for studying social approach behaviour in medaka fish: Effects of systematic manipulation of morphological and motion cues

    PubMed Central

    Nakayasu, Tomohiro; Yasugi, Masaki; Shiraishi, Soma; Uchida, Seiichi; Watanabe, Eiji

    2017-01-01

    We studied social approach behaviour in medaka fish using three-dimensional computer graphic (3DCG) animations based on the morphological features and motion characteristics obtained from real fish. This is the first study to use 3DCG animations to examine the relative effects of morphological and motion cues on social approach behaviour in medaka. Various visual stimuli, e.g., lack of motion, lack of colour, alternation in shape, lack of locomotion, lack of body motion, and normal virtual fish in which all four features (colour, shape, locomotion, and body motion) were reconstructed, were created and presented to fish using a computer display. Medaka fish presented with normal virtual fish spent a long time in proximity to the display, whereas time spent near the display was decreased in other groups when compared with the normal virtual medaka group. The results suggested that the naturalness of visual cues contributes to the induction of social approach behaviour. Differential effects between body motion and locomotion were also detected. 3DCG animations can be a useful tool to study the mechanisms of visual processing and social behaviour in medaka. PMID:28399163

  10. First impressions: gait cues drive reliable trait judgements.

    PubMed

    Thoresen, John C; Vuong, Quoc C; Atkinson, Anthony P

    2012-09-01

    Personality trait attribution can underpin important social decisions and yet requires little effort; even a brief exposure to a photograph can generate lasting impressions. Body movement is a channel readily available to observers and allows judgements to be made when facial and body appearances are less visible; e.g., from great distances. Across three studies, we assessed the reliability of trait judgements of point-light walkers and identified motion-related visual cues driving observers' judgements. The findings confirm that observers make reliable, albeit inaccurate, trait judgements, and these were linked to a small number of motion components derived from a Principal Component Analysis of the motion data. Parametric manipulation of the motion components linearly affected trait ratings, providing strong evidence that the visual cues captured by these components drive observers' trait judgements. Subsequent analyses suggest that reliability of trait ratings was driven by impressions of emotion, attractiveness and masculinity. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Integration of visual and motion cues for flight simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Investigations for the improvement of flight simulators are reported. Topics include: visual cues in landing, comparison of linear and nonlinear washout filters using a model of the vestibular system, and visual vestibular interactions (yaw axis). An abstract is given for a thesis on the applications of human dynamic orientation models to motion simulation.

  12. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display.

    PubMed

    Wang, Qingcui; Bao, Ming; Chen, Lihan

    2014-01-01

    Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important cues used to fuse or segregate sound streams. However, the perceptual grouping was partially driven by the cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) could also be applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To find the answer to that question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound consisting of two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the difference in frequency between the two auditory frames, in addition to the spatiotemporal cues as in Experiment 1. Listeners were required to make a two-alternative forced-choice (2AFC) judgment to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by the perceptual decisions of the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout plays a lesser role in perceptual organization. These results could be accounted for by the 'peripheral channeling' theory.

  13. Spatial constraints of stereopsis in video displays

    NASA Technical Reports Server (NTRS)

    Schor, Clifton

    1989-01-01

    Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittleson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward. However, only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which, when properly adjusted, can greatly enhance stereo depth in video displays.
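
    The small disparities cited above translate into depth intervals through the viewing geometry. As a standard small-angle approximation (stated here for orientation, not taken from the article), a retinal disparity of eta radians at viewing distance D with interocular separation I corresponds to a depth interval of roughly

        \eta \;\approx\; \frac{I\,\Delta d}{D^{2}}
        \qquad\Longleftrightarrow\qquad
        \Delta d \;\approx\; \frac{\eta\,D^{2}}{I}

    which is why a 5 to 10 arc-second disparity threshold implies very fine depth resolution at near viewing distances.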

  14. Talking heads or talking eyes? Effects of head orientation and sudden onset gaze cues on attention capture.

    PubMed

    van der Wel, Robrecht P; Welsh, Timothy; Böckler, Anne

    2018-01-01

    The direction of gaze towards or away from an observer has immediate effects on attentional processing in the observer. Previous research indicates that faces with direct gaze are processed more efficiently than faces with averted gaze. We recently reported additional processing advantages for faces that suddenly adopt direct gaze (abruptly shift from averted to direct gaze) relative to static direct gaze (always in direct gaze), sudden averted gaze (abruptly shift from direct to averted gaze), and static averted gaze (always in averted gaze). Because changes in gaze orientation in the previous study co-occurred with changes in head orientation, it was not clear if the effect is contingent on face or eye processing, or whether it requires both the eyes and the face to provide consistent information. The present study delineates the impact of head orientation, sudden onset motion cues, and gaze cues. Participants completed a target-detection task in which head position remained in a static averted or direct orientation while sudden onset motion and eye gaze cues were manipulated within each trial. The results indicate a sudden direct gaze advantage that resulted from the additive role of motion and gaze cues. Interestingly, the orientation of the face towards or away from the observer did not influence the sudden direct gaze effect, suggesting that eye gaze cues, not face orientation cues, are critical for the sudden direct gaze effect.

  15. Evaluating motion parallax and stereopsis as depth cues for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Braun, Marius; Leiner, Ulrich; Ruschin, Detlef

    2011-03-01

    The perception of space in the real world is based on multifaceted depth cues, most of them monocular, some binocular. Developing 3D displays raises the question of which of these depth cues are predominant and should be simulated by computational means in such a panel. Beyond the cues based on image content, such as shadows or patterns, stereopsis and depth from motion parallax are the most significant mechanisms supplying observers with depth information. We set up a carefully designed test situation, largely excluding other, undesired distance cues. Thereafter we conducted a user test to find out which of these two depth cues is more relevant and whether a combination of both would increase accuracy in a depth estimation task. The trials were conducted using our autostereoscopic "Free2C" displays, which are capable of detecting the user's eye position and steering the image lobes dynamically in that direction. At the same time, eye position was used to update the virtual camera's location, thereby offering motion parallax to the observer. As far as we know, this was the first time that such a test had been conducted using an autostereoscopic display without any assistive technologies. Our results showed, in accordance with prior experiments, that both cues are effective; however, stereopsis is more relevant by an order of magnitude. Combining both cues improved the precision of distance estimation by another 30-40%.

  16. Research on integration of visual and motion cues for flight simulation and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.; Oman, C. M.; Curry, R. E.

    1977-01-01

    Vestibular perception and the integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving visual fields and those produced by actual body tilt is discussed. Linearvection studies were included, and the application of the vestibular model to the perception of orientation based on motion cues is presented. Other areas of examination include visual cues in the approach to landing, and a comparison of linear and nonlinear washout filters using a model of the human vestibular system is given.

  17. A novel mechanism for mechanosensory-based rheotaxis in larval zebrafish.

    PubMed

    Oteiza, Pablo; Odstrcil, Iris; Lauder, George; Portugues, Ruben; Engert, Florian

    2017-07-27

    When flying or swimming, animals must adjust their own movement to compensate for displacements induced by the flow of the surrounding air or water. These flow-induced displacements can most easily be detected as visual whole-field motion with respect to the animal's frame of reference. Despite this, many aquatic animals consistently orient and swim against oncoming flows (a behaviour known as rheotaxis) even in the absence of visual cues. How animals achieve this task, and its underlying sensory basis, is still unknown. Here we show that, in the absence of visual information, larval zebrafish (Danio rerio) perform rheotaxis by using flow velocity gradients as navigational cues. We present behavioural data that support a novel algorithm based on such local velocity gradients that fish use to avoid getting dragged by flowing water. Specifically, we show that fish use their mechanosensory lateral line to first sense the curl (or vorticity) of the local velocity vector field to detect the presence of flow and, second, to measure its temporal change after swim bouts to deduce flow direction. These results reveal an elegant navigational strategy based on the sensing of flow velocity gradients and provide a comprehensive behavioural algorithm, also applicable for robotic design, that generalizes to a wide range of animal behaviours in moving fluids.
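
    As a rough, hedged illustration of the two-step strategy the abstract describes (detect flow from the local vorticity sensed across the body, then use its change across a swim bout to infer which way to turn), the Python toy below turns those steps into code. The sampling function, body width, threshold and turn size are invented placeholders, and the logic is a loose paraphrase rather than the behavioural algorithm fitted in the paper.

      import numpy as np

      def local_curl(v_left, v_right, body_width):
          """Crude vorticity proxy: transverse gradient of streamwise flow speed
          sensed by the left and right lateral-line neuromasts."""
          return (v_right - v_left) / body_width

      def rheotaxis_bout(sample_flow, heading, body_width=0.004,
                         curl_thresh=0.5, turn_deg=20.0):
          """One hypothetical decision cycle: detect a flow gradient, try a turn,
          and keep it only if the sensed gradient magnitude decreased."""
          curl_before = local_curl(*sample_flow(heading), body_width)
          if abs(curl_before) < curl_thresh:
              return heading                                  # no detectable flow
          trial = heading + np.deg2rad(turn_deg)              # exploratory swim bout
          curl_after = local_curl(*sample_flow(trial), body_width)
          if abs(curl_after) < abs(curl_before):
              return trial                                    # gradient shrinking: keep turning this way
          return heading - np.deg2rad(turn_deg)               # otherwise turn the other way

      if __name__ == "__main__":
          # Toy shear flow: sensed left/right speeds depend on body orientation.
          def sample_flow(heading, shear=2.0, width=0.004):
              across = 0.5 * width * np.cos(heading)          # lateral extent of the body
              return shear * (-across), shear * (across)
          print("new heading (deg): %.1f" % np.rad2deg(rheotaxis_bout(sample_flow, 0.0)))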

  18. Use of cues in virtual reality depends on visual feedback.

    PubMed

    Fulvio, Jacqueline M; Rokers, Bas

    2017-11-22

    3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.

  19. Simulated self-motion in a visual gravity field: sensitivity to vertical and horizontal heading in the human brain.

    PubMed

    Indovina, Iole; Maffei, Vincenzo; Pauwels, Karl; Macaluso, Emiliano; Orban, Guy A; Lacquaniti, Francesco

    2013-05-01

    Multiple visual signals are relevant to perception of heading direction. While the role of optic flow and depth cues has been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues to the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or 1g-acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions that have been previously associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued by the preceding open-air curves only. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial-temporal regions including para-hippocampus and hippocampus were more activated by horizontal motion, particularly at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical).

  20. Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking

    PubMed Central

    Saunders, Jeffrey A.

    2014-01-01

    Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
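
    The "optimal manner" referred to here is the standard inverse-variance (maximum-likelihood) cue-combination rule, in which each cue is weighted by its reliability and the combined estimate is less variable than either cue alone. The short Python sketch below illustrates that rule; the single-cue standard deviations are stand-in values chosen only so the combined prediction lands near the figures quoted in the abstract, not numbers reported by the study.

      import numpy as np

      def combine_cues(estimates, sigmas):
          """Inverse-variance cue combination.

          estimates: per-cue heading estimates (deg)
          sigmas:    per-cue standard deviations (deg)
          Returns (combined estimate, predicted SD, per-cue weights).
          """
          w = 1.0 / np.square(sigmas)
          w = w / w.sum()
          combined = float(np.dot(w, estimates))
          sigma_c = float(np.sqrt(1.0 / np.sum(1.0 / np.square(sigmas))))
          return combined, sigma_c, w

      if __name__ == "__main__":
          # Illustrative numbers: ~2.4 deg nonvisual variability as in the abstract,
          # plus a hypothetical 4.0 deg visual-only value.
          estimates = np.array([0.0, 1.0])      # nonvisual, visual heading (deg)
          sigmas = np.array([2.4, 4.0])
          c, s, w = combine_cues(estimates, sigmas)
          print("combined %.2f deg, predicted SD %.2f deg, visual weight %.2f" % (c, s, w[1]))

    With these stand-in values the predicted combined SD is about 2.1 degrees and the visual weight about 0.26, i.e. in the same range as the variability and weighting figures reported above, which is the sense in which the observed weighting can be called near-optimal.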

  1. Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.

    2015-01-01

    This paper intends to help establish fidelity criteria to accompany the simulator motion system diagnostic test specified by the International Civil Aviation Organization. Twelve airline transport pilots flew three tasks in the NASA Vertical Motion Simulator under four different motion conditions. The experiment used three different hexapod motion configurations, each with a different tradeoff between motion filter gain and break frequency, and one large motion configuration that utilized as much of the simulator's motion space as possible. The motion condition significantly affected: 1) pilot motion fidelity ratings, and sink rate and lateral deviation at touchdown for the approach and landing task, 2) pilot motion fidelity ratings, roll deviations, maximum pitch rate, and number of stick shaker activations in the stall task, and 3) heading deviation after an engine failure in the takeoff task. Significant differences in pilot-vehicle performance were used to define initial objective motion cueing criteria boundaries. These initial fidelity boundaries show promise but need refinement.

  2. Learning Cue Phrase Patterns from Radiology Reports Using a Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Robert M; Beckerman, Barbara G; Potok, Thomas E

    2009-01-01

    Various computer-assisted technologies have been developed to assist radiologists in detecting cancer; however, the algorithms still lack high degrees of sensitivity and specificity, and must undergo machine learning against a training set with known pathologies in order to further refine the algorithms with higher validity of truth. This work describes an approach to learning cue phrase patterns in radiology reports that utilizes a genetic algorithm (GA) as the learning method. The approach described here successfully learned cue phrase patterns for two distinct classes of radiology reports. These patterns can then be used as a basis for automatically categorizing, clustering, or retrieving relevant data for the user.
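
    The abstract does not spell out the GA's pattern encoding or fitness function, so the Python sketch below is only a generic illustration of the loop it describes: a population of candidate cue-phrase patterns is scored against a handful of labelled report snippets and refined by selection, crossover and mutation. The vocabulary, snippets, fitness score and parameters are all invented for the example.

      import random

      VOCAB = ["mass", "calcification", "benign", "density", "negative", "lesion"]
      POSITIVE = ["suspicious mass with calcification", "irregular density and lesion noted"]
      NEGATIVE = ["negative exam", "benign findings, no mass"]

      def fitness(pattern):
          """Reward patterns (sets of cue words) that match positive report snippets
          and penalize matches on negative ones. Purely illustrative."""
          hits = sum(any(w in rep for w in pattern) for rep in POSITIVE)
          false_alarms = sum(any(w in rep for w in pattern) for rep in NEGATIVE)
          return hits - false_alarms

      def mutate(pattern, rate=0.3):
          out = set(pattern)
          if random.random() < rate:
              out.symmetric_difference_update({random.choice(VOCAB)})   # add or drop a word
          return out or {random.choice(VOCAB)}

      def crossover(a, b):
          merged = list(a | b)
          random.shuffle(merged)
          return set(merged[: max(1, len(merged) // 2)])

      def evolve(generations=30, pop_size=12):
          population = [{random.choice(VOCAB)} for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=fitness, reverse=True)
              parents = population[: pop_size // 2]                      # truncation selection
              children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                          for _ in range(pop_size - len(parents))]
              population = parents + children
          return max(population, key=fitness)

      if __name__ == "__main__":
          random.seed(1)
          print("best cue-phrase pattern:", sorted(evolve()))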

  3. Spatial attention can be biased towards an expected dimension.

    PubMed

    Burnett, Katherine E; Close, Alex C; d'Avossa, Giovanni; Sapir, Ayelet

    2016-11-01

    A commonly held view in both exogenous and endogenous orienting is that spatial attention is associated with enhanced processing of all stimuli at the attended location. However, we often search for a specific target at a particular location, so an observer should be able to jointly specify the target identity and expected location. Whether attention can bias dimension-specific processes at a particular location is not yet clear. We used a dual task to examine the effects of endogenous spatial cues on the accuracy of perceptual judgments of different dimensions. Participants responded to a motion target and a colour target, presented at the same or different locations. We manipulated a central cue to predict the location of the motion or colour target. While overall performance in the two tasks was comparable, cueing effects were larger for the target whose location was predicted by the cue, implying that when attending a particular location, processing of the likely dimension was preferentially enhanced. Additionally, an asymmetry between the motion and colour tasks was seen; motion was modulated by attention, and colour was not. We conclude that attention has some ability to select a dimension at a particular location, indicating integration of spatial and feature-based attention.

  4. An Investigation of Visual, Aural, Motion and Control Movement Cues.

    ERIC Educational Resources Information Center

    Matheny, W. G.; And Others

    A study was conducted to determine the ways in which multi-sensory cues can be simulated and effectively used in the training of pilots. Two analytical bases, one called the stimulus environment approach and the other an information array approach, are developed along with a cue taxonomy. Cues are postulated on the basis of information gained from…

  5. Hand motion segmentation against skin colour background in breast awareness applications.

    PubMed

    Hu, Yuqin; Naguib, Raouf N G; Todman, Alison G; Amin, Saad A; Al-Omishy, Hassanein; Oikonomou, Andreas; Tucker, Nick

    2004-01-01

    Skin colour modelling and classification play significant roles in face and hand detection, recognition and tracking. The hand is an essential tool in breast self-examination, and it needs to be detected and analysed during breast palpation. However, the background of a woman's moving hand is her breast, which has the same or a similar colour to the hand. Additionally, colour images recorded by a web camera are strongly affected by lighting and brightness conditions. Hence, it is a challenging task to segment and track the hand against the breast without using any artificial markers, such as coloured nail polish. In this paper, a two-dimensional Gaussian skin colour model is employed in a particular way to identify the breast rather than the hand. First, an input image is transformed to YCbCr colour space, which is less sensitive to lighting conditions and more tolerant of skin tone. The breast, thus detected by the Gaussian skin model, is used as the baseline or framework for the hand motion. Secondly, motion cues are used to segment the hand motion against the detected baseline. The desired segmentation results were achieved, and the robustness of the algorithm is demonstrated in this paper.
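
    The two stages described above (a colour baseline from a 2D Gaussian over the chroma channels, then motion cues on top of it) can be sketched compactly. The Python below is a hedged illustration under assumed skin-chromaticity statistics; the mean, covariance, thresholds and the simple frame-differencing motion cue are placeholders rather than the model actually fitted in the paper.

      import numpy as np

      # Illustrative skin-chromaticity statistics in (Cb, Cr); not the paper's values.
      SKIN_MEAN = np.array([120.0, 150.0])
      SKIN_COV = np.array([[80.0, 15.0],
                           [15.0, 60.0]])

      def rgb_to_cbcr(img):
          """Convert an HxWx3 RGB image (0-255 floats) to its Cb and Cr channels."""
          r, g, b = img[..., 0], img[..., 1], img[..., 2]
          cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
          cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
          return np.stack([cb, cr], axis=-1)

      def skin_likelihood(img):
          """Per-pixel likelihood under the 2D Gaussian skin-colour model."""
          d = rgb_to_cbcr(img) - SKIN_MEAN
          inv_cov = np.linalg.inv(SKIN_COV)
          m2 = np.einsum("...i,ij,...j->...", d, inv_cov, d)   # squared Mahalanobis distance
          return np.exp(-0.5 * m2)

      def moving_hand_mask(frame_prev, frame_curr, skin_thresh=0.1, motion_thresh=12.0):
          """Combine the colour baseline with a simple motion cue (frame differencing)."""
          skin = skin_likelihood(frame_curr) > skin_thresh
          motion = np.abs(frame_curr - frame_prev).mean(axis=-1) > motion_thresh
          return skin & motion            # skin-coloured pixels that also moved

      if __name__ == "__main__":
          prev = np.full((120, 160, 3), (200.0, 150.0, 130.0))   # skin-like background
          curr = prev.copy()
          curr[40:60, 50:80] -= 40.0                             # patch darkens as the hand passes
          print("moving skin-like pixels:", int(moving_hand_mask(prev, curr).sum()))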

  6. Oculometric indices of simulator and aircraft motion

    NASA Technical Reports Server (NTRS)

    Comstock, J. R.

    1984-01-01

    The effects of both simulator and aircraft motion on eye scan behavior, and the sensitivity of an oculometric measure to motion effects, were demonstrated. It was found that fixation time is sensitive to motion effects. Differences between simulator motion and no-motion conditions during a series of simulated ILS approaches were studied. The mean fixation time for the no-motion condition was found to be significantly longer than for the motion conditions. Eye scan parameters based on data collected in flight and in fixed-base simulation were investigated. Motion effects were evident when the subject was viewing a display supplying attitude and flight path information. The nature of the information provided by motion was examined. The mean fixation times for the no-motion condition were significantly longer than for either motion condition, while the two motion conditions did not differ. It is shown that motion serves an alerting function, providing a cue to the pilot that something has happened. It is suggested that simulation without motion cues may understate the true capacity of the pilot.

  7. An unbiased measure of the contributions of chroma and luminance to saccadic suppression of displacement.

    PubMed

    Anand, Sulekha; Bridgeman, Bruce

    2002-02-01

    Perception of image displacement is suppressed during saccadic eye movements. We probed the source of saccadic suppression of displacement by testing whether it selectively affects chromatic- or luminance-based motion information. Human subjects viewed a stimulus in which chromatic and luminance cues provided conflicting information about displacement direction. Apparent motion occurred during either fixation or a 19.5 degree saccade. Subjects detected motion and discriminated displacement direction in each trial. They reported motion in over 90% of fixation trials and over 70% of saccade trials. During fixation, the probability of perceiving the direction carried by chromatic cues decreased as luminance contrast increased. During saccades, subjects tended to perceive the direction indicated by luminance cues when luminance contrast was high. However, when luminance contrast was low, subjects showed no preference for the chromatic- or luminance-based direction. Thus magnocellular channels are suppressed, while stimulation of parvocellular channels is below threshold, so that neither channel drives motion perception during saccades. These results confirm that magnocellular inhibition is the source of saccadic suppression.

  8. Geometric figure–ground cues override standard depth from accretion-deletion

    PubMed Central

    Tanrıkulu, Ömer Dağlar; Froyen, Vicky; Feldman, Jacob; Singh, Manish

    2016-01-01

    Accretion-deletion is widely considered a decisive cue to surface depth ordering, with the accreting or deleting surface interpreted as behind an adjoining surface. However, Froyen, Feldman, and Singh (2013) have shown that when accretion-deletion occurs on both sides of a contour, accreting-deleting regions can also be perceived as in front and as self-occluding due to rotation in three dimensions. In this study we ask whether geometric figure–ground cues can override the traditional “depth from accretion-deletion” interpretation even when accretion-deletion takes place only on one side of a contour. We used two tasks: a relative-depth task (front/back), and a motion-classification task (translation/rotation). We conducted two experiments, in which texture in only one set of alternating regions was moving; the other set was static. Contrary to the traditional interpretation of accretion-deletion, the moving convex and symmetric regions were perceived as figural and rotating in three dimensions in roughly half of the trials. In the second experiment, giving different motion directions to the moving regions (thereby weakening motion-based grouping) further weakened the traditional accretion-deletion interpretation. Our results show that the standard “depth from accretion-deletion” interpretation is overridden by static geometric cues to figure–ground. Overall, the results demonstrate a rich interaction between accretion-deletion, figure–ground, and structure from motion that is not captured by existing models of depth from motion. PMID:26982528

  9. Pilot-Induced Oscillation Prediction With Three Levels of Simulation Motion Displacement

    NASA Technical Reports Server (NTRS)

    Schroeder, Jeffery A.; Chung, William W. Y.; Tran, Duc T.; Laforce, Soren; Bengford, Norman J.

    2001-01-01

    Simulator motion platform characteristics were examined to determine if the amount of motion affects pilot-induced oscillation (PIO) prediction. Five test pilots evaluated how susceptible 18 different sets of pitch dynamics were to PIOs with three different levels of simulation motion platform displacement: large, small, and none. The pitch dynamics were those of a previous in-flight experiment, some of which elicited PIOs. These in-flight results served as truth data for the simulation. As such, the in-flight experiment was replicated as much as possible. Objective and subjective data were collected and analyzed. With large motion, PIO and handling qualities ratings matched the flight data more closely than did small motion or no motion. Also, regardless of the aircraft dynamics, large motion increased pilot confidence in assigning handling qualities ratings, reduced safety pilot trips, and lowered touchdown velocities. While both large and small motion provided a pitch rate cue of high fidelity, only large motion presented the pilot with a high-fidelity vertical acceleration cue.

  10. Task and vehicle dynamics based assessment of motion cueing requirements

    DOT National Transportation Integrated Search

    2004-08-16

    One significant difference between real and simulated flight on the ground is the stimuli or cues provided to the pilot. Due to physical and/or cost constraints, it is nearly impossible to match all the cues experienced in the air in ground-based si...

  11. Simplified bionic solutions: a simple bio-inspired vehicle collision detection system.

    PubMed

    Hartbauer, Manfred

    2017-02-15

    Modern cars are equipped with both active and passive sensor systems that can detect potential collisions. In contrast, locusts avoid collisions solely by responding to certain visual cues that are associated with object looming. In neurophysiological experiments, I investigated the possibility that the 'collision-detector neurons' of locusts respond to impending collisions in films recorded with dashboard cameras of fast driving cars. In a complementary modelling approach, I developed a simple algorithm to reproduce the neuronal response that was recorded during object approach. Instead of applying elaborate algorithms that factored in object recognition and optic flow discrimination, I tested the hypothesis that motion detection restricted to a 'danger zone', in which frontal collisions on the motorways are most likely, is sufficient to estimate the risk of a collision. Furthermore, I investigated whether local motion vectors, obtained from the differential excitation of simulated direction-selective networks, could be used to predict evasive steering maneuvers and prevent undesired responses to motion artifacts. The results of the study demonstrate that the risk of impending collisions in real traffic scenes is mirrored in the excitation of the collision-detecting neuron (DCMD) of locusts. The modelling approach was able to reproduce this neuronal response even when the vehicle was driving at high speeds and image resolution was low (about 200  ×  100 pixels). Furthermore, evasive maneuvers that involved changing the steering direction and steering force could be planned by comparing the differences in the overall excitation levels of the simulated right and left direction-selective networks. Additionally, it was possible to suppress undesired responses of the algorithm to translatory movements, camera shake and ground shadows by evaluating local motion vectors. These estimated collision risk values and evasive steering vectors could be used as input for a driving assistant, converting the first into braking force and the latter into steering responses to avoid collisions. Since many processing steps were computed on the level of pixels and involved elements of direction-selective networks, this algorithm can be implemented in hardware so that parallel computations enhance the processing speed significantly.
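
    Two of the ideas in this abstract lend themselves to a very small sketch: restricting motion detection to a fixed "danger zone", and comparing the excitation of the left and right halves of that zone to choose an evasive steering direction. The Python below is only a loose, frame-differencing illustration of those two ideas with made-up zone geometry and thresholds; it is not the DCMD model or the direction-selective network used in the study.

      import numpy as np

      def danger_zone_mask(height, width):
          """Fixed central-lower image region where frontal collisions are most
          likely; the geometry here is illustrative only."""
          mask = np.zeros((height, width), dtype=bool)
          mask[height // 3:, width // 4: 3 * width // 4] = True
          return mask

      def collision_step(frame_prev, frame_curr, mask):
          """Crude per-frame collision-risk and steering estimate from frame differencing.

          risk grows with motion energy inside the danger zone; steer is positive
          when the left half is more excited (suggesting a rightward evasive
          manoeuvre) and negative otherwise."""
          diff = np.abs(frame_curr.astype(float) - frame_prev.astype(float))
          zone = np.where(mask, diff, 0.0)
          risk = zone.mean()
          half = zone.shape[1] // 2
          steer = (zone[:, :half].sum() - zone[:, half:].sum()) / (zone.sum() + 1e-9)
          return risk, steer

      if __name__ == "__main__":
          h, w = 100, 200
          mask = danger_zone_mask(h, w)
          prev = np.zeros((h, w))
          curr = np.zeros((h, w))
          curr[50:90, 60:95] = 1.0        # synthetic looming object entering from the left
          risk, steer = collision_step(prev, curr, mask)
          print("risk %.4f  steer %+.2f (positive = steer right)" % (risk, steer))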

  12. Simplified bionic solutions: a simple bio-inspired vehicle collision detection system

    PubMed Central

    Hartbauer, Manfred

    2018-01-01

    Modern cars are equipped with both active and passive sensor systems that can detect potential collisions. In contrast, locusts avoid collisions solely by responding to certain visual cues that are associated with object looming. In neurophysiological experiments, I investigated the possibility that the ‘collision-detector neurons’ of locusts respond to impending collisions in films recorded with dashboard cameras of fast driving cars. In a complementary modelling approach, I developed a simple algorithm to reproduce the neuronal response that was recorded during object approach. Instead of applying elaborate algorithms that factored in object recognition and optic flow discrimination, I tested the hypothesis that motion detection restricted to a ‘danger zone’, in which frontal collisions on the motorways are most likely, is sufficient to estimate the risk of a collision. Furthermore, I investigated whether local motion vectors, obtained from the differential excitation of simulated direction-selective networks, could be used to predict evasive steering maneuvers and prevent undesired responses to motion artifacts. The results of the study demonstrate that the risk of impending collisions in real traffic scenes is mirrored in the excitation of the collision-detecting neuron (DCMD) of locusts. The modelling approach was able to reproduce this neuronal response even when the vehicle was driving at high speeds and image resolution was low (about 200 × 100 pixels). Furthermore, evasive maneuvers that involved changing the steering direction and steering force could be planned by comparing the differences in the overall excitation levels of the simulated right and left direction-selective networks. Additionally, it was possible to suppress undesired responses of the algorithm to translatory movements, camera shake and ground shadows by evaluating local motion vectors. These estimated collision risk values and evasive steering vectors could be used as input for a driving assistant, converting the first into braking force and the latter into steering responses to avoid collisions. Since many processing steps were computed on the level of pixels and involved elements of direction-selective networks, this algorithm can be implemented in hardware so that parallel computations enhance the processing speed significantly. PMID:28091394

  13. Perception of linear horizontal self-motion induced by peripheral vision /linearvection/ - Basic characteristics and visual-vestibular interactions

    NASA Technical Reports Server (NTRS)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self-motion perception.

  14. Anticipatory Smooth Eye Movements in Autism Spectrum Disorder

    PubMed Central

    Aitkin, Cordelia D.; Santos, Elio M.; Kowler, Eileen

    2013-01-01

    Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations. PMID:24376667

  15. Anticipatory smooth eye movements in autism spectrum disorder.

    PubMed

    Aitkin, Cordelia D; Santos, Elio M; Kowler, Eileen

    2013-01-01

    Smooth pursuit eye movements are important for vision because they maintain the line of sight on targets that move smoothly within the visual field. Smooth pursuit is driven by neural representations of motion, including a surprisingly strong influence of high-level signals representing expected motion. We studied anticipatory smooth eye movements (defined as smooth eye movements in the direction of expected future motion) produced by salient visual cues in a group of high-functioning observers with Autism Spectrum Disorder (ASD), a condition that has been associated with difficulties in either generating predictions, or translating predictions into effective motor commands. Eye movements were recorded while participants pursued the motion of a disc that moved within an outline drawing of an inverted Y-shaped tube. The cue to the motion path was a visual barrier that blocked the untraveled branch (right or left) of the tube. ASD participants showed strong anticipatory smooth eye movements whose velocity was the same as that of a group of neurotypical participants. Anticipatory smooth eye movements appeared on the very first cued trial, indicating that trial-by-trial learning was not responsible for the responses. These results are significant because they show that anticipatory capacities are intact in high-functioning ASD in cases where the cue to the motion path is highly salient and unambiguous. Once the ability to generate anticipatory pursuit is demonstrated, the study of the anticipatory responses with a variety of types of cues provides a window into the perceptual or cognitive processes that underlie the interpretation of events in natural environments or social situations.

  16. [Visual cuing effect for haptic angle judgment].

    PubMed

    Era, Ataru; Yokosawa, Kazuhiko

    2009-08-01

    We investigated whether visual cues are useful for judging haptic angles. Participants explored three-dimensional angles with a virtual haptic feedback device. For visual cues, we used a location cue, which was synchronized with the haptic exploration, and a space cue, which specified the haptic space. In Experiment 1, angles were judged more accurately with both cues, but were overestimated with a location cue only. In Experiment 2, the visual cues emphasized depth; overestimation with location cues occurred, but space cues had no influence. The results showed that (a) when both cues are presented, haptic angles are judged more accurately; (b) location cues facilitate only motion information, not depth information; and (c) haptic angles are apt to be overestimated when both haptic and visual information are present.

  17. Sensorimotor Adaptation Following Exposure to Ambiguous Inertial Motion Cues

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Clement, G. R.; Harm, D. L.; Rupert, A. H.; Guedry, F. E.; Reschke, M. F.

    2005-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive accurate spatial orientation awareness. Our general hypothesis is that the central nervous system utilizes both multi-sensory integration and frequency segregation as neural strategies to resolve the ambiguity of tilt and translation stimuli. Movement in an altered gravity environment, such as weightlessness without a stable gravity reference, results in new patterns of sensory cues. For example, the semicircular canals, vision and neck proprioception provide information about head tilt on orbit without the normal otolith head-tilt position that is omnipresent on Earth. Adaptive changes in how inertial cues from the otolith system are integrated with other sensory information lead to perceptual and postural disturbances upon return to Earth's gravity. The primary goals of this ground-based research investigation are to explore physiological mechanisms and operational implications of disorientation and tilt-translation disturbances reported by crewmembers during and following re-entry, and to evaluate a tactile prosthesis as a countermeasure for improving control of whole-body orientation during tilt and translation motion.

  18. Sensorimotor Adaptation Following Exposure to Ambiguous Inertial Motion Cues

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Clement, G. R.; Harm, D. L.; Rupert, A. H.; Guedry, F. E.; Reschke, M. F.

    2005-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive accurate spatial orientation awareness. Our general hypothesis is that the central nervous system utilizes both multi-sensory integration and frequency segregation as neural strategies to resolve the ambiguity of tilt and translation stimuli. Movement in an altered gravity environment, such as weightlessness without a stable gravity reference, results in new patterns of sensory cues. For example, the semicircular canals, vision and neck proprioception provide information about head tilt on orbit without the normal otolith head-tilt position that is omnipresent on Earth. Adaptive changes in how inertial cues from the otolith system are integrated with other sensory information lead to perceptual and postural disturbances upon return to Earth's gravity. The primary goals of this ground-based research investigation are to explore physiological mechanisms and operational implications of disorientation and tilt-translation disturbances reported by crewmembers during and following re-entry, and to evaluate a tactile prosthesis as a countermeasure for improving control of whole-body orientation during tilt and translation motion.

  19. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on RGB-D superpixels to reduce the noise of the depth data. Since it is hard to discriminate all 360° of orientation using static cues or motion cues independently, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively exploit the complementary nature of static and motion cues. To verify the proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.
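
    The abstract gives only the high-level idea of the DBNS, so the Python sketch below illustrates the general principle rather than the paper's network: orientation is discretised into bins, a smoothness prior links consecutive frames, and per-bin likelihoods from static and motion cues are multiplied in a simple discrete Bayes-filter update. The bin count, transition prior and likelihoods are made-up values.

      import numpy as np

      N_BINS = 8     # body orientation discretised into 45-degree bins

      def transition_matrix(stay=0.7):
          """Smoothness prior: mostly stay in the same bin, otherwise drift elsewhere."""
          T = np.full((N_BINS, N_BINS), (1.0 - stay) / (N_BINS - 1))
          np.fill_diagonal(T, stay)
          return T

      def fuse_step(belief, static_like, motion_like, T):
          """One predict/update step of a discrete Bayes filter over orientation bins.
          Static and motion cues are treated as conditionally independent, so their
          per-bin likelihoods multiply."""
          predicted = T @ belief
          posterior = predicted * static_like * motion_like
          return posterior / posterior.sum()

      if __name__ == "__main__":
          T = transition_matrix()
          belief = np.full(N_BINS, 1.0 / N_BINS)
          # Made-up likelihoods: the static cue is ambiguous between bins 2 and 6
          # (a front/back confusion); the motion cue weakly favours bin 2.
          static_like = np.array([0.05, 0.05, 0.30, 0.05, 0.05, 0.05, 0.30, 0.05])
          motion_like = np.array([0.10, 0.10, 0.20, 0.10, 0.10, 0.10, 0.10, 0.10])
          for _ in range(3):
              belief = fuse_step(belief, static_like, motion_like, T)
          print("MAP orientation bin:", int(belief.argmax()))
          print("posterior:", np.round(belief, 3))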

  20. Visual Depth from Motion Parallax and Eye Pursuit

    PubMed Central

    Stroyan, Keith; Nawrot, Mark

    2012-01-01

    A translating observer viewing a rigid environment experiences “motion parallax,” the relative movement upon the observer’s retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment; however, retinal motion alone cannot mathematically determine the relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In (Nawrot & Stroyan, 2009, Vision Res. 49, p.1969) we showed mathematically that the ratio of the rate of retinal motion over the rate of smooth eye pursuit determines depth relative to the fixation point in central vision. We also reported on psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case in which objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If the time-varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what is mathematically possible to derive about depth and structure. Our analysis shows that the motion/pursuit ratio determines an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation to analyze the dynamic geometry of future experiments. PMID:21695531
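
    For the central-vision, lateral-translation case treated in the earlier paper, the motion/pursuit cue referred to above can be written, to first order in relative depth, roughly as

        \[
          \frac{d}{f} \;\approx\; \frac{\mathrm{d}\theta/\mathrm{d}t}{\mathrm{d}\alpha/\mathrm{d}t},
        \]

    where d is the depth of a point relative to the fixation point, f is the fixation distance, dθ/dt is the rate of retinal image motion of that point, and dα/dt is the rate of smooth eye pursuit. This is a simplified small-depth form written here for orientation only; the exact expressions, including their dependence on sign and on eccentricity across the viewing plane, are those derived in the cited papers.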

  1. Movement cues aid face recognition in developmental prosopagnosia.

    PubMed

    Bennetts, Rachel J; Butcher, Natalie; Lander, Karen; Udale, Robert; Bate, Sarah

    2015-11-01

    Seeing a face in motion can improve face recognition in the general population, and studies of face matching indicate that people with face recognition difficulties (developmental prosopagnosia; DP) may be able to use movement cues as a supplementary strategy to help them process faces. However, the use of facial movement cues in DP has not been examined in the context of familiar face recognition. This study examined whether people with DP were better at recognizing famous faces presented in motion, compared to static. Nine participants with DP and 14 age-matched controls completed a famous face recognition task. Each face was presented twice across 2 blocks: once in motion and once as a still image. Discriminability (A) was calculated for each block. Participants with DP showed a significant movement advantage overall. This was driven by a movement advantage in the first block, but not in the second block. Participants with DP were significantly worse than controls at identifying faces from static images, but there was no difference between those with DP and controls for moving images. Seeing a familiar face in motion can improve face recognition in people with DP, at least in some circumstances. The mechanisms behind this effect are unclear, but these results suggest that some people with DP are able to learn and recognize patterns of facial motion, and movement can act as a useful cue when face recognition is impaired.

  2. Stereo-motion cooperation and the use of motion disparity in the visual perception of 3-D structure.

    PubMed

    Cornilleau-Pérès, V; Droulez, J

    1993-08-01

    When an observer views a moving scene binocularly, both motion parallax and binocular disparity provide depth information. In Experiments 1A-1C, we measured sensitivity to surface curvature when these depth cues were available either individually or simultaneously. When the depth cues yielded comparable sensitivity to surface curvature, we found that curvature detection was easier with the cues present simultaneously, rather than individually. For 2 of the 6 subjects, this effect was stronger when the component of frontal translation of the surface was vertical, rather than horizontal. No such anisotropy was found for the 4 other subjects. If a moving object is observed binocularly, the patterns of optic flow are different on the left and right retinae. We have suggested elsewhere (Cornilleau-Pérès & Droulez, in press) that this motion disparity might be used as a visual cue for the perception of a 3-D structure. Our model consisted in deriving binocular disparity from the left and right distributions of vertical velocities, rather than from luminous intensities, as has been done in classical studies on stereoscopic vision. The model led to some predictions concerning the detection of surface curvature from motion disparity in the presence or absence of intensity-based disparity (classically termed binocular disparity). In a second set of experiments, we attempted to test these predictions, and we failed to validate our theoretical scheme from a physiological point of view.

  3. Psychophysical evidence for auditory motion parallax.

    PubMed

    Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz

    2018-04-17

    Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.

  4. Active Segmentation.

    PubMed

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
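
    As a toy illustration of the "enclosing contour around the fixation" construct only (not the cue combination or the optimisation actually used in this work), the Python sketch below re-samples an edge-strength map into polar coordinates about the fixation and keeps, for each angle, the radius of the strongest edge; the resulting radius-versus-angle curve plays the role of a crude closed contour.

      import numpy as np

      def crude_enclosing_contour(edge_map, fixation, n_angles=90, n_radii=60):
          """Toy 'enclosing contour' finder: in polar coordinates about the fixation,
          pick for each angle the radius with the strongest edge response.

          edge_map: HxW array of edge strengths in [0, 1]
          fixation: (row, col) fixation point
          Returns one radius per sampled angle. Illustrative only; the cited work
          combines several cues and solves a proper closed-contour optimisation
          instead of this per-angle maximum.
          """
          h, w = edge_map.shape
          fy, fx = fixation
          angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
          max_r = min(fy, fx, h - 1 - fy, w - 1 - fx)
          radii = np.linspace(1.0, max_r, n_radii)
          contour = np.empty(n_angles)
          for i, a in enumerate(angles):
              ys = np.clip((fy + radii * np.sin(a)).astype(int), 0, h - 1)
              xs = np.clip((fx + radii * np.cos(a)).astype(int), 0, w - 1)
              contour[i] = radii[np.argmax(edge_map[ys, xs])]
          return contour

      if __name__ == "__main__":
          # Synthetic edge map: a bright circular boundary of radius 20 around (50, 50).
          yy, xx = np.mgrid[0:100, 0:100]
          r = np.hypot(yy - 50, xx - 50)
          edges = np.exp(-0.5 * ((r - 20.0) / 1.5) ** 2)
          c = crude_enclosing_contour(edges, (50, 50))
          print("mean recovered radius: %.1f (true 20.0)" % c.mean())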

  5. A novel mechanism for mechanosensory-based rheotaxis in larval zebrafish

    PubMed Central

    Oteiza, Pablo; Odstrcil, Iris; Lauder, George; Portugues, Ruben; Engert, Florian

    2017-01-01

    When flying or swimming, animals must adjust their own movement to compensate for displacements induced by the flow of the surrounding air or water1. These flow-induced displacements can most easily be detected as visual whole-field motion with respect to the animal’s frame of reference2. In spite of this, many aquatic animals consistently orient and swim against oncoming flows (a behavior known as rheotaxis) even in the absence of visual cues3,4. How animals achieve this task, and its underlying sensory basis, is still unknown. Here we show that in the absence of visual information, larval zebrafish (Danio rerio) perform rheotaxis by using flow velocity gradients as navigational cues. We present behavioral data that support a novel algorithm based on such local velocity gradients that fish use to efficiently avoid getting dragged by flowing water. Specifically, we show that fish use their mechanosensory lateral line to first sense the curl (or vorticity) of the local velocity vector field to detect the presence of flow and, second, measure its temporal change following swim bouts to deduce flow direction. These results reveal an elegant navigational strategy based on the sensing of flow velocity gradients and provide a comprehensive behavioral algorithm, also applicable for robotic design, that generalizes to a wide range of animal behaviors in moving fluids. PMID:28700578

  6. Effects of visual motion consistent or inconsistent with gravity on postural sway.

    PubMed

    Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo

    2017-07-01

    Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has received little investigation. To understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravitational acceleration. The postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.

  7. Near real-time, on-the-move software PED using VPEF

    NASA Astrophysics Data System (ADS)

    Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane

    2015-05-01

    The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection using efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward-looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. To overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real time, thus improving the efficiency and effectiveness of RCP sensor systems.

  8. Objective Fidelity Evaluation in Multisensory Virtual Environments: Auditory Cue Fidelity in Flight Simulation

    PubMed Central

    Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.

    2012-01-01

    We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1, do not contribute to target-tracking performance in an in-flight refuelling simulation without training, experiment 2. In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable, performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068

  9. MPI CyberMotion Simulator: implementation of a novel motion simulator to investigate multisensory path integration in three dimensions.

    PubMed

    Barnett-Cowan, Michael; Meilinger, Tobias; Vidal, Manuel; Teufel, Harald; Bülthoff, Heinrich H

    2012-05-10

    Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. 16 observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size while there was no such bias in the sagittal plane. Finally, observers responded slower when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes respectively, suggests that the neural representation of self-motion through space is non-symmetrical which may relate to the fact that humans experience movement mostly within the horizontal plane.
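
    The pointing task described above has a simple geometric ground truth: for two straight segments separated by a single turn, the correct homing direction is the direction back along the vector sum of the segments, expressed relative to the final heading. The Python sketch below works that out for the segment lengths and turn angles quoted in the abstract; the sign convention (positive angles to the left) is chosen for the example and is not taken from the study.

      import numpy as np

      def homing_angle(seg1, seg2, turn_deg):
          """Angle (deg) from the final heading back to the origin for a path of two
          straight segments with a single turn of turn_deg between them. Positive
          turns and positive homing angles are both measured to the left here."""
          turn = np.deg2rad(turn_deg)
          # Walk seg1 along +x, then turn and walk seg2 along the new heading.
          end = np.array([seg1, 0.0]) + seg2 * np.array([np.cos(turn), np.sin(turn)])
          bearing = np.arctan2(-end[1], -end[0])          # world direction back to the origin
          rel = np.rad2deg(bearing - turn)                # relative to the final heading
          return (rel + 180.0) % 360.0 - 180.0            # wrap into [-180, 180)

      if __name__ == "__main__":
          for angle in (45.0, 90.0):
              print("turn %2.0f deg -> point back %.1f deg from the final heading"
                    % (angle, homing_angle(0.4, 1.0, angle)))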

  10. Motion facilitates face perception across changes in viewpoint and expression in older adults.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2014-12-01

    Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance.

  11. A novel role for visual perspective cues in the neural computation of depth.

    PubMed

    Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C

    2015-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.

  12. Self-motion facilitates echo-acoustic orientation in humans

    PubMed Central

    Wallmeier, Ludwig; Wiegrebe, Lutz

    2014-01-01

    The ability of blind humans to navigate complex environments through echolocation has received rapidly increasing scientific interest. However, technical limitations have precluded a formal quantification of the interplay between echolocation and self-motion. Here, we use a novel virtual echo-acoustic space technique to formally quantify the influence of self-motion on echo-acoustic orientation. We show that both the vestibular and proprioceptive components of self-motion contribute significantly to successful echo-acoustic orientation in humans: specifically, our results show that vestibular input induced by whole-body self-motion resolves orientation-dependent biases in echo-acoustic cues. Fast head motions, relative to the body, provide additional proprioceptive cues which allow subjects to effectively assess echo-acoustic space referenced against the body orientation. These psychophysical findings clearly demonstrate that human echolocation is well suited to drive precise locomotor adjustments. Our data shed new light on the sensory–motor interactions, and on possible optimization strategies underlying echolocation in humans. PMID:26064556

  13. Static and Motion-Based Visual Features Used by Airport Tower Controllers: Some Implications for the Design of Remote or Virtual Towers

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Liston, Dorion B.

    2011-01-01

    Visual motion and other visual cues are used by tower controllers to provide important support for their control tasks at and near airports. These cues are particularly important for anticipated separation. Some of them, which we call visual features, have been identified from structured interviews and discussions with 24 active air traffic controllers or supervisors. The visual information that these features provide has been analyzed with respect to possible ways it could be presented at a remote tower that does not allow a direct view of the airport. Two types of remote towers are possible. One could be based on a plan-view, map-like computer-generated display of the airport and its immediate surroundings. An alternative would present a composite perspective view of the airport and its surroundings, possibly provided by an array of radially mounted cameras positioned at the airport in lieu of a tower. An initial, more detailed analysis of one of the specific landing cues identified by the controllers, landing deceleration, is provided as a basis for evaluating how controllers might detect and use it. Understanding other such cues will help identify the information that may be degraded or lost in a remote or virtual tower not located at the airport. Some initial suggestions as to how some of the lost visual information might be presented in displays are mentioned. Many of the cues considered involve visual motion, though some important static cues are also discussed.

  14. Interactions between target location and reward size modulate the rate of microsaccades in monkeys

    PubMed Central

    Tokiyama, Stefanie; Lisberger, Stephen G.

    2015-01-01

    We have studied how rewards modulate the occurrence of microsaccades by manipulating the size of an expected reward and the location of the cue that sets the expectations for future reward. We found an interaction between the size of the reward and the location of the cue. When monkeys fixated on a cue that signaled the size of future reward, the frequency of microsaccades was higher if the monkey expected a large vs. a small reward. When the cue was presented at a site in the visual field that was remote from the position of fixation, reward size had the opposite effect: the frequency of microsaccades was lower when the monkey was expecting a large reward. The strength of pursuit initiation also was affected by reward size and by the presence of microsaccades just before the onset of target motion. The gain of pursuit initiation increased with reward size and decreased when microsaccades occurred just before or after the onset of target motion. The effect of the reward size on pursuit initiation was much larger than any indirect effects reward might cause through modulation of the rate of microsaccades. We found only a weak relationship between microsaccade direction and the location of the exogenous cue relative to fixation position, even in experiments where the location of the cue indicated the direction of target motion. Our results indicate that the expectation of reward is a powerful modulator of the occurrence of microsaccades, perhaps through attentional mechanisms. PMID:26311180

  15. Normal aging affects movement execution but not visual motion working memory and decision-making delay during cue-dependent memory-based smooth-pursuit.

    PubMed

    Fukushima, Kikuro; Barnes, Graham R; Ito, Norie; Olley, Peter M; Warabi, Tateo

    2014-07-01

    Aging affects virtually all functions including sensory/motor and cognitive activities. While retinal image motion is the primary input for smooth-pursuit, its efficiency/accuracy depends on cognitive processes. Elderly subjects exhibit gain decrease during initial and steady-state pursuit, but reports on latencies are conflicting. Using a cue-dependent memory-based smooth-pursuit task, we identified important extra-retinal mechanisms for initial pursuit in young adults including cue information priming and extra-retinal drive components (Ito et al. in Exp Brain Res 229:23-35, 2013). We examined aging effects on parameters for smooth-pursuit using the same tasks. Elderly subjects were tested during three task conditions as previously described: memory-based pursuit, simple ramp-pursuit just to follow motion of a single spot, and popping-out of the correct spot during memory-based pursuit to enhance retinal image motion. Simple ramp-pursuit was used as a task that did not require visual motion working memory. To clarify aging effects, we then compared the results with the previous young subject data. During memory-based pursuit, elderly subjects exhibited normal working memory of cue information. Most movement-parameters including pursuit latencies differed significantly between memory-based pursuit and simple ramp-pursuit and also between young and elderly subjects. Popping-out of the correct spot motion was ineffective for enhancing initial pursuit in elderly subjects. However, the latency difference between memory-based pursuit and simple ramp-pursuit in individual subjects, which includes decision-making delay in the memory task, was similar between the two groups. Our results suggest that smooth-pursuit latencies depend on task conditions and that, although the extra-retinal mechanisms were functional for initial pursuit in elderly subjects, they were less effective.

  16. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.
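
    The scale-change range cue described above can be illustrated with a minimal pinhole-camera sketch. Everything below is hypothetical (the focal length, landmark size, and pixel measurements are invented), and it is not the algorithm used on the rover, which combined feature extraction with a neural network.

```python
def range_from_scale(pixel_width: float, true_width_m: float, focal_px: float) -> float:
    """Pinhole-camera range estimate: Z = f * W / w.

    pixel_width  -- apparent width of a known landmark in pixels
    true_width_m -- assumed physical width of that landmark in metres
    focal_px     -- camera focal length expressed in pixels
    """
    return focal_px * true_width_m / pixel_width

# Hypothetical example: a 0.5 m wide rock imaged through an 800-pixel focal
# length appears 40 px wide in one frame and 80 px wide later, implying the
# rover closed from 10 m to 5 m.
for w_px in (40.0, 80.0):
    print(range_from_scale(w_px, 0.5, 800.0))
```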

  17. The Role of Visual Cues in Microgravity Spatial Orientation

    NASA Technical Reports Server (NTRS)

    Oman, Charles M.; Howard, Ian P.; Smith, Theodore; Beall, Andrew C.; Natapoff, Alan; Zacher, James E.; Jenkin, Heather L.

    2003-01-01

    In weightlessness, astronauts must rely on vision to remain spatially oriented. Although gravitational down cues are missing, most astronauts maintain a subjective vertical, a subjective sense of which way is up. This is evidenced by anecdotal reports of crewmembers feeling upside down (inversion illusions) or feeling that a floor has become a ceiling and vice versa (visual reorientation illusions). Instability in the subjective vertical direction can trigger disorientation and space motion sickness. On Neurolab, a virtual environment display system was used to conduct five interrelated experiments, which quantified: (a) how the direction of each person's subjective vertical depends on the orientation of the surrounding visual environment, (b) whether rolling the virtual visual environment produces stronger illusions of circular self-motion (circular vection) and more visual reorientation illusions than on Earth, (c) whether a virtual scene moving past the subject produces a stronger linear self-motion illusion (linear vection), and (d) whether deliberate manipulation of the subjective vertical changes a crewmember's interpretation of shading or the ability to recognize objects. None of the crew's subjective vertical indications became more independent of environmental cues in weightlessness. Three who were either strongly dependent on or independent of stationary visual cues in preflight tests remained so inflight. One other became more visually dependent inflight, but recovered postflight. Susceptibility to illusions of circular self-motion increased in flight. The time to the onset of linear self-motion illusions decreased and the illusion magnitude significantly increased for most subjects while free floating in weightlessness. These decreased toward one-G levels when the subject 'stood up' in weightlessness by wearing constant force springs. For several subjects, changing the relative direction of the subjective vertical in weightlessness, either by body rotation or by simply cognitively initiating a visual reorientation, altered the illusion of convexity produced when viewing a flat, shaded disc. It changed at least one person's ability to recognize previously presented two-dimensional shapes. Overall, results show that most astronauts become more dependent on dynamic visual motion cues and some become responsive to stationary orientation cues. The direction of the subjective vertical is labile in the absence of gravity. This can interfere with the ability to properly interpret shading, or to recognize complex objects in different orientations.

  18. A Class of Visual Neurons with Wide-Field Properties Is Required for Local Motion Detection.

    PubMed

    Fisher, Yvette E; Leong, Jonathan C S; Sporar, Katja; Ketkar, Madhura D; Gohl, Daryl M; Clandinin, Thomas R; Silies, Marion

    2015-12-21

    Visual motion cues are used by many animals to guide navigation across a wide range of environments. Long-standing theoretical models have made predictions about the computations that compare light signals across space and time to detect motion. Using connectomic and physiological approaches, candidate circuits that can implement various algorithmic steps have been proposed in the Drosophila visual system. These pathways connect photoreceptors, via interneurons in the lamina and the medulla, to direction-selective cells in the lobula and lobula plate. However, the functional architecture of these circuits remains incompletely understood. Here, we use a forward genetic approach to identify the medulla neuron Tm9 as critical for motion-evoked behavioral responses. Using in vivo calcium imaging combined with genetic silencing, we place Tm9 within motion-detecting circuitry. Tm9 receives functional inputs from the lamina neurons L3 and, unexpectedly, L1 and passes information onto the direction-selective T5 neuron. Whereas the morphology of Tm9 suggested that this cell would inform circuits about local points in space, we found that the Tm9 spatial receptive field is large. Thus, this circuit informs elementary motion detectors about a wide region of the visual scene. In addition, Tm9 exhibits sustained responses that provide a tonic signal about incoming light patterns. Silencing Tm9 dramatically reduces the response amplitude of T5 neurons under a broad range of different motion conditions. Thus, our data demonstrate that sustained and wide-field signals are essential for elementary motion processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. The Influence of Tactual Seat-motion Cues on Training and Performance in a Roll-axis Compensatory Tracking Task Setting

    DTIC Science & Technology

    2008-05-01

    Report AFRL-RH-WP-SR-2009-0002, The Influence of Tactual Seat-motion Cues on Training and Performance in a Roll-axis Compensatory Tracking Task Setting (program element 62202F). Only abstract fragments were extracted: the task used a simulated vehicle having aircraft-like dynamics, and a centrally located compensatory display, subtending about nine degrees, provided visual roll error ...

  20. Experiments in sensing transient rotational acceleration cues on a flight simulator

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1979-01-01

    Results are presented for two transient motion sensing experiments, which were motivated by the identification of an anomalous roll cue (a 'jerk' attributed to an acceleration spike) in a prior investigation of realistic fighter motion simulation. The experimental results suggest the consideration of several issues for motion washout and challenge current sensory system modeling efforts. Although no sensory modeling effort is made, it is argued that such models must incorporate the ability to handle transient inputs of short duration (some of which are shorter than the accepted latency times for sensing), and must represent separate channels for rotational acceleration and velocity sensing.

  1. Space motion sickness monitoring experiment - Spacelab 1

    NASA Technical Reports Server (NTRS)

    Oman, Charles M.; Lichtenberg, Byron K.; Money, Kenneth E.

    1990-01-01

    A detailed firsthand report on symptoms and signs of space motion sickness and fluid shift observed by four specially trained crewmembers during Shuttle/Spacelab 1, launched on November 28, 1983, is presented. Results show that three crewmen experienced persistent overall discomfort and vomited repeatedly. The symptom pattern was generally similar to that seen in the individuals preflight, except that prodromal nausea was brief or absent in some cases. Symptoms were clearly modulated by head movement, were exacerbated by unfamiliar visual cues, and could be reduced by physical restraint providing contact cues around the body. The results support the view that space sickness is a form of motion sickness.

  2. Relationship between selected orientation rest frame, circular vection and space motion sickness

    NASA Technical Reports Server (NTRS)

    Harm, D. L.; Parker, D. E.; Reschke, M. F.; Skinner, N. C.

    1998-01-01

    Space motion sickness (SMS) and spatial orientation and motion perception disturbances occur in 70-80% of astronauts. People select "rest frames" to create the subjective sense of spatial orientation. In microgravity, the astronaut's rest frame may be based on visual scene polarity cues and on the internal head and body z axis (vertical body axis). The data reported here address the following question: Can an astronaut's orientation rest frame be related to and described by other variables, including circular vection response latencies and space motion sickness? The astronauts' microgravity spatial orientation rest frames were determined from inflight and postflight verbal reports. Circular vection responses were elicited by rotating a virtual room continuously at 35 degrees/s in pitch, roll and yaw with respect to the astronaut. Vection latency was recorded from the time the crew member opened their eyes to the onset of vection. The astronauts who used visual cues exhibited significantly shorter vection latencies than those who used internal z axis cues. A negative binomial regression model was used to represent the observed total SMS symptom scores for each subject for each flight day. Orientation reference type had a significant effect, resulting in an estimated three-fold increase in the expected motion sickness score on flight day 1 for astronauts who used visual cues. The results demonstrate meaningful classification of astronauts' rest frames and their relationships to sensitivity to circular vection and SMS. Thus, it may be possible to use vection latencies to predict SMS severity and duration.
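
    The negative binomial regression mentioned in the abstract can be sketched as below. The data frame, column names, and values are invented for illustration; only the choice of model family (a negative binomial count model for daily SMS symptom scores with orientation reference type as a predictor) is taken from the abstract.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-subject, per-flight-day symptom scores.
df = pd.DataFrame({
    "sms_score":    [6, 3, 1, 0, 2, 1, 0, 0],   # total daily SMS symptom score
    "flight_day":   [1, 2, 3, 4, 1, 2, 3, 4],
    "visual_frame": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = visual rest frame, 0 = body z-axis frame
})

# Negative binomial GLM: expected score modelled as a function of flight day
# and orientation reference type.
X = sm.add_constant(df[["flight_day", "visual_frame"]])
fit = sm.GLM(df["sms_score"], X, family=sm.families.NegativeBinomial()).fit()
print(fit.summary())
```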

  3. A simple approach to ignoring irrelevant variables by population decoding based on multisensory neurons

    PubMed Central

    Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.

    2016-01-01

    Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948
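
    A deliberately linearized toy version of the decoding idea is sketched below; it is our illustration, not the paper's model. A "congruent" unit sums the vestibular and visual heading signals while an "opposite" unit takes their difference, so an equal-weight readout of the two cancels the visual bias introduced by object motion.

```python
import numpy as np

rng = np.random.default_rng(0)

h_true = rng.uniform(-30, 30, size=1000)   # true (vestibular) heading, deg
delta  = rng.normal(0, 10, size=1000)      # bias added to the visual cue by object motion
h_vis  = h_true + delta                    # visually signalled heading

r_congruent = h_true + h_vis               # same heading preference for both cues
r_opposite  = h_true - h_vis               # mismatched (opposite) visual preference

h_mixed     = 0.5 * (r_congruent + r_opposite)   # = h_true: object motion marginalized out
h_congruent = 0.5 * r_congruent                  # = h_true + delta/2: still biased

print(np.std(h_mixed - h_true))       # ~0
print(np.std(h_congruent - h_true))   # ~5 deg
```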

  4. Multisensory Self-Motion Compensation During Object Trajectory Judgments

    PubMed Central

    Dokka, Kalpana; MacNeilage, Paul R.; DeAngelis, Gregory C.; Angelaki, Dora E.

    2015-01-01

    Judging object trajectory during self-motion is a fundamental ability for mobile organisms interacting with their environment. This fundamental ability requires the nervous system to compensate for the visual consequences of self-motion in order to make accurate judgments, but the mechanisms of this compensation are poorly understood. We comprehensively examined both the accuracy and precision of observers' ability to judge object trajectory in the world when self-motion was defined by vestibular, visual, or combined visual–vestibular cues. Without decision feedback, subjects demonstrated no compensation for self-motion that was defined solely by vestibular cues, partial compensation (47%) for visually defined self-motion, and significantly greater compensation (58%) during combined visual–vestibular self-motion. With decision feedback, subjects learned to accurately judge object trajectory in the world, and this generalized to novel self-motion speeds. Across conditions, greater compensation for self-motion was associated with decreased precision of object trajectory judgments, indicating that self-motion compensation comes at the cost of reduced discriminability. Our findings suggest that the brain can flexibly represent object trajectory relative to either the observer or the world, but a world-centered representation comes at the cost of decreased precision due to the inclusion of noisy self-motion signals. PMID:24062317

  5. Multimodal Pilot Behavior in Multi-Axis Tracking Tasks with Time-Varying Motion Cueing Gains

    NASA Technical Reports Server (NTRS)

    Zaal, P. M. T; Pool, D. M.

    2014-01-01

    In a large number of motion-base simulators, adaptive motion filters are utilized to maximize the use of the available motion envelope of the motion system. However, not much is known about how the time-varying characteristics of such adaptive filters affect pilots when performing manual aircraft control. This paper presents the results of a study investigating the effects of time-varying motion filter gains on pilot control behavior and performance. An experiment was performed in a motion-base simulator where participants performed a simultaneous roll and pitch tracking task, while the roll and/or pitch motion filter gains changed over time. Results indicate that performance increases over time with increasing motion gains. This increase is a result of a time-varying adaptation of pilots' equalization dynamics, characterized by increased visual and motion response gains and decreased visual lead time constants. Opposite trends are found for decreasing motion filter gains. Even though the trends in both controlled axes are found to be largely the same, effects are less significant in roll. In addition, results indicate minor cross-coupling effects between pitch and roll, where a cueing variation in one axis affects the behavior adopted in the other axis.
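
    The kind of adaptive motion filter referred to above can be caricatured by a classical washout high-pass filter whose input gain changes over time. The sketch below is only illustrative: the filter order, time constant, and gain schedule are assumptions, not the filters used in the experiment.

```python
import numpy as np

def washout_highpass(signal, gains, dt=0.01, tau=2.0):
    """First-order high-pass (washout) filter with a time-varying input gain.

    The simulated specific force is scaled sample by sample by a motion gain
    and then washed out, so the platform command decays back toward neutral.
    """
    alpha = tau / (tau + dt)                 # discrete RC high-pass coefficient
    out = np.zeros_like(signal, dtype=float)
    x_prev = gains[0] * signal[0]
    for n in range(1, len(signal)):
        x = gains[n] * signal[n]
        out[n] = alpha * (out[n - 1] + x - x_prev)
        x_prev = x
    return out

# Hypothetical run: a step in simulated pitch specific force while the motion
# gain ramps from 0.7 down to 0.3, as an adaptive filter might do when the
# motion envelope is nearly used up.
t = np.arange(0.0, 10.0, 0.01)
accel = np.where(t > 1.0, 1.0, 0.0)
gain_schedule = np.linspace(0.7, 0.3, t.size)
platform_cmd = washout_highpass(accel, gain_schedule)
```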

  6. Control of self-motion in dynamic fluids: fish do it differently from bees.

    PubMed

    Scholtyssek, Christine; Dacke, Marie; Kröger, Ronald; Baird, Emily

    2014-05-01

    To detect and avoid collisions, animals need to perceive and control the distance and the speed with which they are moving relative to obstacles. This is especially challenging for swimming and flying animals that must control movement in a dynamic fluid without reference from physical contact to the ground. Flying animals primarily rely on optic flow to control flight speed and distance to obstacles. Here, we investigate whether swimming animals use self-motion control strategies similar to those of flying animals by directly comparing the trajectories of zebrafish (Danio rerio) and bumblebees (Bombus terrestris) moving through the same experimental tunnel. As the animals moved through the tunnel, black-and-white patterns on the walls produced (i) strong horizontal optic flow cues on both walls, (ii) weak horizontal optic flow cues on both walls and (iii) strong optic flow cues on one wall and weak optic flow cues on the other. We find that the mean speed of zebrafish does not depend on the amount of optic flow perceived from the walls. We further show that zebrafish, unlike bumblebees, move closer to the wall that provides the strongest visual feedback. This unexpected preference for strong optic flow cues may reflect an adaptation for self-motion control in water or in environments where visibility is limited. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  7. Effectiveness of Interaural Delays Alone as Cues During Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The contribution of interaural time differences (ITDs) to the localization of virtual sound sources with and without head motion was examined. Listeners estimated the apparent azimuth, elevation and distance of virtual sources presented over headphones. Stimuli (3 sec., white noise) were synthesized from minimum-phase representations of nonindividualized head-related transfer functions (HRTFs); binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; head position was tracked and stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. Two synthesis conditions were tested: (1) both interaural level differences (ILDs) and ITDs correctly correlated with source location and head motion, (2) ITDs correct, no ILDs (flat magnitude spectrum). Head movements reduced azimuth confusions primarily when interaural cues were correctly correlated, although a smaller effect was also seen for ITDs alone. Externalization was generally poor for ITD-only conditions and was enhanced by head motion only for normal HRTFs. Overall the data suggest that, while ITDs alone can provide a significant cue for azimuth, the errors most commonly associated with virtual sources are reduced by location-dependent magnitude cues.
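
    Representing the ITD as a pure delay, as in the synthesis described above, can be sketched as follows. The sampling rate, ITD value, and integer-sample rounding are assumptions for illustration; the study's Convolvotron-based real-time synthesis is not reproduced here.

```python
import numpy as np

def apply_itd(mono, itd_s, fs=44100):
    """Render a mono signal over two channels with an ITD as a pure delay.

    Positive itd_s delays the left ear (source toward the right). Magnitude
    spectra are left untouched, i.e. no ILD, as in the ITD-only condition.
    """
    lag = int(round(abs(itd_s) * fs))                 # delay in whole samples
    delayed = np.concatenate([np.zeros(lag), mono])
    padded  = np.concatenate([mono, np.zeros(lag)])
    left, right = (delayed, padded) if itd_s > 0 else (padded, delayed)
    return np.stack([left, right])

# Hypothetical example: 0.5 s of white noise lateralized with a 400 microsecond ITD.
fs = 44100
noise = np.random.default_rng(1).standard_normal(int(0.5 * fs))
binaural = apply_itd(noise, 400e-6, fs)
```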

  8. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    PubMed

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
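
    The "optimal integration" benchmark invoked here is the standard reliability-weighted (maximum-likelihood) combination rule; the symbols below are generic, not taken from the paper:

```latex
\hat{s}_{\mathrm{comb}} = w_{\mathrm{vis}}\,\hat{s}_{\mathrm{vis}} + w_{\mathrm{vest}}\,\hat{s}_{\mathrm{vest}},
\qquad
w_{\mathrm{vis}} = \frac{1/\sigma_{\mathrm{vis}}^{2}}{1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{vest}}^{2}},
\qquad
\sigma_{\mathrm{comb}}^{2} = \frac{\sigma_{\mathrm{vis}}^{2}\,\sigma_{\mathrm{vest}}^{2}}{\sigma_{\mathrm{vis}}^{2} + \sigma_{\mathrm{vest}}^{2}}
```

    with the vestibular weight defined analogously. The prediction that the combined variance is never larger than that of the better single cue is the quantitative signature tested in the heading-discrimination experiments reviewed above.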

  9. Impaired smooth-pursuit in Parkinson's disease: normal cue-information memory, but dysfunction of extra-retinal mechanisms for pursuit preparation and execution

    PubMed Central

    Fukushima, Kikuro; Ito, Norie; Barnes, Graham R; Onishi, Sachiyo; Kobayashi, Nobuyoshi; Takei, Hidetoshi; Olley, Peter M; Chiba, Susumu; Inoue, Kiyoharu; Warabi, Tateo

    2015-01-01

    While retinal image motion is the primary input for smooth-pursuit, its efficiency depends on cognitive processes including prediction. Reports are conflicting on impaired prediction during pursuit in Parkinson's disease. By separating two major components of prediction (image motion direction memory and movement preparation) using a memory-based pursuit task, and by comparing tracking eye movements with those during a simple ramp-pursuit task that did not require visual memory, we examined smooth-pursuit in 25 patients with Parkinson's disease and compared the results with 14 age-matched controls. In the memory-based pursuit task, cue 1 indicated visual motion direction, whereas cue 2 instructed the subjects to prepare to pursue or not to pursue. Based on the cue-information memory, subjects were asked to pursue the correct spot from two oppositely moving spots or not to pursue. In 24/25 patients, the cue-information memory was normal, but movement preparation and execution were impaired. Specifically, unlike controls, most of the patients (18/24 = 75%) lacked initial pursuit during the memory task and started tracking the correct spot by saccades. Conversely, during simple ramp-pursuit, most patients (83%) exhibited initial pursuit. Popping-out of the correct spot motion during memory-based pursuit was ineffective for enhancing initial pursuit. The results were similar irrespective of levodopa/dopamine agonist medication. Our results indicate that the extra-retinal mechanisms of most patients are dysfunctional in initiating memory-based (not simple ramp) pursuit. A dysfunctional pursuit loop between frontal eye fields (FEF) and basal ganglia may contribute to the impairment of extra-retinal mechanisms, resulting in deficient pursuit commands from the FEF to brainstem. PMID:25825544

  10. Interocular velocity difference contributes to stereomotion speed perception

    NASA Technical Reports Server (NTRS)

    Brooks, Kevin R.

    2002-01-01

    Two experiments are presented assessing the contributions of the rate of change of disparity (CD) and interocular velocity difference (IOVD) cues to stereomotion speed perception. Using a two-interval forced-choice paradigm, the perceived speed of directly approaching and receding stereomotion and of monocular lateral motion in random dot stereogram (RDS) targets was measured. Prior adaptation using dysjunctively moving random dot stimuli induced a velocity aftereffect (VAE). The degree of interocular correlation in the adapting images was manipulated to assess the effectiveness of each cue. While correlated adaptation involved a conventional RDS stimulus, containing both IOVD and CD cues, uncorrelated adaptation featured an independent dot array in each monocular half-image, and hence lacked a coherent disparity signal. Adaptation produced a larger VAE for stereomotion than for monocular lateral motion, implying effects at neural sites beyond that of binocular combination. For motion passing through the horopter, correlated and uncorrelated adaptation stimuli produced equivalent stereomotion VAEs. The possibility that these results were due to the adaptation of a CD mechanism through random matches in the uncorrelated stimulus was discounted in a control experiment. Here both simultaneous and sequential adaptation of left and right eyes produced similar stereomotion VAEs. Motion at uncrossed disparities was also affected by both correlated and uncorrelated adaptation stimuli, but showed a significantly greater VAE in response to the former. These results show that (1) there are two separate, specialised mechanisms for encoding stereomotion: one through IOVD, the other through CD; (2) the IOVD cue dominates the perception of stereomotion speed for stimuli passing through the horopter; and (3) at a disparity pedestal both the IOVD and the CD cues have a significant influence.
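
    The two stereomotion cues being dissociated can be written compactly. Taking the left- and right-eye horizontal image positions of the target as theta_L and theta_R (our notation, not the paper's):

```latex
\underbrace{\frac{d}{dt}\bigl(\theta_{L}-\theta_{R}\bigr)}_{\text{change of disparity (CD)}}
\;=\;
\underbrace{\frac{d\theta_{L}}{dt}-\frac{d\theta_{R}}{dt}}_{\text{interocular velocity difference (IOVD)}}
```

    Although the two expressions are mathematically identical, they imply different processing orders (binocular matching before differentiation for CD, monocular velocity estimation before interocular comparison for IOVD), which is why decorrelating the two monocular images, as in the uncorrelated adaptation condition, removes the coherent disparity signal while leaving IOVD signals intact.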

  11. Enhancing visual search abilities of people with intellectual disabilities.

    PubMed

    Li-Tsang, Cecilia W P; Wong, Jackson K K

    2009-01-01

    This study aimed to evaluate the effects of cueing in a visual search paradigm for people with and without intellectual disabilities (ID). A total of 36 subjects (18 persons with ID and 18 persons with normal intelligence) were recruited using a convenience sampling method. A series of experiments was conducted to compare guided-cue strategies that added either motion contrast or an additional cue to the basic search task. Repeated-measures ANOVA and post hoc multiple comparison tests were used to compare each cue strategy. Results showed that the use of guided strategies was able to capture focal attention in an autonomic manner in the ID group (Pillai's Trace=5.99, p<0.0001). Both guided cue and guided motion search tasks demonstrated functionally similar effects that confirmed the non-specific character of salience. These findings suggested that the visual search efficiency of people with ID was greatly improved if the target was made salient using a cueing effect when the complexity of the display increased (i.e., set size increased). This study could have important implications for the design of the visual search format of computerized programs developed for people with ID learning new tasks.

  12. Technology evaluation of man-rated acceleration test equipment for vestibular research

    NASA Technical Reports Server (NTRS)

    Taback, I.; Kenimer, R. L.; Butterfield, A. J.

    1983-01-01

    The considerations for eliminating acceleration noise cues in horizontal, linear, cyclic-motion sleds intended for both ground and shuttle-flight applications are addressed. The principal concerns are the acceleration transients associated with changes in direction of motion for the carriage. The study presents a design limit for acceleration cues or transients based upon published measurements of thresholds of human perception of linear cyclic motion. The sources and levels for motion transients are presented based upon measurements obtained from existing sled systems. The recommended approach to a noise-free system uses air bearings for carriage support and moving-coil linear induction motors operating at low frequency as the drive system. Metal belts running on air-bearing pulleys provide an alternate approach to the drive system. The appendix presents a discussion of alternate testing techniques intended to provide preliminary-type data by means of pendulums, linear-motion devices and commercial air-bearing tables.

  13. Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction

    PubMed Central

    Kaliuzhna, Mariia; Ferrè, Elisa Raffaella; Herbelin, Bruno; Blanke, Olaf; Haggard, Patrick

    2016-01-01

    Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion. PMID:27198907

  14. Object Segmentation from Motion Discontinuities and Temporal Occlusions–A Biologically Inspired Model

    PubMed Central

    Beck, Cornelia; Ognibeni, Thilo; Neumann, Heiko

    2008-01-01

    Background: Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. Methodology/Principal Findings: From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task. This is due to the problem that flow detected along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for both the detection of motion discontinuities and of occlusion regions based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information of different model components of the visual processing due to feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement of the kinetic boundary detection. Conclusions/Significance: A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested both with artificial and real sequences including self and object motion. PMID:19043613
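
    As a toy stand-in for the motion-discontinuity stage of such a model, the gradient magnitude of a dense optic-flow field flags locations where neighbouring pixels move differently, i.e. candidate kinetic boundaries. The sketch below is only that; it does not implement the model's V1/MT/MSTl interactions, occlusion detection, or feedback.

```python
import numpy as np

def motion_boundary_strength(u, v):
    """Local motion contrast of an optic-flow field (u, v components per pixel)."""
    du_y, du_x = np.gradient(u)
    dv_y, dv_x = np.gradient(v)
    return np.sqrt(du_x**2 + du_y**2 + dv_x**2 + dv_y**2)

# Hypothetical flow field: a patch translating rightward over a static background.
u = np.zeros((64, 64))
v = np.zeros((64, 64))
u[20:40, 20:40] = 1.0
edges = motion_boundary_strength(u, v)   # peaks along the border of the moving patch
```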

  15. A novel role for visual perspective cues in the neural computation of depth

    PubMed Central

    Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.

    2014-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667

  16. Impaired Velocity Processing Reveals an Agnosia for Motion in Depth.

    PubMed

    Barendregt, Martijn; Dumoulin, Serge O; Rokers, Bas

    2016-11-01

    Many individuals with normal visual acuity are unable to discriminate the direction of 3-D motion in a portion of their visual field, a deficit previously referred to as a stereomotion scotoma. The origin of this visual deficit has remained unclear. We hypothesized that the impairment is due to a failure in the processing of one of the two binocular cues to motion in depth: changes in binocular disparity over time or interocular velocity differences. We isolated the contributions of these two cues and found that sensitivity to interocular velocity differences, but not changes in binocular disparity, varied systematically with observers' ability to judge motion direction. We therefore conclude that the inability to interpret motion in depth is due to a failure in the neural mechanisms that combine velocity signals from the two eyes. Given these results, we argue that the deficit should be considered a prevalent but previously unrecognized agnosia specific to the perception of visual motion. © The Author(s) 2016.

  17. Neural Circuit to Integrate Opposing Motions in the Visual Field.

    PubMed

    Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander

    2015-07-16

    When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila that are key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Implicit Learning of Viewpoint-Independent Spatial Layouts

    PubMed Central

    Tsuchiai, Taiga; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi

    2012-01-01

    We usually perceive things in our surroundings as unchanged despite viewpoint changes caused by self-motion. The visual system therefore must have a function to process objects independently of viewpoint. In this study, we examined whether viewpoint-independent spatial layout can be obtained implicitly. For this purpose, we used a contextual cueing effect, a learning effect of spatial layout in visual search displays known to be an implicit effect. We investigated the transfer of the contextual cueing effect to images from a different viewpoint by using visual search displays of 3D objects. For images from a different viewpoint, the contextual cueing effect was maintained with self-motion but disappeared when the display changed without self-motion. This indicates that there is an implicit learning effect in environment-centered coordinates and suggests that the spatial representation of object layouts can be obtained and updated implicitly. We also showed that binocular disparity plays an important role in the layout representations. PMID:22740837

  19. He throws like a girl (but only when he's sad): emotion affects sex-decoding of biological motion displays.

    PubMed

    Johnson, Kerri L; McKay, Lawrie S; Pollick, Frank E

    2011-05-01

    Gender stereotypes have been implicated in sex-typed perceptions of facial emotion. Such interpretations were recently called into question because facial cues of emotion are confounded with sexually dimorphic facial cues. Here we examine the role of visual cues and gender stereotypes in perceptions of biological motion displays, thus overcoming the morphological confounding inherent in facial displays. In four studies, participants' judgments revealed gender stereotyping. Observers accurately perceived emotion from biological motion displays (Study 1), and this affected sex categorizations. Angry displays were overwhelmingly judged to be men; sad displays were judged to be women (Studies 2-4). Moreover, this pattern remained strong when stimuli were equated for velocity (Study 3). We argue that these results were obtained because perceivers applied gender stereotypes of emotion to infer sex category (Study 4). Implications for both vision sciences and social psychology are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Contribution of intravestibular sensory conflict to motion sickness and dizziness in migraine disorders.

    PubMed

    Wang, Joanne; Lewis, Richard F

    2016-10-01

    Migraine is associated with enhanced motion sickness susceptibility and can cause episodic vertigo [vestibular migraine (VM)], but the mechanisms relating migraine to these vestibular symptoms remain uncertain. We tested the hypothesis that the central integration of rotational cues (from the semicircular canals) and gravitational cues (from the otolith organs) is abnormal in migraine patients. A postrotational tilt paradigm generated a conflict between canal cues (which indicate the head is rotating) and otolith cues (which indicate the head is tilted and stationary), and eye movements were measured to quantify two behaviors that are thought to minimize this conflict: suppression and reorientation of the central angular velocity signal, evidenced by attenuation ("dumping") of the vestibuloocular reflex and shifting of the rotational axis of the vestibuloocular reflex toward the earth vertical. We found that normal and migraine subjects, but not VM patients, displayed an inverse correlation between the extent of dumping and the size of the axis shift such that the net "conflict resolution" mediated through these two mechanisms approached an optimal value and that the residual sensory conflict in VM patients (but not migraine or normal subjects) correlated with motion sickness susceptibility. Our findings suggest that the brain normally controls the dynamic and spatial characteristics of central vestibular signals to minimize intravestibular sensory conflict and that this process is disrupted in VM, which may be responsible for the enhanced motion intolerance and episodic vertigo that characterize this disorder. Copyright © 2016 the American Physiological Society.

  1. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth.

    PubMed

    Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F

    2015-11-01

    Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Gender recognition depends on type of movement and motor skill. Analyzing and perceiving biological motion in musical and nonmusical tasks.

    PubMed

    Wöllner, Clemens; Deconinck, Frederik J A

    2013-05-01

    Gender recognition in point-light displays was investigated with regard to body morphology cues and motion cues of human motion performed with different levels of technical skill. Gestures of male and female orchestral conductors were recorded with a motion capture system while they conducted excerpts from a Mendelssohn string symphony to musicians. Point-light displays of conductors were presented to observers under the following conditions: visual-only, auditory-only, audiovisual, and two non-conducting conditions (walking and static images). Observers distinguished between male and female conductors in gait and static images, but not in visual-only and auditory-only conducting conditions. Across all conductors, gender recognition for audiovisual stimuli was better than chance, yet significantly less reliable than for gait. Separate analyses for two groups of conductors indicated an expertise effect in that novice conductors' gender was perceived above chance level for visual-only and audiovisual conducting, while skilled conducting gestures of experts did not afford gender-specific cues. In these conditions, participants may have ignored the body morphology cues that led to correct judgments for static images. Results point to a response bias such that conductors were more often judged to be male. Thus judgment accuracy depended both on the conductors' level of expertise as well as on the observers' concepts, suggesting that perceivable differences between men and women may diminish for highly trained movements of experienced individuals. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Figure-ground assignment to a translating contour: a preference for advancing vs. receding motion.

    PubMed

    Barenholtz, Elan; Tarr, Michael J

    2009-05-28

    Past research on figure-ground assignment to contours has largely considered static stimuli. Here we report a simple and extremely robust dynamic cue to figural assignment, based on whether the bounding region of a contour is growing larger within the field of view ("advancing") rather than smaller ("receding"). Subjects viewed a straight or jagged contour dividing two colored regions translating behind a virtual aperture and had to report which color they had seen "moving in front", effectively assigning figure to that side of the contour. Across three experiments, subjects showed a strong preference to assign figure such that the bounded contour was advancing. This was true regardless of the direction of motion of the contour and regardless of the initial/ending size of the bounded regions (i.e., the motion cue served to override the conventional cue to figure-ground of smaller area). In a fourth, control experiment, subjects showed no such bias when it was the aperture, rather than the contour, that moved, demonstrating that the effect depends on contour motion and not simply an increase in area. We discuss a possible explanation for this bias as well as the general implications regarding dynamic factors in form perception.

  4. Coral reef soundscapes may not be detectable far from the reef.

    PubMed

    Kaplan, Maxwell B; Mooney, T Aran

    2016-08-23

    Biological sounds produced on coral reefs may provide settlement cues to marine larvae. Sound fields are composed of pressure and particle motion, which is the back and forth movement of acoustic particles. Particle motion (i.e., not pressure) is the relevant acoustic stimulus for many, if not most, marine animals. However, there have been no field measurements of reef particle motion. To address this deficiency, both pressure and particle motion were recorded at a range of distances from one Hawaiian coral reef at dawn and mid-morning on three separate days. Sound pressure attenuated with distance from the reef at dawn. Similar trends were apparent for particle velocity but with considerable variability. In general, average sound levels were low and perhaps too faint to be used as an orientation cue except very close to the reef. However, individual transient sounds that exceeded the mean values, sometimes by up to an order of magnitude, might be detectable far from the reef, depending on the hearing abilities of the larva. If sound is not being used as a long-range cue, it might still be useful for habitat selection or other biological activities within a reef.

  5. Coral reef soundscapes may not be detectable far from the reef

    PubMed Central

    Kaplan, Maxwell B.; Mooney, T. Aran

    2016-01-01

    Biological sounds produced on coral reefs may provide settlement cues to marine larvae. Sound fields are composed of pressure and particle motion, which is the back and forth movement of acoustic particles. Particle motion (i.e., not pressure) is the relevant acoustic stimulus for many, if not most, marine animals. However, there have been no field measurements of reef particle motion. To address this deficiency, both pressure and particle motion were recorded at a range of distances from one Hawaiian coral reef at dawn and mid-morning on three separate days. Sound pressure attenuated with distance from the reef at dawn. Similar trends were apparent for particle velocity but with considerable variability. In general, average sound levels were low and perhaps too faint to be used as an orientation cue except very close to the reef. However, individual transient sounds that exceeded the mean values, sometimes by up to an order of magnitude, might be detectable far from the reef, depending on the hearing abilities of the larva. If sound is not being used as a long-range cue, it might still be useful for habitat selection or other biological activities within a reef. PMID:27550394

  6. Coral reef soundscapes may not be detectable far from the reef

    NASA Astrophysics Data System (ADS)

    Kaplan, Maxwell B.; Mooney, T. Aran

    2016-08-01

    Biological sounds produced on coral reefs may provide settlement cues to marine larvae. Sound fields are composed of pressure and particle motion, which is the back and forth movement of acoustic particles. Particle motion (i.e., not pressure) is the relevant acoustic stimulus for many, if not most, marine animals. However, there have been no field measurements of reef particle motion. To address this deficiency, both pressure and particle motion were recorded at a range of distances from one Hawaiian coral reef at dawn and mid-morning on three separate days. Sound pressure attenuated with distance from the reef at dawn. Similar trends were apparent for particle velocity but with considerable variability. In general, average sound levels were low and perhaps too faint to be used as an orientation cue except very close to the reef. However, individual transient sounds that exceeded the mean values, sometimes by up to an order of magnitude, might be detectable far from the reef, depending on the hearing abilities of the larva. If sound is not being used as a long-range cue, it might still be useful for habitat selection or other biological activities within a reef.

  7. Transparent volume imaging

    NASA Astrophysics Data System (ADS)

    Wixson, Steve E.

    1990-07-01

    Transparent Volume Imaging began with the stereo xray in 1895 and ended for most investigators when radiation safety concerns eliminated the second view. Today, similar images can be generated by the computer without safety hazards, providing improved perception and new means of image quantification. A volumetric workstation is under development based on an operational prototype. The workstation consists of multiple symbolic and numeric processors, binocular stereo color display generator with large image memory and liquid crystal shutter, voice input and output, a 3D pointer that uses projection lenses so that structures in 3 space can be touched directly, 3D hard copy using vectograph and lenticular printing, and presentation facilities using stereo 35mm slide and stereo video tape projection. Volumetric software includes a volume window manager, Mayo Clinic's Analyze program and our Digital Stereo Microscope (DSM) algorithms. The DSM uses stereo xray-like projections, rapidly oscillating motion and focal depth cues such that detail can be studied in the spatial context of the entire set of data. Focal depth cues are generated with a lens and aperture algorithm that generates a plane of sharp focus, and multiple stereo pairs each with a different plane of sharp focus are generated and stored in the large memory for interactive selection using a physical or symbolic depth selector. More recent work is studying non-linear focusing. Psychophysical studies are underway to understand how people perceive images on a volumetric display and how accurately 3 dimensional structures can be quantitated from these displays.

  8. Effects of Motion on Skill Acquisition in Future Simulators

    DTIC Science & Technology

    2006-05-01

    Only abstract fragments were extracted: a study performed by Jacobs (1976) concentrated on transfer of training under different motion conditions, using participants with no prior flying ... The role of motion in simulation was examined, with a particular focus paid to research on the effects of motion cueing on transfer of training from both ground ...

  9. Motion versus position in the perception of head-centred movement.

    PubMed

    Freeman, Tom C A; Sumnall, Jane H

    2002-01-01

    Observers can recover motion with respect to the head during an eye movement by comparing signals encoding retinal motion and the velocity of pursuit. Evidently there is a mismatch between these signals because perceived head-centred motion is not always veridical. One example is the Filehne illusion, in which a stationary object appears to move in the opposite direction to pursuit. Like the motion aftereffect, the phenomenal experience of the Filehne illusion is one in which the stimulus moves but does not seem to go anywhere. This raises problems when measuring the illusion by motion nulling because the more traditional technique confounds perceived motion with changes in perceived position. We devised a new nulling technique using global-motion stimuli that degraded familiar position cues but preserved cues to motion. Stimuli consisted of random-dot patterns comprising signal and noise dots that moved at the same retinal 'base' speed. Noise moved in random directions. In an eye-stationary speed-matching experiment we found noise slowed perceived retinal speed as 'coherence strength' (i.e. percentage of signal) was reduced. The effect occurred over the two-octave range of base speeds studied and well above direction threshold. When the same stimuli were combined with pursuit, observers were able to null the Filehne illusion by adjusting coherence. A power law relating coherence to retinal base speed fit the data well with a negative exponent. Eye-movement recordings showed that pursuit was quite accurate. We then tested the hypothesis that the stimuli found at the null-points appeared to move at the same retinal speed. Two observers supported the hypothesis, a third partially, and a fourth showed a small linear trend. In addition, the retinal speed found by the traditional Filehne technique was similar to the matches obtained with the global-motion stimuli. The results provide support for the idea that speed is the critical cue in head-centred motion perception.
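
    The power-law fit described above has the generic form (our symbols), with c the nulling coherence and v the retinal base speed:

```latex
c(v) = k\,v^{-\beta}, \qquad \beta > 0
```

    so that, at the null point, faster base speeds required lower signal coherence.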

  10. The effect of perceptual load on attention-induced motion blindness: the efficiency of selective inhibition.

    PubMed

    Hay, Julia L; Milders, Maarten M; Sahraie, Arash; Niedeggen, Michael

    2006-08-01

    Recent visual marking studies have shown that the carry-over of distractor inhibition can impair the ability of singletons to capture attention if the singleton and distractors share features. The current study extends this finding to first-order motion targets and distractors, clearly separated in time by a visual cue (the letter X). Target motion discrimination was significantly impaired, a result attributed to the carry-over of distractor inhibition. Increasing the difficulty of cue detection increased the motion target impairment, as distractor inhibition is thought to increase under demanding (high load) conditions in order to maximize selection efficiency. The apparent conflict with studies reporting reduced distractor inhibition under high load conditions was resolved by distinguishing between the effects of "cognitive" and "perceptual" load. ((c) 2006 APA, all rights reserved).

  11. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire the neural signal related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions, we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. The influence of imagery vividness on cognitive and perceptual cues in circular auditorily-induced vection.

    PubMed

    Väljamäe, Aleksander; Sell, Sara

    2014-01-01

    In the absence of other congruent multisensory motion cues, sound contribution to illusions of self-motion (vection) is relatively weak and often attributed to purely cognitive, top-down processes. The present study addressed the influence of cognitive and perceptual factors in the experience of circular, yaw auditorily-induced vection (AIV), focusing on participants imagery vividness scores. We used different rotating sound sources (acoustic landmark vs. movable types) and their filtered versions that provided different binaural cues (interaural time or level differences, ITD vs. ILD) when delivering via loudspeaker array. The significant differences in circular vection intensity showed that (1) AIV was stronger for rotating sound fields containing auditory landmarks as compared to movable sound objects; (2) ITD based acoustic cues were more instrumental than ILD based ones for horizontal AIV; and (3) individual differences in imagery vividness significantly influenced the effects of contextual and perceptual cues. While participants with high scores of kinesthetic and visual imagery were helped by vection "rich" cues, i.e., acoustic landmarks and ITD cues, the participants from the low-vivid imagery group did not benefit from these cues automatically. Only when specifically asked to use their imagination intentionally did these external cues start influencing vection sensation in a similar way to high-vivid imagers. These findings are in line with the recent fMRI work which suggested that high-vivid imagers employ automatic, almost unconscious mechanisms in imagery generation, while low-vivid imagers rely on more schematic and conscious framework. Consequently, our results provide an additional insight into the interaction between perceptual and contextual cues when experiencing purely auditorily or multisensory induced vection.

  13. The influence of imagery vividness on cognitive and perceptual cues in circular auditorily-induced vection

    PubMed Central

    Väljamäe, Aleksander; Sell, Sara

    2014-01-01

    In the absence of other congruent multisensory motion cues, sound contribution to illusions of self-motion (vection) is relatively weak and often attributed to purely cognitive, top-down processes. The present study addressed the influence of cognitive and perceptual factors in the experience of circular, yaw auditorily-induced vection (AIV), focusing on participants imagery vividness scores. We used different rotating sound sources (acoustic landmark vs. movable types) and their filtered versions that provided different binaural cues (interaural time or level differences, ITD vs. ILD) when delivering via loudspeaker array. The significant differences in circular vection intensity showed that (1) AIV was stronger for rotating sound fields containing auditory landmarks as compared to movable sound objects; (2) ITD based acoustic cues were more instrumental than ILD based ones for horizontal AIV; and (3) individual differences in imagery vividness significantly influenced the effects of contextual and perceptual cues. While participants with high scores of kinesthetic and visual imagery were helped by vection “rich” cues, i.e., acoustic landmarks and ITD cues, the participants from the low-vivid imagery group did not benefit from these cues automatically. Only when specifically asked to use their imagination intentionally did these external cues start influencing vection sensation in a similar way to high-vivid imagers. These findings are in line with the recent fMRI work which suggested that high-vivid imagers employ automatic, almost unconscious mechanisms in imagery generation, while low-vivid imagers rely on more schematic and conscious framework. Consequently, our results provide an additional insight into the interaction between perceptual and contextual cues when experiencing purely auditorily or multisensory induced vection. PMID:25520683

  14. Motion-guided attention promotes adaptive communications during social navigation.

    PubMed

    Lemasson, B H; Anderson, J J; Goodwin, R A

    2013-03-07

    Animals are capable of enhanced decision making through cooperation, whereby accurate decisions can occur quickly through decentralized consensus. These interactions often depend upon reliable social cues, which can result in highly coordinated activities in uncertain environments. Yet information within a crowd may be lost in translation, generating confusion and enhancing individual risk. As quantitative data detailing animal social interactions accumulate, the mechanisms enabling individuals to rapidly and accurately process competing social cues remain unresolved. Here, we model how motion-guided attention influences the exchange of visual information during social navigation. We also compare the performance of this mechanism to the hypothesis that robust social coordination requires individuals to numerically limit their attention to a set of n-nearest neighbours. While we find that such numerically limited attention does not generate robust social navigation across ecological contexts, several notable qualities arise from selective attention to motion cues. First, individuals can instantly become a local information hub when startled into action, without requiring changes in neighbour attention level. Second, individuals can circumvent speed-accuracy trade-offs by tuning their motion thresholds. In turn, these properties enable groups to collectively dampen or amplify social information. Lastly, the minority required to sway a group's short-term directional decisions can change substantially with social context. Our findings suggest that motion-guided attention is a fundamental and efficient mechanism underlying collaborative decision making during social navigation.

  15. Vestibular-visual interactions in flight simulators

    NASA Technical Reports Server (NTRS)

    Clark, B.

    1977-01-01

    The following research work is reported: (1) vestibular-visual interactions; (2) flight management and crew system interactions; (3) peripheral cue utilization in simulation technology; (4) control of signs and symptoms of motion sickness; (5) auditory cue utilization in flight simulators, and (6) vestibular function: Animal experiments.

  16. Cross-modal prediction changes the timing of conscious access during the motion-induced blindness.

    PubMed

    Chang, Acer Y C; Kanai, Ryota; Seth, Anil K

    2015-01-01

    Despite accumulating evidence that perceptual predictions influence perceptual content, the relations between these predictions and conscious contents remain unclear, especially for cross-modal predictions. We examined whether predictions of visual events by auditory cues can facilitate conscious access to the visual stimuli. We trained participants to learn associations between auditory cues and colour changes. We then asked whether congruency between auditory cues and target colours would speed access to consciousness. We did this by rendering a visual target subjectively invisible using motion-induced blindness and then gradually changing its colour while presenting congruent or incongruent auditory cues. Results showed that the visual target gained access to consciousness faster in congruent than in incongruent trials; control experiments excluded potentially confounding effects of attention and motor response. The expectation effect was gradually established over blocks suggesting a role for extensive training. Overall, our findings show that predictions learned through cross-modal training can facilitate conscious access to visual stimuli. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Self-recognition of avatar motion: how do I know it's me?

    PubMed

    Cook, Richard; Johnston, Alan; Heyes, Cecilia

    2012-02-22

    When motion is isolated from form cues and viewed from third-person perspectives, individuals are able to recognize their own whole body movements better than those of friends. Because we rarely see our own bodies in motion from third-person viewpoints, this self-recognition advantage may indicate a contribution to perception from the motor system. Our first experiment provides evidence that recognition of self-produced and friends' motion dissociate, with only the latter showing sensitivity to orientation. Through the use of selectively disrupted avatar motion, our second experiment shows that self-recognition of facial motion is mediated by knowledge of the local temporal characteristics of one's own actions. Specifically, inverted self-recognition was unaffected by disruption of feature configurations and trajectories, but eliminated by temporal distortion. While actors lack third-person visual experience of their actions, they have a lifetime of proprioceptive, somatosensory, vestibular and first-person-visual experience. These sources of contingent feedback may provide actors with knowledge about the temporal properties of their actions, potentially supporting recognition of characteristic rhythmic variation when viewing self-produced motion. In contrast, the ability to recognize the motion signatures of familiar others may be dependent on configural topographic cues.

  18. Software for Project-Based Learning of Robot Motion Planning

    ERIC Educational Resources Information Center

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-01-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can…
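
    The record above describes motion planning as a search in the high-dimensional continuous space of robot configurations. A rapidly-exploring random tree (RRT) is a standard textbook example of such a sampling-based search; the minimal 2-D sketch below is illustrative only and is not the software described in the record.

```python
import random, math

def rrt(start, goal, collision_free, bounds, step=0.1, iters=2000, goal_tol=0.2):
    """Minimal RRT in a 2-D configuration space. `collision_free(p)` is a
    user-supplied predicate; `bounds` is ((xmin, xmax), (ymin, ymax))."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # Extend the nearest tree node a small step towards the random sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not collision_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:          # goal reached: reconstruct the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None                                      # no path found within the budget

# Example: plan around a circular obstacle centred at (0.5, 0.5).
free = lambda p: math.dist(p, (0.5, 0.5)) > 0.2
print(rrt((0.1, 0.1), (0.9, 0.9), free, ((0, 1), (0, 1))))
```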

  19. Differential responses in dorsal visual cortex to motion and disparity depth cues

    PubMed Central

    Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.

    2013-01-01

    We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion), with three categories of motion inflicted by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808

  20. A novel speech processing algorithm based on harmonicity cues in cochlear implant

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Chen, Yousheng; Zhang, Zongping; Chen, Yan; Zhang, Weifeng

    2017-08-01

    This paper proposed a novel speech processing algorithm for cochlear implants, which used harmonicity cues to enhance tonal information in Mandarin Chinese speech recognition. The input speech was filtered by a 4-channel band-pass filter bank. The frequency ranges for the four bands were: 300-621, 621-1285, 1285-2657, and 2657-5499 Hz. In each pass band, temporal envelope and periodicity cues (TEPCs) below 400 Hz were extracted by full-wave rectification and low-pass filtering. The TEPCs were modulated by a sinusoidal carrier whose frequency was the harmonic of the fundamental frequency (F0) closest to the center frequency of each band. Signals from each band were combined to obtain the output speech. Mandarin tone, word, and sentence recognition in quiet listening conditions were tested for the extensively used continuous interleaved sampling (CIS) strategy and the novel F0-harmonic algorithm. The results showed that the F0-harmonic algorithm performed consistently better than the CIS strategy in Mandarin tone, word, and sentence recognition. In addition, the sentence recognition rate was higher than the word recognition rate, as a result of contextual information in the sentence. Moreover, tones 3 and 4 were recognized better than tones 1 and 2, owing to the more easily identified features of the former. In conclusion, the F0-harmonic algorithm could enhance tonal information in cochlear implant speech processing through the use of harmonicity cues, thereby improving Mandarin tone, word, and sentence recognition. Further work will test the F0-harmonic algorithm in noisy listening conditions.
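
    The processing chain described in this abstract (band-pass analysis into the four listed bands, envelope extraction below 400 Hz by rectification and low-pass filtering, re-modulation with a carrier at the F0 harmonic nearest each band's center frequency, and summation) can be sketched roughly as follows. The band edges are taken from the abstract; the sample rate, filter orders, fixed F0 value, and use of SciPy are assumptions of this sketch, not details of the published algorithm.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000                                                       # sample rate (assumed)
BANDS = [(300, 621), (621, 1285), (1285, 2657), (2657, 5499)]    # Hz, from the abstract

def f0_harmonic_vocoder(speech, f0=200.0):
    """Rough sketch of the F0-harmonic strategy: per band, extract the temporal
    envelope (< 400 Hz) and re-modulate it with a sinusoid at the F0 harmonic
    closest to the band's centre frequency, then sum the bands."""
    env_lp = butter(4, 400, btype="low", fs=FS, output="sos")
    out = np.zeros_like(speech)
    t = np.arange(len(speech)) / FS
    for lo, hi in BANDS:
        bp = butter(4, [lo, hi], btype="band", fs=FS, output="sos")
        band = sosfiltfilt(bp, speech)
        envelope = sosfiltfilt(env_lp, np.abs(band))      # full-wave rectify + low-pass
        centre = np.sqrt(lo * hi)                         # geometric band centre
        harmonic = max(1, round(centre / f0)) * f0        # nearest F0 harmonic
        out += envelope * np.sin(2 * np.pi * harmonic * t)
    return out

# Example with white noise standing in for a speech token.
processed = f0_harmonic_vocoder(np.random.randn(FS))
```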

  1. Simulator study of the effect of visual-motion time delays on pilot tracking performance with an audio side task

    NASA Technical Reports Server (NTRS)

    Riley, D. R.; Miller, G. K., Jr.

    1978-01-01

    The effect of time delay in the visual and motion cues of a flight simulator on pilot performance was determined for the task of tracking a target aircraft that was oscillating sinusoidally in altitude only. An audio side task was used to ensure the subject was fully occupied at all times. The results indicate that, within the test grid employed, about the same acceptable time delay (250 msec) was obtained for a single aircraft (fighter type) by each of two subjects for both fixed-base and motion-base conditions. Acceptable time delay is defined as the largest amount of delay that can be inserted simultaneously into the visual and motion cues before performance degradation occurs. A statistical analysis of the data was made to establish this value of time delay. The audio side task provided quantitative data that documented the subject's work level.

  2. Sample-Based Motion Planning in High-Dimensional and Differentially-Constrained Systems

    DTIC Science & Technology

    2010-02-01

    ... a motion planning algorithm implemented on LittleDog, a quadruped robot. The motion planning algorithm successfully planned bounding trajectories over extremely...

  3. Domain-specific perceptual causality in children depends on the spatio-temporal configuration, not motion onset

    PubMed Central

    Schlottmann, Anne; Cole, Katy; Watts, Rhianna; White, Marina

    2013-01-01

    Humans, even babies, perceive causality when one shape moves briefly and linearly after another. Motion timing is crucial in this and causal impressions disappear with short delays between motions. However, the role of temporal information is more complex: it is both a cue to causality and a factor that constrains processing. It affects ability to distinguish causality from non-causality, and social from mechanical causality. Here we study both issues with 3- to 7-year-olds and adults who saw two computer-animated squares and chose if a picture of mechanical, social or non-causality fit each event best. Prior work fit with the standard view that early in development, the distinction between the social and physical domains depends mainly on whether or not the agents make contact, and that this reflects concern with domain-specific motion onset, in particular, whether the motion is self-initiated or not. The present experiments challenge both parts of this position. In Experiments 1 and 2, we showed that not just spatial, but also animacy and temporal information affect how children distinguish between physical and social causality. In Experiments 3 and 4 we showed that children do not seem to use spatio-temporal information in perceptual causality to make inferences about self- or other-initiated motion onset. Overall, spatial contact may be developmentally primary in domain-specific perceptual causality in that it is processed easily and is dominant over competing cues, but it is not the only cue used early on and it is not used to infer motion onset. Instead, domain-specific causal impressions may be automatic reactions to specific perceptual configurations, with a complex role for temporal information. PMID:23874308

  4. A motion algorithm to extract physical and motion parameters of mobile targets from cone-beam computed tomographic images.

    PubMed

    Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad

    2016-05-17

    A motion algorithm has been developed to extract the length, CT number level, and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses measurable parameters (the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images) to determine the length, the CT-number value of the stationary target, and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes, made from tissue-equivalent gel and inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion, using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted from the CBCT images: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solves for the three unknown parameters using the measured length, CT number level, and gradient of a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which depend on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the target length, and the motion amplitude. Motion frequency and phase do not affect the elongation and CT number distribution of the mobile target and could not be determined. A motion algorithm has been developed to extract three parameters (length, CT number level, and motion amplitude or speed) of mobile targets directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking and sorting of the images into different breathing phases. The motion model developed here works well for tumors that have simple shapes, have high contrast relative to the surrounding tissues, and move in a nearly regular motion pattern that can be approximated with a simple sinusoidal function. This algorithm has potential applications in diagnostic CT imaging and radiotherapy in terms of motion management.
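
    A toy version of the underlying idea can be sketched as follows: simulate the time-averaged occupancy profile of a target of length L and stationary CT number ct0 moving sinusoidally with amplitude A, and search for the (L, A, ct0) triple that best reproduces a measured blurred profile. The occupancy forward model and the grid search below are simplifications for illustration and are not the authors' algorithm.

```python
import numpy as np

def blurred_profile(x, L, A, ct0, n_phases=256):
    """Toy 1-D forward model: the reconstructed profile is approximated as ct0
    times the fraction of time each position is covered by a target of length L
    whose centre oscillates sinusoidally with amplitude A."""
    phases = np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False)
    centres = A * np.sin(phases)                      # target centre over the motion cycle
    covered = np.abs(x[None, :] - centres[:, None]) <= L / 2.0
    return ct0 * covered.mean(axis=0)

def fit_motion_parameters(x, measured, L_grid, A_grid, ct0_grid):
    """Coarse grid search for the (L, A, ct0) triple minimising squared error."""
    best, best_err = None, np.inf
    for L in L_grid:
        for A in A_grid:
            for ct0 in ct0_grid:
                err = np.sum((blurred_profile(x, L, A, ct0) - measured) ** 2)
                if err < best_err:
                    best, best_err = (L, A, ct0), err
    return best

# Example: recover parameters from a synthetic "measured" profile.
x = np.linspace(-5, 5, 201)
measured = blurred_profile(x, L=2.0, A=1.0, ct0=100.0)
print(fit_motion_parameters(x, measured,
                            np.arange(1.0, 3.1, 0.25),
                            np.arange(0.0, 2.1, 0.25),
                            np.arange(80.0, 121.0, 10.0)))
```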

  5. Neural dynamics for landmark orientation and angular path integration

    PubMed Central

    Seelig, Johannes D.; Jayaraman, Vivek

    2015-01-01

    Summary Many animals navigate using a combination of visual landmarks and path integration. In mammalian brains, head direction cells integrate these two streams of information by representing an animal's heading relative to landmarks, yet maintaining their directional tuning in darkness based on self-motion cues. Here we use two-photon calcium imaging in head-fixed flies walking on a ball in a virtual reality arena to demonstrate that landmark-based orientation and angular path integration are combined in the population responses of neurons whose dendrites tile the ellipsoid body — a toroidal structure in the center of the fly brain. The population encodes the fly's azimuth relative to its environment, tracking visual landmarks when available and relying on self-motion cues in darkness. When both visual and self-motion cues are absent, a representation of the animal's orientation is maintained in this network through persistent activity — a potential substrate for short-term memory. Several features of the population dynamics of these neurons and their circular anatomical arrangement are suggestive of ring attractors — network structures proposed to support the function of navigational brain circuits. PMID:25971509

  6. Feature-selective attention: evidence for a decline in old age.

    PubMed

    Quigley, Cliodhna; Andersen, Søren K; Schulze, Lars; Grunwald, Martin; Müller, Matthias M

    2010-04-19

    Although attention in older adults is an active research area, feature-selective aspects have not yet been explicitly studied. Here we report the results of an exploratory study involving directed changes in feature-selective attention. The stimuli used were two random dot kinematograms (RDKs) of different colours, superimposed and centrally presented. A colour cue with random onset after the beginning of each trial instructed young and older subjects to attend to one of the RDKs and detect short intervals of coherent motion while ignoring analogous motion events in the non-cued RDK. Behavioural data show that older adults could detect motion, but discriminated target from distracter motion less reliably than young adults. The method of frequency tagging allowed us to separate the EEG responses to the attended and ignored stimuli and directly compare steady-state visual evoked potential (SSVEP) amplitudes elicited by each stimulus before and after cue onset. We found that younger adults show a clear attentional enhancement of SSVEP amplitude in the post-cue interval, while older adults' SSVEP responses to attended and ignored stimuli do not differ. Thus, in situations where attentional selection cannot be spatially resolved, older adults show a deficit in selection that is not shared by young adults. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  7. Clustering social cues to determine social signals: developing learning algorithms using the "n-most likely states" approach

    NASA Astrophysics Data System (ADS)

    Best, Andrew; Kapalo, Katelynn A.; Warta, Samantha F.; Fiore, Stephen M.

    2016-05-01

    Human-robot teaming largely relies on the ability of machines to respond and relate to human social signals. Prior work in Social Signal Processing has drawn a distinction between social cues (discrete, observable features) and social signals (underlying meaning). For machines to attribute meaning to behavior, they must first understand some probabilistic relationship between the cues presented and the signal conveyed. Using data derived from a study in which participants identified a set of salient social signals in a simulated scenario and indicated the cues related to the perceived signals, we detail a learning algorithm, which clusters social cue observations and defines an "N-Most Likely States" set for each cluster. Since multiple signals may be co-present in a given simulation and a set of social cues often maps to multiple social signals, the "N-Most Likely States" approach provides a dramatic improvement over typical linear classifiers. We find that the target social signal appears in a "3 most-likely signals" set with up to 85% probability. This results in increased speed and accuracy on large amounts of data, which is critical for modeling social cognition mechanisms in robots to facilitate more natural human-robot interaction. These results also demonstrate the utility of such an approach in deployed scenarios where robots need to communicate with human teammates quickly and efficiently. In this paper, we detail our algorithm, comparative results, and offer potential applications for robot social signal detection and machine-aided human social signal detection.
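
    A minimal sketch of the "N-Most Likely States" idea as described here: cluster the social-cue observations and record, for each cluster, the N signals most frequently associated with its members; a new observation is then answered with the candidate set of its nearest cluster. The feature encoding, the use of scikit-learn KMeans, and the synthetic labels below are assumptions, not the authors' implementation.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def n_most_likely_states(cue_vectors, signal_labels, n_clusters=8, n=3):
    """Cluster social-cue observations and record, per cluster, the n signals
    most frequently associated with its members."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(cue_vectors)
    table = {}
    for c in range(n_clusters):
        labels = [signal_labels[i] for i in np.where(km.labels_ == c)[0]]
        table[c] = [s for s, _ in Counter(labels).most_common(n)]
    return km, table

def predict_signal_set(km, table, cue_vector):
    """Return the candidate-signal set for a new cue observation."""
    return table[int(km.predict(cue_vector.reshape(1, -1))[0])]

# Example with synthetic cue vectors and three hypothetical signal classes.
X = np.random.rand(200, 5)
y = np.random.choice(["greeting", "warning", "uncertainty"], size=200)
km, table = n_most_likely_states(X, y)
print(predict_signal_set(km, table, np.random.rand(5)))
```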

  8. Impaired smooth-pursuit in Parkinson's disease: normal cue-information memory, but dysfunction of extra-retinal mechanisms for pursuit preparation and execution.

    PubMed

    Fukushima, Kikuro; Ito, Norie; Barnes, Graham R; Onishi, Sachiyo; Kobayashi, Nobuyoshi; Takei, Hidetoshi; Olley, Peter M; Chiba, Susumu; Inoue, Kiyoharu; Warabi, Tateo

    2015-03-01

    While retinal image motion is the primary input for smooth-pursuit, its efficiency depends on cognitive processes including prediction. Reports are conflicting on impaired prediction during pursuit in Parkinson's disease. By separating two major components of prediction (image motion direction memory and movement preparation) using a memory-based pursuit task, and by comparing tracking eye movements with those during a simple ramp-pursuit task that did not require visual memory, we examined smooth-pursuit in 25 patients with Parkinson's disease and compared the results with 14 age-matched controls. In the memory-based pursuit task, cue 1 indicated visual motion direction, whereas cue 2 instructed the subjects to prepare to pursue or not to pursue. Based on the cue-information memory, subjects were asked to pursue the correct spot from two oppositely moving spots or not to pursue. In 24/25 patients, the cue-information memory was normal, but movement preparation and execution were impaired. Specifically, unlike controls, most of the patients (18/24 = 75%) lacked initial pursuit during the memory task and started tracking the correct spot by saccades. Conversely, during simple ramp-pursuit, most patients (83%) exhibited initial pursuit. Popping-out of the correct spot motion during memory-based pursuit was ineffective for enhancing initial pursuit. The results were similar irrespective of levodopa/dopamine agonist medication. Our results indicate that the extra-retinal mechanisms of most patients are dysfunctional in initiating memory-based (not simple ramp) pursuit. A dysfunctional pursuit loop between frontal eye fields (FEF) and basal ganglia may contribute to the impairment of extra-retinal mechanisms, resulting in deficient pursuit commands from the FEF to brainstem. © 2015 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.

  9. Perceived orientation of a runway model in nonpilots during simulated night approaches to landing.

    DOT National Transportation Integrated Search

    1977-07-01

    Illusions due to reduced visual cues at night have long been cited as contributing to the dangerous tendency of pilots to fly too low during night landing approaches. The cue of motion parallax (a difference in rate of apparent movement of objects in...

  10. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing.

    PubMed

    Hu, Bin; Yue, Shigang; Zhang, Zhuhong

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences to each of the three motion elements. There are computational models on translation and expansion/contraction perceptions; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts-presynaptic and postsynaptic parts. In the presynaptic part, there are a number of lateral inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), to perceive. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.
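
    The postsynaptic stage described above (direction-selective responses arranged in a cyclic order, with each unit's delayed excitation multiplied by the excitation gathered from a unilateral neighbour) can be caricatured with a Reichardt-style delay-and-multiply operation on a ring. The sketch below is a simplification for illustration; the delay, ring size, and sign convention are assumptions and it is not the authors' DSNN model.

```python
import numpy as np

def ring_rotation_response(ds_activity, delay=1):
    """Toy cyclic delay-and-multiply stage. `ds_activity` has shape
    (time, n_directions): activity of direction-selective units arranged on a
    ring. Correlating each unit's delayed output with its next neighbour in one
    direction favours cw rotation; with the other neighbour, ccw rotation."""
    delayed = np.roll(ds_activity, delay, axis=0)
    cw = np.sum(delayed * np.roll(ds_activity, -1, axis=1))
    ccw = np.sum(delayed * np.roll(ds_activity, +1, axis=1))
    return cw - ccw   # positive -> cw, negative -> ccw (increasing index = cw by convention here)

# Example: a bump of activity stepping around an 8-unit ring in the cw direction.
T, N = 40, 8
act = np.zeros((T, N))
act[np.arange(T), np.arange(T) % N] = 1.0
print(ring_rotation_response(act))   # positive, i.e. cw under this convention
```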

  11. A systematic comparison between visual cues for boundary detection.

    PubMed

    Mély, David A; Kim, Junkyung; McGill, Mason; Guo, Yuliang; Serre, Thomas

    2016-03-01

    The detection of object boundaries is a critical first step for many visual processing tasks. Multiple cues (we consider luminance, color, motion and binocular disparity) available in the early visual system may signal object boundaries but little is known about their relative diagnosticity and how to optimally combine them for boundary detection. This study thus aims at understanding how early visual processes inform boundary detection in natural scenes. We collected color binocular video sequences of natural scenes to construct a video database. Each scene was annotated with two full sets of ground-truth contours (one set limited to object boundaries and another set which included all edges). We implemented an integrated computational model of early vision that spans all considered cues, and then assessed their diagnosticity by training machine learning classifiers on individual channels. Color and luminance were found to be most diagnostic while stereo and motion were least. Combining all cues yielded a significant improvement in accuracy beyond that of any cue in isolation. Furthermore, the accuracy of individual cues was found to be a poor predictor of their unique contribution for the combination. This result suggested a complex interaction between cues, which we further quantified using regularization techniques. Our systematic assessment of the accuracy of early vision models for boundary detection together with the resulting annotated video dataset should provide a useful benchmark towards the development of higher-level models of visual processing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Gravity Cues Embedded in the Kinematics of Human Motion Are Detected in Form-from-Motion Areas of the Visual System and in Motor-Related Areas

    PubMed Central

    Cignetti, Fabien; Chabeauti, Pierre-Yves; Menant, Jasmine; Anton, Jean-Luc J. J.; Schmitz, Christina; Vaugoyeau, Marianne; Assaiante, Christine

    2017-01-01

    The present study investigated the cortical areas engaged in the perception of graviceptive information embedded in biological motion (BM). To this end, functional magnetic resonance imaging was used to assess the cortical areas active during the observation of human movements performed under normogravity and microgravity (parabolic flight). Movements were defined by motion cues alone using point-light displays. We found that gravity modulated the activation of a restricted set of regions of the network subtending BM perception, including form-from-motion areas of the visual system (kinetic occipital region, lingual gyrus, cuneus) and motor-related areas (primary motor and somatosensory cortices). These findings suggest that compliance of observed movements with normal gravity was carried out by mapping them onto the observer’s motor system and by extracting their overall form from local motion of the moving light points. We propose that judgment on graviceptive information embedded in BM can be established based on motor resonance and visual familiarity mechanisms and not necessarily by accessing the internal model of gravitational motion stored in the vestibular cortex. PMID:28861024

  13. Modulation frequency as a cue for auditory speed perception.

    PubMed

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
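
    The reported cue (amplitude-modulation rate increasing with sliding speed) can be illustrated by synthesising a rattling-like noise whose AM frequency is proportional to a simulated speed. The sample rate, modulation depth, and speed-to-AM-frequency constant below are arbitrary choices for illustration, not the stimuli used in the study.

```python
import numpy as np

FS = 44100  # Hz (assumed)

def rattle(speed, duration=1.0, am_per_unit_speed=20.0, depth=0.9):
    """Noise carrier amplitude-modulated at a rate proportional to the simulated
    sliding speed, mimicking the AM-frequency speed cue of rattling sounds."""
    t = np.arange(int(duration * FS)) / FS
    carrier = np.random.randn(t.size)                 # noisy 'contact' carrier
    am_freq = am_per_unit_speed * speed               # Hz, grows with speed
    envelope = 1.0 - depth * 0.5 * (1.0 + np.cos(2 * np.pi * am_freq * t))
    return carrier * envelope

slow, fast = rattle(speed=1.0), rattle(speed=3.0)     # higher AM rate should sound faster
```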

  14. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

    In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block-matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity by relying on the UESA (Unimodal Error Surface Assumption), under which the matching error increases monotonically as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) have made use of the fact that, in real-world video sequences, the global minimum is usually centered at the position of zero motion. But these BMAs, especially for large motion, are easily trapped in local minima, resulting in poor matching accuracy. We therefore propose a new motion estimation algorithm that uses the spatial correlation among neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). Computer simulations show that the proposed algorithm has almost the same computational complexity as DS (Diamond Search) but yields higher PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motion, at half the computational load.
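
    The abstract combines two ingredients: the standard diamond search (a large diamond pattern iterated until the centre is the best point, then one small diamond refinement) and an adaptive search origin predicted from the motion vectors of spatially neighbouring blocks. The sketch below implements a plain diamond search that accepts such a predicted origin; the block size, test frames, and MAE helper are illustrative assumptions rather than the exact FADS procedure.

```python
import numpy as np

# Large and small diamond search patterns, as (dx, dy) offsets.
LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0), (-1, -1), (-1, 1), (1, -1), (1, 1)]
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def mae(cur, ref, bx, by, dx, dy, B):
    """Mean absolute error of block (bx, by) against the reference displaced by (dx, dy)."""
    y, x = by + dy, bx + dx
    if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
        return np.inf
    return np.mean(np.abs(cur[by:by+B, bx:bx+B] - ref[y:y+B, x:x+B]))

def diamond_search(cur, ref, bx, by, B=16, origin=(0, 0)):
    """Diamond search started from a predicted origin (e.g. the motion vector of
    the spatial neighbour with the lowest MAE, as in the adaptive scheme)."""
    cx, cy = origin
    while True:                                    # large-diamond steps
        errs = [mae(cur, ref, bx, by, cx + dx, cy + dy, B) for dx, dy in LDSP]
        best = int(np.argmin(errs))
        if best == 0:                              # centre is best: stop coarse search
            break
        cx, cy = cx + LDSP[best][0], cy + LDSP[best][1]
    errs = [mae(cur, ref, bx, by, cx + dx, cy + dy, B) for dx, dy in SDSP]
    best = int(np.argmin(errs))                    # small-diamond refinement
    return cx + SDSP[best][0], cy + SDSP[best][1]

# Example on a smooth synthetic frame shifted by (dx, dy) = (3, 2).
yy, xx = np.mgrid[0:64, 0:64]
cur = np.sin(xx / 5.0) + np.cos(yy / 7.0)
ref = np.roll(cur, (2, 3), axis=(0, 1))
print(diamond_search(cur, ref, bx=24, by=24))      # should recover (3, 2)
```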

  15. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees.

    PubMed

    Kastberger, Gerald; Maurer, Michael; Weihmann, Frank; Ruether, Matthias; Hoetzl, Thomas; Kranner, Ilse; Bischof, Horst

    2011-02-08

    The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study aspects of other mass phenomena that involve active and passive movements of individual agents in densely packed clusters.

  16. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    PubMed Central

    2011-01-01

    Background The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Results Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. Conclusions The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study aspects of other mass phenomena that involve active and passive movements of individual agents in densely packed clusters. PMID:21303539

  17. Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.

    PubMed

    Bissmeyer, Susan R S; Goldsworthy, Raymond L

    2017-09-01

    Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments. Results indicate that the algorithm also improved lateralization thresholds for the anechoic environment while not affecting lateralization thresholds for the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving binaural cues used to lateralize sound.
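
    The binaural-preservation principle the study relies on can be sketched independently of the Fennec algorithm itself: compute a single real-valued spectral gain from both channels and apply it identically to left and right, so that interaural time and level differences pass through unchanged. The Wiener-like gain rule, STFT settings, and fixed noise spectrum below are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import stft, istft

def binaural_preserving_gain(left, right, noise_psd, fs=16000, floor=0.1):
    """Sketch of binaural-cue-preserving noise reduction: derive ONE real-valued
    spectral gain from the two channels and apply it identically to left and
    right, so interaural time and level differences are left untouched."""
    _, _, L = stft(left, fs=fs, nperseg=512)
    _, _, R = stft(right, fs=fs, nperseg=512)
    mix_power = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)
    snr = np.maximum(mix_power / noise_psd[:, None] - 1.0, 0.0)   # crude SNR estimate
    gain = np.maximum(snr / (snr + 1.0), floor)                   # Wiener-like real gain
    _, left_out = istft(gain * L, fs=fs, nperseg=512)
    _, right_out = istft(gain * R, fs=fs, nperseg=512)
    return left_out, right_out

# Example with white-noise stand-ins; a real system would estimate noise_psd adaptively.
n = 16000
noise_psd = np.ones(512 // 2 + 1)
l_out, r_out = binaural_preserving_gain(np.random.randn(n), np.random.randn(n), noise_psd)
```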

  18. Local and global aspects of biological motion perception in children born at very low birth weight

    PubMed Central

    Williamson, K. E.; Jakobson, L. S.; Saunders, D. R.; Troje, N. F.

    2015-01-01

    Biological motion perception can be assessed using a variety of tasks. In the present study, 8- to 11-year-old children born prematurely at very low birth weight (<1500 g) and matched, full-term controls completed tasks that required the extraction of local motion cues, the ability to perceptually group these cues to extract information about body structure, and the ability to carry out higher order processes required for action recognition and person identification. Preterm children exhibited difficulties in all 4 aspects of biological motion perception. However, intercorrelations between test scores were weak in both full-term and preterm children—a finding that supports the view that these processes are relatively independent. Preterm children also displayed more autistic-like traits than full-term peers. In preterm (but not full-term) children, these traits were negatively correlated with performance in the task requiring structure-from-motion processing, r(30) = −.36, p < .05), but positively correlated with the ability to extract identity, r(30) = .45, p < .05). These findings extend previous reports of vulnerability in systems involved in processing dynamic cues in preterm children and suggest that a core deficit in social perception/cognition may contribute to the development of the social and behavioral difficulties even in members of this population who are functioning within the normal range intellectually. The results could inform the development of screening, diagnostic, and intervention tools. PMID:25103588

  19. Ambiguous Tilt and Translation Motion Cues in Astronauts After Space Flight (ZAG)

    NASA Astrophysics Data System (ADS)

    Clement, Guilles; Harm, Deborah; Rupert, Angus; Beaton, Kara; Wood, Scott

    2008-06-01

    Adaptive changes during space flight in how the brain integrates vestibular cues with visual, proprioceptive, and somatosensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions following transitions between gravity levels. This joint ESA-NASA pre- and post-flight experiment is designed to examine both the physiological basis and operational implications for disorientation and tilt-translation disturbances in astronauts following short-duration space flights. Specifically, this study addresses three questions: (1) What adaptive changes occur in eye movements and motion perception in response to different combinations of tilt and translation motion? (2) Do adaptive changes in tilt-translation responses impair ability to manually control vehicle orientation? (3) Can sensory substitution aids (e.g., tactile) mitigate risks associated with manual control of vehicle orientation?

  20. The moving minimum audible angle is smaller during self motion than during source motion

    PubMed Central

    Brimijoin, W. Owen; Akeroyd, Michael A.

    2014-01-01

    We are rarely perfectly still: our heads rotate in three axes and move in three dimensions, constantly varying the spectral and binaural cues at the ear drums. In spite of this motion, static sound sources in the world are typically perceived as stable objects. This argues that the auditory system—in a manner not unlike the vestibulo-ocular reflex—works to compensate for self motion and stabilize our sensory representation of the world. We tested a prediction arising from this postulate: that self motion should be processed more accurately than source motion. We used an infrared motion tracking system to measure head angle, and real-time interpolation of head related impulse responses to create “head-stabilized” signals that appeared to remain fixed in space as the head turned. After being presented with pairs of simultaneous signals consisting of a man and a woman speaking a snippet of speech, normal and hearing impaired listeners were asked to report whether the female voice was to the left or the right of the male voice. In this way we measured the moving minimum audible angle (MMAA). This measurement was made while listeners were asked to turn their heads back and forth between ± 15° and the signals were stabilized in space. After this “self-motion” condition we measured MMAA in a second “source-motion” condition when listeners remained still and the virtual locations of the signals were moved using the trajectories from the first condition. For both normal and hearing impaired listeners, we found that the MMAA for signals moving relative to the head was ~1–2° smaller when the movement was the result of self motion than when it was the result of source motion, even though the motion with respect to the head was identical. These results as well as the results of past experiments suggest that spatial processing involves an ongoing and highly accurate comparison of spatial acoustic cues with self-motion cues. PMID:25228856
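
    The head-stabilized rendering described above amounts to updating the rendering angle so that a world-fixed source keeps its world position as the head turns: the head-relative angle is the source's world angle minus the tracked head angle, and the head-related impulse response for that angle is applied. The sketch below uses a dummy delay-only HRIR bank and block-wise nearest-neighbour selection purely for illustration; it is not the interpolation scheme used in the experiment.

```python
import numpy as np

def head_relative_angle(source_world_deg, head_deg):
    """Angle of a world-fixed source relative to the current head orientation, wrapped to +/-180."""
    return (source_world_deg - head_deg + 180.0) % 360.0 - 180.0

def render_stabilized(mono, head_track_deg, hrir_bank, block=512):
    """Block-wise convolution with the HRIR nearest the head-relative angle, so a
    world-fixed source stays put while the head turns (toy sketch)."""
    angles = np.array(sorted(hrir_bank))
    out = np.zeros((2, len(mono)))
    for start in range(0, len(mono) - block, block):
        rel = head_relative_angle(0.0, head_track_deg[start])   # source fixed at 0 deg world
        key = angles[np.argmin(np.abs(angles - rel))]
        hl, hr = hrir_bank[key]
        seg = mono[start:start + block]
        out[0, start:start + block] += np.convolve(seg, hl)[:block]
        out[1, start:start + block] += np.convolve(seg, hr)[:block]
    return out

# Dummy HRIR bank: pure interaural delays standing in for measured responses.
fs = 48000
bank = {a: (np.eye(1, 64, max(0, int(a / 3)))[0],          # left-ear delay for sources on the right
            np.eye(1, 64, max(0, int(-a / 3)))[0])         # right-ear delay for sources on the left
        for a in range(-90, 91, 15)}
head = 15.0 * np.sin(2 * np.pi * 0.5 * np.arange(fs) / fs)  # +/-15 deg head oscillation
stereo = render_stabilized(np.random.randn(fs), head, bank)
```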

  1. The effect of simulator motion cues on initial training of airline pilots

    DOT National Transportation Integrated Search

    2005-08-15

    Two earlier studies conducted in the framework of the Federal Aviation Administration/Volpe Flight Simulator Human Factors Program examining the effect of simulator motion on recurrent training and evaluation of airline pilots have found that in the ...

  2. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1981-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
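
    The parallel-channel model described here, with visual and vestibular pathways summing in a complementary manner and vestibular cues dominating at higher frequencies, is in essence a complementary filter. The discrete-time sketch below assumes a first-order filter and an illustrative crossover frequency; it is not the authors' fitted model.

```python
import numpy as np

def complementary_self_rotation(visual_velocity, vestibular_velocity, fs, f_c=0.1):
    """Complementary-filter sketch of the parallel-channel model: low-pass the
    visual (optic-flow) velocity cue, high-pass the vestibular cue, and sum.
    f_c is the assumed crossover frequency in Hz."""
    alpha = 1.0 / (1.0 + 2.0 * np.pi * f_c / fs)   # one-pole smoothing factor
    est = np.zeros_like(visual_velocity)
    lp_vis, hp_vest, prev_vest = 0.0, 0.0, 0.0
    for k in range(len(visual_velocity)):
        lp_vis = alpha * lp_vis + (1.0 - alpha) * visual_velocity[k]      # low-pass visual
        hp_vest = alpha * (hp_vest + vestibular_velocity[k] - prev_vest)  # high-pass vestibular
        prev_vest = vestibular_velocity[k]
        est[k] = lp_vis + hp_vest
    return est

# Example: a velocity step seen by an idealized sustained visual channel and a
# 'washing-out' vestibular channel (first-order canal-like decay, time constant 5 s).
fs = 100
t = np.arange(0, 20, 1.0 / fs)
true_vel = 10.0 * (t > 5)                                   # deg/s step at t = 5 s
visual = true_vel
vestibular = true_vel * np.exp(-np.maximum(t - 5, 0) / 5)   # decays after the step
estimate = complementary_self_rotation(visual, vestibular, fs)
```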

  3. Comparison of the visual perception of a runway model in pilots and nonpilots during simulated night landing approaches.

    DOT National Transportation Integrated Search

    1978-03-01

    At night, reduced visual cues may promote illusions and a dangerous tendency for pilots to fly low during approaches to landing. Relative motion parallax (a difference in rate of apparent movement of objects in the visual field), a cue that can contr...

  4. Motion parallax in immersive cylindrical display systems

    NASA Astrophysics Data System (ADS)

    Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.

    2012-03-01

    Motion parallax is a crucial visual cue produced by translations of the observer for the perception of depth and self-motion. Therefore, tracking the observer's viewpoint has become indispensable in immersive virtual reality (VR) systems (cylindrical screens, CAVE, head-mounted displays) used, for example, in the automotive industry (style reviews, architecture design, ergonomics studies) or in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g. vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to an illusory perception of an unstable environment unless a non-unity scale factor is applied to the recorded head movements. In addition, cylindrical screens are usually used with static observers because of the image distortions that arise when rendering images for viewpoints away from the sweet spot. We developed a technique to compensate for these non-linear visual distortions in real time in an industrial VR setup based on a cylindrical screen projection system. Additionally, to evaluate how much discrepancy between visual and extra-retinal cues is tolerated without perceptual distortion, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced in this system. The influence of this artificial gain on the gait stability of free-standing participants was measured. Results indicate that gains below unity significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.

  5. Gender differences in the relationship between social communication and emotion recognition.

    PubMed

    Kothari, Radha; Skuse, David; Wakefield, Justin; Micali, Nadia

    2013-11-01

    To investigate the association between autistic traits and emotion recognition in a large community sample of children using facial and social motion cues, additionally stratifying by gender. A general population sample of 3,666 children from the Avon Longitudinal Study of Parents and Children (ALSPAC) were assessed on their ability to correctly recognize emotions using the faces subtest of the Diagnostic Analysis of Non-Verbal Accuracy, and the Emotional Triangles Task, a novel test assessing recognition of emotion from social motion cues. Children with autistic-like social communication difficulties, as assessed by the Social Communication Disorders Checklist, were compared with children without such difficulties. Autistic-like social communication difficulties were associated with poorer recognition of emotion from social motion cues in both genders, but were associated with poorer facial emotion recognition in boys only (odds ratio = 1.9, 95% CI = 1.4, 2.6, p = .0001). This finding must be considered in light of lower power to detect differences in girls. In this community sample of children, greater deficits in social communication skills are associated with poorer discrimination of emotions, implying there may be an underlying continuum of liability to the association between these characteristics. As a similar degree of association was observed in both genders on a novel test of social motion cues, the relatively good performance of girls on the more familiar task of facial emotion discrimination may be due to compensatory mechanisms. Our study might indicate the existence of a cognitive process by which girls with underlying autistic traits can compensate for their covert deficits in emotion recognition, although this would require further investigation. Copyright © 2013 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  6. Localization Performance of Multiple Vibrotactile Cues on Both Arms.

    PubMed

    Wang, Dangxiao; Peng, Cong; Afzal, Naqash; Li, Weiang; Wu, Dong; Zhang, Yuru

    2018-01-01

    To present information using vibrotactile stimuli in wearable devices, it is fundamental to understand human performance in localizing vibrotactile cues across the skin surface. In this paper, we studied the human ability to identify the locations of multiple vibrotactile cues activated simultaneously on both arms. Two haptic bands were mounted in proximity to the elbow and shoulder joints on each arm, and two vibrotactile motors were mounted on each band to provide vibration cues to the dorsal and palmar sides of the arm. The localization performance under four conditions was compared, with the number of simultaneously activated cues varying from one to four across conditions. Experimental results show that the rate of correct localization decreases linearly with the number of activated cues: it was 27.8 percent for three activated cues, and became even lower for four. An analysis of the correct rate and error patterns shows that the layout of vibrotactile cues can have significant effects on the localization performance for multiple vibrotactile cues. These findings might provide guidelines for using vibrotactile cues to guide the simultaneous motion of multiple joints on both arms.

  7. A system for learning statistical motion patterns.

    PubMed

    Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve

    2006-09-01

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
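
    The clustering step named above can be illustrated with a short, self-contained fuzzy K-means (fuzzy c-means) sketch; the paper's fast variant and the subsequent hierarchical trajectory clustering are not reproduced, and all names here are assumptions.

    import numpy as np

    def fuzzy_kmeans(points, k, m=2.0, n_iter=50, seed=0):
        """Minimal fuzzy K-means (fuzzy c-means) sketch for clustering 2-D
        foreground-pixel coordinates; the paper's fast variant adds further
        optimizations not reproduced here."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), k, replace=False)]
        for _ in range(n_iter):
            # Distances from every point to every centre (avoid divide-by-zero).
            d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9
            # Membership of each point in each cluster.
            u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
            # Weighted update of the cluster centres.
            w = u ** m
            centers = (w.T @ points) / w.sum(axis=0)[:, None]
        return centers, u

    pixels = np.vstack([np.random.randn(100, 2) + [0, 0],
                        np.random.randn(100, 2) + [10, 10]])
    centers, memberships = fuzzy_kmeans(pixels, k=2)
    print(centers)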

  8. Peripheral Visual Cues Contribute to the Perception of Object Movement During Self-Movement

    PubMed Central

    Rogers, Cassandra; Warren, Paul A.

    2017-01-01

    Safe movement through the environment requires us to monitor our surroundings for moving objects or people. However, identification of moving objects in the scene is complicated by self-movement, which adds motion across the retina. To identify world-relative object movement, the brain thus has to ‘compensate for’ or ‘parse out’ the components of retinal motion that are due to self-movement. We have previously demonstrated that retinal cues arising from central vision contribute to solving this problem. Here, we investigate the contribution of peripheral vision, commonly thought to provide strong cues to self-movement. Stationary participants viewed a large field of view display, with radial flow patterns presented in the periphery, and judged the trajectory of a centrally presented probe. Across two experiments, we demonstrate and quantify the contribution of peripheral optic flow to flow parsing during forward and backward movement. PMID:29201335

  9. Bionic Modeling of Knowledge-Based Guidance in Automated Underwater Vehicles.

    DTIC Science & Technology

    1987-06-24

    bugs and their foraging movements are heard by the sound of rustling leaves or rhythmic wing beats. ASYMMETRY OF EARS: The faces of owls have captured... sound source without moving. The barn owl has binaural and monaural cues as well as cues that operate in relative motion when either the target or the... owl moves. Table 1 lists the cues. Table 1. Sound Localization Parameters Used by the Barn Owl. BINAURAL PARAMETERS: 1. the

  10. Normal form from biological motion despite impaired ventral stream function.

    PubMed

    Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P

    2011-04-01

    We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. The First Time Ever I Saw Your Feet: Inversion Effect in Newborns' Sensitivity to Biological Motion

    ERIC Educational Resources Information Center

    Bardi, Lara; Regolin, Lucia; Simion, Francesca

    2014-01-01

    Inversion effect in biological motion perception has been recently attributed to an innate sensitivity of the visual system to the gravity-dependent dynamic of the motion. However, the specific cues that determine the inversion effect in naïve subjects were never investigated. In the present study, we have assessed the contribution of the local…

  12. Visual and motion cueing in helicopter simulation

    NASA Technical Reports Server (NTRS)

    Bray, R. S.

    1985-01-01

    Early experience in fixed-cockpit simulators with a limited field of view demonstrated the basic difficulties of simulating helicopter flight at the level of subjective fidelity required for confident evaluation of vehicle characteristics. More recent programs, utilizing large-amplitude cockpit motion and a multiwindow visual-simulation system, have received a much higher degree of pilot acceptance. However, none of these simulations has presented critical visual-flight tasks that pilots accept as the full equivalent of flight. In this paper, the visual cues presented in the simulator are compared with those of flight in an attempt to identify deficiencies that contribute significantly to these assessments. For the low-amplitude maneuvering tasks normally associated with the hover mode, the unique motion capabilities of the Vertical Motion Simulator (VMS) at Ames Research Center permit nearly a full representation of vehicle motion. Especially appreciated in these tasks are the vertical-acceleration responses to collective control. For larger-amplitude maneuvering, motion fidelity must suffer diminution through direct attenuation, through high-pass-filter (washout) processing of the computed cockpit accelerations, or both. Experiments were conducted in an attempt to determine the effects of these distortions on pilot performance of height-control tasks.
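
    The attenuation and high-pass washout mentioned above can be illustrated with a simple first-order discrete filter. This is only a sketch of the general idea under assumed constants; the actual VMS drive laws are more elaborate (typically second or third order).

    import numpy as np

    def washout_highpass(accel, dt, tau=2.0, gain=0.5):
        """Minimal washout sketch: a scale factor followed by a first-order
        high-pass filter, so sustained accelerations wash out while onset
        cues are preserved. Constants here are illustrative only."""
        alpha = tau / (tau + dt)          # discrete first-order high-pass coefficient
        out = np.zeros_like(accel, dtype=float)
        for i in range(1, len(accel)):
            out[i] = alpha * (out[i - 1] + gain * (accel[i] - accel[i - 1]))
        return out

    t = np.arange(0, 10, 0.01)
    step_accel = np.where(t > 1.0, 1.0, 0.0)   # sustained 1 m/s^2 acceleration command
    # Onset cue appears at t = 1 s, then decays toward zero (washes out).
    print(washout_highpass(step_accel, dt=0.01)[[0, 150, 500, 999]])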

  13. Sensorimotor Adaptations Following Exposure to Ambiguous Inertial Motion Cues

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Harm, D. L.; Reschke, M. F.; Rupert, A. H.; Clement, G. R.

    2009-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive accurate spatial orientation awareness. We hypothesize that multi-sensory integration will be adaptively optimized in altered gravity environments based on the dynamics of other sensory information available, with greater changes in otolith-mediated responses in the mid-frequency range where there is a crossover of tilt and translation responses. The primary goals of this ground-based research investigation are to explore physiological mechanisms and operational implications of tilt-translation disturbances during and following re-entry, and to evaluate a tactile prosthesis as a countermeasure for improving control of whole-body orientation.

  14. Kernelized correlation tracking with long-term motion cues

    NASA Astrophysics Data System (ADS)

    Lv, Yunqiu; Liu, Kai; Cheng, Fei

    2018-04-01

    Robust object tracking is a challenging task in computer vision due to interruptions such as deformation, fast motion and, especially, occlusion of the tracked object. When occlusions occur, the image data become unreliable and are insufficient for the tracker to depict the object of interest; therefore, most trackers are prone to fail under occlusion. In this paper, an occlusion judgement and handling method based on segmentation of the target is proposed. If the target is occluded, its speed and direction must differ from those of the objects occluding it, so motion features become particularly valuable. Considering the efficiency and robustness of Kernelized Correlation Filter (KCF) tracking, it is adopted as a pre-tracker to obtain a predicted position of the target. By analyzing long-term motion cues of the objects around this position, the tracked object is labelled, and occlusion can thus be detected easily. Experimental results suggest that our tracker achieves favorable performance and effectively handles occlusion and drifting problems.
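
    For illustration, a heavily simplified correlation-filter sketch shows the FFT-based train-and-detect cycle that KCF builds on (KCF additionally uses a kernel and circulant-matrix shortcuts). The occlusion handling and long-term motion cues of the paper are not shown, and all names are assumptions.

    import numpy as np

    def train_filter(patch, sigma=2.0, lam=1e-2):
        """Train a simple linear correlation filter (MOSSE-style) on one patch;
        KCF adds a kernel and circulant-matrix algebra on top of this idea."""
        h, w = patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Desired Gaussian-shaped response centred on the target.
        g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
        F, G = np.fft.fft2(patch), np.fft.fft2(g)
        return G * np.conj(F) / (F * np.conj(F) + lam)

    def detect(H, patch):
        """Correlation response map for a new patch; argmax gives the predicted shift."""
        response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
        return np.unravel_index(np.argmax(response), response.shape)

    target = np.random.rand(64, 64)
    H = train_filter(target)
    # Peak is shifted by (5, 3) from the centred (32, 32) response.
    print(detect(H, np.roll(target, (5, 3), axis=(0, 1))))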

  15. Grouping and trajectory storage in multiple object tracking: impairments due to common item motions.

    PubMed

    Suganuma, Mutsumi; Yokosawa, Kazuhiko

    2006-01-01

    In our natural viewing, we notice that objects change their locations across space and time. However, there has been relatively little consideration of the role of motion information in the construction and maintenance of object representations. We investigated this question in the context of the multiple object tracking (MOT) paradigm, wherein observers must keep track of target objects as they move randomly amid featurally identical distractors. In three experiments, we observed impairments in tracking ability when the motions of the target and distractor items shared particular properties. Specifically, we observed impairments when the target and distractor items were in a chasing relationship or moved in a uniform direction. Surprisingly, tracking ability was impaired by these manipulations even when observers failed to notice them. Our results suggest that differentiable trajectory information is an important factor in successful performance of MOT tasks. More generally, these results suggest that various types of common motion can serve as cues to form more global object representations even in the absence of other grouping cues.

  16. Neural processing of gravito-inertial cues in humans. II. Influence of the semicircular canals during eccentric rotation.

    PubMed

    Merfeld, D M; Zupan, L H; Gifford, C A

    2001-04-01

    All linear accelerometers, including the otolith organs, respond equivalently to gravity and linear acceleration. To investigate how the nervous system resolves this ambiguity, we measured perceived roll tilt and reflexive eye movements in humans in the dark using two different centrifugation motion paradigms (fixed radius and variable radius) combined with two different subject orientations (facing-motion and back-to-motion). In the fixed radius trials, the radius at which the subject was seated was held constant while the rotation speed was changed to yield changes in the centrifugal force. In variable radius trials, the rotation speed was held constant while the radius was varied to yield a centrifugal force that nearly duplicated that measured during the fixed radius condition. The total gravito-inertial force (GIF) measured by the otolith organs was nearly identical in the two paradigms; the primary difference was the presence (fixed radius) or absence (variable radius) of yaw rotational cues. We found that the yaw rotational cues had a large statistically significant effect on the time course of perceived tilt, demonstrating that yaw rotational cues contribute substantially to the neural processing of roll tilt. We also found that the orientation of the subject relative to the centripetal acceleration had a dramatic influence on the eye movements measured during fixed radius centrifugation. Specifically, the horizontal vestibuloocular reflex (VOR) measured in our human subjects was always greater when the subject faced the direction of motion than when the subjects had their backs toward the motion during fixed radius rotation. This difference was consistent with the presence of a horizontal translational VOR response induced by the centripetal acceleration. Most importantly, by comparing the perceptual tilt responses to the eye movement responses, we found that the translational VOR component decayed as the subjective tilt indication aligned with the tilt of the GIF. This was true for both the fixed radius and variable radius conditions even though the time course of the responses was significantly different for these two conditions. These findings are consistent with the hypothesis that the nervous system resolves the ambiguous measurements of GIF into neural estimates of gravity and linear acceleration. More generally, these findings are consistent with the hypothesis that the nervous system uses internal models to process and interpret sensory motor cues.

  17. SU-E-J-150: Four-Dimensional Cone-Beam CT Algorithm by Extraction of Physical and Motion Parameter of Mobile Targets Retrospective to Image Reconstruction with Motion Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Ahmad, S; Alsbou, N

    Purpose: To develop a 4D cone-beam CT (CBCT) algorithm based on motion modeling that extracts the actual length, CT-number level and motion amplitude of a mobile target retrospective to image reconstruction. Methods: The algorithm used three measurable parameters, the apparent length and the blurred CT-number distribution of a mobile target obtained from CBCT images, to determine the actual length, the CT-number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion, using eight amplitudes ranging from 0 to 20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospective to image reconstruction: the length of the target, the CT-number level, and the speed or motion amplitude of the mobile target. The motion algorithm solved for the three unknown parameters using the measurable apparent length, CT-number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agreed with the measured apparent lengths, which depended on the actual target length and motion amplitude. The gradient of the CT-number distribution of the mobile target depends on the stationary CT-number level, the actual target length and the motion amplitude. Motion frequency and phase did not affect the elongation and CT-number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters (actual length, CT-number level, and motion amplitude or speed of mobile targets) directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking and sorting of the images into different breathing phases, which has potential applications in diagnostic CT imaging and radiotherapy.

  18. Response of hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo to visual and chemical cues arising from prey.

    PubMed

    Chiszar, David; Krauss, Susan; Shipley, Bryon; Trout, Tim; Smith, Hobart M

    2009-01-01

    Five hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo were observed in two experiments that studied the effects of visual and chemical cues arising from prey. Rate of tongue flicking was recorded in Experiment 1, and amount of time the lizards spent interacting with stimuli was recorded in Experiment 2. Our hypothesis was that young V. komodoensis would be more dependent upon vision than chemoreception, especially when dealing with live, moving, prey. Although visual cues, including prey motion, had a significant effect, chemical cues had a far stronger effect. Implications of this falsification of our initial hypothesis are discussed.

  19. SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhou, S; Williams, C; Ionascu, D

    2016-06-15

    Purpose: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. Methods: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The derived motion models were compared using patient 4DCT scans. Results: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors; 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space; and 3) the Euclidean Model Norm (EMN), which is calculated by summing in quadrature the dot products of an eigenvector with the first three eigenvectors from the reference motion model. EMN measures how well an eigenvector can be reconstructed using another motion model derived with a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm have smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. Conclusion: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA.
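
    The model-building and comparison steps described above can be sketched in a few lines: PCA (via SVD) on flattened DVFs yields the eigenvectors, and the EMN of an eigenvector against a reference model is the quadrature sum of its dot products with the reference's first three eigenvectors. The toy data and names below are assumptions, not the authors' code.

    import numpy as np

    def motion_model(dvfs, n_modes=3):
        """Build a PCA motion model from a set of displacement vector fields
        (each DVF flattened to one row); a sketch of the workflow described
        above, not the authors' exact implementation."""
        X = np.asarray(dvfs, dtype=float)
        X = X - X.mean(axis=0)                   # remove the mean displacement
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        return vt[:n_modes]                      # rows = unit-norm eigenvectors

    def emn(eigvec, reference_model):
        """Euclidean Model Norm: quadrature sum of dot products with the first
        three eigenvectors of a reference model (per the abstract's definition)."""
        return np.sqrt(sum(np.dot(eigvec, ref) ** 2 for ref in reference_model[:3]))

    # Toy example: two models built from slightly different DVF sets.
    dvfs_a = np.random.rand(10, 300)
    dvfs_b = dvfs_a + 0.05 * np.random.rand(10, 300)
    model_a, model_b = motion_model(dvfs_a), motion_model(dvfs_b)
    print([round(emn(v, model_a), 3) for v in model_b])  # values near 1 mean close agreement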

  20. The influence of motion quality on responses towards video playback stimuli.

    PubMed

    Ware, Emma; Saunders, Daniel R; Troje, Nikolaus F

    2015-05-11

    Visual motion, a critical cue in communication, can be manipulated and studied using video playback methods. A primary concern for the video playback researcher is the degree to which objects presented on video appear natural to the non-human subject. Here we argue that the quality of motion cues on video, as determined by the video's image presentation rate (IPR), is of particular importance in determining a subject's social response behaviour. We present an experiment testing the effect of variations in IPR on pigeon (Columba livia) response behaviour towards video images of courting opposite-sex partners. Male and female pigeons were presented with three video playback stimuli, each containing a different social partner. Each stimulus was then modified to appear at one of three IPRs: 15, 30 or 60 progressive (p) frames per second. The results showed that courtship behaviour became significantly longer in duration as IPR increased. This finding implies that the IPR significantly affects the perceived quality of motion cues impacting social behaviour. In males we found that the duration of courtship also depended on the social partner viewed and that this effect interacted with the effects of IPR on behaviour. Specifically, the effect of social partner reached statistical significance only when the stimuli were displayed at 60 p, demonstrating the potential for erroneous results when insufficient IPRs are used. In addition to demonstrating the importance of IPR in video playback experiments, these findings help to highlight and describe the role of visual motion processing in communication behaviour. © 2015. Published by The Company of Biologists Ltd.

  1. Nocturnal insects use optic flow for flight control

    PubMed Central

    Baird, Emily; Kreiss, Eva; Wcislo, William; Warrant, Eric; Dacke, Marie

    2011-01-01

    To avoid collisions when navigating through cluttered environments, flying insects must control their flight so that their sensory systems have time to detect obstacles and avoid them. To do this, day-active insects rely primarily on the pattern of apparent motion generated on the retina during flight (optic flow). However, many flying insects are active at night, when obtaining reliable visual information for flight control presents much more of a challenge. To assess whether nocturnal flying insects also rely on optic flow cues to control flight in dim light, we recorded flights of the nocturnal neotropical sweat bee, Megalopta genalis, flying along an experimental tunnel when: (i) the visual texture on each wall generated strong horizontal (front-to-back) optic flow cues, (ii) the texture on only one wall generated these cues, and (iii) horizontal optic flow cues were removed from both walls. We find that Megalopta increase their groundspeed when horizontal motion cues in the tunnel are reduced (conditions (ii) and (iii)). However, differences in the amount of horizontal optic flow on each wall of the tunnel (condition (ii)) do not affect the centred position of the bee within the flight tunnel. To better understand the behavioural response of Megalopta, we repeated the experiments on day-active bumble-bees (Bombus terrestris). Overall, our findings demonstrate that despite the limitations imposed by dim light, Megalopta—like their day-active relatives—rely heavily on vision to control flight, but that they use visual cues in a different manner from diurnal insects. PMID:21307047

  2. Attention to multiple locations is limited by spatial working memory capacity.

    PubMed

    Close, Alex; Sapir, Ayelet; Burnett, Katherine; d'Avossa, Giovanni

    2014-08-21

    What limits the ability to attend several locations simultaneously? There are two possibilities: Either attention cannot be divided without incurring a cost, or spatial memory is limited and observers forget which locations to monitor. We compared motion discrimination when attention was directed to one or multiple locations by briefly presented central cues. The cues were matched for the amount of spatial information they provided. Several random dot kinematograms (RDKs) followed the spatial cues; one of them contained task-relevant, coherent motion. When four RDKs were presented, discrimination accuracy was identical when one and two locations were indicated by equally informative cues. However, when six RDKs were presented, discrimination accuracy was higher following one rather than multiple location cues. We examined whether memory of the cued locations was diminished under these conditions. Recall of the cued locations was tested when participants attended the cued locations and when they did not attend the cued locations. Recall was inaccurate only when the cued locations were attended. Finally, visually marking the cued locations, following one and multiple location cues, equalized discrimination performance, suggesting that participants could attend multiple locations when they did not have to remember which ones to attend. We conclude that endogenously dividing attention between multiple locations is limited by inaccurate recall of the attended locations and that attention poses separate demands on the same central processes used to remember spatial information, even when the locations attended and those held in memory are the same. © 2014 ARVO.

  3. A Motion Detection Algorithm Using Local Phase Information

    PubMed Central

    Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin

    2016-01-01

    Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882
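
    A rough sketch of the first building block (the temporal change of local phase): local phase is taken from a windowed block FFT and differenced across frames, so blocks containing motion show large phase change. The Volterra-kernel formulation, the Radon transform and the thresholding of the full detector are omitted, and the block size and frequency choice are assumptions.

    import numpy as np

    def local_phase(frame, block=16):
        """Local phase of non-overlapping blocks via a windowed 2-D FFT; a
        simplified stand-in for the paper's local processing."""
        h, w = frame.shape
        win = np.hanning(block)[:, None] * np.hanning(block)[None, :]
        phases = []
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                F = np.fft.fft2(frame[i:i + block, j:j + block] * win)
                phases.append(np.angle(F[1, 1]))   # phase of one low spatial frequency
        return np.array(phases)

    def phase_change(frame_t, frame_t1):
        """Temporal change of local phase; large values flag blocks containing motion."""
        d = local_phase(frame_t1) - local_phase(frame_t)
        return np.abs(np.angle(np.exp(1j * d)))    # wrap differences to [-pi, pi]

    # Moving bright square against a static background.
    a = np.zeros((64, 64)); a[10:20, 10:20] = 1.0
    b = np.zeros((64, 64)); b[10:20, 14:24] = 1.0
    print(phase_change(a, b).max())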

  4. M.I.T./Canadian vestibular experiments on the Spacelab-1 mission. IV - Space motion sickness: Symptoms, stimuli, and predictability

    NASA Technical Reports Server (NTRS)

    Oman, C. M.; Lichtenberg, B. K.; Mccoy, R. K.; Money, K. E.

    1986-01-01

    Three cases of motion sickness that occurred on Spacelab-1 are described. The relation between head movements and symptom intensity is examined. The effects of visual, tactile, and proprioceptive orientation cues on motion sickness are studied. The effectiveness of the drugs used is evaluated and it is observed that the drugs reduce the frequency of vomiting and overall discomfort. Preflight and postflight motion sickness susceptibility data are presented.

  5. Role of orientation reference selection in motion sickness

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Black, F. Owen

    1992-01-01

    The overall objective of this proposal is to understand the relationship between human orientation control and motion sickness susceptibility. Three areas related to orientation control will be investigated. These three areas are (1) reflexes associated with the control of eye movements and posture, (2) the perception of body rotation and position with respect to gravity, and (3) the strategies used to resolve sensory conflict situations which arise when different sensory systems provide orientation cues which are not consistent with one another or with previous experience. Of particular interest is the possibility that a subject may be able to ignore an inaccurate sensory modality in favor of one or more other sensory modalities which do provide accurate orientation reference information. We refer to this process as sensory selection. This proposal will attempt to quantify subjects' sensory selection abilities and determine if this ability confers some immunity to the development of motion sickness symptoms. Measurements of reflexes, motion perception, sensory selection abilities, and motion sickness susceptibility will concentrate on pitch and roll motions since these seem most relevant to the space motion sickness problem. Vestibulo-ocular (VOR) and oculomotor reflexes will be measured using a unique two-axis rotation device developed in our laboratory over the last seven years. Posture control reflexes will be measured using a movable posture platform capable of independently altering proprioceptive and visual orientation cues. Motion perception will be quantified using closed loop feedback technique developed by Zacharias and Young (Exp Brain Res, 1981). This technique requires a subject to null out motions induced by the experimenter while being exposed to various confounding sensory orientation cues. A subject's sensory selection abilities will be measured by the magnitude and timing of his reactions to changes in sensory environments. Motion sickness susceptibility will be measured by the time required to induce characteristic changes in the pattern of electrogastrogram recordings while exposed to various sensory environments during posture and motion perception tests. The results of this work are relevant to NASA's interest in understanding the etiology of space motion sickness. If any of the reflex, perceptual, or sensory selection abilities of subjects are found to correlate with motion sickness susceptibility, this work may be an important step in suggesting a method of predicting motion sickness susceptibility. If sensory selection can provide a means to avoid sensory conflict, then further work may lead to training programs which could enhance a subject's sensory selection ability and therefore minimize motion sickness susceptibility.

  6. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson's Disease.

    PubMed

    Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J A

    2017-01-01

    External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson's disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end-of-dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in number of FOG episodes and percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested to improve the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head and to reduce their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability.

  7. SU-F-J-76: Evaluation of the Performance of Different Deformable Image Registration Algorithms in Helical, Axial and Cone-Beam CT Images of a Mobile Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaskowiak, J; Ahmad, S; Ali, I

    Purpose: To investigate quantitatively the performance of different deformable image registration (DIR) algorithms with helical (HCT), axial (ACT) and cone-beam CT (CBCT) imaging by evaluating the variations in the CT numbers and lengths of targets moving with controlled motion patterns. Methods: Four DIR algorithms, including Demons, fast Demons, Horn-Schunck and Lucas-Kanade from the DIRART software, are used to register CT images of a mobile phantom. The mobile phantom is scanned with different imaging techniques that include helical, axial and cone-beam CT. The phantom includes three targets of different lengths that are made from water-equivalent material and inserted in low-density foam, which is moved with adjustable motion amplitudes and frequencies. Results: Most of the DIR algorithms are able to reproduce the lengths of the stationary targets; however, they do not reproduce the CT-number values in CBCT. The image artifacts induced by motion are more regular in CBCT imaging, where the mobile-target elongation increases linearly with motion amplitude. In ACT and HCT, the motion artifacts are irregular, and some mobile targets are elongated or shrunk depending on the motion phase during imaging. The DIR algorithms are successful in deforming the images of the mobile targets to the images of the stationary targets, reproducing the CT-number values and length of the target, for motion amplitudes < 20 mm. Similarly in ACT, all DIR algorithms reproduced the actual CT number and length of the stationary targets for motion amplitudes < 15 mm. As stronger motion artifacts are induced in HCT and ACT, the DIR algorithms fail to reproduce the CT values and shape of the stationary targets, and the fast-Demons algorithm has the worst performance. Conclusion: Most of the DIR algorithms reproduce the CT-number values and lengths of the stationary targets in HCT and ACT images with motion artifacts induced by small motion amplitudes. As motion amplitudes increase, the DIR algorithms fail to deform the mobile-target images to the stationary images in HCT and ACT. In CBCT, the DIR algorithms are successful in reproducing the length and shape of the stationary targets; however, they fail to reproduce the accurate CT-number level.

  8. Algorithm for constructing the programmed motion of a bounding vehicle for the flight phase

    NASA Technical Reports Server (NTRS)

    Lapshin, V. V.

    1979-01-01

    The construction of the programmed motion of a multileg bounding vehicle in the flight phase was studied. An algorithm is given for solving the boundary value problem for constructing this programmed motion. If the motion is shown to be symmetrical, a simplified form of the algorithm can be applied. A method is proposed for non-impact motion of the legs during lift-off of the vehicle and for softness at touchdown. Tables are utilized to construct this programmed motion over a broad set of standard motion conditions.
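
    The flight-phase boundary value problem can be illustrated for the vehicle's centre of mass, which moves ballistically between lift-off and touchdown; a sketch under that assumption follows (the paper's algorithm also plans leg motions for non-impact lift-off and soft touchdown, which is not shown).

    # Given lift-off position, touchdown position, and flight time, ballistic
    # motion fixes the required lift-off velocity of the centre of mass.
    g = 9.81  # m/s^2

    def liftoff_velocity(x0, z0, x1, z1, T):
        """Velocity at lift-off so the CoM travels from (x0, z0) to (x1, z1) in time T."""
        vx = (x1 - x0) / T                      # horizontal motion is uniform
        vz = (z1 - z0) / T + 0.5 * g * T        # vertical motion is uniformly decelerated
        return vx, vz

    def com_trajectory(x0, z0, vx, vz, T, steps=5):
        """Sample the resulting ballistic trajectory of the centre of mass."""
        return [(x0 + vx * t, z0 + vz * t - 0.5 * g * t * t)
                for t in (T * k / steps for k in range(steps + 1))]

    vx, vz = liftoff_velocity(0.0, 1.0, 2.0, 1.0, T=0.6)
    print(com_trajectory(0.0, 1.0, vx, vz, T=0.6))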

  9. Simulator Motion as a Factor in Flight Simulator Training Effectiveness.

    ERIC Educational Resources Information Center

    Jacobs, Robert S.

    The document reviews the literature concerning the training effectiveness of flight simulators and describes an experiment in progress at the University of Illinois' Institute of Aviation which is an initial attempt to develop systematically the relationship between motion cue fidelity and resultant training effectiveness. The literature review…

  10. Biological Motion Cues Trigger Reflexive Attentional Orienting

    ERIC Educational Resources Information Center

    Shi, Jinfu; Weng, Xuchu; He, Sheng; Jiang, Yi

    2010-01-01

    The human visual system is extremely sensitive to biological signals around us. In the current study, we demonstrate that biological motion walking direction can induce robust reflexive attentional orienting. Following a brief presentation of a central point-light walker walking towards either the left or right direction, observers' performance…

  11. Simulation of a synergistic six-post motion system on the flight simulator for advanced aircraft at NASA-Ames

    NASA Technical Reports Server (NTRS)

    Bose, S. C.; Parris, B. L.

    1977-01-01

    Motion system drive philosophy and corresponding real-time software have been developed for the purpose of simulating the characteristics of a typical synergistic Six-Post Motion System (SPMS) on the Flight Simulator for Advanced Aircraft (FSAA) at NASA-Ames which is a non-synergistic motion system. This paper gives a brief description of these two types of motion systems and the general methods of producing motion cues of the FSAA. An actuator extension transformation which allows the simulation of a typical SPMS by appropriate drive washout and variable position limiting is described.
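
    The actuator extension transformation amounts to the inverse kinematics of a six-legged (Stewart-type) platform: each commanded leg length is the distance between a base attachment point and the corresponding platform attachment point after the platform pose is applied. A sketch with illustrative, assumed attachment geometry follows; it is not the FSAA or SPMS drive software.

    import numpy as np

    def actuator_lengths(base_pts, platform_pts, translation, roll, pitch, yaw):
        """Leg lengths of a six-post platform for a commanded pose."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        R = Rz @ Ry @ Rx
        # Rotate and translate the platform attachment points, then measure leg lengths.
        moved = (R @ platform_pts.T).T + translation
        return np.linalg.norm(moved - base_pts, axis=1)

    # Hypothetical hexagonal attachment layout (radii and heights are made up).
    angles = np.radians([0, 60, 120, 180, 240, 300])
    base = np.c_[2.0 * np.cos(angles), 2.0 * np.sin(angles), np.zeros(6)]
    plat = np.c_[1.0 * np.cos(angles), 1.0 * np.sin(angles), np.zeros(6)]
    print(actuator_lengths(base, plat, np.array([0.0, 0.0, 2.5]), 0.0, 0.05, 0.0))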

  12. Mechanisms of time-based figure-ground segregation.

    PubMed

    Kandil, Farid I; Fahle, Manfred

    2003-11-01

    Figure-ground segregation can rely on purely temporal information, that is, on short temporal delays between positional changes of elements in figure and ground (Kandil, F.I. & Fahle, M. (2001) Eur. J. Neurosci., 13, 2004-2008). Here, we investigate the underlying mechanisms by measuring temporal segregation thresholds for various kinds of motion cues. Segregation can rely on monocular first-order motion (based on luminance modulation) and second-order motion cues (contrast modulation) with a high temporal resolution of approximately 20 ms. The mechanism can also use isoluminant motion with a reduced temporal resolution of 60 ms. Figure-ground segregation can be achieved even at presentation frequencies too high for human subjects to inspect successive frames individually. In contrast, when stimuli are presented dichoptically, i.e. separately to both eyes, subjects are unable to perceive any segregation, irrespective of temporal frequency. We propose that segregation in these displays is detected by a mechanism consisting of at least two stages. On the first level, standard motion or flicker detectors signal local positional changes (flips). On the second level, a segregation mechanism combines the local activities of the low-level detectors with high temporal precision. Our findings suggest that the segregation mechanism can rely on monocular detectors but not on binocular mechanisms. Moreover, the results oppose the idea that segregation in these displays is achieved by motion detectors of a higher order (motion-from-motion), but favour mechanisms sensitive to short temporal delays even without activation of higher-order motion detectors.

  13. Ambiguous Tilt and Translation Motion Cues after Space Flight and Otolith Assessment during Post-Flight Re-Adaptation

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Clarke, A. H.; Harm, D. L.; Rupert, A. H.; Clement, G. R.

    2009-01-01

    Adaptive changes during space flight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation and perceptual illusions following G-transitions. These studies are designed to examine both the physiological basis and the operational implications of disorientation and tilt-translation disturbances following short-duration space flights.

  14. Apollo Docking with the LEM Target

    NASA Image and Video Library

    2012-09-07

    Originally the Rendezvous was used by the astronauts preparing for Gemini missions. The Rendezvous Docking Simulator was then modified and used to develop docking techniques for the Apollo program. This picture shows a later configuration of the Apollo docking with the LEM target. A.W. Vogeley described the simulator as follows: The Rendezvous Docking Simulator and also the Lunar Landing Research Facility are both rather large moving-base simulators. It should be noted, however, that neither was built primarily because of its motion characteristics. The main reason they were built was to provide a realistic visual scene. A secondary reason was that they would provide correct angular motion cues (important in control of vehicle short-period motions) even though the linear acceleration cues would be incorrect. -- Published in A.W. Vogeley, Piloted Space-Flight Simulation at Langley Research Center, Paper presented at the American Society of Mechanical Engineers, 1966 Winter Meeting, New York, NY, November 27 - December 1, 1966.

  15. Diagnostic Performance of a Novel Coronary CT Angiography Algorithm: Prospective Multicenter Validation of an Intracycle CT Motion Correction Algorithm for Diagnostic Accuracy.

    PubMed

    Andreini, Daniele; Lin, Fay Y; Rizvi, Asim; Cho, Iksung; Heo, Ran; Pontone, Gianluca; Bartorelli, Antonio L; Mushtaq, Saima; Villines, Todd C; Carrascosa, Patricia; Choi, Byoung Wook; Bloom, Stephen; Wei, Han; Xing, Yan; Gebow, Dan; Gransar, Heidi; Chang, Hyuk-Jae; Leipsic, Jonathon; Min, James K

    2018-06-01

    Motion artifact can reduce the diagnostic accuracy of coronary CT angiography (CCTA) for coronary artery disease (CAD). The purpose of this study was to compare the diagnostic performance of an algorithm dedicated to correcting coronary motion artifact with the performance of standard reconstruction methods in a prospective international multicenter study. Patients referred for clinically indicated invasive coronary angiography (ICA) for suspected CAD prospectively underwent an investigational CCTA examination free from heart rate-lowering medications before they underwent ICA. Blinded core laboratory interpretations of motion-corrected and standard reconstructions for obstructive CAD (≥ 50% stenosis) were compared with ICA findings. Segments unevaluable owing to artifact were considered obstructive. The primary endpoint was per-subject diagnostic accuracy of the intracycle motion correction algorithm for obstructive CAD found at ICA. Among 230 patients who underwent CCTA with the motion correction algorithm and standard reconstruction, 92 (40.0%) had obstructive CAD on the basis of ICA findings. At a mean heart rate of 68.0 ± 11.7 beats/min, the motion correction algorithm reduced the number of nondiagnostic scans compared with standard reconstruction (20.4% vs 34.8%; p < 0.001). Diagnostic accuracy for obstructive CAD with the motion correction algorithm (62%; 95% CI, 56-68%) was not significantly different from that of standard reconstruction on a per-subject basis (59%; 95% CI, 53-66%; p = 0.28) but was superior on a per-vessel basis: 77% (95% CI, 74-80%) versus 72% (95% CI, 69-75%) (p = 0.02). The motion correction algorithm was superior in subgroups of patients with severely obstructive (≥ 70%) stenosis, heart rate ≥ 70 beats/min, and vessels in the atrioventricular groove. The motion correction algorithm studied reduces artifacts and improves diagnostic performance for obstructive CAD on a per-vessel basis and in selected subgroups on a per-subject basis.

  16. Reanimating patients: cardio-respiratory CT and MR motion phantoms based on clinical CT patient data

    NASA Astrophysics Data System (ADS)

    Mayer, Johannes; Sauppe, Sebastian; Rank, Christopher M.; Sawall, Stefan; Kachelrieß, Marc

    2017-03-01

    To date, several algorithms have been developed that reduce or avoid artifacts caused by cardiac and respiratory motion in computed tomography (CT). The motion information is converted into so-called motion vector fields (MVFs) and used for motion compensation (MoCo) during image reconstruction. To analyze these algorithms quantitatively, there is a need for ground-truth patient data displaying realistic motion. We developed a method to generate a digital ground truth displaying realistic cardiac and respiratory motion that can be used as a tool to assess MoCo algorithms. Using available MoCo methods, we measured the motion in CT scans with high spatial and temporal resolution and transferred the motion information onto patient data with a different anatomy or imaging modality, thereby virtually reanimating the patient. In addition to these images, the ground-truth motion information in the form of MVFs is available and can be used to benchmark the MVF estimation of MoCo algorithms. We applied the method to generate 20 CT volumes displaying detailed cardiac motion that can be used for cone-beam CT (CBCT) simulations and a set of 8 MR volumes displaying respiratory motion. Our method is able to reanimate patient data virtually; in combination with the MVFs it serves as a digital ground truth and provides an improved framework to assess MoCo algorithms.
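
    The core "reanimation" step, applying a known motion vector field to other image data, can be sketched with a simple pull-back warp (using scipy); the voxel layout and MVF sign convention here are assumptions, and the full MoCo pipeline is not reproduced.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp_volume(volume, mvf):
        """Apply a motion vector field to a volume by pulling each output voxel
        from its displaced source location (trilinear interpolation). The array
        layout (z, y, x) and the MVF convention are illustrative assumptions."""
        zz, yy, xx = np.meshgrid(*map(np.arange, volume.shape), indexing='ij')
        coords = np.array([zz + mvf[..., 0], yy + mvf[..., 1], xx + mvf[..., 2]])
        return map_coordinates(volume, coords, order=1, mode='nearest')

    vol = np.zeros((20, 20, 20)); vol[8:12, 8:12, 8:12] = 1.0   # a small cube
    mvf = np.zeros(vol.shape + (3,)); mvf[..., 2] = 3.0          # pull sources from 3 voxels ahead in x
    warped = warp_volume(vol, mvf)
    print(vol.sum(), warped.sum())   # mass is preserved; the cube appears shifted by -3 in x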

  17. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.

    PubMed

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-09-07

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, correlation filter methods perform favorably in robustness, accuracy and speed. However, they also have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles the estimation of scale variation in many scenarios. We theoretically analyze the problem that motion poses for correlation filters and use the point sharpness function of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation without much additional computation. Our algorithm preserves the properties of KCF in addition to the ability to handle these special scenarios. Finally, extensive experimental results on the VOT benchmark datasets show that our algorithm performs competitively with the top-ranked trackers.
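
    The point sharpness function is not specified in the abstract; a common stand-in is the mean gradient magnitude of the target patch, which drops when the patch is blurred. A hedged sketch of that idea follows, with made-up test data.

    import numpy as np

    def patch_sharpness(patch):
        """Simple sharpness measure (mean gradient magnitude); a blurred patch
        scores lower, which can be used to flag motion blur or fast motion."""
        gy, gx = np.gradient(patch.astype(float))
        return np.mean(np.hypot(gx, gy))

    def box_blur(patch, k=5):
        """Crude separable box blur to mimic motion blur for the comparison below."""
        kernel = np.ones(k) / k
        blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, patch)
        return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, blurred)

    sharp = np.kron(np.random.randint(0, 2, (8, 8)), np.ones((8, 8)))  # blocky test image
    print(patch_sharpness(sharp), patch_sharpness(box_blur(sharp)))     # sharpness drops after blurring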

  18. An effective attentional set for a specific colour does not prevent capture by infrequently presented motion distractors.

    PubMed

    Retell, James D; Becker, Stefanie I; Remington, Roger W

    2016-01-01

    An organism's survival depends on the ability to rapidly orient attention to unanticipated events in the world. Yet, the conditions needed to elicit such involuntary capture remain in doubt. Especially puzzling are spatial cueing experiments, which have consistently shown that involuntary shifts of attention to highly salient distractors are not determined by stimulus properties, but instead are contingent on attentional control settings induced by task demands. Do we always need to be set for an event to be captured by it, or is there a class of events that draw attention involuntarily even when unconnected to task goals? Recent results suggest that a task-irrelevant event will capture attention on first presentation, suggesting that salient stimuli that violate contextual expectations might automatically capture attention. Here, we investigated the role of contextual expectation by examining whether an irrelevant motion cue that was presented only rarely (∼3-6% of trials) would capture attention when observers had an active set for a specific target colour. The motion cue had no effect when presented frequently, but when rare produced a pattern of interference consistent with attentional capture. The critical dependence on the frequency with which the irrelevant motion singleton was presented is consistent with early theories of involuntary orienting to novel stimuli. We suggest that attention will be captured by salient stimuli that violate expectations, whereas top-down goals appear to modulate capture by stimuli that broadly conform to contextual expectations.

  19. Quantifying the degree of persistence in random amoeboid motion based on the Hurst exponent of fractional Brownian motion.

    PubMed

    Makarava, Natallia; Menz, Stephan; Theves, Matthias; Huisinga, Wilhelm; Beta, Carsten; Holschneider, Matthias

    2014-10-01

    Amoebae explore their environment in a random way, unless external cues such as nutrients bias their motion. Even in the absence of cues, however, experimental cell tracks show some degree of persistence. In this paper, we analyzed individual cell tracks in the framework of a linear mixed-effects model, where each track is modeled as a fractional Brownian motion, i.e., a Gaussian process exhibiting a long-term correlation structure superposed on a linear trend. The degree of persistence was quantified by the Hurst exponent of fractional Brownian motion. Our analysis of experimental cell tracks of the amoeba Dictyostelium discoideum showed persistent movement for the majority of tracks. Employing a sliding-window approach, we estimated the variation of the Hurst exponent over time, which allowed us to identify points in time where the correlation structure was distorted ("outliers"). Coarse graining of track data via down-sampling allowed us to identify the dependence of persistence on the spatial scale. While one would expect the (mode of the) Hurst exponent to be constant on different temporal scales due to the self-similarity property of fractional Brownian motion, we observed a trend towards stronger persistence for the down-sampled cell tracks, indicating stronger persistence on larger time scales.
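
    For a single track, the Hurst exponent can be estimated from the scaling of the mean squared increments, since for fractional Brownian motion E[(X(t+tau) - X(t))^2] is proportional to tau^(2H). The sketch below uses this simple moment-based estimate rather than the paper's mixed-effects model, and the test data are synthetic.

    import numpy as np

    def hurst_exponent(track, max_lag=20):
        """Estimate H of a 1-D track from the log-log slope of the mean squared
        increments versus lag (slope = 2H)."""
        lags = np.arange(1, max_lag + 1)
        msd = [np.mean((track[lag:] - track[:-lag]) ** 2) for lag in lags]
        slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
        return slope / 2.0

    rng = np.random.default_rng(1)
    brownian = np.cumsum(rng.standard_normal(5000))      # ordinary Brownian motion, H = 0.5
    print(round(hurst_exponent(brownian), 2))            # expected to be close to 0.5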

  20. Sociability modifies dogs' sensitivity to biological motion of different social relevance.

    PubMed

    Ishikawa, Yuko; Mills, Daniel; Willmott, Alexander; Mullineaux, David; Guo, Kun

    2018-03-01

    Preferential attention to living creatures is believed to be an intrinsic capacity of the visual system of several species; the perception of biological motion is often studied in this context and, in humans, correlates with social cognitive performance. Although domestic dogs are exceptionally attentive to human social cues, it is unknown whether their sociability is associated with sensitivity to conspecific and heterospecific biological motion cues of different social relevance. We recorded video clips of point-light displays depicting a human or dog walking in either frontal or lateral view. In a preferential looking paradigm, dogs spontaneously viewed 16 paired point-light displays showing combinations of normal/inverted (control condition), human/dog and frontal/lateral views. Overall, dogs looked significantly longer at the frontal human point-light display than at the inverted control, probably due to its clearer social/biological relevance. Dogs' sociability, assessed through owner-completed questionnaires, further revealed that low-sociability dogs preferred the lateral point-light display view, whereas high-sociability dogs preferred the frontal view. Clearly, dogs can recognize biological motion, but their preference is influenced by their sociability and the stimulus salience, implying that biological motion perception may reflect aspects of dogs' social cognition.

  1. Recognizing human activities using appearance metric feature and kinematics feature

    NASA Astrophysics Data System (ADS)

    Qian, Huimin; Zhou, Jun; Lu, Xinbiao; Wu, Xinye

    2017-05-01

    The problem of automatically recognizing human activities from videos through the fusion of the two most important cues, an appearance metric feature and a kinematics feature, is considered, and a system of two-dimensional (2-D) Poisson equations is introduced to extract a more discriminative appearance metric feature. Specifically, the moving human blobs are first detected in the video by a background subtraction technique to form a binary image sequence, from which the appearance feature, designated the motion accumulation image, and the kinematics feature, termed the centroid instantaneous velocity, are extracted. Second, 2-D discrete Poisson equations are employed to reinterpret the motion accumulation image and produce a more differentiated Poisson silhouette image, from which the appearance feature vector is created through a dimension reduction technique called bidirectional 2-D principal component analysis, considering the balance between classification accuracy and time consumption. Finally, a cascaded classifier based on the nearest neighbor classifier and two directed acyclic graph support vector machine classifiers, integrated with the fusion of the appearance feature vector and the centroid instantaneous velocity vector, is applied to recognize the human activities. Experimental results on open databases and a homemade one confirm the recognition performance of the proposed algorithm.
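
    The Poisson reinterpretation of a binary silhouette can be sketched by solving the discrete 2-D Poisson equation (Laplacian of U equal to -1 inside the blob, U = 0 outside) with Jacobi iterations; the exact formulation and boundary handling used by the authors are not given in the abstract, so this is only an illustration.

    import numpy as np

    def poisson_silhouette(mask, n_iter=500):
        """Jacobi solution of Laplacian(U) = -1 inside the silhouette, U = 0
        outside; interior values grow with distance from the boundary, giving a
        smoother, more discriminative representation of the binary blob."""
        U = np.zeros(mask.shape, dtype=float)
        for _ in range(n_iter):
            neighbours = (np.roll(U, 1, 0) + np.roll(U, -1, 0) +
                          np.roll(U, 1, 1) + np.roll(U, -1, 1))
            U = mask * (neighbours + 1.0) / 4.0    # Jacobi update, forced to zero outside the mask
        return U

    mask = np.zeros((32, 32)); mask[8:24, 12:20] = 1.0   # binary human-blob stand-in
    U = poisson_silhouette(mask)
    print(U.max())   # largest values appear near the centre of the blob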

  2. Goal attribution to inanimate moving objects by Japanese macaques (Macaca fuscata)

    PubMed Central

    Atsumi, Takeshi; Koda, Hiroki; Masataka, Nobuo

    2017-01-01

    Humans interpret others’ goals based on motion information, and this capacity contributes to our mental reasoning. The present study sought to determine whether Japanese macaques (Macaca fuscata) perceive goal-directedness in chasing events depicted by two geometric particles. In Experiment 1, two monkeys and adult humans were trained to discriminate between Chasing and Random sequences. We then introduced probe stimuli with various levels of correlation between the particle trajectories to examine whether participants performed the task using higher correlation. Participants chose stimuli with the highest correlations by chance, suggesting that correlations were not the discriminative cue. Experiment 2 examined whether participants focused on particle proximity. Participants differentiated between Chasing and Control sequences; the distance between two particles was identical in both. Results indicated that, like humans, the Japanese macaques did not use physical cues alone to perform the discrimination task and integrated the cues spontaneously. This suggests that goal attribution resulting from motion information is a widespread cognitive phenotype in primate species. PMID:28053305

  3. Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction.

    PubMed

    Khoramshahi, Mahdi; Shukla, Ashwini; Raffard, Stéphane; Bardy, Benoît G; Billard, Aude

    2016-01-01

    The ability to follow one another's gaze plays an important role in our social cognition; especially when we synchronously perform tasks together. We investigate how gaze cues can improve performance in a simple coordination task (i.e., the mirror game), whereby two players mirror each other's hand motions. In this game, each player is either a leader or follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar provides or not explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior for avatars makes them more realistic and human-like (from the user point of view). 43 subjects participated in 8 trials of the mirror game. Each subject performed the game in the two conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and subjective assessment of the avatar's realism was assessed by administering a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction-time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's action. An analysis of the pattern of frequency across the two players' hand movements reveals that the gaze cues improve the overall temporal coordination across the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found it not only more human-like/realistic, but also easier to interact with the avatar. This work confirms that people can exploit gaze cues to predict another person's movements and to better coordinate their motions with their partners, even when the partner is a computer-animated avatar. Moreover, this study contributes further evidence that implementing biological features, here task-relevant gaze cues, enable the humanoid robotic avatar to appear more human-like, and thus increase the user's sense of affiliation.

  4. Bottlenecks of Motion Processing during a Visual Glance: The Leaky Flask Model

    PubMed Central

    Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E.; Tripathy, Srimant P.

    2013-01-01

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing. PMID:24391806

  5. Bottlenecks of motion processing during a visual glance: the leaky flask model.

    PubMed

    Öğmen, Haluk; Ekiz, Onur; Huynh, Duong; Bedell, Harold E; Tripathy, Srimant P

    2013-01-01

    Where do the bottlenecks for information and attention lie when our visual system processes incoming stimuli? The human visual system encodes the incoming stimulus and transfers its contents into three major memory systems with increasing time scales, viz., sensory (or iconic) memory, visual short-term memory (VSTM), and long-term memory (LTM). It is commonly believed that the major bottleneck of information processing resides in VSTM. In contrast to this view, we show major bottlenecks for motion processing prior to VSTM. In the first experiment, we examined bottlenecks at the stimulus encoding stage through a partial-report technique by delivering the cue immediately at the end of the stimulus presentation. In the second experiment, we varied the cue delay to investigate sensory memory and VSTM. Performance decayed exponentially as a function of cue delay and we used the time-constant of the exponential-decay to demarcate sensory memory from VSTM. We then decomposed performance in terms of quality and quantity measures to analyze bottlenecks along these dimensions. In terms of the quality of information, two thirds to three quarters of the motion-processing bottleneck occurs in stimulus encoding rather than memory stages. In terms of the quantity of information, the motion-processing bottleneck is distributed, with the stimulus-encoding stage accounting for one third of the bottleneck. The bottleneck for the stimulus-encoding stage is dominated by the selection compared to the filtering function of attention. We also found that the filtering function of attention is operating mainly at the sensory memory stage in a specific manner, i.e., influencing only quantity and sparing quality. These results provide a novel and more complete understanding of information processing and storage bottlenecks for motion processing.

  6. Human short-latency ocular vergence responses produced by interocular velocity differences

    PubMed Central

    Sheliga, B. M.; Quaia, C.; FitzGibbon, E. J.; Cumming, B. G.

    2016-01-01

    We studied human short-latency vergence eye movements to a novel stimulus that produces interocular velocity differences without a changing disparity signal. Sinusoidal luminance gratings moved in opposite directions (left vs. right; up vs. down) in the two eyes. The grating seen by each eye underwent ¼-wavelength shifts with each image update. This arrangement eliminated changing disparity cues, since the phase difference between the eyes alternated between 0° and 180°. We nevertheless observed robust short-latency vergence responses (VRs), whose sign was consistent with the interocular velocity differences (IOVDs), indicating that the IOVD cue in isolation can evoke short-latency VRs. The IOVD cue was effective only when the images seen by the two eyes overlapped in space. We observed equally robust VRs for opposite horizontal motions (left in one eye, right in the other) and opposite vertical motions (up in one eye, down in the other). Whereas the former are naturally generated by objects moving in depth, the latter are not part of our normal experience. To our knowledge, this is the first demonstration of a behavioral consequence of vertical IOVD. This may reflect the fact that some neurons in area MT are sensitive to these motion signals (Czuba, Huk, Cormack, & Kohn, 2014). VRs were the strongest for spatial frequencies in the range of 0.35–1 c/°, much higher than the optimal spatial frequencies for evoking ocular-following responses observed during frontoparallel motion. This suggests that the two motion signals are detected by different neuronal populations. We also produced IOVD using moving uncorrelated one-dimensional white-noise stimuli. In this case the most effective stimuli have low speed, as predicted if the drive originates in neurons tuned to high spatial frequencies (Sheliga, Quaia, FitzGibbon, & Cumming, 2016). PMID:27548089

  7. Nocturnal insects use optic flow for flight control.

    PubMed

    Baird, Emily; Kreiss, Eva; Wcislo, William; Warrant, Eric; Dacke, Marie

    2011-08-23

    To avoid collisions when navigating through cluttered environments, flying insects must control their flight so that their sensory systems have time to detect obstacles and avoid them. To do this, day-active insects rely primarily on the pattern of apparent motion generated on the retina during flight (optic flow). However, many flying insects are active at night, when obtaining reliable visual information for flight control presents much more of a challenge. To assess whether nocturnal flying insects also rely on optic flow cues to control flight in dim light, we recorded flights of the nocturnal neotropical sweat bee, Megalopta genalis, flying along an experimental tunnel when: (i) the visual texture on each wall generated strong horizontal (front-to-back) optic flow cues, (ii) the texture on only one wall generated these cues, and (iii) horizontal optic flow cues were removed from both walls. We find that Megalopta increase their groundspeed when horizontal motion cues in the tunnel are reduced (conditions (ii) and (iii)). However, differences in the amount of horizontal optic flow on each wall of the tunnel (condition (ii)) do not affect the centred position of the bee within the flight tunnel. To better understand the behavioural response of Megalopta, we repeated the experiments on day-active bumble-bees (Bombus terrestris). Overall, our findings demonstrate that despite the limitations imposed by dim light, Megalopta, like their day-active relatives, rely heavily on vision to control flight, but that they use visual cues in a different manner from diurnal insects.

  8. A comparison of form processing involved in the perception of biological and nonbiological movements

    PubMed Central

    Thurman, Steven M.; Lu, Hongjing

    2016-01-01

    Although there is evidence for specialization in the human brain for processing biological motion per se, few studies have directly examined the specialization of form processing in biological motion perception. The current study was designed to systematically compare form processing in perception of biological (human walkers) to nonbiological (rotating squares) stimuli. Dynamic form-based stimuli were constructed with conflicting form cues (position and orientation), such that the objects were perceived to be moving ambiguously in two directions at once. In Experiment 1, we used the classification image technique to examine how local form cues are integrated across space and time in a bottom-up manner. By comparing with a Bayesian observer model that embodies generic principles of form analysis (e.g., template matching) and integrates form information according to cue reliability, we found that human observers employ domain-general processes to recognize both human actions and nonbiological object movements. Experiments 2 and 3 found differential top-down effects of spatial context on perception of biological and nonbiological forms. When a background does not involve social information, observers are biased to perceive foreground object movements in the direction opposite to surrounding motion. However, when a background involves social cues, such as a crowd of similar objects, perception is biased toward the same direction as the crowd for biological walking stimuli, but not for rotating nonbiological stimuli. The model provided an accurate account of top-down modulations by adjusting the prior probabilities associated with the internal templates, demonstrating the power and flexibility of the Bayesian approach for visual form perception. PMID:26746875

  9. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues.

    PubMed

    Warren, Paul A; Rushton, Simon K

    2009-05-01

    We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.

  10. Spatial reference frames of visual, vestibular, and multimodal heading signals in the dorsal subdivision of the medial superior temporal area.

    PubMed

    Fetsch, Christopher R; Wang, Sentao; Gu, Yong; Deangelis, Gregory C; Angelaki, Dora E

    2007-01-17

    Heading perception is a complex task that generally requires the integration of visual and vestibular cues. This sensory integration is complicated by the fact that these two modalities encode motion in distinct spatial reference frames (visual, eye-centered; vestibular, head-centered). Visual and vestibular heading signals converge in the primate dorsal subdivision of the medial superior temporal area (MSTd), a region thought to contribute to heading perception, but the reference frames of these signals remain unknown. We measured the heading tuning of MSTd neurons by presenting optic flow (visual condition), inertial motion (vestibular condition), or a congruent combination of both cues (combined condition). Static eye position was varied from trial to trial to determine the reference frame of tuning (eye-centered, head-centered, or intermediate). We found that tuning for optic flow was predominantly eye-centered, whereas tuning for inertial motion was intermediate but closer to head-centered. Reference frames in the two unimodal conditions were rarely matched in single neurons and uncorrelated across the population. Notably, reference frames in the combined condition varied as a function of the relative strength and spatial congruency of visual and vestibular tuning. This represents the first investigation of spatial reference frames in a naturalistic, multimodal condition in which cues may be integrated to improve perceptual performance. Our results compare favorably with the predictions of a recent neural network model that uses a recurrent architecture to perform optimal cue integration, suggesting that the brain could use a similar computational strategy to integrate sensory signals expressed in distinct frames of reference.

  11. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    PubMed

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, a dense motion field at the interpolation time is obtained for both forward and backward MVs; bidirectional motion compensation is then applied by blending the forward and backward predictions. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and against the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames obtained with the proposed method is better than that of existing MCFRUC techniques.
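
    For readers unfamiliar with the compensation step mentioned above, the following is a minimal NumPy sketch of bidirectional motion-compensated interpolation, assuming a per-block motion field estimated at the interpolation instant and a simple 50/50 blend of the forward and backward predictions; the paper's actual blending strategy and TME smoothness constraints are not reproduced here.

        import numpy as np

        def mc_interpolate(prev, nxt, mv_fwd, mv_bwd, block=8):
            # Bidirectional motion-compensated interpolation of the frame at t = 0.5.
            # prev, nxt: grayscale frames (H, W); mv_fwd / mv_bwd: per-block (dy, dx)
            # motion vectors of shape (H//block, W//block, 2) at the interpolation time.
            H, W = prev.shape
            out = np.zeros_like(prev, dtype=float)
            for by in range(0, H - block + 1, block):
                for bx in range(0, W - block + 1, block):
                    fy, fx = mv_fwd[by // block, bx // block]
                    gy, gx = mv_bwd[by // block, bx // block]
                    # Fetch each block halfway along its trajectory, clamped to the frame.
                    y0 = int(np.clip(by + fy / 2, 0, H - block)); x0 = int(np.clip(bx + fx / 2, 0, W - block))
                    y1 = int(np.clip(by + gy / 2, 0, H - block)); x1 = int(np.clip(bx + gx / 2, 0, W - block))
                    fwd = prev[y0:y0 + block, x0:x0 + block]
                    bwd = nxt[y1:y1 + block, x1:x1 + block]
                    # Simple bidirectional mix: average the two predictions.
                    out[by:by + block, bx:bx + block] = 0.5 * (fwd + bwd)
            return out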

  12. Pigeons (C. livia) Follow Their Head during Turning Flight: Head Stabilization Underlies the Visual Control of Flight.

    PubMed

    Ros, Ivo G; Biewener, Andrew A

    2017-01-01

    Similar flight control principles operate across insect and vertebrate fliers. These principles indicate that robust solutions have evolved to meet complex behavioral challenges. Following from studies of visual and cervical feedback control of flight in insects, we investigate the role of head stabilization in providing feedback cues for controlling turning flight in pigeons. Based on previous observations that the eyes of pigeons remain at relatively fixed orientations within the head during flight, we test potential sensory control inputs derived from head and body movements during 90° aerial turns. We observe that periods of angular head stabilization alternate with rapid head repositioning movements (head saccades), and confirm that control of head motion is decoupled from aerodynamic and inertial forces acting on the bird's continuously rotating body during turning flapping flight. Visual cues inferred from head saccades correlate with changes in flight trajectory; whereas the magnitude of neck bending predicts angular changes in body position. The control of head motion to stabilize a pigeon's gaze may therefore facilitate extraction of important motion cues, in addition to offering mechanisms for controlling body and wing movements. Strong similarities between the sensory flight control of birds and insects may also inspire novel designs of robust controllers for human-engineered autonomous aerial vehicles.

  13. Pigeons (C. livia) Follow Their Head during Turning Flight: Head Stabilization Underlies the Visual Control of Flight

    PubMed Central

    Ros, Ivo G.; Biewener, Andrew A.

    2017-01-01

    Similar flight control principles operate across insect and vertebrate fliers. These principles indicate that robust solutions have evolved to meet complex behavioral challenges. Following from studies of visual and cervical feedback control of flight in insects, we investigate the role of head stabilization in providing feedback cues for controlling turning flight in pigeons. Based on previous observations that the eyes of pigeons remain at relatively fixed orientations within the head during flight, we test potential sensory control inputs derived from head and body movements during 90° aerial turns. We observe that periods of angular head stabilization alternate with rapid head repositioning movements (head saccades), and confirm that control of head motion is decoupled from aerodynamic and inertial forces acting on the bird's continuously rotating body during turning flapping flight. Visual cues inferred from head saccades correlate with changes in flight trajectory; whereas the magnitude of neck bending predicts angular changes in body position. The control of head motion to stabilize a pigeon's gaze may therefore facilitate extraction of important motion cues, in addition to offering mechanisms for controlling body and wing movements. Strong similarities between the sensory flight control of birds and insects may also inspire novel designs of robust controllers for human-engineered autonomous aerial vehicles. PMID:29249929

  14. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality with this architecture is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to perform motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block-matching algorithm is used for motion estimation. Experiments demonstrate that the block-matching algorithm reduces motion estimation time by 30%. PMID:26950127
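
    As context for the block-matching step mentioned above, here is a minimal exhaustive block-matching routine in NumPy using a sum-of-absolute-differences (SAD) criterion and a fixed search radius; it illustrates the generic technique only and is not the paper's multi-frame or compressive-recovery implementation.

        import numpy as np

        def block_match(ref, cur, block=16, search=8):
            # Exhaustive block matching: for each block of `cur`, find the displacement within
            # +/- `search` pixels that minimizes the SAD against `ref`.
            H, W = cur.shape
            mvs = np.zeros((H // block, W // block, 2), dtype=int)
            for by in range(0, H - block + 1, block):
                for bx in range(0, W - block + 1, block):
                    target = cur[by:by + block, bx:bx + block].astype(float)
                    best, best_mv = np.inf, (0, 0)
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            y, x = by + dy, bx + dx
                            if y < 0 or x < 0 or y + block > H or x + block > W:
                                continue
                            sad = np.abs(ref[y:y + block, x:x + block].astype(float) - target).sum()
                            if sad < best:
                                best, best_mv = sad, (dy, dx)
                    mvs[by // block, bx // block] = best_mv
            return mvs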

  15. SU-E-J-252: A Motion Algorithm to Extract Physical and Motion Parameters of a Mobile Target in Cone-Beam Computed Tomographic Imaging Retrospective to Image Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Ahmad, S; Alsbou, N

    Purpose: A motion algorithm was developed to extract the actual length, CT numbers and motion amplitude of a mobile target imaged with cone-beam CT (CBCT) retrospective to image reconstruction. Methods: The motion model considered a mobile target moving sinusoidally and employed three measurable parameters obtained from CBCT images of the mobile target, namely the apparent length, CT-number level and gradient, to extract the actual length and CT-number value of the stationary target and the motion amplitude. The algorithm was verified experimentally with a mobile phantom setup that has three targets of different sizes manufactured from homogeneous tissue-equivalent gel material embedded in a thorax phantom. The phantom moved sinusoidally in one direction using eight amplitudes (0–20mm) and a frequency of 15 cycles per minute. The model required imaging parameters such as slice thickness and imaging time. Results: This motion algorithm extracted three unknown parameters, the length of the target, the CT-number level, and the motion amplitude of a mobile target, retrospective to CBCT image reconstruction. The algorithm relates the three unknown parameters to the measurable apparent length, CT-number level and gradient for well-defined mobile targets obtained from CBCT images. The motion model agreed with measured apparent lengths, which were dependent on the actual length of the target and the motion amplitude. The cumulative CT number for a mobile target was dependent on the CT-number level of the stationary target and the motion amplitude. The gradient of the CT distribution of a mobile target is dependent on the stationary CT-number level, the actual target length along the direction of motion, and the motion amplitude. Motion frequency and phase did not affect the elongation and CT-number distributions of mobile targets when the imaging time included several motion cycles. Conclusion: The motion algorithm developed in this study has potential applications in diagnostic CT imaging and radiotherapy to extract the actual length, size and CT numbers distorted by motion in CBCT imaging. The model also provides information about the motion of the target.
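
    The relationships the abstract describes (apparent length and cumulative CT number versus motion amplitude) can be illustrated with a simple time-averaging simulation. The sketch below assumes a uniform 1D target and sinusoidal motion spanning many cycles; it is an illustration of the blurring effect, not the authors' extraction model.

        import numpy as np

        def blurred_target_profile(length_mm, ct_number, amplitude_mm, grid_mm=0.5, n_phases=360):
            # Time-average a 1D profile of a uniform target moving sinusoidally, mimicking a
            # CBCT acquisition spanning many motion cycles (assumptions: uniform target, 1D motion).
            half_span = length_mm / 2 + amplitude_mm + 5.0
            x = np.arange(-half_span, half_span, grid_mm)
            profile = np.zeros_like(x)
            for phase in np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False):
                center = amplitude_mm * np.sin(phase)
                profile += ct_number * ((x >= center - length_mm / 2) & (x <= center + length_mm / 2))
            profile /= n_phases
            apparent_length = grid_mm * np.count_nonzero(profile)   # elongated extent of the blurred target
            return x, profile, apparent_length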

  16. Blindsight modulation of motion perception.

    PubMed

    Intriligator, James M; Xie, Ruiman; Barton, Jason J S

    2002-11-15

    Monkey data suggest that of all perceptual abilities, motion perception is the most likely to survive striate damage. The results of studies on motion blindsight in humans, though, are mixed. We used an indirect strategy to examine how responses to visible stimuli were modulated by blind-field stimuli. In a 26-year-old man with focal striate lesions, discrimination of visible optic flow was enhanced about 7% by blind-field flow, even though discrimination of optic flow in the blind field alone (the direct strategy) was at chance. Pursuit of an imagined target using peripheral cues showed reduced variance but not increased gain with blind-field cues. Preceding blind-field prompts shortened reaction times to visible targets by about 10 msec, but there was no attentional crowding of visible stimuli by blind-field distractors. A similar efficacy of indirect blind-field optic flow modulation was found in a second patient with residual vision after focal striate damage, but not in a third with more extensive medial occipito-temporal damage. We conclude that indirect modulatory strategies are more effective than direct forced-choice methods at revealing residual motion perception after focal striate lesions.

  17. Recognizing Whispered Speech Produced by an Individual with Surgically Reconstructed Larynx Using Articulatory Movement Data

    PubMed Central

    Cao, Beiming; Kim, Myungjong; Mau, Ted; Wang, Jun

    2017-01-01

    Individuals with an impaired larynx (vocal folds) have problems controlling their glottal vibration, producing whispered speech with extreme hoarseness. Standard automatic speech recognition using only acoustic cues is typically ineffective for whispered speech because the corresponding spectral characteristics are distorted. Articulatory cues such as tongue and lip motion may help in recognizing whispered speech since articulatory motion patterns are generally not affected. In this paper, we investigated whispered speech recognition for patients with a reconstructed larynx using articulatory movement data. A data set with both acoustic and articulatory motion data was collected from a patient with a surgically reconstructed larynx using an electromagnetic articulograph. Two speech recognition systems, Gaussian mixture model-hidden Markov model (GMM-HMM) and deep neural network-HMM (DNN-HMM), were used in the experiments. Experimental results showed that adding either tongue or lip motion data to acoustic features such as mel-frequency cepstral coefficients (MFCC) significantly reduced the phone error rates on both speech recognition systems. Adding both tongue and lip data achieved the best performance. PMID:29423453

  18. Different motion cues are used to estimate time-to-arrival for frontoparallel and looming trajectories

    PubMed Central

    Calabro, Finnegan J.; Beardsley, Scott A.; Vaina, Lucia M.

    2012-01-01

    Estimation of time-to-arrival for moving objects is critical to obstacle interception and avoidance, as well as to timing actions such as reaching and grasping moving objects. The source of motion information that conveys arrival time varies with the trajectory of the object raising the question of whether multiple context-dependent mechanisms are involved in this computation. To address this question we conducted a series of psychophysical studies to measure observers’ performance on time-to-arrival estimation when object trajectory was specified by angular motion (“gap closure” trajectories in the frontoparallel plane), looming (colliding trajectories, TTC) or both (passage courses, TTP). We measured performance of time-to-arrival judgments in the presence of irrelevant motion, in which a perpendicular motion vector was added to the object trajectory. Data were compared to models of expected performance based on the use of different components of optical information. Our results demonstrate that for gap closure, performance depended only on the angular motion, whereas for TTC and TTP, both angular and looming motion affected performance. This dissociation of inputs suggests that gap closures are mediated by a separate mechanism than that used for the detection of time-to-collision and time-to-passage. We show that existing models of TTC and TTP estimation make systematic errors in predicting subject performance, and suggest that a model which weights motion cues by their relative time-to-arrival provides a better account of performance. PMID:22056519

  19. Are face representations depth cue invariant?

    PubMed

    Dehmoobadsharifabadi, Armita; Farivar, Reza

    2016-06-01

    The visual system can process three-dimensional depth cues defining surfaces of objects, but it is unclear whether such information contributes to complex object recognition, including face recognition. The processing of different depth cues involves both dorsal and ventral visual pathways. We investigated whether facial surfaces defined by individual depth cues resulted in meaningful face representations-representations that maintain the relationship between the population of faces as defined in a multidimensional face space. We measured face identity aftereffects for facial surfaces defined by individual depth cues (Experiments 1 and 2) and tested whether the aftereffect transfers across depth cues (Experiments 3 and 4). Facial surfaces and their morphs to the average face were defined purely by one of shading, texture, motion, or binocular disparity. We obtained identification thresholds for matched (matched identity between adapting and test stimuli), non-matched (non-matched identity between adapting and test stimuli), and no-adaptation (showing only the test stimuli) conditions for each cue and across different depth cues. We found robust face identity aftereffect in both experiments. Our results suggest that depth cues do contribute to forming meaningful face representations that are depth cue invariant. Depth cue invariance would require integration of information across different areas and different pathways for object recognition, and this in turn has important implications for cortical models of visual object recognition.

  20. Assessing the Benefits and Costs of Motion for C-17 Flight Simulators: Technical Appendixes.

    DTIC Science & Technology

    1986-06-01

    Conference, NAECON, 1983. Instructional System Development, AF Manual 50-2, USAF, May 25, 1979. Irish, P.A., and G.H. Buckland, "Effects of...control augmentation system; (4) the fidelity of different simulator motion cueing alternatives; (5) a suggested methodology for assessing the...evaluating the benefits and costs of incorporating motion systems in C-17 transport aircraft flight simulators and in developing a general framework

  1. Simulator Sickness During Emergency Procedures Training in a Helicopter Simulator: Age, Flight Experience, and Amount Learned

    DTIC Science & Technology

    2007-09-01

    Aircrew Training Research Division, Human Resources Directorate. Smart, L. J., Stoffregen, T. A., & Bardy, B. G. (2002). Visually induced motion sickness...Aviation, Space, and Environmental Medicine, 60, 1043-1048. Benson, A. J. (1978). Motion sickness. In G. Dhenin & J. Ernsting (Eds.), Aviation Medicine...pp. 468-493). London: Tri-Med Books. Benson, A. J. (1988). Aetiological factors in simulator sickness. In AGARD, Motion cues in flight simulation and

  2. Motion Planning and Synthesis of Human-Like Characters in Constrained Environments

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjun; Pan, Jia; Manocha, Dinesh

    We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static-stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.

  3. Software for project-based learning of robot motion planning

    NASA Astrophysics Data System (ADS)

    Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.

    2013-12-01

    Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can be explained in a simplified two-dimensional setting, but this masks many of the subtleties and complexities of the underlying problem. We have developed software for project-based learning of motion planning that enables deep learning. The projects that we have developed allow advanced undergraduate students and graduate students to reflect on the performance of existing textbook algorithms and their own variations on such algorithms. Formative assessment has been conducted at three institutions. The core of the software used for this teaching module is also used within the Robot Operating System, a platform widely adopted by the robotics research community. This allows for transfer of knowledge and skills to robotics research projects involving a large variety of robot hardware platforms.
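
    Since the abstract notes that motion planning algorithms can be introduced in a simplified two-dimensional setting, here is a compact sampling-based planner (a basic RRT) in the unit square; it is a teaching sketch with a user-supplied collision check (is_free), not part of the software described in the record.

        import numpy as np

        def rrt_2d(start, goal, is_free, n_samples=2000, step=0.05, goal_tol=0.05, seed=0):
            # Minimal RRT: grow a tree by steering from the nearest node toward random samples,
            # keeping only collision-free extensions; stop when a node lands near the goal.
            rng = np.random.default_rng(seed)
            nodes, parents = [np.asarray(start, float)], [-1]
            for _ in range(n_samples):
                sample = np.asarray(goal, float) if rng.random() < 0.05 else rng.random(2)
                near = int(np.argmin([np.linalg.norm(sample - n) for n in nodes]))
                direction = sample - nodes[near]
                new = nodes[near] + step * direction / (np.linalg.norm(direction) + 1e-12)
                if not is_free(new):
                    continue
                nodes.append(new)
                parents.append(near)
                if np.linalg.norm(new - np.asarray(goal, float)) < goal_tol:
                    # Walk back up the tree to recover the path from start to goal.
                    path, i = [new], len(nodes) - 1
                    while parents[i] != -1:
                        i = parents[i]
                        path.append(nodes[i])
                    return path[::-1]
            return None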

  4. SU-D-17A-02: Four-Dimensional CBCT Using Conventional CBCT Dataset and Iterative Subtraction Algorithm of a Lung Patient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, E; Lasio, G; Yi, B

    2014-06-01

    Purpose: The Iterative Subtraction Algorithm (ISA) retrospectively generates a pre-selected motion-phase cone-beam CT image from the full-motion cone-beam CT acquired at standard rotation speed. This work evaluates the ISA method with real lung patient data. Methods: The goal of the ISA algorithm is to extract motion and no-motion components from the full-reconstruction CBCT. The workflow consists of subtracting from the full CBCT all of the undesired motion phases to obtain a motion de-blurred single-phase CBCT image, followed by iteration of this subtraction process. ISA is realized as follows: 1) The projections are sorted into various phases, and from all phases a full reconstruction is performed to generate an image CTM. 2) Forward projections of CTM are generated at the desired-phase projection angles; reconstructing the difference between the measured projections and the forward projections yields CTSub1, in which the desired phase component is diminished. 3) By adding CTSub1 back to CTM, a no-motion CBCT, CTS1, can be computed. 4) CTS1 still contains a residual motion component. 5) This residual motion component can be further reduced by iteration. The ISA 4DCBCT technique was implemented using the Varian Trilogy accelerator OBI system. To evaluate the method, a lung patient CBCT dataset was used. The reconstruction algorithm is FDK. Results: The single-phase CBCT reconstruction generated via ISA successfully isolates the desired motion phase from the full-motion CBCT, effectively reducing motion blur. It also shows improved image quality, with reduced streak artifacts with respect to reconstructions from the unprocessed phase-sorted projections only. Conclusion: A CBCT motion de-blurring algorithm, ISA, has been developed and evaluated with lung patient data. The algorithm allows improved visualization of a single motion phase extracted from a standard CBCT dataset. This study has been supported by the National Institutes of Health through R01CA133539.
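
    The numbered workflow above can be paraphrased in a few lines of Python. In the sketch below, reconstruct and forward_project are hypothetical callables standing in for an FDK reconstruction and a forward projector (any CBCT toolkit could supply them), projections is a NumPy array of measured projections, and desired is a boolean mask selecting the desired-phase projections; this is a loose illustration of the iteration, not the authors' implementation.

        def isa_phase_image(projections, angles, desired, reconstruct, forward_project, n_iter=3):
            # Step 1: full reconstruction from all phases gives the motion-blurred volume CTM.
            ct_m = reconstruct(projections, angles)
            ct_s = ct_m
            for _ in range(n_iter):
                # Step 2: forward-project the current estimate at the desired-phase angles and
                # reconstruct the difference between measured and forward projections (CTSub).
                diff = projections[desired] - forward_project(ct_s, angles[desired])
                ct_sub = reconstruct(diff, angles[desired])
                # Step 3: add CTSub back to CTM to form the next estimate (CTS).
                ct_s = ct_m + ct_sub
                # Steps 4-5: the residual motion component shrinks with each further iteration.
            return ct_s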

  5. Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters

    PubMed Central

    Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-01-01

    Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, Correlation Filter (CF) methods perform favorably in robustness, accuracy and speed. However, they have shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we propose a new real-time robust scheme based on the Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing the KCF and STC trackers, our algorithm also handles the estimation of scale variation in many scenarios. We theoretically analyze the behavior of correlation filters under motion and use the point sharpness of the target patch to evaluate the motion state of the target. We then set up an efficient scheme to handle motion and scale variation with little additional computation. Our algorithm preserves the properties of KCF while adding the ability to handle these special scenarios. Extensive experimental results on the VOT benchmark datasets show that our algorithm performs favorably compared with top-ranked trackers. PMID:27618046
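
    The "point sharpness" idea can be illustrated with a common blur metric. The snippet below scores a grayscale target patch with the variance of its Laplacian, where low values suggest motion blur that a tracker could respond to; this is one standard sharpness measure offered as an illustration, not necessarily the authors' exact function.

        import numpy as np
        from scipy.ndimage import laplace

        def patch_sharpness(patch):
            # Variance-of-Laplacian sharpness score: small values indicate a blurred patch,
            # which a tracker could use to switch into a blur-handling mode.
            return float(np.var(laplace(patch.astype(float))))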

  6. Linearized motion estimation for articulated planes.

    PubMed

    Datta, Ankur; Sheikh, Yaser; Kanade, Takeo

    2011-04-01

    In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
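
    The core numerical step, solving a linear least-squares problem with linear equality constraints through a Karush-Kuhn-Tucker system, can be written generically as follows; the sketch solves min ||Ax - b||^2 subject to Cx = d and is not specific to the homography parameterization used in the paper.

        import numpy as np

        def equality_constrained_lsq(A, b, C, d):
            # Solve  min ||A x - b||^2  subject to  C x = d  via the KKT system
            #   [ 2 A^T A   C^T ] [ x   ]   [ 2 A^T b ]
            #   [   C        0  ] [ lam ] = [    d    ]
            n, m = A.shape[1], C.shape[0]
            K = np.block([[2.0 * A.T @ A, C.T],
                          [C, np.zeros((m, m))]])
            rhs = np.concatenate([2.0 * A.T @ b, d])
            sol = np.linalg.solve(K, rhs)
            return sol[:n]   # motion parameters; sol[n:] are the Lagrange multipliers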

  7. Figure-ground segregation can rely on differences in motion direction.

    PubMed

    Kandil, Farid I; Fahle, Manfred

    2004-12-01

    If the elements within a figure move synchronously while those in the surround move at a different time, the figure is easily segregated from the surround and thus perceived. Lee and Blake (1999) [Visual form created solely from temporal structure. Science, 284, 1165-1168] demonstrated that this figure-ground separation may be based not only on time differences between motion onsets, but also on the differences between reversals of motion direction. However, Farid and Adelson (2001) [Synchrony does not promote grouping in temporally structured displays. Nature Neuroscience, 4, 875-876] argued that figure-ground segregation in the motion-reversal experiment might have been based on a contrast artefact and concluded that (a)synchrony as such was 'not responsible for the perception of form in these or earlier displays'. Here, we present experiments that avoid contrast artefacts but still produce figure-ground segregation based on purely temporal cues. Our results show that subjects can segregate figure from ground even though being unable to use motion reversals as such. Subjects detect the figure when either (i) motion stops (leading to contrast artefacts), or (ii) motion directions differ between figure and ground. Segregation requires minimum delays of about 15 ms. We argue that whatever the underlying cues and mechanisms, a second stage beyond motion detection is required to globally compare the outputs of local motion detectors and to segregate figure from ground. Since analogous changes take place in both figure and ground in rapid succession, this second stage has to detect the asynchrony with high temporal precision.

  8. SU-F-J-133: Adaptive Radiation Therapy with a Four-Dimensional Dose Calculation Algorithm That Optimizes Dose Distribution Considering Breathing Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Algan, O; Ahmad, S

    Purpose: To model patient motion and produce four-dimensional (4D) optimized dose distributions that account for motion artifacts in the dose calculation during the treatment planning process. Methods: An algorithm for dose calculation is developed in which patient motion is considered at the treatment planning stage. First, optimal dose distributions are calculated for the stationary target volume, where the dose distributions are optimized for intensity-modulated radiation therapy (IMRT). Second, a convolution kernel is produced from the best-fitting curve that matches the motion trajectory of the patient. Third, the motion kernel is deconvolved with the initial dose distribution optimized for the stationary target to produce a dose distribution that is optimized in four dimensions. The algorithm is tested against measured doses using a mobile phantom that moves with controlled motion patterns. Results: A motion-optimized dose distribution is obtained from the initial dose distribution of the stationary target by deconvolution with the motion kernel of the mobile target. This motion-optimized dose distribution is equivalent to that optimized for the stationary target using IMRT. The motion-optimized and measured dose distributions agree under the gamma index with a passing rate of >95% using 3% dose-difference and 3mm distance-to-agreement criteria. If the dose delivery per beam takes place over several respiratory cycles, the spread of the dose distributions depends only on the motion amplitude and is not affected by motion frequency and phase. The algorithm is limited to motion amplitudes that are smaller than the length of the target along the direction of motion. Conclusion: An algorithm is developed to optimize dose in 4D. In addition to IMRT, which provides optimal dose coverage for a stationary target, it extends dose optimization to 4D by considering target motion. This algorithm provides an alternative to motion-management techniques such as beam gating or breath holding and has potential applications in adaptive radiation therapy.
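
    The convolution/deconvolution idea can be sketched in one dimension: build the probability-density kernel of an assumed sinusoidal trajectory and deconvolve the static-target dose profile by it, so that blurring by the motion approximately restores the intended distribution. The regularized FFT division below is a generic Wiener-style deconvolution offered for illustration only; it is not the authors' planning-system implementation.

        import numpy as np

        def sinusoid_motion_kernel(amplitude_mm, grid_mm, n_samples=10000):
            # Probability-density kernel of a 1D sinusoidal trajectory, sampled uniformly in time
            # (assumptions: pure sinusoidal motion, delivery spanning many cycles).
            t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
            x = amplitude_mm * np.sin(t)
            edges = np.arange(-amplitude_mm - grid_mm, amplitude_mm + 2.0 * grid_mm, grid_mm)
            kernel, _ = np.histogram(x, bins=edges)
            return kernel / kernel.sum()

        def motion_compensated_profile(static_dose, kernel, eps=1e-3):
            # Wiener-style FFT deconvolution of a 1D static-target dose profile by the motion
            # kernel, so that blurring the result with the same kernel approximately restores
            # the intended static distribution.
            n = len(static_dose) + len(kernel) - 1
            D = np.fft.rfft(static_dose, n)
            K = np.fft.rfft(kernel, n)
            compensated = np.fft.irfft(D * np.conj(K) / (np.abs(K) ** 2 + eps), n)
            return compensated[:len(static_dose)]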

  9. What a Difference a Parameter Makes: a Psychophysical Comparison of Random Dot Motion Algorithms

    PubMed Central

    Pilly, Praveen K.; Seitz, Aaron R.

    2009-01-01

    Random dot motion (RDM) displays have emerged as one of the standard stimulus types employed in psychophysical and physiological studies of motion processing. RDMs are convenient because it is straightforward to manipulate the relative motion energy for a given motion direction in addition to stimulus parameters such as the speed, contrast, duration, density, aperture, etc. However, as widely as RDMs are employed so do they vary in their details of implementation. As a result, it is often difficult to make direct comparisons across studies employing different RDM algorithms and parameters. Here, we systematically measure the ability of human subjects to estimate motion direction for four commonly used RDM algorithms under a range of parameters in order to understand how these different algorithms compare in their perceptibility. We find that parametric and algorithmic differences can produce dramatically different performances. These effects, while surprising, can be understood in relationship to pertinent neurophysiological data regarding spatiotemporal displacement tuning properties of cells in area MT and how the tuning function changes with stimulus contrast and retinal eccentricity. These data help give a baseline by which different RDM algorithms can be compared, demonstrate a need for clearly reporting RDM details in the methods of papers, and also pose new constraints and challenges to models of motion direction processing. PMID:19336240
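
    Because the abstract stresses that RDM implementation details matter, the following sketch generates one common variant (the "random position" noise rule, in which non-signal dots are replotted at random locations each frame); other variants discussed in such comparisons, such as random-walk noise dots, would change only the update for the non-signal dots.

        import numpy as np

        def rdm_frames(n_dots=100, n_frames=60, coherence=0.5, step=0.02, direction=0.0, seed=0):
            # Dot positions in the unit square for an RDM sequence: on each frame a `coherence`
            # fraction of dots steps in `direction` (radians); the rest are replotted at random.
            rng = np.random.default_rng(seed)
            pos = rng.random((n_dots, 2))
            dx = step * np.array([np.cos(direction), np.sin(direction)])
            frames = []
            for _ in range(n_frames):
                signal = rng.random(n_dots) < coherence
                pos[signal] = (pos[signal] + dx) % 1.0                         # coherent dots drift and wrap
                pos[~signal] = rng.random((np.count_nonzero(~signal), 2))      # noise dots are replotted
                frames.append(pos.copy())
            return frames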

  10. Global velocity constrained cloud motion prediction for short-term solar forecasting

    NASA Astrophysics Data System (ADS)

    Chen, Yanjun; Li, Wei; Zhang, Chongyang; Hu, Chuanping

    2016-09-01

    Cloud motion is the primary cause of short-term solar power output fluctuation. In this work, a new cloud motion estimation algorithm using a global velocity constraint is proposed. Compared to the widely used Particle Image Velocity (PIV) algorithm, which assumes homogeneity of the motion vectors, the proposed method can capture an accurate motion vector for each cloud block, reflecting both the overall motion tendency and morphological changes. Specifically, the global velocity derived from PIV is first calculated, and then fine-grained cloud motion estimation is achieved by global-velocity-based cloud block searching and multi-scale cloud block matching. Experimental results show that the proposed global velocity constrained cloud motion prediction achieves comparable performance to the existing PIV and filtered PIV algorithms, especially over short prediction horizons.
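
    A minimal way to impose a global velocity constraint is to estimate a single dominant displacement for the whole image and then center each block's local search on that shift, as sketched below with SAD matching; the actual PIV-derived global velocity and multi-scale matching in the paper are more involved.

        import numpy as np

        def global_then_local_motion(prev, cur, block=32, g_search=20, radius=4):
            # Stage 1: a single global displacement (dominant cloud advection) from coarse
            # whole-image SAD matching over +/- g_search pixels (step of 2 for speed).
            H, W = prev.shape
            best, g = np.inf, (0, 0)
            for dy in range(-g_search, g_search + 1, 2):
                for dx in range(-g_search, g_search + 1, 2):
                    a = prev[max(0, dy):H + min(0, dy), max(0, dx):W + min(0, dx)]
                    b = cur[max(0, -dy):H - max(0, dy), max(0, -dx):W - max(0, dx)]
                    err = np.abs(a.astype(float) - b.astype(float)).mean()
                    if err < best:
                        best, g = err, (dy, dx)
            # Stage 2: per-block refinement with the search window centred on the global shift,
            # capturing local deviations (morphological change) on top of the shared advection.
            gy, gx = g
            field = np.zeros((H // block, W // block, 2), dtype=int)
            for by in range(0, H - block + 1, block):
                for bx in range(0, W - block + 1, block):
                    patch = cur[by:by + block, bx:bx + block].astype(float)
                    best_b, mv = np.inf, (gy, gx)
                    for dy in range(gy - radius, gy + radius + 1):
                        for dx in range(gx - radius, gx + radius + 1):
                            y, x = by + dy, bx + dx
                            if y < 0 or x < 0 or y + block > H or x + block > W:
                                continue
                            err = np.abs(prev[y:y + block, x:x + block].astype(float) - patch).sum()
                            if err < best_b:
                                best_b, mv = err, (dy, dx)
                    field[by // block, bx // block] = mv
            return g, field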

  11. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise degrades image quality, and these images typically have a low signal-to-noise ratio. As a result, traditional motion estimation algorithms are not well suited to measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a small set of candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) in a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA estimates the motion vectors more efficiently, with nearly equal estimation quality, compared to the traditional IFSA method. PMID:25873987

  12. Motion estimation using the firefly algorithm in ultrasonic image sequence of soft tissue.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise degrades image quality, and these images typically have a low signal-to-noise ratio. As a result, traditional motion estimation algorithms are not well suited to measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a small set of candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) in a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA estimates the motion vectors more efficiently, with nearly equal estimation quality, compared to the traditional IFSA method.
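
    To make the search strategy concrete, here is a small firefly-style search for a single block's motion vector, where each firefly is a candidate displacement, brightness is the negative SAD, and dimmer fireflies move toward brighter ones with a random perturbation. The parameter names (beta0, gamma, alpha) follow the standard firefly algorithm; the sketch is illustrative and is not the authors' IFA implementation.

        import numpy as np

        def firefly_motion_vector(ref, cur, by, bx, block=16, search=8,
                                  n_fireflies=10, n_iter=20, beta0=1.0, gamma=0.1, alpha=1.0, seed=0):
            rng = np.random.default_rng(seed)
            H, W = cur.shape
            target = cur[by:by + block, bx:bx + block].astype(float)

            def sad(mv):
                # Cost of a candidate displacement, clamped to stay inside the reference frame.
                y = int(np.clip(by + mv[0], 0, H - block))
                x = int(np.clip(bx + mv[1], 0, W - block))
                return np.abs(ref[y:y + block, x:x + block].astype(float) - target).sum()

            flies = rng.uniform(-search, search, (n_fireflies, 2))
            cost = np.array([sad(f) for f in flies])
            for _ in range(n_iter):
                for i in range(n_fireflies):
                    for j in range(n_fireflies):
                        if cost[j] < cost[i]:                      # firefly j is "brighter"
                            r2 = np.sum((flies[i] - flies[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)     # attractiveness decays with distance
                            flies[i] += beta * (flies[j] - flies[i]) + alpha * (rng.random(2) - 0.5)
                            flies[i] = np.clip(flies[i], -search, search)
                            cost[i] = sad(flies[i])
            return np.rint(flies[int(np.argmin(cost))]).astype(int)   # best motion vector found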

  13. Usability of Three-dimensional Augmented Visual Cues Delivered by Smart Glasses on (Freezing of) Gait in Parkinson’s Disease

    PubMed Central

    Janssen, Sabine; Bolte, Benjamin; Nonnekes, Jorik; Bittner, Marian; Bloem, Bastiaan R.; Heida, Tjitske; Zhao, Yan; van Wezel, Richard J. A.

    2017-01-01

    External cueing is a potentially effective strategy to reduce freezing of gait (FOG) in persons with Parkinson’s disease (PD). Case reports suggest that three-dimensional (3D) cues might be more effective in reducing FOG than two-dimensional cues. We investigate the usability of 3D augmented reality visual cues delivered by smart glasses in comparison to conventional 3D transverse bars on the floor and auditory cueing via a metronome in reducing FOG and improving gait parameters. In laboratory experiments, 25 persons with PD and FOG performed walking tasks while wearing custom-made smart glasses under five conditions, at the end-of-dose. For two conditions, augmented visual cues (bars/staircase) were displayed via the smart glasses. The control conditions involved conventional 3D transverse bars on the floor, auditory cueing via a metronome, and no cueing. The number of FOG episodes and percentage of time spent on FOG were rated from video recordings. The stride length and its variability, cycle time and its variability, cadence, and speed were calculated from motion data collected with a motion capture suit equipped with 17 inertial measurement units. A total of 300 FOG episodes occurred in 19 out of 25 participants. There were no statistically significant differences in number of FOG episodes and percentage of time spent on FOG across the five conditions. The conventional bars increased stride length, cycle time, and stride length variability, while decreasing cadence and speed. No effects for the other conditions were found. Participants preferred the metronome most, and the augmented staircase least. They suggested to improve the comfort, esthetics, usability, field of view, and stability of the smart glasses on the head and to reduce their weight and size. In their current form, augmented visual cues delivered by smart glasses are not beneficial for persons with PD and FOG. This could be attributable to distraction, blockage of visual feedback, insufficient familiarization with the smart glasses, or display of the visual cues in the central rather than peripheral visual field. Future smart glasses are required to be more lightweight, comfortable, and user friendly to avoid distraction and blockage of sensory feedback, thus increasing usability. PMID:28659862

  14. Seeing the world topsy-turvy: The primary role of kinematics in biological motion inversion effects.

    PubMed

    Fitzgerald, Sue-Anne; Brooks, Anna; van der Zwan, Rick; Blair, Duncan

    2014-01-01

    Physical inversion of whole or partial human body representations typically has catastrophic consequences on the observer's ability to perform visual processing tasks. Explanations usually focus on the effects of inversion on the visual system's ability to exploit configural or structural relationships, but more recently have also implicated motion or kinematic cue processing. Here, we systematically tested the role of both on perceptions of sex from upright and inverted point-light walkers. Our data suggest that inversion results in systematic degradations of the processing of kinematic cues. Specifically and intriguingly, they reveal sex-based kinematic differences: Kinematics characteristic of females generally are resistant to inversion effects, while those of males drive systematic sex misperceptions. Implications of the findings are discussed.

  15. SU-F-J-77: Variations in the Displacement Vector Fields Calculated by Different Deformable Image Registration Algorithms Used in Helical, Axial and Cone-Beam CT Images of a Mobile

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Jaskowiak, J; Ahmad, S

    Purpose: To quantitatively investigate the displacement vector fields (DVF) obtained from different deformable image registration (DIR) algorithms applied to helical (HCT), axial (ACT) and cone-beam CT (CBCT) images of a mobile phantom, and their correlation with motion amplitude and frequency. Methods: HCT, ACT and CBCT are used to image a mobile phantom which includes three targets of different sizes that are manufactured from water-equivalent material and embedded in low-density foam. The phantom is moved with controlled motion patterns covering a range of motion amplitudes (0–40mm) and frequencies (0.125–0.5Hz). The CT images obtained from scanning the mobile phantom are registered with the stationary CT images using four deformable image registration algorithms, including demons, fast demons, Horn-Schunck and Lucas-Kanade from the DIRART software. Results: The DVF calculated by the different algorithms correlate well with the motion amplitudes applied to the mobile phantom, where the maximal DVF increase linearly with the motion amplitude of the mobile phantom in CBCT. Similarly, in HCT the DVF increase linearly with motion amplitude, although the correlation is weaker than in CBCT. In ACT, the DVF do not correlate well with the motion amplitudes, since motion induces strong image artifacts and the DIR algorithms are not able to deform the ACT image of the mobile targets to the stationary targets. Three DIR algorithms produce comparable values and patterns of the DVF for a given CT imaging modality. However, the DVF from fast demons deviated strongly from the other algorithms at large motion amplitudes. Conclusion: In CBCT and HCT, the DVF correlate well with the motion amplitude of the mobile phantom. However, in ACT, the DVF do not correlate with motion amplitudes. Correlations of the DVF with motion amplitude in CBCT and HCT can provide information about unknown motion parameters of mobile organs in real patients, as demonstrated in this phantom study.
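
    For reference, the demons family of DIR algorithms named above can be condensed to a few lines. The sketch below is a minimal Thirion-style demons loop in 2D with Gaussian smoothing of each update, offered only to illustrate how a DVF is produced; it is a simplification, not the DIRART implementation used in the study.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def demons_register(fixed, moving, n_iter=50, sigma=2.0):
            # At each iteration the update force is (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2),
            # and the displacement-field increment is Gaussian-smoothed for regularisation.
            fixed = fixed.astype(float)
            moving = moving.astype(float)
            gy, gx = np.gradient(fixed)
            grad2 = gy ** 2 + gx ** 2
            H, W = fixed.shape
            yy, xx = np.mgrid[0:H, 0:W].astype(float)
            dvf = np.zeros((2, H, W))
            for _ in range(n_iter):
                warped = map_coordinates(moving, [yy + dvf[0], xx + dvf[1]], order=1, mode='nearest')
                diff = warped - fixed
                denom = grad2 + diff ** 2
                denom[denom == 0] = 1.0
                dvf[0] -= gaussian_filter(diff * gy / denom, sigma)
                dvf[1] -= gaussian_filter(diff * gx / denom, sigma)
            return dvf   # displacement-vector field (dy, dx) per pixel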

  16. Biological Motion Primes the Animate/Inanimate Distinction in Infancy

    PubMed Central

    Poulin-Dubois, Diane; Crivello, Cristina; Wright, Kristyn

    2015-01-01

    Given that biological motion is both detected and preferred early in life, we tested the hypothesis that biological motion might be instrumental to infants’ differentiation of animate and inanimate categories. Infants were primed with either point-light displays of realistic biological motion, random motion, or schematic biological motion of an unfamiliar shape. After being habituated to these displays, 12-month-old infants categorized animals and vehicles as well as furniture and vehicles with the sequential touching task. The findings indicated that infants primed with point-light displays of realistic biological motion showed better categorization of animates than those exposed to random or schematic biological motion. These results suggest that human biological motion might be one of the motion cues that provide the building blocks for infants’ concept of animacy. PMID:25659077

  17. Studies of human dynamic space orientation using techniques of control theory

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1974-01-01

    Studies of human orientation and manual control in high order systems are summarized. Data cover techniques for measuring and altering orientation perception, role of non-visual motion sensors, particularly the vestibular and tactile sensors, use of motion cues in closed loop control of simple stable and unstable systems, and advanced computer controlled display systems.

  18. Gait parameter control timing with dynamic manual contact or visual cues

    PubMed Central

    Shi, Peter; Werner, William

    2016-01-01

    We investigated the timing of gait parameter changes (stride length, peak toe velocity, and double-, single-support, and complete step duration) to control gait speed. Eleven healthy participants adjusted their gait speed on a treadmill to maintain a constant distance between them and a fore-aft oscillating cue (a place on a conveyor belt surface). The experimental design balanced conditions of cue modality (vision: eyes-open; manual contact: eyes-closed while touching the cue); treadmill speed (0.2, 0.4, 0.85, and 1.3 m/s); and cue motion (none, ±10 cm at 0.09, 0.11, and 0.18 Hz). Correlation analyses revealed a number of temporal relationships between gait parameters and cue speed. The results suggest that neural control ranged from feedforward to feedback. Specifically, step length preceded cue velocity during double-support duration suggesting anticipatory control. Peak toe velocity nearly coincided with its most-correlated cue velocity during single-support duration. The toe-off concluding step and double-support durations followed their most-correlated cue velocity, suggesting feedback control. Cue-tracking accuracy and cue velocity correlations with timing parameters were higher with the manual contact cue than visual cue. The cue/gait timing relationships generalized across cue modalities, albeit with greater delays of step-cycle events relative to manual contact cue velocity. We conclude that individual kinematic parameters of gait are controlled to achieve a desired velocity at different specific times during the gait cycle. The overall timing pattern of instantaneous cue velocities associated with different gait parameters is conserved across cues that afford different performance accuracies. This timing pattern may be temporally shifted to optimize control. Different cue/gait parameter latencies in our nonadaptation paradigm provide general-case evidence of the independent control of gait parameters previously demonstrated in gait adaptation paradigms. PMID:26936979
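
    The timing analysis described above amounts to finding, for each gait parameter, the lag at which its time series correlates most strongly with cue velocity. A minimal version of such a lagged-correlation search is sketched below, assuming two equal-length, uniformly sampled series; it is a generic illustration, not the authors' analysis code.

        import numpy as np

        def best_lag(cue_velocity, gait_parameter, max_lag, dt):
            # Find the offset (seconds) at which the gait-parameter series correlates most
            # strongly with cue velocity. Positive lag = parameter follows the cue.
            best_r, best_l = -np.inf, 0
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    a, b = cue_velocity[:len(cue_velocity) - lag], gait_parameter[lag:]
                else:
                    a, b = cue_velocity[-lag:], gait_parameter[:len(gait_parameter) + lag]
                r = np.corrcoef(a, b)[0, 1]
                if r > best_r:
                    best_r, best_l = r, lag
            return best_l * dt, best_r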

  19. The Vestibular System and Human Dynamic Space Orientation

    NASA Technical Reports Server (NTRS)

    Meiry, J. L.

    1966-01-01

    The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion and combined. Motion cues sensed by the vestibular system through tactile sensation enable the operator to generate more lead compensation than in fixed base simulation with only visual input. The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.

  20. Heading Tuning in Macaque Area V6.

    PubMed

    Fan, Reuben H; Liu, Sheng; DeAngelis, Gregory C; Angelaki, Dora E

    2015-12-16

    Cortical areas, such as the dorsal subdivision of the medial superior temporal area (MSTd) and the ventral intraparietal area (VIP), have been shown to integrate visual and vestibular self-motion signals. Area V6 is interconnected with areas MSTd and VIP, allowing for the possibility that V6 also integrates visual and vestibular self-motion cues. An alternative hypothesis in the literature is that V6 does not use these sensory signals to compute heading but instead discounts self-motion signals to represent object motion. However, the responses of V6 neurons to visual and vestibular self-motion cues have never been studied, thus leaving the functional roles of V6 unclear. We used a virtual reality system to examine the 3D heading tuning of macaque V6 neurons in response to optic flow and inertial motion stimuli. We found that the majority of V6 neurons are selective for heading defined by optic flow. However, unlike areas MSTd and VIP, V6 neurons are almost universally unresponsive to inertial motion in the absence of optic flow. We also explored the spatial reference frames of heading signals in V6 by measuring heading tuning for different eye positions, and we found that the visual heading tuning of most V6 cells was eye-centered. Similar to areas MSTd and VIP, the population of V6 neurons was best able to discriminate small variations in heading around forward and backward headings. Our findings support the idea that V6 is involved primarily in processing visual motion signals and does not appear to play a role in visual-vestibular integration for self-motion perception. To understand how we successfully navigate our world, it is important to understand which parts of the brain process cues used to perceive our direction of self-motion (i.e., heading). Cortical area V6 has been implicated in heading computations based on human neuroimaging data, but direct measurements of heading selectivity in individual V6 neurons have been lacking. We provide the first demonstration that V6 neurons carry 3D visual heading signals, which are represented in an eye-centered reference frame. In contrast, we found almost no evidence for vestibular heading signals in V6, indicating that V6 is unlikely to contribute to multisensory integration of heading signals, unlike other cortical areas. These findings provide important constraints on the roles of V6 in self-motion perception. Copyright © 2015 the authors 0270-6474/15/3516303-12$15.00/0.

  1. Late development of cue integration is linked to sensory fusion in cortex.

    PubMed

    Dekker, Tessa M; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I; Welchman, Andrew E; Nardini, Marko

    2015-11-02

    Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3-5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7-9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6-12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3-5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  2. Late Development of Cue Integration Is Linked to Sensory Fusion in Cortex

    PubMed Central

    Dekker, Tessa M.; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I.; Welchman, Andrew E.; Nardini, Marko

    2015-01-01

    Summary Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3, 4, 5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7, 8, 9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6–12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3, 4, 5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop. PMID:26480841

  3. Visual stimuli induced by self-motion and object-motion modify odour-guided flight of male moths (Manduca sexta L.).

    PubMed

    Verspui, Remko; Gray, John R

    2009-10-01

    Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.

  4. Simulated pain and cervical motion in patients with chronic disorders of the cervical spine.

    PubMed

    Dvir, Zeevi; Gal-Eshel, Noga; Shamir, Boaz; Pevzner, Evgeny; Peretz, Chava; Knoller, Nachshon

    2004-01-01

    The primary objective of the present study was to determine how simulated severe cervical pain affects cervical motion in patients suffering from two distinct chronic cervical disorders: whiplash (n=25) and degenerative changes (n=25). The second objective was to derive an index that would allow the differentiation of maximal from submaximal performances of cervical range of motion. Patients first performed maximal movement of the head (maximal effort) in each of the six primary directions and then repeated the test as if they were suffering from a much more intense level of pain (submaximal effort). All measurements were repeated within four to seven days. In both groups, there was significant compression of cervical motion during the submaximal effort. This compression was also highly stable on a test-retest basis. In both groups, a significantly higher average coefficient of variation was associated with the imagined pain and it was significantly different between the two clinical groups. In the whiplash group, a logistic regression model allowed the derivation of coefficient of variation-based cutoff scores that might, at selected levels of probability and an individual level, identify chronic whiplash patients who intentionally magnify their motion restriction using pain as a cue. However, the relatively small and very stable compression of cervical motion under pain simulation supports the view that the likelihood that chronic whiplash patients are magnifying their restriction of cervical range of motion using pain as a cue is very low.

  5. Algorithm for the stabilization of motion of a bounding vehicle in the flight phase

    NASA Technical Reports Server (NTRS)

    Lapshin, V. V.

    1980-01-01

    The unsupported phase of motion of a multileg bounding vehicle is examined. An algorithm for stabilization of the angular motion of the vehicle housing by change of the motion of the legs during flight is constructed. The results of mathematical modelling of the stabilization process by computer are presented.

  6. Inquiry style interactive virtual experiments: a case on circular motion

    NASA Astrophysics Data System (ADS)

    Zhou, Shaona; Han, Jing; Pelz, Nathaniel; Wang, Xiaojun; Peng, Liangyu; Xiao, Hua; Bao, Lei

    2011-11-01

    Interest in computer-based learning, especially in the use of virtual reality simulations, is increasing rapidly. While there are good reasons to believe that these technologies have the potential to improve teaching and learning, using them effectively to address specific content difficulties is challenging. To help students develop robust understandings of correct physics concepts, we have developed interactive virtual experiment simulations with the unique feature of enabling students to experience force and motion via an analogue joystick, allowing them to feel the applied force and simultaneously see its effects. The simulations provide students with learning experiences that integrate scientific representations and low-level sensory cues, such as haptic cues, in a single setting. In this paper, we introduce a virtual experiment module on circular motion. A controlled study was conducted to evaluate the impact of using this virtual experiment on students' learning of force and motion in the context of circular motion. The results show that the interactive virtual experiment method is preferred by students and is more effective in helping them grasp the physics concepts than traditional educational methods such as problem-solving practice. Our research suggests that well-developed interactive virtual experiments can be useful tools for teaching difficult concepts in science.

  7. Rendezvous Docking Simulator

    NASA Image and Video Library

    1964-10-29

    Originally the Rendezvous Docking Simulator was used by astronauts preparing for Gemini missions. It was then modified and used to develop docking techniques for the Apollo program. "The LEM pilot's compartment, with overhead window and the docking ring (idealized since the pilot cannot see it during the maneuvers), is shown docked with the full-scale Apollo Command Module." A.W. Vogeley described the simulator as follows: "The Rendezvous Docking Simulator and also the Lunar Landing Research Facility are both rather large moving-base simulators. It should be noted, however, that neither was built primarily because of its motion characteristics. The main reason they were built was to provide a realistic visual scene. A secondary reason was that they would provide correct angular motion cues (important in control of vehicle short-period motions) even though the linear acceleration cues would be incorrect." -- Published in A.W. Vogeley, "Piloted Space-Flight Simulation at Langley Research Center," Paper presented at the American Society of Mechanical Engineers, 1966 Winter Meeting, New York, NY, November 27 - December 1, 1966.

  8. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras

    PubMed Central

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-01

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots. PMID:24431144

  9. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras.

    PubMed

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-15

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots.

  10. Multimodal Integration of Self-Motion Cues in the Vestibular System: Active versus Passive Translations

    PubMed Central

    Carriot, Jerome; Brooks, Jessica X.

    2013-01-01

    The ability to keep track of where we are going as we navigate through our environment requires knowledge of our ongoing location and orientation. In response to passively applied motion, the otolith organs of the vestibular system encode changes in the velocity and direction of linear self-motion (i.e., heading). When self-motion is voluntarily generated, proprioceptive and motor efference copy information is also available to contribute to the brain's internal representation of current heading direction and speed. However to date, how the brain integrates these extra-vestibular cues with otolith signals during active linear self-motion remains unknown. Here, to address this question, we compared the responses of macaque vestibular neurons during active and passive translations. Single-unit recordings were made from a subgroup of neurons at the first central stage of sensory processing in the vestibular pathways involved in postural control and the computation of self-motion perception. Neurons responded far less robustly to otolith stimulation during self-generated than passive head translations. Yet, the mechanism underlying the marked cancellation of otolith signals did not affect other characteristics of neuronal responses (i.e., baseline firing rate, tuning ratio, orientation of maximal sensitivity vector). Transiently applied perturbations during active motion further established that an otolith cancellation signal was only gated in conditions where proprioceptive sensory feedback matched the motor-based expectation. Together our results have important implications for understanding the brain's ability to ensure accurate postural and motor control, as well as perceptual stability, during active self-motion. PMID:24336720

  11. Perception-based synthetic cueing for night vision device rotorcraft hover operations

    NASA Astrophysics Data System (ADS)

    Bachelder, Edward N.; McRuer, Duane

    2002-08-01

    Helicopter flight using night-vision devices (NVDs) is difficult to perform, as evidenced by the high accident rate associated with NVD flight compared to day operation. The approach proposed in this paper is to augment the NVD image with synthetic cueing, whereby the cues would emulate position and motion and appear to be actually occurring in physical space on which they are overlaid. Synthetic cues allow for selective enhancement of perceptual state gains to match the task requirements. A hover cue set was developed based on an analogue of a physical target used in a flight handling qualities tracking task, a perceptual task analysis for hover, and fundamentals of human spatial perception. The display was implemented on a simulation environment, constructed using a virtual reality device, an ultrasound head-tracker, and a fixed-base helicopter simulator. Seven highly trained helicopter pilots were used as experimental subjects and tasked to maintain hover in the presence of aircraft positional disturbances while viewing a synthesized NVD environment and the experimental hover cues. Significant performance improvements were observed when using synthetic cue augmentation. This paper demonstrates that artificial magnification of perceptual states through synthetic cueing can be an effective method of improving night-vision helicopter hover operations.

  12. Cue-recruitment for extrinsic signals after training with low information stimuli.

    PubMed

    Jain, Anshul; Fuller, Stuart; Backus, Benjamin T

    2014-01-01

    Cue-recruitment occurs when a previously ineffective signal comes to affect the perceptual appearance of a target object, in a manner similar to the trusted cues with which the signal was put into correlation during training. Jain, Fuller and Backus reported that extrinsic signals, those not carried by the target object itself, were not recruited even after extensive training. However, recent studies have shown that training using weakened trusted cues can facilitate recruitment of intrinsic signals. The current study was designed to examine whether extrinsic signals can be recruited by putting them in correlation with weakened trusted cues. Specifically, we tested whether an extrinsic visual signal, the rotary motion direction of an annulus of random dots, and an extrinsic auditory signal, direction of an auditory pitch glide, can be recruited as cues for the rotation direction of a Necker cube. We found learning, albeit weak, for visual but not for auditory signals. These results extend the generality of the cue-recruitment phenomenon to an extrinsic signal and provide further evidence that the visual system learns to use new signals most quickly when other, long-trusted cues are unavailable or unreliable.

  13. Planning and delivery of four-dimensional radiation therapy with multileaf collimators

    NASA Astrophysics Data System (ADS)

    McMahon, Ryan L.

    This study is an investigation of the application of multileaf collimators (MLCs) to the treatment of moving anatomy with external beam radiation therapy. First, a method for delivering intensity modulated radiation therapy (IMRT) to moving tumors is presented. This method uses an MLC control algorithm that calculates appropriate MLC leaf speeds in response to feedback from real-time imaging. The algorithm does not require a priori knowledge of a tumor's motion, and is based on the concept of self-correcting DMLC leaf trajectories . This gives the algorithm the distinct advantage of allowing for correction of DMLC delivery errors without interrupting delivery. The algorithm is first tested for the case of one-dimensional (1D) rigid tumor motion in the beam's eye view (BEV). For this type of motion, it is shown that the real-time tracking algorithm results in more accurate deliveries, with respect to delivered intensity, than those which ignore motion altogether. This is followed by an appropriate extension of the algorithm to two-dimensional (2D) rigid motion in the BEV. For this type of motion, it is shown that the 2D real-time tracking algorithm results in improved accuracy (in the delivered intensity) in comparison to deliveries which ignore tumor motion or only account for tumor motion which is aligned with MLC leaf travel. Finally, a method is presented for designing DMLC leaf trajectories which deliver a specified intensity over a moving tumor without overexposing critical structures which exhibit motion patterns that differ from that of the tumor. In addition to avoiding overexposure of critical organs, the method can, in the case shown, produce deliveries that are superior to anything achievable using stationary anatomy. In this regard, the method represents a systematic way to include anatomical motion as a degree of freedom in the optimization of IMRT while producing treatment plans that are deliverable with currently available technology. These results, combined with those related to the real-time MLC tracking algorithm, show that an MLC is a promising tool to investigate for the delivery of four-dimensional radiation therapy.

  14. A hardware-oriented concurrent TZ search algorithm for High-Efficiency Video Coding

    NASA Astrophysics Data System (ADS)

    Doan, Nghia; Kim, Tae Sung; Rhee, Chae Eun; Lee, Hyuk-Jae

    2017-12-01

    High-Efficiency Video Coding (HEVC) is the latest video coding standard, in which the compression performance is double that of its predecessor, the H.264/AVC standard, while the video quality remains unchanged. In HEVC, the test zone (TZ) search algorithm is widely used for integer motion estimation because it effectively searches the good-quality motion vector with a relatively small amount of computation. However, the complex computation structure of the TZ search algorithm makes it difficult to implement it in the hardware. This paper proposes a new integer motion estimation algorithm which is designed for hardware execution by modifying the conventional TZ search to allow parallel motion estimations of all prediction unit (PU) partitions. The algorithm consists of the three phases of zonal, raster, and refinement searches. At the beginning of each phase, the algorithm obtains the search points required by the original TZ search for all PU partitions in a coding unit (CU). Then, all redundant search points are removed prior to the estimation of the motion costs, and the best search points are then selected for all PUs. Compared to the conventional TZ search algorithm, experimental results show that the proposed algorithm significantly decreases the Bjøntegaard Delta bitrate (BD-BR) by 0.84%, and it also reduces the computational complexity by 54.54%.
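
    The sketch below shows the elementary SAD-based integer block matching that zonal searches such as the TZ search are built on. It is a brute-force search for a single block, not the proposed parallel zonal/raster/refinement method; the block size and search range are illustrative assumptions.

    ```python
    # A minimal, brute-force sketch of SAD-based integer block matching, the elementary
    # operation behind zonal searches such as the TZ search (the zonal/raster/refinement
    # staging and parallel PU handling of the proposed method are not reproduced here;
    # block size and search range are illustrative assumptions).
    import numpy as np

    def best_motion_vector(ref, cur, top, left, block=16, search=8):
        """Return the integer (dy, dx) into `ref` minimizing SAD for one block of `cur`."""
        target = cur[top:top + block, left:left + block].astype(np.int32)
        best_mv, best_sad = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                    continue
                cand = ref[y:y + block, x:x + block].astype(np.int32)
                sad = int(np.abs(target - cand).sum())
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv, best_sad

    # Synthetic check: the current frame is the reference rolled by (2, -3) pixels,
    # so the block's best match in the reference lies at offset (dy, dx) = (-2, 3).
    ref = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, shift=(2, -3), axis=(0, 1))
    print(best_motion_vector(ref, cur, top=16, left=16))
    ```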

  15. Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.

    PubMed

    Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2017-06-01

    Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If vascular structure movement could be detected during these procedures, it could reduce the risk of vascular injury and conversion to open surgery. The recently proposed motion-amplifying algorithm, Eulerian Video Magnification (EVM), has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted for endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of the image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than in EVM. Motion magnification image processing technology has the potential for clinical importance as a video optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical tests.
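
    The sketch below conveys the general idea behind Eulerian-style magnification: band-pass filter each pixel's intensity over time and add an amplified copy back. It omits the spatial (pyramid) decomposition of the published EVM method, and the frame rate, pass band, and gain are illustrative assumptions.

    ```python
    # A conceptual sketch of Eulerian-style magnification: every pixel's intensity time
    # series is temporally band-pass filtered and an amplified copy is added back. The
    # spatial (pyramid) decomposition of the published EVM method is omitted, and the
    # frame rate, pass band, and gain are illustrative assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def magnify_motion(frames, fs=30.0, lo=0.8, hi=3.0, alpha=20.0):
        """frames: (T, H, W) grayscale video; returns a motion-magnified copy."""
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, frames.astype(np.float64), axis=0)   # per-pixel temporal filter
        return np.clip(frames + alpha * band, 0, 255)

    # Tiny synthetic example: a faint 1.2 Hz flicker (e.g., a pulsating vessel) is amplified.
    t = np.arange(0, 10, 1 / 30.0)
    frames = 128 + 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None] * np.ones((1, 8, 8))
    print(magnify_motion(frames).std(axis=0).max())   # temporal variation grows roughly alpha-fold
    ```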

  16. Two novel motion-based algorithms for surveillance video analysis on embedded platforms

    NASA Astrophysics Data System (ADS)

    Vijverberg, Julien A.; Loomans, Marijn J. H.; Koeleman, Cornelis J.; de With, Peter H. N.

    2010-05-01

    This paper proposes two novel motion-vector based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm for target detection uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets employing five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined in one complete analysis application. The performance of this application for target detection has been evaluated for the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49 respectively. The execution time on a PC-based platform is 36 ms. This includes the 20 ms for generating motion vectors, which are also required by the video encoder.
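
    A simplified sketch of the first idea is given below: flag blocks whose encoder motion vectors stay large over several consecutive frames to form a consistent motion mask. The thresholds, block grid, and persistence rule are illustrative assumptions, not the parameters used in the paper.

    ```python
    # A simplified sketch of building a consistent motion mask from the encoder's block
    # motion vectors: a block is flagged only if its vector magnitude stays above a
    # threshold for several consecutive frames, which suppresses spurious encoder vectors.
    # Thresholds, grid size, and the persistence rule are illustrative assumptions.
    import numpy as np

    def motion_mask(mv_field, mag_thresh=1.0, persistence=3):
        """mv_field: (T, H_blocks, W_blocks, 2) motion vectors; returns (T, H, W) boolean masks."""
        moving = np.linalg.norm(mv_field, axis=-1) > mag_thresh
        consecutive = np.zeros(moving.shape[1:], dtype=int)
        masks = []
        for frame in moving:
            consecutive = np.where(frame, consecutive + 1, 0)   # reset counter where still
            masks.append(consecutive >= persistence)
        return np.stack(masks)

    # In the full application this mask would be intersected with a simple background
    # segmentation to produce the final target-detection mask.
    ```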

  17. Create your own stimulus: Manipulating movements according to social categories

    PubMed Central

    Koppensteiner, Markus; Primes, Georg; Stephan, Pia

    2017-01-01

    People ascribe purposeful behaviour to the movements of artificial objects and social qualities to human body motion. We investigated how people associate simple motion cues with social categories. For a first rating-experiment we converted the body movements of speakers into stick-figure animations; for a second rating-experiment we used animations of one single dot. Rating-experiments were “reversed” because we asked participants to alter the movements (i.e., vertical amplitude, horizontal amplitude, and velocity) of the stimuli according to different instructions (e.g., create a stimulus of high dominance). Participants equipped stick figures and dot animations with expansive movements to represent high dominance. Expansive and fast movements (i.e., high velocity) were mainly associated with high aggressiveness. Fast movements were also associated with low friendliness, low trustworthiness, and low competence. Overall, patterns found for stick figure and dot animations were similar indicating that certain motion cues convey social information even when only a dot and no body form is visible. The “reverse approach” we propose here makes the impact of different components directly observable. The data generated by this method offers better insights into the interplay of these components and the ways in which they form meaningful patterns. The proposed method can be extended to other types of nonverbal cues and a variety of social categories. PMID:28339490

  18. Example-based human motion denoising.

    PubMed

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from motion capture data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion in addition to how well the filtered motion preserves spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps spatial-temporal patterns in captured motion data. We also extend the algorithm to fill in the missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with those in state-of-the-art motion capture data processing software such as Vicon Blade.
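
    The sketch below captures the flavour of such an objective: keep the filtered motion close to the noisy input while pulling it toward the span of a set of filter bases. Plain DCT-like orthonormal bases stand in for the learned bases, quadratic penalties replace the paper's robust error terms, and the weighting is an assumption.

    ```python
    # A minimal sketch of the flavour of objective described above: keep the filtered
    # motion close to the noisy input while pulling it toward the span of filter bases.
    # DCT-like orthonormal bases stand in for the learned bases, and quadratic penalties
    # replace the paper's robust error terms; lambda is an illustrative assumption.
    import numpy as np

    def denoise_motion(noisy, bases, lam=5.0):
        """noisy: (T,) joint trajectory; bases: (T, k) orthonormal columns.
        Minimizes ||x - noisy||^2 + lam * ||x - B B^T x||^2 in closed form."""
        T = bases.shape[0]
        P = bases @ bases.T                                   # projector onto the basis span
        A = np.eye(T) + lam * (np.eye(T) - P)                 # (I - P) is symmetric idempotent
        return np.linalg.solve(A, noisy)

    # Example with low-frequency DCT-like bases: high-frequency noise is suppressed.
    t = np.arange(100)
    B = np.stack([np.cos(np.pi * k * (t + 0.5) / 100) for k in range(8)], axis=1)
    B /= np.linalg.norm(B, axis=0)
    noisy = np.sin(2 * np.pi * t / 50) + 0.3 * np.random.default_rng(0).standard_normal(100)
    smooth = denoise_motion(noisy, B)                         # pulled toward the smooth subspace
    ```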

  19. Differential effect of visual motion adaption upon visual cortical excitability.

    PubMed

    Lubeck, Astrid J A; Van Ombergen, Angelique; Ahmad, Hena; Bos, Jelte E; Wuyts, Floris L; Bronstein, Adolfo M; Arshad, Qadeer

    2017-03-01

    The objectives of this study were 1) to probe the effects of visual motion adaptation on early visual and V5/MT cortical excitability and 2) to investigate whether changes in cortical excitability following visual motion adaptation are related to the degree of visual dependency, i.e., an overreliance on visual cues compared with vestibular or proprioceptive cues. Participants were exposed to a roll motion visual stimulus before, during, and after visual motion adaptation. At these stages, 20 transcranial magnetic stimulation (TMS) pulses at phosphene threshold values were applied over early visual and V5/MT cortical areas from which the probability of eliciting a phosphene was calculated. Before and after adaptation, participants aligned the subjective visual vertical in front of the roll motion stimulus as a marker of visual dependency. During adaptation, early visual cortex excitability decreased whereas V5/MT excitability increased. After adaptation, both early visual and V5/MT excitability were increased. The roll motion-induced tilt of the subjective visual vertical (visual dependence) was not influenced by visual motion adaptation and did not correlate with phosphene threshold or visual cortex excitability. We conclude that early visual and V5/MT cortical excitability is differentially affected by visual motion adaptation. Furthermore, excitability in the early or late visual cortex is not associated with an increase in visual reliance during spatial orientation. Our findings complement earlier studies that have probed visual cortical excitability following motion adaptation and highlight the differential role of the early visual cortex and V5/MT in visual motion processing. NEW & NOTEWORTHY We examined the influence of visual motion adaptation on visual cortex excitability and found a differential effect in V1/V2 compared with V5/MT. Changes in visual excitability following motion adaptation were not related to the degree of an individual's visual dependency. Copyright © 2017 the American Physiological Society.

  20. Novel method of extracting motion from natural movies.

    PubMed

    Suzuki, Wataru; Ichinohe, Noritaka; Tani, Toshiki; Hayami, Taku; Miyakawa, Naohisa; Watanabe, Satoshi; Takeichi, Hiroshige

    2017-11-01

    The visual system in primates can be segregated into motion and shape pathways. Interaction occurs at multiple stages along these pathways. Processing of shape-from-motion and biological motion is considered to be a higher-order integration process involving motion and shape information. However, relatively limited types of stimuli have been used in previous studies on these integration processes. We propose a new algorithm to extract object motion information from natural movies and to move random dots in accordance with the information. The object motion information is extracted by estimating the dynamics of local normal vectors of the image intensity projected onto the x-y plane of the movie. An electrophysiological experiment on two adult common marmoset monkeys (Callithrix jacchus) showed that the natural and random dot movies generated with this new algorithm yielded comparable neural responses in the middle temporal visual area. In principle, this algorithm provided random dot motion stimuli containing shape information for arbitrary natural movies. This new method is expected to expand the neurophysiological and psychophysical experimental protocols to elucidate the integration processing of motion and shape information in biological systems. The novel algorithm proposed here was effective in extracting object motion information from natural movies and provided new motion stimuli to investigate higher-order motion information processing. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  1. The effect of visual-motion time-delays on pilot performance in a simulated pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1977-01-01

    An experimental study was made to determine the effect on pilot performance of time delays in the visual and motion feedback loops of a simulated pursuit tracking task. Three major interrelated factors were identified: task difficulty either in the form of airplane handling qualities or target frequency, the amount and type of motion cues, and time delay itself. In general, the greater the task difficulty, the smaller the time delay that could exist without degrading pilot performance. Conversely, the greater the motion fidelity, the greater the time delay that could be tolerated. The effect of motion was, however, pilot dependent.

  2. Bayesian integration of position and orientation cues in perception of biological and non-biological forms.

    PubMed

    Thurman, Steven M; Lu, Hongjing

    2014-01-01

    Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
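
    The reliability-weighted combination step assumed by such a model can be written out directly; the sketch below fuses two cue estimates by inverse-variance weighting, with all numbers chosen purely for illustration.

    ```python
    # A worked sketch of the reliability-weighted fusion step assumed by such a model:
    # each cue's estimate is weighted by its inverse variance. All numbers are illustrative.
    import numpy as np

    def fuse_cues(estimates, sigmas):
        """Maximum-likelihood linear fusion of independent Gaussian cue estimates."""
        sigmas = np.asarray(sigmas, dtype=float)
        w = 1.0 / sigmas ** 2
        fused = np.dot(w / w.sum(), estimates)
        fused_sigma = np.sqrt(1.0 / w.sum())
        return fused, fused_sigma

    # Position cue: contour at 10 deg (sigma 2 deg); orientation cue: 14 deg (sigma 4 deg).
    # The fused estimate is pulled toward the more reliable position cue.
    print(fuse_cues([10.0, 14.0], [2.0, 4.0]))   # ~ (10.8, 1.79)
    ```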

  3. Evaluation of g seat augmentation of fixed-base/moving base simulation for transport landings under two visually imposed runway width conditions

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Steinmetz, G. G.

    1983-01-01

    Vertical-motion cues supplied by a g-seat to augment platform motion cues in the other five degrees of freedom were evaluated in terms of their effect on objective performance measures obtained during simulated transport landings under visual conditions. In addition to evaluating the effects of the vertical cueing, runway width and magnification effects were investigated. The g-seat was evaluated during fixed-base and moving-base operations. Although performance with the g-seat only improved only slightly over that with fixed-base operation, combined g-seat/platform operation showed no improvement over platform-only operation. When one runway width at one magnification factor was compared with another width at a different factor, the visual results indicated that the runway width probably had no effect on pilot-vehicle performance. The performance differences that were detected may be more readily attributed to the extant (existing throughout) increase in vertical velocity induced by the magnification factor used to change the runway width, rather than to the width itself.

  4. Modeling Fault Diagnosis Performance on a Marine Powerplant Simulator.

    DTIC Science & Technology

    1985-08-01

    two definitions are very similar. They emphasize that fidelity is a two-dimensional concept. They also pointed out the measurement problems. Tasks... simulator duplicates the sensory stimulation, e.g., dynamic motion cues, visual cues, etc. Psychological fidelity is simply the degree to which the trainee... functions is only acceptable if the performance is paced by the system, i.e., cues from the system serve to initiate elementary, skilled sub-routines

  5. The Effect of Perceptual Load on Attention-Induced Motion Blindness: The Efficiency of Selective Inhibition

    ERIC Educational Resources Information Center

    Hay, Julia L.; Milders, Maarten M.; Sahraie, Arash; Niedeggen, Michael

    2006-01-01

    Recent visual marking studies have shown that the carry-over of distractor inhibition can impair the ability of singletons to capture attention if the singleton and distractors share features. The current study extends this finding to first-order motion targets and distractors, clearly separated in time by a visual cue (the letter X). Target…

  6. Teleoperation of steerable flexible needles by combining kinesthetic and vibratory feedback.

    PubMed

    Pacchierotti, Claudio; Abayazid, Momen; Misra, Sarthak; Prattichizzo, Domenico

    2014-01-01

    Needle insertion in soft-tissue is a minimally invasive surgical procedure that demands high accuracy. In this respect, robotic systems with autonomous control algorithms have been exploited as the main tool to achieve high accuracy and reliability. However, for reasons of safety and responsibility, autonomous robotic control is often not desirable. Therefore, it is necessary to focus also on techniques enabling clinicians to directly control the motion of the surgical tools. In this work, we address that challenge and present a novel teleoperated robotic system able to steer flexible needles. The proposed system tracks the position of the needle using an ultrasound imaging system and computes needle's ideal position and orientation to reach a given target. The master haptic interface then provides the clinician with mixed kinesthetic-vibratory navigation cues to guide the needle toward the computed ideal position and orientation. Twenty participants carried out an experiment of teleoperated needle insertion into a soft-tissue phantom, considering four different experimental conditions. Participants were provided with either mixed kinesthetic-vibratory feedback or mixed kinesthetic-visual feedback. Moreover, we considered two different ways of computing ideal position and orientation of the needle: with or without set-points. Vibratory feedback was found more effective than visual feedback in conveying navigation cues, with a mean targeting error of 0.72 mm when using set-points, and of 1.10 mm without set-points.

  7. Seeing the world topsy-turvy: The primary role of kinematics in biological motion inversion effects

    PubMed Central

    Fitzgerald, Sue-Anne; Brooks, Anna; van der Zwan, Rick; Blair, Duncan

    2014-01-01

    Physical inversion of whole or partial human body representations typically has catastrophic consequences on the observer's ability to perform visual processing tasks. Explanations usually focus on the effects of inversion on the visual system's ability to exploit configural or structural relationships, but more recently have also implicated motion or kinematic cue processing. Here, we systematically tested the role of both on perceptions of sex from upright and inverted point-light walkers. Our data suggest that inversion results in systematic degradations of the processing of kinematic cues. Specifically and intriguingly, they reveal sex-based kinematic differences: Kinematics characteristic of females generally are resistant to inversion effects, while those of males drive systematic sex misperceptions. Implications of the findings are discussed. PMID:25469217

  8. ECG-gated interventional cardiac reconstruction for non-periodic motion.

    PubMed

    Rohkohl, Christopher; Lauritsch, Günter; Biller, Lisa; Hornegger, Joachim

    2010-01-01

    The 3-D reconstruction of cardiac vasculature using C-arm CT is an active and challenging field of research. In interventional environments, patients often have arrhythmic heart signals or cannot hold their breath during the complete data acquisition. This important group of patients cannot be reconstructed with current approaches, which strongly depend on a high degree of cardiac motion periodicity to work properly. In last year's MICCAI contribution, a first algorithm was presented that is able to estimate non-periodic 4-D motion patterns. However, to some degree that algorithm still depends on periodicity, as it requires a prior image which is obtained using a simple ECG-gated reconstruction. In this work we aim to provide a solution to this problem by developing a motion-compensated ECG-gating algorithm. It is built upon a 4-D time-continuous affine motion model which is capable of compactly describing highly non-periodic motion patterns. A stochastic optimization scheme is derived which minimizes the error between the measured projection data and the forward projection of the motion-compensated reconstruction. For evaluation, the algorithm is applied to 5 datasets of the left coronary arteries of patients who ignored the breath-hold command and/or had arrhythmic heart signals during the data acquisition. By applying the developed algorithm, the average visibility of the vessel segments could be increased by 27%. The results show that the proposed algorithm provides excellent reconstruction quality in cases where classical approaches fail. The algorithm is highly parallelizable and a clinically feasible runtime of under 4 minutes is achieved using modern graphics card hardware.

  9. A real-time dynamic-MLC control algorithm for delivering IMRT to targets undergoing 2D rigid motion in the beam's eye view.

    PubMed

    McMahon, Ryan; Berbeco, Ross; Nishioka, Seiko; Ishikawa, Masayori; Papiez, Lech

    2008-09-01

    An MLC control algorithm for delivering intensity modulated radiation therapy (IMRT) to targets that are undergoing two-dimensional (2D) rigid motion in the beam's eye view (BEV) is presented. The goal of this method is to deliver 3D-derived fluence maps over a moving patient anatomy. Target motion measured prior to delivery is first used to design a set of planned dynamic-MLC (DMLC) sliding-window leaf trajectories. During actual delivery, the algorithm relies on real-time feedback to compensate for target motion that does not agree with the motion measured during planning. The methodology is based on an existing one-dimensional (1D) algorithm that uses on-the-fly intensity calculations to appropriately adjust the DMLC leaf trajectories in real time during exposure delivery [McMahon et al., Med. Phys. 34, 3211-3223 (2007)]. To extend the 1D algorithm's application to 2D target motion, a real-time leaf-pair shifting mechanism has been developed. Target motion that is orthogonal to leaf travel is tracked by appropriately shifting the positions of all MLC leaves. The performance of the tracking algorithm was tested for a single beam of a fractionated IMRT treatment, using a clinically derived intensity profile and a 2D target trajectory based on measured patient data. Comparisons were made between 2D tracking, 1D tracking, and no tracking. The impact of the tracking lag time and the frequency of real-time imaging were investigated. A study of the dependence of the algorithm's performance on the level of agreement between the motion measured during planning and delivery was also included. Results demonstrated that tracking both components of the 2D motion (i.e., parallel and orthogonal to leaf travel) results in delivered fluence profiles that are superior to those that track only the component of motion parallel to leaf travel. Tracking lag time effects may lead to relatively large intensity delivery errors compared to the other sources of error investigated. However, the algorithm presented is robust in the sense that it does not rely on a high level of agreement between the target motion measured during treatment planning and delivery.
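
    The sketch below is a schematic of the 2D correction idea (not the authors' controller): motion parallel to leaf travel offsets each planned leaf-tip position, while motion orthogonal to leaf travel shifts the aperture by a whole number of leaf pairs. Leaf width, sign conventions, and the example aperture are assumptions.

    ```python
    # A schematic sketch (not the authors' controller) of the 2D correction idea: motion
    # parallel to leaf travel offsets every leaf-tip position, while motion orthogonal to
    # leaf travel shifts the aperture by a whole number of leaf pairs. Leaf width, sign
    # conventions, and the example aperture are illustrative assumptions.
    import numpy as np

    def track_aperture(planned_left, planned_right, target_shift_mm, leaf_width_mm=5.0):
        """planned_*: (n_pairs,) planned leaf-tip positions (mm) along the leaf-travel axis."""
        dx, dy = target_shift_mm                      # dx: parallel, dy: orthogonal to travel
        rows = int(round(dy / leaf_width_mm))         # whole leaf pairs to shift across
        # np.roll wraps at the field edge; a real controller would park/open edge leaves instead.
        left = np.roll(planned_left, rows) + dx
        right = np.roll(planned_right, rows) + dx
        return left, right

    # Example: the target has moved 3 mm along leaf travel and 11 mm across it (~2 leaf pairs).
    planned_left, planned_right = np.linspace(-20, -5, 8), np.linspace(5, 20, 8)
    print(track_aperture(planned_left, planned_right, (3.0, 11.0)))
    ```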

  10. Cardiac motion correction based on partial angle reconstructed images in x-ray CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seungeon; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr

    2015-05-15

    Purpose: Cardiac x-ray CT imaging is still challenging due to heart motion, which cannot be ignored even with the current rotation speed of the equipment. In response, many algorithms have been developed to compensate remaining motion artifacts by estimating the motion using projection data or reconstructed images. In these algorithms, accurate motion estimation is critical to the compensated image quality. In addition, since the scan range is directly related to the radiation dose, it is preferable to minimize the scan range in motion estimation. In this paper, the authors propose a novel motion estimation and compensation algorithm using a sinogram with a rotation angle of less than 360°. The algorithm estimates the motion of the whole heart area using two opposite 3D partial angle reconstructed (PAR) images and compensates the motion in the reconstruction process. Methods: A CT system scans the thoracic area including the heart over an angular range of 180° + α + β, where α and β denote the detector fan angle and an additional partial angle, respectively. The obtained cone-beam projection data are converted into cone-parallel geometry via row-wise fan-to-parallel rebinning. Two conjugate 3D PAR images, whose center projection angles are separated by 180°, are then reconstructed with an angular range of β, which is considerably smaller than a short scan range of 180° + α. Although these images include limited view angle artifacts that disturb accurate motion estimation, they have considerably better temporal resolution than a short scan image. Hence, after preprocessing these artifacts, the authors estimate a motion model during a half rotation for a whole field of view via nonrigid registration between the images. Finally, motion-compensated image reconstruction is performed at a target phase by incorporating the estimated motion model. The target phase is selected as that corresponding to a view angle that is orthogonal to the center view angles of two conjugate PAR images. To evaluate the proposed algorithm, digital XCAT and physical dynamic cardiac phantom datasets are used. The XCAT phantom datasets were generated with heart rates of 70 and 100 bpm, respectively, by assuming a system rotation time of 300 ms. A physical dynamic cardiac phantom was scanned using a slowly rotating XCT system so that the effective heart rate will be 70 bpm for a system rotation speed of 300 ms. Results: In the XCAT phantom experiment, motion-compensated 3D images obtained from the proposed algorithm show coronary arteries with fewer motion artifacts for all phases. Moreover, object boundaries contaminated by motion are well restored. Even though object positions and boundary shapes are still somewhat different from the ground truth in some cases, the authors see that visibilities of coronary arteries are improved noticeably and motion artifacts are reduced considerably. The physical phantom study also shows that the visual quality of motion-compensated images is greatly improved. Conclusions: The authors propose a novel PAR image-based cardiac motion estimation and compensation algorithm. The algorithm requires an angular scan range of less than 360°. The excellent performance of the proposed algorithm is illustrated by using digital XCAT and physical dynamic cardiac phantom datasets.

  11. Modification of Motion Perception and Manual Control Following Short-Duration Spaceflight

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Vanya, R. D.; Esteves, J. T.; Rupert, A. H.; Clement, G.

    2011-01-01

    Adaptive changes during space flight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination and spatial disorientation following G-transitions. This ESA-NASA study was designed to examine both the physiological basis and operational implications for disorientation and tilt-translation disturbances following short-duration spaceflights. The goals of this study were to (1) examine the effects of stimulus frequency on adaptive changes in motion perception during passive tilt and translation motion, (2) quantify decrements in manual control of tilt motion, and (3) evaluate vibrotactile feedback as a sensorimotor countermeasure.

  12. Improving Pulse Rate Measurements during Random Motion Using a Wearable Multichannel Reflectance Photoplethysmograph.

    PubMed

    Warren, Kristen M; Harvey, Joshua R; Chon, Ki H; Mendelson, Yitzhak

    2016-03-07

    Photoplethysmographic (PPG) waveforms are used to acquire pulse rate (PR) measurements from pulsatile arterial blood volume. PPG waveforms are highly susceptible to motion artifacts (MA), limiting the implementation of PR measurements in mobile physiological monitoring devices. Previous studies have shown that multichannel photoplethysmograms can successfully acquire diverse signal information during simple, repetitive motion, leading to differences in motion tolerance across channels. In this paper, we investigate the performance of a custom-built multichannel forehead-mounted photoplethysmographic sensor under a variety of intense motion artifacts. We introduce an advanced multichannel template-matching algorithm that chooses the channel with the least motion artifact to calculate PR for each time instant. We show that for a wide variety of random motion, channels respond differently to motion artifacts, and the multichannel estimate outperforms single-channel estimates in terms of motion tolerance, signal quality, and PR errors. We have acquired 31 data sets consisting of PPG waveforms corrupted by random motion and show that the accuracy of PR measurements achieved was increased by up to 2.7 bpm when the multichannel-switching algorithm was compared to individual channels. The percentage of PR measurements with error ≤ 5 bpm during motion increased by 18.9% when the multichannel switching algorithm was compared to the mean PR from all channels. Moreover, our algorithm enables automatic selection of the best signal fidelity channel at each time point among the multichannel PPG data.
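
    A simplified sketch of the per-window channel selection is shown below: each channel is correlated against a clean pulse template and the best-matching channel is used for that window. The scoring, window length, and template handling are illustrative assumptions rather than the authors' exact procedure.

    ```python
    # A simplified sketch of per-window channel selection: each channel is correlated
    # against a clean pulse template, and the channel with the best match is used for
    # that window. The scoring, window length, and template handling are illustrative
    # assumptions, not the authors' exact template-matching procedure.
    import numpy as np

    def best_channel(window, template):
        """window: (n_channels, n_samples) PPG segment; template: (n_samples,) clean pulse."""
        t = (template - template.mean()) / (template.std() + 1e-12)
        scores = []
        for ch in window:
            z = (ch - ch.mean()) / (ch.std() + 1e-12)
            scores.append(float(np.dot(z, t)) / len(t))   # normalized correlation in [-1, 1]
        return int(np.argmax(scores)), scores

    # The pulse rate for this window would then be estimated from the selected channel only,
    # and the template can be refreshed from recent artifact-free beats.
    ```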

  13. Use of a machine learning algorithm to classify expertise: analysis of hand motion patterns during a simulated surgical task.

    PubMed

    Watson, Robert A

    2014-08-01

    To test the hypothesis that machine learning algorithms increase the predictive power to classify surgical expertise using surgeons' hand motion patterns. In 2012 at the University of North Carolina at Chapel Hill, 14 surgical attendings and 10 first- and second-year surgical residents each performed two bench model venous anastomoses. During the simulated tasks, the participants wore an inertial measurement unit on the dorsum of their dominant (right) hand to capture their hand motion patterns. The pattern from each bench model task performed was preprocessed into a symbolic time series and labeled as expert (attending) or novice (resident). The labeled hand motion patterns were processed and used to train a Support Vector Machine (SVM) classification algorithm. The trained algorithm was then tested for discriminative/predictive power against unlabeled (blinded) hand motion patterns from tasks not used in the training. The Lempel-Ziv (LZ) complexity metric was also measured from each hand motion pattern, with an optimal threshold calculated to separately classify the patterns. The LZ metric classified unlabeled (blinded) hand motion patterns into expert and novice groups with an accuracy of 70% (sensitivity 64%, specificity 80%). The SVM algorithm had an accuracy of 83% (sensitivity 86%, specificity 80%). The results confirmed the hypothesis. The SVM algorithm increased the predictive power to classify blinded surgical hand motion patterns into expert versus novice groups. With further development, the system used in this study could become a viable tool for low-cost, objective assessment of procedural proficiency in a competency-based curriculum.
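
    The sketch below illustrates the two analyses on synthetic stand-in data: a Lempel-Ziv (LZ76) complexity measure computed on a symbolized motion sequence, and an SVM classifier trained on simple features derived from it. The data generator, features, and parameters are assumptions for illustration only.

    ```python
    # A minimal sketch of the two analyses described above: a Lempel-Ziv (LZ76) complexity
    # measure on a symbolized motion sequence, and an SVM trained on simple pattern features.
    # The synthetic data, feature choice, and parameters are illustrative assumptions, not
    # the study's actual preprocessing.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def lz_complexity(s):
        """LZ76 complexity of a symbol sequence (Kaspar-Schuster counting scheme)."""
        i, k, l, c, k_max, n = 0, 1, 1, 1, 1, len(s)
        while True:
            if s[i + k - 1] == s[l + k - 1]:
                k += 1
                if l + k > n:
                    c += 1
                    break
            else:
                k_max = max(k, k_max)
                i += 1
                if i == l:
                    c += 1
                    l += k_max
                    if l + 1 > n:
                        break
                    i, k, k_max = 0, 1, 1
                else:
                    k = 1
        return c

    # Synthetic stand-in data: "experts" produce more regular (lower-complexity) sequences.
    rng = np.random.default_rng(0)
    def make_sequence(expert):
        change = rng.random(500) < (0.15 if expert else 0.45)
        return (np.cumsum(change) % 4).tolist()               # 4-symbol motion sequence

    X, y = [], []
    for label in (1, 0):                                       # 1 = attending, 0 = resident
        for _ in range(20):
            seq = make_sequence(expert=bool(label))
            X.append([lz_complexity(seq), float(np.mean(np.diff(seq) != 0))])
            y.append(label)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
    clf = SVC(kernel="rbf").fit(Xtr, ytr)
    print("held-out accuracy:", clf.score(Xte, yte))           # typically high on this toy data
    ```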

  14. Complex motion measurement using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Jianjun; Tu, Dan; Shen, Zhenkang

    1997-12-01

    The genetic algorithm (GA) is an optimization technique that provides a nontraditional approach to many nonlinear, complicated problems. The notion of motion measurement using a genetic algorithm arises from the fact that motion measurement is essentially an optimization process based on some criterion. In this paper, we propose a complex motion measurement method using a genetic algorithm with a block-matching criterion. The following three problems are mainly discussed and solved in the paper: (1) an adaptive method is applied to modify the GA control parameters that are critical to its performance, together with an elitism strategy; (2) an evaluation function for motion measurement is derived for the GA based on the block-matching technique; (3) a hill-climbing (HC) method is employed in a hybrid scheme to assist the GA's search for the globally optimal solution. Some other related problems are also discussed. Experimental results are presented at the end of the paper. We employ six motion parameters for measurement in our experiments. The results show that the performance of our GA is good and that it can find the object motion accurately and rapidly.
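
    A toy version of the approach is sketched below: candidate motion parameters are scored with a block-matching (SAD) fitness and evolved with elitism and a simple adaptive mutation schedule. Only a two-parameter translation is estimated and the hybrid hill-climbing step is omitted; all settings are illustrative assumptions.

    ```python
    # A toy sketch of GA-based motion measurement: candidate motion parameters are scored
    # by a block-matching (SAD) fitness and evolved with elitism and a simple adaptive
    # mutation schedule. Only a 2-parameter translation is estimated here (the paper used
    # six motion parameters) and the hybrid hill-climbing step is omitted.
    import numpy as np
    from scipy.ndimage import gaussian_filter, shift as nd_shift

    def fitness(params, ref, cur):
        dy, dx = params
        warped = nd_shift(ref, (dy, dx), order=1, mode="nearest")
        return -np.abs(warped - cur).mean()               # negative SAD: higher is better

    def ga_motion(ref, cur, pop_size=30, generations=40, bounds=10.0, seed=0):
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-bounds, bounds, size=(pop_size, 2))
        sigma = 2.0                                        # mutation scale, shrunk over time
        for _ in range(generations):
            scores = np.array([fitness(p, ref, cur) for p in pop])
            order = np.argsort(scores)[::-1]
            elite = pop[order[: max(2, pop_size // 5)]]    # elitism: keep the best ~20%
            parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
            children = parents.mean(axis=1)                # arithmetic crossover
            children = children + rng.normal(0.0, sigma, size=children.shape)
            children[: len(elite)] = elite                 # carry elites over unchanged
            pop, sigma = children, max(0.2, sigma * 0.9)   # simple adaptive mutation decay
        scores = np.array([fitness(p, ref, cur) for p in pop])
        return pop[np.argmax(scores)]

    ref = gaussian_filter(np.random.default_rng(1).random((64, 64)), sigma=3)
    cur = nd_shift(ref, (3.5, -2.0), order=1, mode="nearest")
    print(ga_motion(ref, cur))                             # typically converges near (3.5, -2.0)
    ```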

  15. Application and assessment of a robust elastic motion correction algorithm to dynamic MRI.

    PubMed

    Herrmann, K-H; Wurdinger, S; Fischer, D R; Krumbein, I; Schmitt, M; Hermosillo, G; Chaudhuri, K; Krishnan, A; Salganicoff, M; Kaiser, W A; Reichenbach, J R

    2007-01-01

    The purpose of this study was to assess the performance of a new motion correction algorithm. Twenty-five dynamic MR mammography (MRM) data sets and 25 contrast-enhanced three-dimensional peripheral MR angiographic (MRA) data sets that were affected by patient motion of varying severity were selected retrospectively from routine examinations. Anonymized data were registered by a new experimental elastic motion correction algorithm. The algorithm works by computing a similarity measure for the two volumes that takes into account expected signal changes due to the presence of a contrast agent while penalizing other signal changes caused by patient motion. A conjugate gradient method is used to find the best possible set of motion parameters that maximizes the similarity measures across the entire volume. Images before and after correction were visually evaluated and scored by experienced radiologists with respect to reduction of motion, improvement of image quality, and disappearance of existing lesions or creation of artifactual lesions. It was found that the correction improves image quality (76% for MRM and 96% for MRA) and diagnosability (60% for MRM and 96% for MRA).

  16. SU-E-J-115: Correlation of Displacement Vector Fields Calculated by Deformable Image Registration Algorithms with Motion Parameters of CT Images with Well-Defined Targets and Controlled-Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaskowiak, J; Ahmad, S; Ali, I

    Purpose: To investigate the correlation of displacement vector fields (DVF) calculated by deformable image registration algorithms with motion parameters in helical, axial, and cone-beam CT images with motion artifacts. Methods: A mobile thorax phantom with well-defined targets of different sizes, made from water-equivalent material and inserted in foam to simulate lung lesions, was used. The thorax phantom was imaged with helical, axial, and cone-beam CT. The phantom was moved with a cyclic motion with different motion amplitudes and frequencies along the superior-inferior direction. Different deformable image registration algorithms, including demons, fast demons, Horn-Schunck, and iterative optical flow from the DIRART software, were used to deform CT images of the phantom with different motion patterns. The CT images of the mobile phantom were deformed to CT images of the stationary phantom. Results: The values of the displacement vectors calculated by the deformable image registration algorithms correlated strongly with motion amplitude: large displacement vectors were calculated for CT images with large motion amplitudes. For example, the maximal displacement vectors were nearly equal to the motion amplitudes (5 mm, 10 mm, or 20 mm) at the interfaces between the mobile targets and lung tissue, while the minimal displacement vectors were nearly equal to the negative of the motion amplitudes. The maximal and minimal displacement vectors matched the edges of the blurred targets along the Z-axis (motion direction), while DVFs were small in the other directions. This indicates that the edges blurred by phantom motion were shifted largely to match the actual target edge; these shifts were nearly equal to the motion amplitude. Conclusions: The DVF from deformable image registration algorithms correlated well with the motion amplitude of well-defined mobile targets. This can be used to extract motion parameters such as amplitude. However, as motion amplitudes increased, image artifacts increased significantly, which limited image quality and resulted in poor correlation between the motion amplitude and DVF.

  17. Revised motion estimation algorithm for PROPELLER MRI.

    PubMed

    Pipe, James G; Gibbs, Wende N; Li, Zhiqiang; Karis, John P; Schar, Michael; Zwart, Nicholas R

    2014-08-01

    To introduce a new algorithm for estimating data shifts (used for both rotation and translation estimates) for motion-corrected PROPELLER MRI. The method estimates shifts for all blades jointly, emphasizing blade-pair correlations that are both strong and more robust to noise. The heads of three volunteers were scanned using a PROPELLER acquisition while they exhibited various amounts of motion. All data were reconstructed twice, using motion estimates from the original and new algorithm. Two radiologists independently and blindly compared 216 image pairs from these scans, ranking the left image as substantially better or worse than, slightly better or worse than, or equivalent to the right image. In the aggregate of 432 scores, the new method was judged substantially better than the old method 11 times, and was never judged substantially worse. The new algorithm compared favorably with the old in its ability to estimate bulk motion in a limited study of volunteer motion. A larger study of patients is planned for future work. Copyright © 2013 Wiley Periodicals, Inc.

  18. The Influence of Motion Cues on Driver-Vehicle Performance in a Simulator

    NASA Technical Reports Server (NTRS)

    Repa, B. S.; Leucht, P. M.; Wierwille, W. W.

    1981-01-01

    Four different motion-base configurations were studied in a driving simulator. Vehicles with different response characteristics were simulated in each motion configuration, and the effects of the vehicle characteristics on driver-vehicle system performance, driver control activity, and driver opinion ratings of vehicle performance during driving were compared across the motion configurations. The data show that: (1) the effects of changes in vehicle characteristics on the different objective and subjective measures of driver-vehicle performance are not disguised by the lack of physical motion; (2) a fixed-base simulator can be used to draw inferences despite the lack of motion; (3) the presence of motion tends to reduce path-keeping errors and driver control activity; (4) roll and yaw motions are recommended because of their marked influence on driver-vehicle performance; and (5) the importance of motion increases as the driving maneuvers become more extreme.

  19. Influence of Vibrotactile Feedback on Controlling Tilt Motion After Spaceflight

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Rupert, A. H.; Vanya, R. D.; Esteves, J. T.; Clement, G.

    2011-01-01

    We hypothesize that adaptive changes in how inertial cues from the vestibular system are integrated with other sensory information lead to perceptual disturbances and impaired manual control following transitions between gravity environments. The primary goals of this ongoing post-flight investigation are to quantify decrements in manual control of tilt motion following short-duration spaceflight and to evaluate vibrotactile feedback of tilt as a sensorimotor countermeasure. METHODS. Data are currently being collected on 9 astronaut subjects during 3 preflight sessions and during the first 8 days after Shuttle landings. Variable radius centrifugation (216 deg/s, <20 cm radius) in a darkened room is utilized to elicit otolith reflexes in the lateral plane without concordant canal or visual cues. A Tilt-Translation Sled (TTS) is capable of synchronizing pitch tilt with fore-aft translation to align the resultant gravitoinertial vector with the longitudinal body axis, thereby eliciting canal reflexes without concordant otolith or visual cues. A simple four-tactor system was implemented to provide feedback when tilt position exceeded predetermined levels in either device. Closed-loop nulling tasks are performed during random tilt steps or sum-of-sines (TTS only) with and without vibrotactile feedback of chair position. RESULTS. On landing day, manual control performance without vibrotactile feedback was reduced by >30% based on the gain or the amount of tilt disturbance successfully nulled. Manual control performance tended to return to baseline levels within 1-2 days following landing. Root-mean-square position error and tilt velocity were significantly reduced with vibrotactile feedback. CONCLUSIONS. These preliminary results are consistent with our hypothesis that adaptive changes in vestibular processing correspond to reduced manual control performance following G-transitions. A simple vibrotactile prosthesis improves the ability to null out tilt motion within a limited range of motion disturbances.

  20. Improving best-phase image quality in cardiac CT by motion correction with MAM optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohkohl, Christopher; Bruder, Herbert; Stierstorfer, Karl

    2013-03-15

    Purpose: Research in image reconstruction for cardiac CT aims at using motion correction algorithms to improve the image quality of the coronary arteries. The key to those algorithms is motion estimation, which is currently based on 3-D/3-D registration to align the structures of interest in images acquired in multiple heart phases. The need for an extended scan data range covering several heart phases is critical in terms of radiation dose to the patient and limits the clinical potential of the method. Furthermore, literature reports only slight quality improvements of the motion corrected images when compared to the most quiet phase (best-phase) that was actually used for motion estimation. In this paper a motion estimation algorithm is proposed which does not require an extended scan range but works with a short scan data interval, and which markedly improves the best-phase image quality. Methods: Motion estimation is based on the definition of motion artifact metrics (MAM) to quantify motion artifacts in a 3-D reconstructed image volume. The authors use two different MAMs, entropy and positivity. By adjusting the motion field parameters, the MAM of the resulting motion-compensated reconstruction is optimized using a gradient descent procedure. In this way motion artifacts are minimized. For a fast and practical implementation, only analytical methods are used for motion estimation and compensation. Both the MAM-optimization and a 3-D/3-D registration-based motion estimation algorithm were investigated by means of a computer-simulated vessel with a cardiac motion profile. Image quality was evaluated using normalized cross-correlation (NCC) with the ground truth template and root-mean-square deviation (RMSD). Four coronary CT angiography patient cases were reconstructed to evaluate the clinical performance of the proposed method. Results: For the MAM-approach, the best-phase image quality could be improved for all investigated heart phases, with a maximum improvement of the NCC value by 100% and of the RMSD value by 81%. The corresponding maximum improvements for the registration-based approach were 20% and 40%. In phases with very rapid motion the registration-based algorithm obtained better image quality, while the image quality of the MAM algorithm was superior in phases with less motion. The image quality improvement of the MAM optimization was visually confirmed for the different clinical cases. Conclusions: The proposed method allows a software-based best-phase image quality improvement in coronary CT angiography. A short scan data interval at the target heart phase is sufficient; no additional scan data in other cardiac phases are required. The algorithm is therefore directly applicable to any standard cardiac CT acquisition protocol.
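
    Of the two motion artifact metrics named above, the entropy MAM is easy to state. The sketch below computes it for a reconstructed volume under the common assumption that motion artifacts broaden the gray-value histogram and raise entropy; the bin count is arbitrary, and the surrounding gradient-descent loop over motion-field parameters is omitted.

```python
import numpy as np

def entropy_mam(volume, n_bins=256):
    """Image-entropy motion-artifact metric for a reconstructed CT volume.

    Assumption: motion artifacts (streaks, blur) broaden the gray-value
    histogram and raise entropy, so the optimizer would minimize this value
    over the motion-field parameters.
    """
    hist, _ = np.histogram(np.asarray(volume).ravel(), bins=n_bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log(p)))
```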

  1. Simultaneous motion estimation and image reconstruction (SMEIR) for 4D cone-beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jing; Gu, Xuejun

    2013-10-15

    Purpose: Image reconstruction and motion model estimation in four-dimensional cone-beam CT (4D-CBCT) are conventionally handled as two sequential steps. Due to the limited number of projections at each phase, the image quality of 4D-CBCT is degraded by view aliasing artifacts, and the accuracy of subsequent motion modeling is decreased by the inferior 4D-CBCT. The objective of this work is to enhance both the image quality of 4D-CBCT and the accuracy of motion model estimation with a novel strategy enabling simultaneous motion estimation and image reconstruction (SMEIR). Methods: The proposed SMEIR algorithm consists of two alternating steps: (1) model-based iterative image reconstruction to obtain a motion-compensated primary CBCT (m-pCBCT) and (2) motion model estimation to obtain an optimal set of deformation vector fields (DVFs) between the m-pCBCT and other 4D-CBCT phases. The motion-compensated image reconstruction is based on the simultaneous algebraic reconstruction technique (SART) coupled with total variation minimization. During the forward- and backprojection of SART, measured projections from an entire set of 4D-CBCT are used for reconstruction of the m-pCBCT by utilizing the updated DVF. The DVF is estimated by matching the forward projection of the deformed m-pCBCT and measured projections of other phases of 4D-CBCT. The performance of the SMEIR algorithm is quantitatively evaluated on a 4D NCAT phantom. The quality of reconstructed 4D images and the accuracy of tumor motion trajectory are assessed by comparing with those resulting from conventional sequential 4D-CBCT reconstructions (FDK and total variation minimization) and motion estimation (demons algorithm). The performance of the SMEIR algorithm is further evaluated by reconstructing a lung cancer patient 4D-CBCT. Results: Image quality of 4D-CBCT is greatly improved by the SMEIR algorithm in both phantom and patient studies. When all projections are used to reconstruct a 3D-CBCT by FDK, motion-blurring artifacts are present, leading to a 24.4% relative reconstruction error in the NCAT phantom. View aliasing artifacts are present in 4D-CBCT reconstructed by FDK from 20 projections, with a relative error of 32.1%. When total variation minimization is used to reconstruct 4D-CBCT, the relative error is 18.9%. Image quality of 4D-CBCT is substantially improved by using the SMEIR algorithm and relative error is reduced to 7.6%. The maximum error (MaxE) of tumor motion determined from the DVF obtained by demons registration on a FDK-reconstructed 4D-CBCT is 3.0, 2.3, and 7.1 mm along left–right (L-R), anterior–posterior (A-P), and superior–inferior (S-I) directions, respectively. From the DVF obtained by demons registration on 4D-CBCT reconstructed by total variation minimization, the MaxE of tumor motion is reduced to 1.5, 0.5, and 5.5 mm along L-R, A-P, and S-I directions. From the DVF estimated by the SMEIR algorithm, the MaxE of tumor motion is further reduced to 0.8, 0.4, and 1.5 mm along L-R, A-P, and S-I directions, respectively. Conclusions: The proposed SMEIR algorithm is able to estimate a motion model and reconstruct motion-compensated 4D-CBCT. The SMEIR algorithm improves image reconstruction accuracy of 4D-CBCT and tumor motion trajectory estimation accuracy as compared to conventional sequential 4D-CBCT reconstruction and motion estimation.

  2. Modeling heading and path perception from optic flow in the case of independently moving objects

    PubMed Central

    Raudies, Florian; Neumann, Heiko

    2013-01-01

    Humans are usually accurate when estimating heading or path from optic flow, even in the presence of independently moving objects (IMOs) in an otherwise rigid scene. To invoke significant biases in perceived heading, IMOs have to be large and obscure the focus of expansion (FOE) in the image plane, which is the point of approach. For the estimation of path during curvilinear self-motion no significant biases were found in the presence of IMOs. What makes humans robust in their estimation of heading or path using optic flow? We derive analytical models of optic flow for linear and curvilinear self-motion using geometric scene models. Heading biases of a linear least squares method, which builds upon these analytical models, are large, larger than those reported for humans. This motivated us to study segmentation cues that are available from optic flow. We derive models of accretion/deletion, expansion/contraction, acceleration/deceleration, local spatial curvature, and local temporal curvature, to be used as cues to segment an IMO from the background. Integrating these segmentation cues into our method of estimating heading or path now explains human psychophysical data and extends, as well as unifies, previous investigations. Our analysis suggests that various cues available from optic flow help to segment IMOs and, thus, make humans' heading and path perception robust in the presence of such IMOs. PMID:23554589

  3. Efficiencies for parts and wholes in biological-motion perception.

    PubMed

    Bromfield, W Drew; Gold, Jason M

    2017-10-01

    People can reliably infer the actions, intentions, and mental states of fellow humans from body movements (Blake & Shiffrar, 2007). Previous research on such biological-motion perception has suggested that the movements of the feet may play a particularly important role in making certain judgments about locomotion (Chang & Troje, 2009; Troje & Westhoff, 2006). One account of this effect is that the human visual system may have evolved specialized processes that are efficient for extracting information carried by the feet (Troje & Westhoff, 2006). Alternatively, the motion of the feet may simply be more discriminable than that of other parts of the body. To dissociate these two possibilities, we measured people's ability to discriminate the walking direction of stimuli in which individual body parts (feet, hands) were removed or shown in isolation. We then compared human performance to that of a statistically optimal observer (Gold, Tadin, Cook, & Blake, 2008), giving us a measure of humans' discriminative ability independent of the information available (a quantity known as efficiency). We found that efficiency was highest when the hands and the feet were shown in isolation. A series of follow-up experiments suggested that observers were relying on a form-based cue with the isolated hands (specifically, the orientation of their path through space) and a motion-based cue with the isolated feet to achieve such high efficiencies. We relate our findings to previous proposals of a distinction between form-based and motion-based mechanisms in biological-motion perception.

  4. Acoustic and perceptual effects of magnifying interaural difference cues in a simulated "binaural" hearing aid.

    PubMed

    de Taillez, Tobias; Grimm, Giso; Kollmeier, Birger; Neher, Tobias

    2018-06-01

    To investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions using both technical and perceptual measurements. To also investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC). Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality. Listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each). IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced distortion. IM influenced neither speech intelligibility nor speech quality. The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.

  5. Clustering Of Left Ventricular Wall Motion Patterns

    NASA Astrophysics Data System (ADS)

    Bjelogrlic, Z.; Jakopin, J.; Gyergyek, L.

    1982-11-01

    A method for detection of wall regions with similar motion was presented. A model based on local direction information was used to measure the left ventricular wall motion from a cineangiographic sequence. Three time functions were used to define segmental motion patterns: the distance of a ventricular contour segment from the mean contour, the velocity of a segment, and its acceleration. Motion patterns were clustered by the UPGMA algorithm and by an algorithm based on the K-nearest neighbor classification rule.
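
    A compact sketch of the clustering step, assuming each segmental motion pattern has been flattened into a feature vector built from its distance, velocity, and acceleration time functions; SciPy's average-linkage clustering corresponds to UPGMA.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def cluster_wall_motion(patterns, n_clusters=4):
    """Group segmental motion patterns by UPGMA (average-linkage) clustering.

    patterns: (n_segments, n_features) array, e.g. the concatenated distance,
    velocity, and acceleration time functions of each contour segment.
    Returns an integer cluster label per segment.
    """
    condensed = pdist(np.asarray(patterns, dtype=float), metric='euclidean')
    tree = linkage(condensed, method='average')   # 'average' linkage is UPGMA
    return fcluster(tree, t=n_clusters, criterion='maxclust')
```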

  6. Gait parameter control timing with dynamic manual contact or visual cues.

    PubMed

    Rabin, Ely; Shi, Peter; Werner, William

    2016-06-01

    We investigated the timing of gait parameter changes (stride length, peak toe velocity, and double-, single-support, and complete step duration) to control gait speed. Eleven healthy participants adjusted their gait speed on a treadmill to maintain a constant distance between them and a fore-aft oscillating cue (a place on a conveyor belt surface). The experimental design balanced conditions of cue modality (vision: eyes-open; manual contact: eyes-closed while touching the cue); treadmill speed (0.2, 0.4, 0.85, and 1.3 m/s); and cue motion (none, ±10 cm at 0.09, 0.11, and 0.18 Hz). Correlation analyses revealed a number of temporal relationships between gait parameters and cue speed. The results suggest that neural control ranged from feedforward to feedback. Specifically, step length preceded cue velocity during double-support duration suggesting anticipatory control. Peak toe velocity nearly coincided with its most-correlated cue velocity during single-support duration. The toe-off concluding step and double-support durations followed their most-correlated cue velocity, suggesting feedback control. Cue-tracking accuracy and cue velocity correlations with timing parameters were higher with the manual contact cue than visual cue. The cue/gait timing relationships generalized across cue modalities, albeit with greater delays of step-cycle events relative to manual contact cue velocity. We conclude that individual kinematic parameters of gait are controlled to achieve a desired velocity at different specific times during the gait cycle. The overall timing pattern of instantaneous cue velocities associated with different gait parameters is conserved across cues that afford different performance accuracies. This timing pattern may be temporally shifted to optimize control. Different cue/gait parameter latencies in our nonadaptation paradigm provide general-case evidence of the independent control of gait parameters previously demonstrated in gait adaptation paradigms. Copyright © 2016 the American Physiological Society.

  7. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimation of the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back projection of the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of the algorithm's quality by comparing the trajectories estimated by the algorithm with data from the motion capture system.
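
    Assuming the optical flow has already been back-projected to the ground plane, the instantaneous center of rotation and angular speed follow from a linear least-squares fit of the planar rotation model v = omega x (p - c). The sketch below shows only that fitting step, not the Horn-Schunck flow computation.

```python
import numpy as np

def estimate_icr(points, flow):
    """Least-squares fit of the planar rotation model v = omega x (p - c).

    points: (N, 2) ground-plane coordinates of tracked pixels (already back-projected)
    flow:   (N, 2) corresponding ground-plane flow vectors
    Returns (omega, icr), where icr is the instantaneous center of rotation
    or None if the motion is (nearly) pure translation.
    """
    n = len(points)
    px, py = points[:, 0], points[:, 1]
    vx, vy = flow[:, 0], flow[:, 1]
    # Unknowns: omega, a = omega * c_x, b = omega * c_y, using
    # v_x = -omega * p_y + b  and  v_y = omega * p_x - a
    A = np.zeros((2 * n, 3))
    A[:n, 0] = -py
    A[:n, 2] = 1.0
    A[n:, 0] = px
    A[n:, 1] = -1.0
    rhs = np.concatenate([vx, vy])
    (omega, a, b), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    icr = np.array([a / omega, b / omega]) if abs(omega) > 1e-9 else None
    return omega, icr
```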

  8. Rotating columns: Relating structure-from-motion, accretion/deletion, and figure/ground

    PubMed Central

    Froyen, Vicky; Feldman, Jacob; Singh, Manish

    2013-01-01

    We present a novel phenomenon involving an interaction between accretion deletion, figure-ground interpretation, and structure-from-motion. Our displays contain alternating light and dark vertical regions in which random-dot textures moved horizontally at constant speed but in opposite directions in alternating regions. This motion is consistent with all the light regions in front, with the dark regions completing amodally into a single large surface moving in the background, or vice versa. Surprisingly, the regions that are perceived as figural are also perceived as 3-D volumes rotating in depth (like rotating columns)—despite the fact that dot motion is not consistent with 3-D rotation. In a series of experiments, we found we could manipulate which set of regions is perceived as rotating volumes simply by varying known geometric cues to figure ground, including convexity, parallelism, symmetry, and relative area. Subjects indicated which colored regions they perceived as rotating. For our displays we found convexity to be a stronger cue than either symmetry or parallelism. We furthermore found a smooth monotonic decay of the proportion by which subjects perceive symmetric regions as figural, as a function of their relative area. Our results reveal an intriguing new interaction between accretion-deletion, figure-ground, and 3-D motion that is not captured by existing models. They also provide an effective tool for measuring figure-ground perception. PMID:23946432

  9. Rotating columns: relating structure-from-motion, accretion/deletion, and figure/ground.

    PubMed

    Froyen, Vicky; Feldman, Jacob; Singh, Manish

    2013-08-14

    We present a novel phenomenon involving an interaction between accretion deletion, figure-ground interpretation, and structure-from-motion. Our displays contain alternating light and dark vertical regions in which random-dot textures moved horizontally at constant speed but in opposite directions in alternating regions. This motion is consistent with all the light regions in front, with the dark regions completing amodally into a single large surface moving in the background, or vice versa. Surprisingly, the regions that are perceived as figural are also perceived as 3-D volumes rotating in depth (like rotating columns)-despite the fact that dot motion is not consistent with 3-D rotation. In a series of experiments, we found we could manipulate which set of regions is perceived as rotating volumes simply by varying known geometric cues to figure ground, including convexity, parallelism, symmetry, and relative area. Subjects indicated which colored regions they perceived as rotating. For our displays we found convexity to be a stronger cue than either symmetry or parallelism. We furthermore found a smooth monotonic decay of the proportion by which subjects perceive symmetric regions as figural, as a function of their relative area. Our results reveal an intriguing new interaction between accretion-deletion, figure-ground, and 3-D motion that is not captured by existing models. They also provide an effective tool for measuring figure-ground perception.

  10. Experience affects the use of ego-motion signals during 3D shape perception.

    PubMed

    Jain, Anshul; Backus, Benjamin T

    2010-12-29

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the "stationarity prior," is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers' stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity.

  11. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle.

    PubMed

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that does not require selecting an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which produces a distorted result. In order to overcome the limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the influence of measurement error on the calibration result of the base station, which depends on the condition number of the coefficient matrix, is analyzed.
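
    The GPS-style determination of a measuring point can be illustrated with an iterative baseline: find the point whose distances to the calibrated base stations best match the measured ranges. The paper's contribution is an analytical, non-iterative solution; the sketch below is the generic Levenberg-Marquardt alternative it is compared against, assuming at least four base stations.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_point(stations, distances, x0=None):
    """Solve for a 3-D measuring-point position from its measured distances to
    known base stations (multilateration, analogous to the GPS principle)."""
    stations = np.asarray(stations, dtype=float)
    distances = np.asarray(distances, dtype=float)
    if x0 is None:
        x0 = stations.mean(axis=0)   # centroid of the stations as a starting guess
    residuals = lambda p: np.linalg.norm(stations - p, axis=1) - distances
    return least_squares(residuals, x0, method='lm').x
```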

  12. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle

    NASA Astrophysics Data System (ADS)

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that does not require selecting an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which produces a distorted result. In order to overcome the limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the influence of measurement error on the calibration result of the base station, which depends on the condition number of the coefficient matrix, is analyzed.

  13. Inertial sensor-based smoother for gait analysis.

    PubMed

    Suh, Young Soo

    2014-12-17

    An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimation than filter-based algorithms by using all of the sensor data rather than only the data available up to the current time. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain an initial foot motion estimate. In the second part, the error in the initial estimate is compensated using a smoother, where the problem is formulated as a quadratic optimization problem. An efficient solution of the quadratic optimization problem is given by exploiting its sparse structure. Through experiments, it is shown that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is moving off the floor: the z-axis position squared error sum (total time: 3.47 s) when the foot is in the air is 0.0807 m² for the Kalman filter and 0.0020 m² for the proposed smoother.
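
    The paper formulates its smoother as a sparse quadratic program over the foot-motion states; as a stand-in for that formulation, the sketch below shows the generic filter-plus-backward-pass structure (Kalman filter followed by a Rauch-Tung-Striebel pass) for a one-dimensional constant-velocity model with made-up noise parameters.

```python
import numpy as np

def rts_smoother(z, dt, q=1e-2, r=1e-3):
    """Kalman filter plus Rauch-Tung-Striebel backward pass for a 1-D
    constant-velocity model; z is a sequence of position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])                       # state transition
    H = np.array([[1.0, 0.0]])                                  # position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    R = np.array([[r]])
    n = len(z)
    xf = np.zeros((n, 2)); Pf = np.zeros((n, 2, 2))             # filtered estimates
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))             # predicted estimates
    x, P = np.zeros(2), np.eye(2)
    for k in range(n):
        x, P = F @ x, F @ P @ F.T + Q                           # predict
        xp[k], Pp[k] = x, P
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # update
        x = x + (K @ (z[k] - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        xf[k], Pf[k] = x, P
    xs = xf.copy()                                              # backward (RTS) pass
    for k in range(n - 2, -1, -1):
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
    return xs[:, 0]                                             # smoothed positions
```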

  14. Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.

    PubMed

    Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin

    2013-09-01

    Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.

  15. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    PubMed Central

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-01-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: 1) the reconstruction algorithms do not make full use of projection statistics; and 2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10 to 40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET. PMID:27385378

  16. Respiratory motion correction in 4D-PET by simultaneous motion estimation and image reconstruction (SMEIR)

    NASA Astrophysics Data System (ADS)

    Kalantari, Faraz; Li, Tianfang; Jin, Mingwu; Wang, Jing

    2016-08-01

    In conventional 4D positron emission tomography (4D-PET), images from different frames are reconstructed individually and aligned by registration methods. Two issues that arise with this approach are as follows: (1) the reconstruction algorithms do not make full use of projection statistics; and (2) the registration between noisy images can result in poor alignment. In this study, we investigated the use of simultaneous motion estimation and image reconstruction (SMEIR) methods for motion estimation/correction in 4D-PET. A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) was used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons derived deformation vector fields (DVFs) as initial motion vectors. A motion model update was performed to obtain an optimal set of DVFs in the pmc-PET and other phases, by matching the forward projection of the deformed pmc-PET with measured projections from other phases. The OSEM-TV image reconstruction was repeated using updated DVFs, and new DVFs were estimated based on updated images. A 4D-XCAT phantom with typical FDG biodistribution was generated to evaluate the performance of the SMEIR algorithm in lung and liver tumors with different contrasts and different diameters (10-40 mm). The image quality of the 4D-PET was greatly improved by the SMEIR algorithm. When all projections were used to reconstruct 3D-PET without motion compensation, motion blurring artifacts were present, leading up to 150% tumor size overestimation and significant quantitative errors, including 50% underestimation of tumor contrast and 59% underestimation of tumor uptake. Errors were reduced to less than 10% in most images by using the SMEIR algorithm, showing its potential in motion estimation/correction in 4D-PET.

  17. Understanding Human Motion Skill with Peak Timing Synergy

    NASA Astrophysics Data System (ADS)

    Ueno, Ken; Furukawa, Koichi

    Careful observation of motion phenomena is important for understanding skillful human motion. However, this is a difficult task due to the complexities in timing involved in the skillful control of anatomical structures. To investigate the dexterity of human motion, we concentrate on timing with respect to motion, and we propose a method to extract the peak timing synergy from multivariate motion data. The peak timing synergy is defined as a frequent ordered graph with time stamps, whose nodes are turning points in the motion waveforms. A proposed algorithm, PRESTO, automatically extracts the peak timing synergy. PRESTO comprises the following three processes: (1) detecting peak sequences with polygonal approximation; (2) generating peak-event sequences; and (3) finding frequent peak-event sequences using a sequential pattern mining method, generalized sequential patterns (GSP). We measured right arm motion during the task of cello bowing and prepared a data set of right shoulder and arm motion. We successfully extracted the peak timing synergy from the cello bowing data set using the PRESTO algorithm; the synergy captured both skills common among cellists and individual skill differences. To evaluate the sequential pattern mining algorithm GSP in PRESTO, we compared the peak timing synergy obtained with the GSP algorithm against the one obtained with the filtering-by-reciprocal-voting (FRV) algorithm, a non-time-series method. We found that the support is 95-100% with GSP versus 83-96% with FRV, and that the GSP results reproduce human motion better than the FRV results. We therefore show that the sequential pattern mining approach is more effective for extracting the peak timing synergy than a non-time-series analysis approach.
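
    A minimal sketch of the peak-event step only: turning multivariate motion waveforms into a time-ordered event sequence suitable for sequential pattern mining. PRESTO uses polygonal approximation to find turning points and GSP for the mining; here a prominence-based peak detector stands in, and the prominence threshold is arbitrary.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_event_sequence(signals, names, prominence=0.1):
    """Convert multivariate motion waveforms into a time-ordered list of
    (time index, channel name, 'max'/'min') peak events for sequence mining."""
    events = []
    for sig, name in zip(signals, names):
        sig = np.asarray(sig, dtype=float)
        for idx in find_peaks(sig, prominence=prominence)[0]:
            events.append((int(idx), name, 'max'))
        for idx in find_peaks(-sig, prominence=prominence)[0]:
            events.append((int(idx), name, 'min'))
    return sorted(events)
```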

  18. Developmental changes in the understanding of implied motion in two-dimensional pictures.

    PubMed

    Friedman, S L; Stevenson, M B

    1975-09-01

    The power of various pictorial movement cues in eliciting a reading of movement was studied to determine the relationship between the ease with which a picture is interpreted and the degree to which the picture retains the structure of reality. Movement was indicated in 2 ways: pictorial conventions indicated movement by lines, blurs, and vibration marks; and pictorial postures indicated movement by figures which were isomorphic with the postures involved in real movement. Preschoolers, first graders, sixth graders, and college students were asked to label and sort pictures of human figures as "moving" or "still". Members of the 2 young groups did not classify pictures with conventional cues as "moving" as often as they did pictures with postural cues. Members of the 2 older groups classified both types of pictures as "moving". Since postural cues for movement are recognized at an earlier age than conventional cues, those that are more similar to reality may be easier to understand.

  19. Cellular Scale Anisotropic Topography Guides Schwann Cell Motility

    PubMed Central

    Mitchel, Jennifer A.; Hoffman-Kim, Diane

    2011-01-01

    Directed migration of Schwann cells (SC) is critical for development and repair of the peripheral nervous system. Understanding aspects of motility specific to SC, along with SC response to engineered biomaterials, may inform strategies to enhance nerve regeneration. Rat SC were cultured on laminin-coated microgrooved poly(dimethyl siloxane) platforms that were flat or presented repeating cellular scale anisotropic topographical cues, 30 or 60 µm in width, and observed with timelapse microscopy. SC motion was directed parallel to the long axis of the topography on both the groove floor and the plateau, with accompanying differences in velocity and directional persistence in comparison to SC motion on flat substrates. In addition, feature dimension affected SC morphology, alignment, and directional persistence. Plateaus and groove floors presented distinct cues which promoted differential motility and variable interaction with the topographical features. SC on the plateau surfaces tended to have persistent interactions with the edge topography, while SC on the groove floors tended to have infrequent contact with the corners and walls. Our observations suggest the capacity of SC to be guided without continuous contact with a topographical cue. SC exhibited a range of distinct motile morphologies, characterized by their symmetry and number of extensions. Across all conditions, SC with a single extension traveled significantly faster than cells with more or no extensions. We conclude that SC motility is complex, where persistent motion requires cellular asymmetry, and that anisotropic topography with cellular scale features can direct SC motility. PMID:21949703

  20. The Relative Contribution of Interaural Time and Magnitude Cues to Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    This paper presents preliminary data from a study examining the relative contribution of interaural time differences (ITDs) and interaural level differences (ILDs) to the localization of virtual sound sources both with and without head motion. The listeners' task was to estimate the apparent direction and distance of virtual sources (broadband noise) presented over headphones. Stimuli were synthesized from minimum phase representations of nonindividualized directional transfer functions; binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; the position of the listener's head was tracked and the stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. ILDs and ITDs were either correctly or incorrectly correlated with head motion: (1) both ILDs and ITDs correctly correlated, (2) ILDs correct, ITD fixed at 0 deg azimuth and 0 deg elevation, (3) ITDs correct, ILDs fixed at 0 deg, 0 deg. Similar conditions were run for static conditions except that none of the cues changed with head motion. The data indicated that, compared to static conditions, head movements helped listeners to resolve confusions primarily when ILDs were correctly correlated, although a smaller effect was also seen for correct ITDs. Together with the results for static conditions, the data suggest that localization tends to be dominated by the cue that is most reliable or consistent, when reliability is defined by consistency over time as well as across frequency bands.

  1. Signal Quality Improvement Algorithms for MEMS Gyroscope-Based Human Motion Analysis Systems: A Systematic Review.

    PubMed

    Du, Jiaying; Gerdtman, Christer; Lindén, Maria

    2018-04-06

    Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
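
    As one concrete example from the "simple filter" group discussed in the review, a complementary filter fuses the drifting gyroscope integral with the noisy but drift-free accelerometer angle; the blend factor below is a typical but arbitrary choice, not one taken from the reviewed papers.

```python
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse the gyroscope rate (smooth but drifting when integrated) with the
    accelerometer-derived angle (noisy but drift-free)."""
    angle = np.zeros(len(gyro_rate))
    angle[0] = accel_angle[0]
    for k in range(1, len(gyro_rate)):
        gyro_estimate = angle[k - 1] + gyro_rate[k] * dt   # integrate angular rate
        angle[k] = alpha * gyro_estimate + (1 - alpha) * accel_angle[k]
    return angle
```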

  2. Signal Quality Improvement Algorithms for MEMS Gyroscope-Based Human Motion Analysis Systems: A Systematic Review

    PubMed Central

    Gerdtman, Christer

    2018-01-01

    Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented. PMID:29642412

  3. Optimum location of external markers using feature selection algorithms for real‐time tumor tracking in external‐beam radiotherapy: a virtual phantom study

    PubMed Central

    Nankali, Saber; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-01

    In external‐beam radiotherapy, using external markers is one of the most reliable tools to predict tumor position, in clinical applications. The main challenge in this approach is tumor motion tracking with highest accuracy that depends heavily on external markers location, and this issue is the objective of this study. Four commercially available feature selection algorithms entitled 1) Correlation‐based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief were proposed to find optimum location of external markers in combination with two “Genetic” and “Ranker” searching procedures. The performance of these algorithms has been evaluated using four‐dimensional extended cardiac‐torso anthropomorphic phantom. Six tumors in lung, three tumors in liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro‐fuzzy inference system (ANFIS) as prediction model was considered as metric for quantitatively evaluating the performance of proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments and predefined tumors motion was predicted by ANFIS using external motion data of given markers at each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of proposed feature selection algorithms was compared, separately. For this, each tumor motion was predicted using motion data of those external markers selected by each feature selection algorithm. Duncan statistical test, followed by F‐test, on final results reflected that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation‐based feature selection algorithm, in combination with a genetic search algorithm, proved to yield best performance accuracy for selecting optimum markers. PACS numbers: 87.55.km, 87.56.Fc PMID:26894358

  4. Optimum location of external markers using feature selection algorithms for real-time tumor tracking in external-beam radiotherapy: a virtual phantom study.

    PubMed

    Nankali, Saber; Torshabi, Ahmad Esmaili; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-08

    In external-beam radiotherapy, using external markers is one of the most reliable tools to predict tumor position, in clinical applications. The main challenge in this approach is tumor motion tracking with highest accuracy that depends heavily on external markers location, and this issue is the objective of this study. Four commercially available feature selection algorithms entitled 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief were proposed to find optimum location of external markers in combination with two "Genetic" and "Ranker" searching procedures. The performance of these algorithms has been evaluated using four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in lung, three tumors in liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) as prediction model was considered as metric for quantitatively evaluating the performance of proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments and predefined tumors motion was predicted by ANFIS using external motion data of given markers at each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of proposed feature selection algorithms was compared, separately. For this, each tumor motion was predicted using motion data of those external markers selected by each feature selection algorithm. Duncan statistical test, followed by F-test, on final results reflected that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation-based feature selection algorithm, in combination with a genetic search algorithm, proved to yield best performance accuracy for selecting optimum markers.

  5. Sci-Fri PM: Radiation Therapy, Planning, Imaging, and Special Techniques - 11: Quantification of chest wall motion during deep inspiration breath hold treatments using cine EPID images and a physics-based algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alpuche Aviles, Jorge E.; VanBeek, Timothy

    Purpose: This work presents an algorithm used to quantify intra-fraction motion for patients treated using deep inspiration breath hold (DIBH). The algorithm quantifies the position of the chest wall in breast tangent fields using electronic portal images. Methods: The algorithm assumes that image profiles, taken along a direction perpendicular to the medial border of the field, follow a monotonically and smoothly decreasing function. This assumption is invalid in the presence of lung and can be used to calculate chest wall position. The algorithm was validated by determining the position of the chest wall for varying field edge positions in portal images of a thoracic phantom. The algorithm was used to quantify intra-fraction motion in cine images for 7 patients treated with DIBH. Results: Phantom results show that changes in the distance between chest wall and field edge were accurate within 0.1 mm on average. For a fixed field edge, the algorithm calculates the position of the chest wall with a 0.2 mm standard deviation. Intra-fraction motion for DIBH patients was within 1 mm 91.4% of the time and within 1.5 mm 97.9% of the time. The maximum intra-fraction motion was 3.0 mm. Conclusions: A physics-based algorithm was developed and can be used to quantify the position of the chest wall irradiated in tangent portal images with an accuracy of 0.1 mm and precision of 0.6 mm. Intra-fraction motion for patients treated with DIBH at our clinic is less than 3 mm.
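
    One plausible reading of the stated assumption (profiles perpendicular to the medial field border decrease monotonically unless lung is present) is sketched below: the chest wall position is taken as the first sample at which the profile stops decreasing. The tolerance parameter and the direction of traversal are assumptions, not the authors' implementation.

```python
import numpy as np

def chest_wall_index(profile, tolerance=0.0):
    """Return the first sample at which a portal-image profile, taken from the
    medial field border outward, stops decreasing monotonically; under the
    stated assumption this hypothetically marks the chest wall / lung interface."""
    profile = np.asarray(profile, dtype=float)
    rises = np.flatnonzero(np.diff(profile) > tolerance)
    return int(rises[0]) if rises.size else None
```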

  6. Apollo Rendezvous Docking Simulator

    NASA Image and Video Library

    1964-11-02

    Originally the Rendezvous Docking Simulator was used by astronauts preparing for Gemini missions. It was then modified and used to develop docking techniques for the Apollo program. The pilot is shown maneuvering the LEM into position for docking with a full-scale Apollo Command Module. From A.W. Vogeley, Piloted Space-Flight Simulation at Langley Research Center, paper presented at the American Society of Mechanical Engineers 1966 Winter Meeting, New York, NY, November 27 - December 1, 1966: The Rendezvous Docking Simulator and the Lunar Landing Research Facility are both rather large moving-base simulators. It should be noted, however, that neither was built primarily because of its motion characteristics. The main reason they were built was to provide a realistic visual scene. A secondary reason was that they would provide correct angular motion cues (important in control of vehicle short-period motions) even though the linear acceleration cues would be incorrect. Apollo Rendezvous Docking Simulator: Langley's Rendezvous Docking Simulator was developed by NASA scientists to study the complex task of docking the Lunar Excursion Module with the Command Module in lunar orbit.

  7. Trajectory prediction for ballistic missiles based on boost-phase LOS measurements

    NASA Astrophysics Data System (ADS)

    Yeddanapudi, Murali; Bar-Shalom, Yaakov

    1997-10-01

    This paper addresses the problem of the estimation of the trajectory of a tactical ballistic missile using line of sight (LOS) measurements from one or more passive sensors (typically satellites). The major difficulties of this problem include: the estimation of the unknown time of launch, incorporation of (inaccurate) target thrust profiles to model the target dynamics during the boost phase and an overall ill-conditioning of the estimation problem due to poor observability of the target motion via the LOS measurements. We present a robust estimation procedure based on the Levenberg-Marquardt algorithm that provides both the target state estimate and error covariance taking into consideration the complications mentioned above. An important consideration in the defense against tactical ballistic missiles is the determination of the target position and error covariance at the acquisition range of a surveillance radar in the vicinity of the impact point. We present a systematic procedure to propagate the target state and covariance to a nominal time, when it is within the detection range of a surveillance radar to obtain a cueing volume. Monte Carlo simulation studies on typical single and two sensor scenarios indicate that the proposed algorithms are accurate in terms of the estimates and the estimator calculated covariances are consistent with the errors.

  8. Effect of micro-stirring on enzymatic reaction kinetics inside a biomimetic container

    NASA Astrophysics Data System (ADS)

    Gozen, Irep; Horowitz, Viva; Chambers, Zachary; Manoharan, Vinothan

    The intracellular environment is dynamic, influenced by the motion of active machinery such as cytoskeleton filaments and molecular motors. To understand whether and how such activity affects the rates of diffusion-limited reactions, we construct a model system consisting of a phospholipid vesicle encapsulating a small number of micro- or nanoparticles, the active motion of which can be induced by chemical or magnetic cues. We aim to determine a relation between active motion of particles and rates of enzymatic reactions in the confined volume. Our findings might illuminate how active motion influences cytoplasmic reaction dynamics, or may have played a role in protocell genetics.

  9. SU-G-JeP1-07: Development of a Programmable Motion Testbed for the Validation of Ultrasound Tracking Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, A; Matrosic, C; Zagzebski, J

    Purpose: To develop an advanced testbed that combines a 3D motion stage and ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 Ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition and a cine video was acquired for one minute to allow for long sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparisons of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allows for the acquisition of ultrasound motion sequences that are highly customizable, allowing for focused analysis of some common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe such that the process can be extended to 3D acquisition. Further development of an anatomy specific phantom better resembling true anatomical landmarks could lead to an even more robust validation. This work is partially funded by NIH grant R01CA190298.
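
    The tracking step named in this abstract, normalized cross-correlation block matching, can be sketched in a few lines: a template from one frame is compared against shifted candidate blocks in the next frame and the shift with the highest correlation score is taken as the displacement. The block size, search range, and function names below are illustrative, not the authors' implementation.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def track_block(prev, curr, top, left, size=32, search=10):
    """Find the displacement of a template at (top, left) in `prev` within
    +/- `search` pixels in `curr` by maximizing NCC (illustrative parameters)."""
    tmpl = prev[top:top + size, left:left + size]
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue
            score = ncc(tmpl, curr[y:y + size, x:x + size])
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx   # pixel displacement; convert with the known pixel spacing
```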

  10. 3D-Sonification for Obstacle Avoidance in Brownout Conditions

    NASA Technical Reports Server (NTRS)

    Godfroy-Cooper, M.; Miller, J. D.; Szoboszlay, Z.; Wenzel, E. M.

    2017-01-01

    Helicopter brownout is a phenomenon that occurs when making landing approaches in dusty environments, whereby sand or dust particles become swept up in the rotor outwash. Brownout is characterized by partial or total obscuration of the terrain, which degrades visual cues necessary for hovering and safe landing. Furthermore, the motion of the dust cloud produced during brownout can lead to the pilot experiencing motion cue anomalies such as vection illusions. In this context, the stability and guidance control functions can be intermittently or continuously degraded, potentially leading to undetected surface hazards and obstacles as well as unnoticed drift. Safe and controlled landing in brownout can be achieved using an integrated presentation of LADAR and RADAR imagery and aircraft state symbology. However, though detected by the LADAR and displayed on the sensor image, small obstacles can be difficult to discern from the background so that changes in obstacle elevation may go unnoticed. Moreover, pilot workload associated with tracking the displayed symbology is often so high that the pilot cannot give sufficient attention to the LADAR/RADAR image. This paper documents a simulation evaluating the use of 3D auditory cueing for obstacle avoidance in brownout as a replacement for or complement to LADAR/RADAR imagery.

  11. The use of a tactile interface to convey position and motion perceptions

    NASA Technical Reports Server (NTRS)

    Rupert, A. H.; Guedry, F. E.; Reschke, M. F.

    1994-01-01

    Under normal terrestrial conditions, perception of position and motion is determined by central nervous system integration of concordant and redundant information from multiple sensory channels (somatosensory, vestibular, visual), which collectively yield veridical perceptions. In the acceleration environment experienced by pilots, the somatosensory and vestibular sensors frequently present false information concerning the direction of gravity. When presented with conflicting sensory information, it is normal for pilots to experience episodes of disorientation. We have developed a tactile interface that obtains vertical roll and pitch information from a gyro-stabilized attitude indicator and maps this information in a one-to-one correspondence onto the torso of the body using a matrix of vibrotactors. This enables the pilot to continuously maintain an awareness of aircraft attitude without reference to visual cues, utilizing a sensory channel that normally operates at the subconscious level. Although initially developed to improve pilot spatial awareness, this device has obvious applications to 1) simulation and training, 2) nonvisual tracking of targets, which can reduce the need for pilots to make head movements in the high-G environment of aerial combat, and 3) orientation in environments with minimal somatosensory cues (e.g., underwater) or gravitational cues (e.g., space).

  12. Deficits in implicit attention to social signals in schizophrenia and high risk groups: behavioural evidence from a new illusion.

    PubMed

    van 't Wout, Mascha; van Rijn, Sophie; Jellema, Tjeerd; Kahn, René S; Aleman, André

    2009-01-01

    An increasing body of evidence suggests that the apparent social impairments observed in schizophrenia may arise from deficits in social cognitive processing capacities. The ability to process basic social cues, such as gaze direction and biological motion, effortlessly and implicitly is thought to be a prerequisite for establishing successful social interactions and for construing a sense of "social intuition." However, studies that address the ability to effortlessly process basic social cues in schizophrenia are lacking. Because social cognitive processing deficits may be part of the genetic vulnerability for schizophrenia, we also investigated two groups that have been shown to be at increased risk of developing schizophrenia-spectrum pathology: first-degree relatives of schizophrenia patients and men with Klinefelter syndrome (47,XXY). We compared 28 patients with schizophrenia, 29 siblings of patients with schizophrenia, and 29 individuals with Klinefelter syndrome with 46 matched healthy control subjects on a new paradigm. This paradigm measures one's susceptibility for a bias in distance estimation between two agents that is induced by the implicit processing of gaze direction and biological motion conveyed by these agents. Compared to control subjects, patients with schizophrenia, as well as siblings of patients and Klinefelter men, showed a lack of influence of social cues on their distance judgments. We suggest that the insensitivity for social cues is a cognitive aspect of schizophrenia that may be seen as an endophenotype as it appears to be present both in relatives who are at increased genetic risk and in a genetic disorder at risk for schizophrenia-spectrum psychopathology. These social cue-processing deficits could contribute, in part, to the difficulties in higher order social cognitive tasks and, hence, to decreased social competence that has been observed in these groups.

  13. A method to enhance the use of interaural time differences for cochlear implants in reverberant environments

    PubMed Central

    Monaghan, Jessica J. M.; Seeber, Bernhard U.

    2017-01-01

    The ability of normal-hearing (NH) listeners to exploit interaural time difference (ITD) cues conveyed in the modulated envelopes of high-frequency sounds is poor compared to ITD cues transmitted in the temporal fine structure at low frequencies. Sensitivity to envelope ITDs is further degraded when envelopes become less steep, when modulation depth is reduced, and when envelopes become less similar between the ears, common factors when listening in reverberant environments. The vulnerability of envelope ITDs is particularly problematic for cochlear implant (CI) users, as they rely on information conveyed by slowly varying amplitude envelopes. Here, an approach to improve access to envelope ITDs for CIs is described in which, rather than attempting to reduce reverberation, the perceptual saliency of cues relating to the source is increased by selectively sharpening peaks in the amplitude envelope judged to contain reliable ITDs. Performance of the algorithm with room reverberation was assessed through simulating listening with bilateral CIs in headphone experiments with NH listeners. Relative to simulated standard CI processing, stimuli processed with the algorithm generated lower ITD discrimination thresholds and increased extents of laterality. Depending on parameterization, intelligibility was unchanged or somewhat reduced. The algorithm has the potential to improve spatial listening with CIs. PMID:27586742

  14. Feasibility of external rhythmic cueing with the Google Glass for improving gait in people with Parkinson's disease.

    PubMed

    Zhao, Yan; Nonnekes, Jorik; Storcken, Erik J M; Janssen, Sabine; van Wegen, Erwin E H; Bloem, Bastiaan R; Dorresteijn, Lucille D A; van Vugt, Jeroen P P; Heida, Tjitske; van Wezel, Richard J A

    2016-06-01

    New mobile technologies like smartglasses can deliver external cues that may improve gait in people with Parkinson's disease in their natural environment. However, the potential of these devices must first be assessed in controlled experiments. Therefore, we evaluated rhythmic visual and auditory cueing in a laboratory setting with a custom-made application for the Google Glass. Twelve participants (mean age = 66.8; mean disease duration = 13.6 years) were tested at end of dose. We compared several key gait parameters (walking speed, cadence, stride length, and stride length variability) and freezing of gait for three types of external cues (metronome, flashing light, and optic flow) and a control condition (no-cue). For all cueing conditions, the subjects completed several walking tasks of varying complexity. Seven inertial sensors attached to the feet, legs and pelvis captured motion data for gait analysis. Two experienced raters scored the presence and severity of freezing of gait using video recordings. User experience was evaluated through a semi-open interview. During cueing, a more stable gait pattern emerged, particularly on complicated walking courses; however, freezing of gait did not significantly decrease. The metronome was more effective than rhythmic visual cues and most preferred by the participants. Participants were overall positive about the usability of the Google Glass and willing to use it at home. Thus, smartglasses like the Google Glass could be used to provide personalized mobile cueing to support gait; however, in its current form, auditory cues seemed more effective than rhythmic visual cues.

  15. Circadian-Time Sickness: Time-of-Day Cue-Conflicts Directly Affect Health.

    PubMed

    van Ee, Raymond; Van de Cruys, Sander; Schlangen, Luc J M; Vlaskamp, Björn N S

    2016-11-01

    A daily rhythm that is not in synchrony with the environmental light-dark cycle (as in jetlag and shift work) is known to affect mood and health through an as yet unresolved neural mechanism. Here, we combine Bayesian probabilistic 'cue-conflict' theory with known physiology of the biological clock of the brain, entailing the insight that, for a functional pacemaker, it is sufficient to have two interacting units (reflecting environmental and internal time-of-day cues), without the need for an extra homuncular directing unit. Unnatural light-dark cycles cause a time-of-day cue-conflict that is reflected by a desynchronization between the ventral (environmental) and dorsal (internal) pacemaking signals of the pacemaker. We argue that this desynchronization, in-and-of-itself, produces health issues that we designate as 'circadian-time sickness', analogous to 'motion sickness'. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Binaural noise reduction via cue-preserving MMSE filter and adaptive-blocking-based noise PSD estimation

    NASA Astrophysics Data System (ADS)

    Azarpour, Masoumeh; Enzner, Gerald

    2017-12-01

    Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
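
    The cue-preservation idea, one spectral gain shared by both ears so that interaural time and level differences are untouched, can be sketched as below. The gain rule shown is a plain Wiener-style rule driven by the average of the two channels, a simplified stand-in for the MMSE filter derived in the paper; the function name and the epsilon guard are assumptions.

```python
import numpy as np

def common_gain_denoise(left_stft, right_stft, noise_psd, eps=1e-12):
    """Apply a single common gain to both ears (ITD/ILD preserved).
    left_stft, right_stft : complex STFT arrays (freq x frames)
    noise_psd             : noise power estimate, broadcastable to the STFT shape"""
    mix_psd = 0.5 * (np.abs(left_stft) ** 2 + np.abs(right_stft) ** 2)
    speech_psd = np.maximum(mix_psd - noise_psd, 0.0)   # crude spectral-subtraction speech estimate
    gain = speech_psd / (speech_psd + noise_psd + eps)  # Wiener-style common gain
    return gain * left_stft, gain * right_stft
```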

  17. Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning

    PubMed Central

    Weisswange, Thomas H.; Rothkopf, Constantin A.; Rodemann, Tobias; Triesch, Jochen

    2011-01-01

    Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises, how it is learned. Here we investigated whether reward dependent learning, that is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model free reinforcement learning algorithm can indeed learn to do cue integration, i.e. weight uncertain cues according to their respective reliabilities and even do so if reliabilities are changing. We also consider the case of causal inference where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward mediated learning could be a driving force for the development of cue integration and causal inference. PMID:21750717

  18. An Improved Perturb and Observe Algorithm for Photovoltaic Motion Carriers

    NASA Astrophysics Data System (ADS)

    Peng, Lele; Xu, Wei; Li, Liming; Zheng, Shubin

    2018-03-01

    An improved perturb and observe algorithm for photovoltaic motion carriers is proposed in this paper. The model of the proposed algorithm is derived using the Lambert W function and a tangent error method. The tracking performance of the proposed algorithm is tested in MATLAB simulations and in experiments on a photovoltaic system. The results demonstrate that the improved algorithm has fast tracking speed and high efficiency; the energy conversion efficiency with the improved method increases by nearly 8.2%.
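
    For context, the baseline perturb and observe rule that the paper improves upon fits in a few lines. The sketch below is the standard textbook update, not the Lambert-W/tangent-error variant proposed in the paper, and the step size is an assumed value.

```python
def po_step(v, i, v_prev, p_prev, v_ref, step=0.1):
    """One iteration of the baseline perturb-and-observe MPPT rule.
    v, i   : present PV voltage and current measurements
    v_prev : voltage at the previous iteration
    p_prev : power at the previous iteration
    v_ref  : present operating-voltage reference
    Returns the updated (v_ref, v, p) to feed back on the next call."""
    p = v * i
    if p > p_prev:
        v_ref += step if v > v_prev else -step   # last perturbation helped: keep direction
    else:
        v_ref += -step if v > v_prev else step   # power fell: reverse direction
    return v_ref, v, p
```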

  19. Role of orientation reference selection in motion sickness

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Black, F. Owen

    1987-01-01

    The objectives of this proposal were developed to further explore and quantify the orientation reference selection abilities of subjects and the relation, if any, between motion sickness and orientation reference selection. The overall objectives of this proposal are to determine (1) if motion sickness susceptibility is related to sensory orientation reference selection abilities of subjects, (2) if abnormal vertical canal-otolith function is the source of these abnormal posture control strategies and if it can be quantified by vestibular and oculomotor reflex measurements, and (3) if quantifiable measures of perception of vestibular and visual motion cues can be related to motion sickness susceptibility and to orientation reference selection ability demonstrated by tests which systematically control the sensory information available for orientation.

  20. Human functional magnetic resonance imaging reveals separation and integration of shape and motion cues in biological motion processing.

    PubMed

    Jastorff, Jan; Orban, Guy A

    2009-06-03

    In a series of human functional magnetic resonance imaging experiments, we systematically manipulated point-light stimuli to identify the contributions of the various areas implicated in biological motion processing (for review, see Giese and Poggio, 2003). The first experiment consisted of a 2 x 2 factorial design with global shape and kinematics as factors. In two additional experiments, we investigated the contributions of local opponent motion, the complexity of the portrayed movement and a one-back task to the activation pattern. Experiment 1 revealed a clear separation between shape and motion processing, resulting in two branches of activation. A ventral region, extending from the lateral occipital sulcus to the posterior inferior temporal gyrus, showed a main effect of shape and its extension into the fusiform gyrus also an interaction. The dorsal region, including the posterior inferior temporal sulcus and the posterior superior temporal sulcus (pSTS), showed a main effect of kinematics together with an interaction. Region of interest analysis identified these interaction sites as the extrastriate and fusiform body areas (EBA and FBA). The local opponent motion cue yielded only little activation, limited to the ventral region (experiment 3). Our results suggest that the EBA and the FBA correspond to the initial stages in visual action analysis, in which the performed action is linked to the body of the actor. Moreover, experiment 2 indicates that the body areas are activated automatically even in the absence of a task, whereas other cortical areas like pSTS or frontal regions depend on the complexity of movements or task instructions for their activation.

  1. Adapted cuing technique: facilitating sequential phoneme production.

    PubMed

    Klick, S L

    1994-09-01

    Adapted cuing technique (ACT) is a visual cuing technique designed to facilitate dyspraxic speech by highlighting the sequential production of phonemes. In using ACT, cues are presented in such a way as to suggest sequential, coarticulatory movement in an overall pattern of motion. While using ACT, the facilitator's hand moves forward and back along the side of her (or his) own face. Finger movements signal specific speech sounds in formations loosely based on the manual alphabet for the hearing impaired. The best movements suggest the flowing, interactive nature of coarticulated phonemes. The synergistic nature of speech is suggested by coordinated hand motions which tighten and relax, move quickly or slowly, reflecting the motions of the vocal tract at various points during production of phonemic sequences. General principles involved in using ACT include a primary focus on speech-in-motion, the monitoring and fading of cues, and the presentation of stimuli based on motor-task analysis of phonemic sequences. Phonemic sequences are cued along three dimensions: place, manner, and vowel-related mandibular motion. Cuing vowels is a central feature of ACT. Two parameters of vowel production, focal point of resonance and mandibular closure, are cued. The facilitator's hand motions reflect the changing shape of the vocal tract and the trajectory of the tongue that result from the coarticulation of vowels and consonants. Rigid presentation of the phonemes is secondary to the facilitator's primary focus on presenting the overall sequential movement. The facilitator's goal is to self-tailor ACT in response to the changing needs and abilities of the client.(ABSTRACT TRUNCATED AT 250 WORDS)

  2. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    PubMed

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric accuracy and dosimetric accuracy of the moving average algorithm were between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
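
    The moving average idea amounts to a causal average applied to the motion component perpendicular to leaf travel, so the MLC follows a smoothed trajectory instead of holding the beam. A minimal sketch follows; the window length is an assumed parameter, not a value from the paper.

```python
import numpy as np

def moving_average_track(perp_motion, window=20):
    """Causal moving average of the target motion component perpendicular to the
    MLC leaf travel direction (the component that otherwise triggers beam holds).
    perp_motion : 1D array of positions; window : number of samples (illustrative)."""
    x = np.asarray(perp_motion, dtype=float)
    out = np.empty_like(x)
    for k in range(len(x)):
        out[k] = x[max(0, k - window + 1):k + 1].mean()
    return out
```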

  3. 4D Cone-beam CT reconstruction using a motion model based on principal component analysis

    PubMed Central

    Staub, David; Docef, Alen; Brock, Robert S.; Vaman, Constantin; Murphy, Martin J.

    2011-01-01

    Purpose: To provide a proof of concept validation of a novel 4D cone-beam CT (4DCBCT) reconstruction algorithm and to determine the best methods to train and optimize the algorithm. Methods: The algorithm animates a patient fan-beam CT (FBCT) with a patient specific parametric motion model in order to generate a time series of deformed CTs (the reconstructed 4DCBCT) that track the motion of the patient anatomy on a voxel by voxel scale. The motion model is constrained by requiring that projections cast through the deformed CT time series match the projections of the raw patient 4DCBCT. The motion model uses a basis of eigenvectors that are generated via principal component analysis (PCA) of a training set of displacement vector fields (DVFs) that approximate patient motion. The eigenvectors are weighted by a parameterized function of the patient breathing trace recorded during 4DCBCT. The algorithm is demonstrated and tested via numerical simulation. Results: The algorithm is shown to produce accurate reconstruction results for the most complicated simulated motion, in which voxels move with a pseudo-periodic pattern and relative phase shifts exist between voxels. The tests show that principal component eigenvectors trained on DVFs from a novel 2D/3D registration method give substantially better results than eigenvectors trained on DVFs obtained by conventionally registering 4DCBCT phases reconstructed via filtered backprojection. Conclusions: Proof of concept testing has validated the 4DCBCT reconstruction approach for the types of simulated data considered. In addition, the authors found the 2D/3D registration approach to be our best choice for generating the DVF training set, and the Nelder-Mead simplex algorithm the most robust optimization routine. PMID:22149852
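
    The motion model described here is a standard PCA construction: a mean displacement vector field plus a weighted sum of eigen-DVFs, with the weights driven by the breathing trace. A minimal sketch under those assumptions is given below; the array shapes and function names are illustrative, not the authors' implementation.

```python
import numpy as np

def build_pca_motion_model(dvf_training):
    """dvf_training: (n_samples, n_voxels * 3), each row a flattened training DVF.
    Returns the mean DVF and the principal-component eigen-DVFs (rows of vt)."""
    mean = dvf_training.mean(axis=0)
    _, _, vt = np.linalg.svd(dvf_training - mean, full_matrices=False)
    return mean, vt

def synthesize_dvf(mean, eigen_dvfs, weights):
    """DVF at one breathing phase: mean plus weighted eigen-DVFs. In the paper the
    weights are a parameterized function of the recorded breathing trace."""
    k = len(weights)
    return mean + np.asarray(weights) @ eigen_dvfs[:k]
```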

  4. Motion compensation for ultra wide band SAR

    NASA Technical Reports Server (NTRS)

    Madsen, S.

    2001-01-01

    This paper describes an algorithm that combines wavenumber domain processing with a procedure that enables motion compensation to be applied as a function of target range and azimuth angle. First, data are processed with nominal motion compensation applied, partially focusing the image, then the motion compensation of individual subpatches is refined. The results show that the proposed algorithm is effective in compensating for deviations from a straight flight path, from both a performance and a computational efficiency point of view.

  5. Effects of spatial cues on color-change detection in humans

    PubMed Central

    Herman, James P.; Bogadhi, Amarender R.; Krauzlis, Richard J.

    2015-01-01

    Studies of covert spatial attention have largely used motion, orientation, and contrast stimuli as these features are fundamental components of vision. The feature dimension of color is also fundamental to visual perception, particularly for catarrhine primates, and yet very little is known about the effects of spatial attention on color perception. Here we present results using novel dynamic color stimuli in both discrimination and color-change detection tasks. We find that our stimuli yield comparable discrimination thresholds to those obtained with static stimuli. Further, we find that an informative spatial cue improves performance and speeds response time in a color-change detection task compared with an uncued condition, similar to what has been demonstrated for motion, orientation, and contrast stimuli. Our results demonstrate the use of dynamic color stimuli for an established psychophysical task and show that color stimuli are well suited to the study of spatial attention. PMID:26047359

  6. Electrophysiological correlates of purely temporal figure-ground segregation.

    PubMed

    Kandil, Farid I; Fahle, Manfred

    2003-11-01

    Inhomogenous displays, in contrast to homogenous ones, evoke a specific potential in the VEP (tsVEP) which appears across different classical visual stimulus dimensions defining figure-ground segregation, such as luminance, orientation, (first-order) motion, and stereoscopic depth. This negative potential has a peak latency of about 200-300 ms and a peak amplitude of about -3 to -10 microV [Doc Ophthalmol. 95 (1998) 335]. Previously, we demonstrated that human subjects reliably segregate figure from ground, even in the absence of the classical cues, leaving time of change as the only cue for segregation. The results of the present study demonstrate that also purely temporally defined checkerboards evoke a tsVEP resembling the motion-defined tsVEP regarding polarity (negative), latency (two peaks at 180 and 270 ms, respectively), amplitude of the first negativity (-5.6 microV), and overall form of its components.

  7. Richardson-Lucy deblurring for the star scene under a thinning motion path

    NASA Astrophysics Data System (ADS)

    Su, Laili; Shao, Xiaopeng; Wang, Lin; Wang, Haixin; Huang, Yining

    2015-05-01

    This paper puts emphasis on how to model and correct the image blur that arises from a camera's ego motion while observing a distant star scene. Given the importance of accurately estimating the point spread function (PSF), a new method is employed to obtain the blur kernel by thinning the star motion path. In particular, it is shown how the blurred star image can be corrected to reconstruct the clear scene using a thinned motion-path blur model that describes the camera's path. Building the blur kernel from the thinned motion path is more effective at modeling the spatially varying motion blur introduced by the camera's ego motion than conventional blind, kernel-based PSF parameterization. To obtain the reconstructed image, an improved thinning algorithm is first used to extract the star point trajectory and hence the blur kernel of the motion-blurred star image. It is then detailed how this motion blur model is incorporated into the Richardson-Lucy (RL) deblurring algorithm, which reveals its overall effectiveness. In addition, compared with a conventionally estimated blur kernel, experimental results show that using the thinning algorithm to obtain the motion blur kernel has lower complexity, higher efficiency and better accuracy, which contributes to better restoration of the motion-blurred star images.
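
    The deblurring stage is the standard Richardson-Lucy iteration, with the thinned motion path supplying the PSF. The sketch below is the textbook RL update, not the authors' exact implementation; the iteration count is an assumed value.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30):
    """Standard Richardson-Lucy deconvolution. Here `psf` would be the blur kernel
    extracted from the thinned star motion path; `iterations` is illustrative."""
    psf = psf / psf.sum()
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        denom = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(denom, 1e-12)       # observed / re-blurred estimate
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```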

  8. GENERAL: Application of Symplectic Algebraic Dynamics Algorithm to Circular Restricted Three-Body Problem

    NASA Astrophysics Data System (ADS)

    Lu, Wei-Tao; Zhang, Hua; Wang, Shun-Jin

    2008-07-01

    Symplectic algebraic dynamics algorithm (SADA) for ordinary differential equations is applied to solve numerically the circular restricted three-body problem (CR3BP) in dynamical astronomy for both stable motion and chaotic motion. The result is compared with those of Runge-Kutta algorithm and symplectic algorithm under the fourth order, which shows that SADA has higher accuracy than the others in the long-term calculations of the CR3BP.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zawisza, I; Yan, H; Yin, F

    Purpose: To assure that tumor motion is within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods are useful in providing real-time tumor/surrogate motion but provide no future information. In order to anticipate future tumor/surrogate motion and track target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) a training component containing the primary data, from the first frame to the beginning of the input subsequence; (b) an input subsequence component of the surrogate signal used as input to the prediction algorithm; (c) an output subsequence component, the remaining signal, used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component that best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the parts that follow these subsequences and combining them with the assigned weighting factors to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between prediction and known output. Results: Respiratory motion data were collected for 20 patients using the RPM system. The output subsequence was the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between the predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for the equal-weighting and relative-weighting strategies, respectively. Conclusion: Preliminary results indicate that the prediction algorithm is effective in estimating surrogate motion multiple steps in advance. The relative-weighting method shows better prediction accuracy than the equal-weighting method. More parameters of this algorithm are under investigation.
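
    The three-step prediction scheme can be sketched as nearest-subsequence matching: find the training subsequences closest to the input subsequence, weight them (equally or by inverse distance), and combine the samples that follow each match. The sketch below uses Euclidean distance and inverse-distance weights as one plausible choice; the abstract does not specify the matching criterion, so these details are assumptions.

```python
import numpy as np

def predict_multi_step(history, query, horizon=50, k=5):
    """Multi-step-ahead prediction by subsequence matching.
    history : 1D array, the training component of the surrogate signal
    query   : 1D array, the input subsequence (most recent samples)
    horizon : number of future samples to predict
    k       : number of best-matched training subsequences to combine
    Assumes len(history) >= len(query) + horizon."""
    m = len(query)
    n_candidates = len(history) - m - horizon + 1
    dists = np.array([np.linalg.norm(history[s:s + m] - query)
                      for s in range(n_candidates)])
    best = np.argsort(dists)[:k]
    w = 1.0 / (dists[best] + 1e-9)                 # relative (inverse-distance) weighting
    w /= w.sum()                                   # use w = np.full(k, 1/k) for equal weighting
    followers = np.stack([history[s + m:s + m + horizon] for s in best])
    return w @ followers                           # weighted combination of what followed each match
```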

  10. Experience affects the use of ego-motion signals during 3D shape perception

    PubMed Central

    Jain, Anshul; Backus, Benjamin T.

    2011-01-01

    Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the “stationarity prior,” is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers’ stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity. PMID:21191132

  11. Habituation to novel visual vestibular environments with special reference to space flight

    NASA Technical Reports Server (NTRS)

    Young, L. R.; Kenyon, R. V.; Oman, C. M.

    1981-01-01

    The etiology of space motion sickness and the underlying physiological mechanisms associated with spatial orientation in a space environment were investigated. Human psychophysical experiments were used as the basis for the research concerning the interaction of visual and vestibular cues in the development of motion sickness. Particular emphasis is placed on the conflict theory in terms of explaining these interactions. Research on the plasticity of the vestibulo-ocular reflex is discussed.

  12. An efficient motion-resistant method for wearable pulse oximeter.

    PubMed

    Yan, Yong-Sheng; Zhang, Yuan-Ting

    2008-05-01

    Reduction of motion artifact and power saving are crucial in designing a wearable pulse oximeter for long-term telemedicine application. In this paper, a novel algorithm, minimum correlation discrete saturation transform (MCDST), has been developed for the estimation of arterial oxygen saturation (SaO2), based on an optical model derived from photon diffusion analysis. The simulation shows that the new algorithm, MCDST, is more robust under low SNRs than the clinically verified motion-resistant algorithm, discrete saturation transform (DST). Further, the experiment with different severity of motions demonstrates that MCDST has a slightly better performance than the DST algorithm. Moreover, MCDST is more computationally efficient than DST because the former uses linear algebra instead of the time-consuming adaptive filter used by the latter, which indicates that MCDST can reduce the required power consumption and circuit complexity of the implementation. This is vital for wearable devices, where small physical size and long battery life are crucial.

  13. 1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.

    PubMed

    Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi

    2015-04-01

    Optical flow sensors have been a long running theme in neuromorphic vision sensors which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed towards miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels that have local gain control and adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
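
    The global translational estimate used here can be illustrated with a gradient-based least-squares sketch: the current frame is modeled as the reference frame plus a shift times finite-difference reference images, and a 2x2 system is solved once per frame. This is a simplified stand-in consistent with the image interpolation (I2A) idea, not the DSP firmware described in the paper.

```python
import numpy as np

def global_shift(ref, cur):
    """Estimate the global (dx, dy) scene shift between two frames by solving the
    2x2 least-squares system of a first-order interpolation/optical-flow model."""
    ref = np.asarray(ref, float)
    cur = np.asarray(cur, float)
    fx = 0.5 * (np.roll(ref, -1, axis=1) - np.roll(ref, 1, axis=1))   # d(ref)/dx
    fy = 0.5 * (np.roll(ref, -1, axis=0) - np.roll(ref, 1, axis=0))   # d(ref)/dy
    ft = ref - cur                                                    # so that fx*dx + fy*dy ~ ft
    A = np.array([[np.sum(fx * fx), np.sum(fx * fy)],
                  [np.sum(fx * fy), np.sum(fy * fy)]])
    b = np.array([np.sum(fx * ft), np.sum(fy * ft)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy   # pixels per frame; multiply by the frame rate for pixels per second
```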

  14. How people learn about causal influence when there are many possible causes: A model based on informative transitions.

    PubMed

    Derringer, Cory; Rottman, Benjamin Margolin

    2018-05-01

    Four experiments tested how people learn cause-effect relations when there are many possible causes of an effect. When there are many cues, even if all the cues together strongly predict the effect, the bivariate relation between each individual cue and the effect can be weak, which can make it difficult to detect the influence of each cue. We hypothesized that when detecting the influence of a cue, in addition to learning from the states of the cues and effect (e.g., a cue is present and the effect is present), which is hypothesized by multiple existing theories of learning, participants would also learn from transitions - how the cues and effect change over time (e.g., a cue turns on and the effect turns on). We found that participants were better able to identify positive and negative cues in an environment in which only one cue changed from one trial to the next, compared to multiple cues changing (Experiments 1A, 1B). Within a single learning sequence, participants were also more likely to update their beliefs about causal strength when one cue changed at a time ('one-change transitions') than when multiple cues changed simultaneously (Experiment 2). Furthermore, learning was impaired when the trials were grouped by the state of the effect (Experiment 3) or when the trials were grouped by the state of a cue (Experiment 4), both of which reduce the number of one-change transitions. We developed a modification of the Rescorla-Wagner algorithm to model this 'Informative Transitions' learning processes. Copyright © 2018 Elsevier Inc. All rights reserved.
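
    The baseline learning rule the authors modify is the Rescorla-Wagner error-driven update, in which only the cues present on a trial share the prediction error. A minimal sketch of that baseline (not of the 'Informative Transitions' modification) follows; the learning rate is an assumed value.

```python
import numpy as np

def rescorla_wagner(cues, outcomes, alpha=0.1):
    """Baseline Rescorla-Wagner associative learning.
    cues     : (n_trials, n_cues) 0/1 matrix of cue presence
    outcomes : (n_trials,) 0/1 vector of the effect
    alpha    : learning rate (illustrative value)"""
    w = np.zeros(cues.shape[1])
    for x, r in zip(cues, outcomes):
        error = r - w @ x          # prediction error from all cues present on this trial
        w += alpha * error * x     # only present cues are updated
    return w
```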

  15. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in percent ratios relative to no prediction for a duty cycle of 80% at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The experiments also confirm that EKF-GPR+ controls the duty cycle with reasonable accuracy.
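
    The gating idea rests on the predictive variance that Gaussian process regression supplies alongside its mean: when the variance of the prediction-error model is large, the beam is paused. Below is a plain GP regression sketch with a squared-exponential kernel and a simple variance threshold; the kernel, hyperparameters, and threshold rule are assumptions, not the EKF-GPR+ specification.

```python
import numpy as np

def rbf(a, b, length=1.0, sigma=1.0):
    """Squared-exponential kernel between 1D input arrays."""
    d = a[:, None] - b[None, :]
    return sigma ** 2 * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2, length=1.0):
    """Gaussian process regression: predictive mean and variance at x_test."""
    K = rbf(x_train, x_train, length) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train, length)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(x_test, x_test, length)) - np.sum(v * v, axis=0)
    return mean, var

# Gating sketch: hold the treatment beam whenever the predictive variance is high.
# mean, var = gp_predict(t_past, ekf_prediction_errors, t_future)
# beam_on = var < variance_threshold
```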

  16. A visual horizon affects steering responses during flight in fruit flies.

    PubMed

    Caballero, Jorge; Mazo, Chantell; Rodriguez-Pinto, Ivan; Theobald, Jamie C

    2015-09-01

    To navigate well through three-dimensional environments, animals must in some way gauge the distances to objects and features around them. Humans use a variety of visual cues to do this, but insects, with their small size and rigid eyes, are constrained to a more limited range of possible depth cues. For example, insects attend to relative image motion when they move, but cannot change the optical power of their eyes to estimate distance. On clear days, the horizon is one of the most salient visual features in nature, offering clues about orientation, altitude and, for humans, distance to objects. We set out to determine whether flying fruit flies treat moving features as farther off when they are near the horizon. Tethered flies respond strongly to moving images they perceive as close. We measured the strength of steering responses while independently varying the elevation of moving stimuli and the elevation of a virtual horizon. We found responses to vertical bars are increased by negative elevations of their bases relative to the horizon, closely correlated with the inverse of apparent distance. In other words, a bar that dips far below the horizon elicits a strong response, consistent with using the horizon as a depth cue. Wide-field motion also had an enhanced effect below the horizon, but this was only prevalent when flies were additionally motivated with hunger. These responses may help flies tune behaviors to nearby objects and features when they are too far off for motion parallax. © 2015. Published by The Company of Biologists Ltd.

  17. A tactile display for international space station (ISS) extravehicular activity (EVA).

    PubMed

    Rochlis, J L; Newman, D J

    2000-06-01

    A tactile display to increase an astronaut's situational awareness during an extravehicular activity (EVA) has been developed and ground tested. The Tactor Locator System (TLS) is a non-intrusive, intuitive display capable of conveying position and velocity information via a vibrotactile stimulus applied to the subject's neck and torso. In the Earth's 1 G environment, perception of position and velocity is determined by the body's individual sensory systems. Under normal sensory conditions, redundant information from these sensory systems provides humans with an accurate sense of their position and motion. However, altered environments, including exposure to weightlessness, can lead to conflicting visual and vestibular cues, resulting in decreased situational awareness. The TLS was designed to provide somatosensory cues to complement the visual system during EVA operations. An EVA task was simulated on a computer graphics workstation with a display of the International Space Station (ISS) and a target astronaut at an unknown location. Subjects were required to move about the ISS and acquire the target astronaut using either an auditory cue at the outset, or the TLS. Subjects used a 6 degree of freedom input device to command translational and rotational motion. The TLS was configured to act as a position aid, providing target direction information to the subject through a localized stimulus. Results show that the TLS decreases reaction time (p = 0.001) and movement time (p = 0.001) for simulated subject (astronaut) motion around the ISS. The TLS is a useful aid in increasing an astronaut's situational awareness, and warrants further testing to explore other uses, tasks and configurations.

  18. Visual cueing aids for rotorcraft landings

    NASA Technical Reports Server (NTRS)

    Johnson, Walter W.; Andre, Anthony D.

    1993-01-01

    The present study used a rotorcraft simulator to examine descents-to-hover at landing pads with one of three approach lighting configurations. The impact of simulator platform motion upon descents to hover was also examined. The results showed that the configuration with the most useful optical information led to the slowest final approach speeds, and that pilots found this configuration, together with the presence of simulator platform motion, most desirable. The results also showed that platform motion led to higher rates of approach to the landing pad in some cases. Implications of the results for the design of vertiport approach paths are discussed.

  19. A Real-Time Position-Locating Algorithm for CCD-Based Sunspot Tracking

    NASA Technical Reports Server (NTRS)

    Taylor, Jaime R.

    1996-01-01

    NASA Marshall Space Flight Center's (MSFC) EXperimental Vector Magnetograph (EXVM) polarimeter measures the sun's vector magnetic field. These measurements are taken to improve understanding of the sun's magnetic field in the hope of better predicting solar flares. Part of the procedure for the EXVM requires image motion stabilization over a period of a few minutes. A high speed tracker can be used to reduce image motion produced by wind loading on the EXVM, fluctuations in the atmosphere, and other vibrations. The tracker consists of two elements, an image motion detector and a control system. The image motion detector determines the image movement from one frame to the next and sends an error signal to the control system. For the ground-based application, reducing image motion due to atmospheric fluctuations requires an error determination rate of at least 100 Hz. It would be desirable to have an error determination rate of 1 kHz to assure that higher-rate image motion is reduced and to increase the control system stability. Two algorithms are presented that are typically used for tracking. These algorithms are examined for their applicability to tracking sunspots, specifically their accuracy if only one column and one row of CCD pixels are used. To examine the accuracy of this method two techniques are used. One involves moving a sunspot image a known distance with computer software, then applying the particular algorithm to see how accurately it determines this movement. The second technique involves using a rate table to control the object motion, then applying the algorithms to see how accurately each determines the actual motion. Results from these two techniques are presented.
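
    With only one row and one column of CCD pixels, the image-motion error signal reduces to two 1D shift estimates. A minimal cross-correlation sketch of that idea is shown below; it illustrates the single-row/single-column approach discussed in the abstract, not either of the two algorithms evaluated there.

```python
import numpy as np

def shift_1d(reference, current):
    """Integer-pixel shift between two lines of CCD pixels via cross-correlation
    (zero when the lines are aligned)."""
    ref = reference - reference.mean()
    cur = current - current.mean()
    corr = np.correlate(cur, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def image_motion_error(prev_img, curr_img, row, col):
    """Tracker error (dy, dx) estimated from a single row and a single column."""
    dx = shift_1d(prev_img[row, :], curr_img[row, :])
    dy = shift_1d(prev_img[:, col], curr_img[:, col])
    return dy, dx
```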

  20. Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.

    PubMed

    Zhang, Man; Wang, Guanyong; Zhang, Lei

    2017-10-26

    Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, the robustness of image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), declines when strong motion errors are present in the coarse-focused image. In that case, capturing the complete motion blurring function within each image block requires enlarging both the block size and the overlap, which inevitably degrades efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA removes the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse resolution image via the back-projection integral. Then, the sub-aperture images are directly fused together in the azimuth wavenumber domain to obtain a full resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By dispensing with the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.

  1. Motion planning in velocity affine mechanical systems

    NASA Astrophysics Data System (ADS)

    Jakubiak, Janusz; Tchoń, Krzysztof; Magiera, Władysław

    2010-09-01

    We address the motion planning problem in specific mechanical systems whose linear and angular velocities depend affinely on control. The configuration space of these systems encompasses the rotation group, and the motion planning involves the system orientation. Derivation of the motion planning algorithm for velocity affine systems has been inspired by the continuation method. Performance of this algorithm is illustrated with examples of the kinematics of a serial nonholonomic manipulator, the plate-ball kinematics and the attitude control of a rigid body.

  2. Thresholds for the perception of whole-body linear sinusoidal motion in the horizontal plane

    NASA Technical Reports Server (NTRS)

    Mah, Robert W.; Young, Laurence R.; Steele, Charles R.; Schubert, Earl D.

    1989-01-01

    An improved linear sled has been developed to provide precise motion stimuli without generating perceptible extraneous motion cues (a noiseless environment). A modified adaptive forced-choice method was employed to determine perceptual thresholds to whole-body linear sinusoidal motion in 25 subjects. Thresholds for the detection of movement in the horizontal plane were found to be lower than those reported previously. At frequencies of 0.2 to 0.5 Hz, thresholds were shown to be independent of frequency, while at frequencies of 1.0 to 3.0 Hz, thresholds showed a decreasing sensitivity with increasing frequency, indicating that the perceptual process is not sensitive to the rate change of acceleration of the motion stimulus. The results suggest that the perception of motion behaves as an integrating accelerometer with a bandwidth of at least 3 Hz.

  3. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics and therefore require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge-based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images, which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which saves considerable computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
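
    A hypothetical sketch of this matching pipeline using generic OpenCV components; ORB features and brute-force matching stand in for the paper's own non-iterative feature extraction and matching procedures:

    ```python
    import cv2

    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def features(img):
        """(1) extract feature points and descriptors from a grayscale image."""
        return orb.detectAndCompute(img, None)

    def correspondences(img_a, img_b):
        """Match feature points between two views (used for steps 2 and 3)."""
        kp_a, des_a = features(img_a)
        kp_b, des_b = features(img_b)
        matches = matcher.match(des_a, des_b)
        return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches]

    # (2) stereo match at time t:       correspondences(left_t, right_t)
    # (3) time match between instants:  correspondences(left_t, left_t1)
    # (4) intersect the match sets to keep only unambiguous points
    # (5) recover motion parameters, e.g. from the essential matrix of a
    #     calibrated stereo rig (cv2.findEssentialMat / cv2.recoverPose)
    ```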

  4. Driver Distraction Using Visual-Based Sensors and Algorithms.

    PubMed

    Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén

    2016-10-28

    Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.

  5. Driver Distraction Using Visual-Based Sensors and Algorithms

    PubMed Central

    Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén

    2016-01-01

    Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be completed with more visual cues (e.g., hands or body information) or even, distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper shows a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed. PMID:27801822

  6. Coherence of structural visual cues and pictorial gravity paves the way for interceptive actions.

    PubMed

    Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco

    2011-09-20

    Dealing with upside-down objects is difficult and takes time. Among the cues that are critical for defining object orientation, the visible influence of gravity on the object's motion has received limited attention. Here, we manipulated the alignment of visible gravity and structural visual cues between each other and relative to the orientation of the observer and physical gravity. Participants pressed a button triggering a hitter to intercept a target accelerated by a virtual gravity. A factorial design assessed the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). We found that interception was significantly more successful when scene direction was concordant with target gravity direction, irrespective of whether both were upright or inverted. This held independently of the hitter type and whether performance feedback to the participants was available (Experiment 1) or unavailable (Experiment 2). These results show that the combined influence of visible gravity and structural visual cues can outweigh both physical gravity and viewer-centered cues, leading observers to rely instead on the congruence of the apparent physical forces acting on people and objects in the scene.

  7. Not letting the left leg know what the right leg is doing: limb-specific locomotor adaptation to sensory-cue conflict.

    PubMed

    Durgin, Frank H; Fox, Laura F; Hoon Kim, Dong

    2003-11-01

    We investigated the phenomenon of limb-specific locomotor adaptation in order to adjudicate between sensory-cue-conflict theory and motor-adaptation theory. The results were consistent with cue-conflict theory in demonstrating that two different leg-specific hopping aftereffects are modulated by the presence of conflicting estimates of self-motion from visual and nonvisual sources. Experiment 1 shows that leg-specific increases in forward drift during attempts to hop in place on one leg while blindfolded vary according to the relationship between visual information and motor activity during an adaptation to outdoor forward hopping. Experiment 2 shows that leg-specific changes in performance on a blindfolded hopping-to-target task are similarly modulated by the presence of cue conflict during adaptation to hopping on a treadmill. Experiment 3 shows that leg-specific aftereffects from hopping additionally produce inadvertent turning during running in place while blindfolded. The results of these experiments suggest that these leg-specific locomotor aftereffects are produced by sensory-cue conflict rather than simple motor adaptation.

  8. A General, Adaptive, Roadmap-Based Algorithm for Protein Motion Computation.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2016-03-01

    Precious information on protein function can be extracted from a detailed characterization of protein equilibrium dynamics. This remains elusive in wet and dry laboratories, as function-modulating transitions of a protein between functionally-relevant, thermodynamically-stable and meta-stable structural states often span disparate time scales. In this paper we propose a novel, robotics-inspired algorithm that circumvents time-scale challenges by drawing analogies between protein motion and robot motion. The algorithm adapts the popular roadmap-based framework in robot motion computation to handle the more complex protein conformation space and its underlying rugged energy surface. Given known structures representing stable and meta-stable states of a protein, the algorithm yields a time- and energy-prioritized list of transition paths between the structures, with each path represented as a series of conformations. The algorithm balances computational resources between a global search aimed at obtaining a global view of the network of protein conformations and their connectivity and a detailed local search focused on realizing such connections with physically-realistic models. Promising results are presented on a variety of proteins that demonstrate the general utility of the algorithm and its capability to improve the state of the art without employing system-specific insight.
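
    A minimal roadmap-building sketch in the spirit of the description above; the 2-D sample space and quadratic energy function are stand-ins for a protein conformation space and its energy surface, and the algorithm's adaptive balancing of global and local search is not modelled:

    ```python
    import numpy as np
    import networkx as nx
    from scipy.spatial import cKDTree

    def build_roadmap(samples, energy, k=8):
        """Connect each sampled conformation to its k nearest neighbours,
        weighting edges by distance plus an energy penalty at the midpoint."""
        tree = cKDTree(samples)
        graph = nx.Graph()
        for i, q in enumerate(samples):
            dists, idxs = tree.query(q, k=k + 1)   # first neighbour is q itself
            for d, j in zip(dists[1:], idxs[1:]):
                graph.add_edge(i, int(j), weight=d + energy(0.5 * (q + samples[j])))
        return graph

    rng = np.random.default_rng(0)
    samples = rng.uniform(-1.0, 1.0, size=(200, 2))    # toy "conformations"
    roadmap = build_roadmap(samples, energy=lambda q: float(q @ q))
    path = nx.shortest_path(roadmap, source=0, target=1, weight="weight")
    ```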

  9. Integration of visual and motion cues for simulator requirements and ride quality investigation. [computerized simulation of aircraft landing, visual perception of aircraft pilots

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1975-01-01

    Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, acceleration stimuli effects on the semicircular canals, and neurons affecting eye movements, and vestibular tests.

  10. Caudate clues to rewarding cues.

    PubMed

    Platt, Michael L

    2002-01-31

    Behavioral studies indicate that prior experience can influence discrimination of subsequent stimuli. The mechanisms responsible for highlighting a particular aspect of the stimulus, such as motion or color, as most relevant and thus deserving further scrutiny, however, remain poorly understood. In the current issue of Neuron, a new study demonstrates that neurons in the caudate nucleus of the basal ganglia signal which dimension of a visual cue, either color or location, is associated with reward in an eye movement task. These findings raise the possibility that this structure participates in the reward-based control of visual attention.

  11. Motion-compensated cone beam computed tomography using a conjugate gradient least-squares algorithm and electrical impedance tomography imaging motion data.

    PubMed

    Pengpen, T; Soleimani, M

    2015-06-13

    Cone beam computed tomography (CBCT) is an imaging modality that has been used in image-guided radiation therapy (IGRT). For applications such as lung radiation therapy, CBCT images are greatly affected by motion artefacts, mainly due to the low temporal resolution of CBCT. Recently, a dual modality of electrical impedance tomography (EIT) and CBCT has been proposed, in which the high temporal resolution EIT imaging system provides motion data to a motion-compensated algebraic reconstruction technique (ART)-based CBCT reconstruction software. The high computational time associated with ART, and indeed with other variations of ART, makes it less practical for real applications. This paper develops a motion-compensated conjugate gradient least-squares (CGLS) algorithm for CBCT. A motion-compensated CGLS offers several advantages over ART-based methods, including possibilities for explicit regularization, rapid convergence and parallel computations. This paper demonstrates, for the first time, motion-compensated CBCT reconstruction using CGLS; reconstruction results are shown for limited-data CBCT considering only a quarter of the full dataset. The proposed algorithm is tested using simulated motion data in generic motion-compensated CBCT as well as measured EIT data in dual EIT-CBCT imaging. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
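
    A minimal CGLS iteration for min ||Ax - b||^2, written against abstract forward/adjoint operators; in the motion-compensated setting these would be the motion-warped cone-beam projector and backprojector, and the operator names below are placeholders rather than the authors' code:

    ```python
    import numpy as np

    def cgls(forward, adjoint, b, n_iter=30, x0=None):
        """Conjugate-gradient least squares for min ||A x - b||^2.

        forward(x) applies the (motion-warped) projection operator A,
        adjoint(y) applies its transpose A^T (the backprojector).
        """
        x = np.zeros_like(adjoint(b)) if x0 is None else x0.copy()
        r = b - forward(x)
        s = adjoint(r)
        p = s.copy()
        gamma = np.dot(s.ravel(), s.ravel())
        for _ in range(n_iter):
            q = forward(p)
            alpha = gamma / np.dot(q.ravel(), q.ravel())
            x += alpha * p
            r -= alpha * q
            s = adjoint(r)
            gamma_new = np.dot(s.ravel(), s.ravel())
            p = s + (gamma_new / gamma) * p
            gamma = gamma_new
        return x
    ```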

  12. Effects of Vibrotactile Feedback on Human Learning of Arm Motions

    PubMed Central

    Bark, Karlin; Hyman, Emily; Tan, Frank; Cha, Elizabeth; Jax, Steven A.; Buxbaum, Laurel J.; Kuchenbecker, Katherine J.

    2015-01-01

    Tactile cues generated from lightweight, wearable actuators can help users learn new motions by providing immediate feedback on when and how to correct their movements. We present a vibrotactile motion guidance system that measures arm motions and provides vibration feedback when the user deviates from a desired trajectory. A study was conducted to test the effects of vibrotactile guidance on a subject’s ability to learn arm motions. Twenty-six subjects learned motions of varying difficulty with both visual (V), and visual and vibrotactile (VVT) feedback over the course of four days of training. After four days of rest, subjects returned to perform the motions from memory with no feedback. We found that augmenting visual feedback with vibrotactile feedback helped subjects reduce the root mean square (rms) angle error of their limb significantly while they were learning the motions, particularly for 1DOF motions. Analysis of the retention data showed no significant difference in rms angle errors between feedback conditions. PMID:25486644

  13. Detection of radial motion depends on spatial displacement.

    PubMed

    de la Malla, Cristina; López-Moliner, Joan

    2010-06-01

    Nakayama and Tyler (1981) disentangled the use of pure motion (speed) information from spatial displacement information for the detection of lateral motion. They showed that when positional cues were removed the contribution of motion or spatial information was dependent on the temporal frequency: for temporal frequencies lower than 1 Hz the mechanism used to detect motion relied on speed information, while for higher temporal frequencies a mechanism based on displacement information was used. Here we test whether the same dependency is also revealed in radial motion. In order to do so, we adapted the paradigm previously used by Nakayama and Tyler to obtain detection thresholds for lateral and radial motion by using a 2-IFC procedure. Subjects had to report which of the intervals contained the signal stimulus (33% coherent motion). We replicated the temporal frequency dependency for lateral motion; the results indicate, however, that the detection of radial motion is always consistent with detecting a spatial displacement amplitude. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  14. Neurons compute internal models of the physical laws of motion.

    PubMed

    Angelaki, Dora E; Shaikh, Aasef G; Green, Andrea M; Dickman, J David

    2004-07-29

    A critical step in self-motion perception and spatial awareness is the integration of motion cues from multiple sensory organs that individually do not provide an accurate representation of the physical world. One of the best-studied sensory ambiguities is found in visual processing, and arises because of the inherent uncertainty in detecting the motion direction of an untextured contour moving within a small aperture. A similar sensory ambiguity arises in identifying the actual motion associated with linear accelerations sensed by the otolith organs in the inner ear. These internal linear accelerometers respond identically during translational motion (for example, running forward) and gravitational accelerations experienced as we reorient the head relative to gravity (that is, head tilt). Using new stimulus combinations, we identify here cerebellar and brainstem motion-sensitive neurons that compute a solution to the inertial motion detection problem. We show that the firing rates of these populations of neurons reflect the computations necessary to construct an internal model representation of the physical equations of motion.

  15. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    NASA Astrophysics Data System (ADS)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for Earth remote sensors, and vibration of the sensor platform is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes to utilize soft-sensor technology for image-motion prediction and focuses on algorithm optimization for that prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a back-propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training and computation of the mathematical prediction model are fast enough for real-time image stabilization in aerial photography.

  16. The Contribution of Head Movement to the Externalization and Internalization of Sounds

    PubMed Central

    Brimijoin, W. Owen; Boyd, Alan W.; Akeroyd, Michael A.

    2013-01-01

    Background: When stimuli are presented over headphones, they are typically perceived as internalized; i.e., they appear to emanate from inside the head. Sounds presented in the free-field tend to be externalized, i.e., perceived to be emanating from a source in the world. This phenomenon is frequently attributed to reverberation and to the spectral characteristics of the sounds: those sounds whose spectrum and reverberation matches that of free-field signals arriving at the ear canal tend to be more frequently externalized. Another factor, however, is that the virtual location of signals presented over headphones moves in perfect concert with any movements of the head, whereas the location of free-field signals moves in opposition to head movements. The effects of head movement have not been systematically disentangled from reverberation and/or spectral cues, so we measured the degree to which movements contribute to externalization. Methodology/Principal Findings: We performed two experiments: 1) Using motion tracking and free-field loudspeaker presentation, we presented signals that moved in their spatial location to match listeners’ head movements. 2) Using motion tracking and binaural room impulse responses, we presented filtered signals over headphones that appeared to remain static relative to the world. The results from experiment 1 showed that free-field signals from the front that move with the head are less likely to be externalized (23%) than those that remain fixed (63%). Experiment 2 showed that virtual signals whose position was fixed relative to the world are more likely to be externalized (65%) than those fixed relative to the head (20%), regardless of the fidelity of the individual impulse responses. Conclusions/Significance: Head movements play a significant role in the externalization of sound sources. These findings imply tight integration between binaural cues and self motion cues and underscore the importance of self motion for spatial auditory perception. PMID:24312677

  17. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
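
    The dynamic time-warping step can be illustrated with a minimal distance computation and a nearest-template classifier; this is a generic sketch, and the record's automatic gesture segmentation and quaternion-based sensor-fusion front end are not shown:

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two gesture sequences
        (each an (n, d) array of sensor samples)."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def classify(gesture, templates):
        """Assign the gesture to the stored template with the smallest DTW distance.
        templates: dict mapping gesture name -> template sequence."""
        return min(templates, key=lambda name: dtw_distance(gesture, templates[name]))
    ```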

  18. Measuring the self-similarity exponent in Lévy stable processes of financial time series

    NASA Astrophysics Data System (ADS)

    Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.

    2013-11-01

    Geometric method-based procedures, which will be called GM algorithms herein, were introduced in [M.A. Sánchez Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551], to efficiently calculate the self-similarity exponent of a time series. In that paper, the authors showed empirically that these algorithms, based on a geometrical approach, are more accurate than the classical algorithms, especially with short time series. The authors checked that GM algorithms are good when working with (fractional) Brownian motions. Moreover, in [J.E. Trinidad Segovia, M. Fernández-Martínez, M.A. Sánchez-Granero, A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A 391 (2012) 2209-2214], a mathematical background for the validity of such procedures to estimate the self-similarity index of any random process with stationary and self-affine increments was provided. In particular, they proved theoretically that GM algorithms are also valid to explore long memory in (fractional) Lévy stable motions. In this paper, we prove empirically by Monte Carlo simulation that GM algorithms are able to accurately calculate the self-similarity index in Lévy stable motions and find empirical evidence that they are more precise than the absolute value exponent (denoted by AVE onwards) and the multifractal detrended fluctuation analysis (MF-DFA) algorithms, especially with short time series. We also compare them with the generalized Hurst exponent (GHE) algorithm and conclude that both GM2 and GHE algorithms are the most accurate to study financial series. In addition to that, we provide empirical evidence, based on the accuracy of GM algorithms to estimate the self-similarity index in Lévy motions, that the evolution of the stocks of some international market indices, such as U.S. Small Cap and Nasdaq100, cannot be modeled by means of a Brownian motion.
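
    One of the comparison estimators named above, the generalized Hurst exponent (GHE), has a particularly compact form and conveys the idea of self-similarity estimation. The sketch below implements the GHE, not the paper's GM algorithms, and the choice of lags is arbitrary:

    ```python
    import numpy as np

    def hurst_ghe(x, taus=range(1, 20), q=1):
        """Generalized Hurst exponent from the scaling of q-th order increments:
        E|x(t+tau) - x(t)|^q ~ tau^(q*H)."""
        x = np.asarray(x, dtype=float)
        moments = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
        slope, _ = np.polyfit(np.log(list(taus)), np.log(moments), 1)
        return slope / q

    # sanity check: ordinary Brownian motion should give H close to 0.5
    rng = np.random.default_rng(1)
    bm = np.cumsum(rng.standard_normal(20_000))
    print(round(hurst_ghe(bm), 2))
    ```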

  19. Optic flow cues guide flight in birds.

    PubMed

    Bhagavatula, Partha S; Claudianos, Charles; Ibbotson, Michael R; Srinivasan, Mandyam V

    2011-11-08

    Although considerable effort has been devoted to investigating how birds migrate over large distances, surprisingly little is known about how they tackle so successfully the moment-to-moment challenges of rapid flight through cluttered environments [1]. It has been suggested that birds detect and avoid obstacles [2] and control landing maneuvers [3-5] by using cues derived from the image motion that is generated in the eyes during flight. Here we investigate the ability of budgerigars to fly through narrow passages in a collision-free manner, by filming their trajectories during flight in a corridor where the walls are decorated with various visual patterns. The results demonstrate, unequivocally and for the first time, that birds negotiate narrow gaps safely by balancing the speeds of image motion that are experienced by the two eyes and that the speed of flight is regulated by monitoring the speed of image motion that is experienced by the two eyes. These findings have close parallels with those previously reported for flying insects [6-13], suggesting that some principles of visual guidance may be shared by all diurnal, flying animals. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. A review of flight simulation techniques

    NASA Astrophysics Data System (ADS)

    Baarspul, Max

    After a brief historical review of the evolution of flight simulation techniques, this paper first deals with the main areas of flight simulator applications. Next, it describes the main components of a piloted flight simulator. Because of the presence of the pilot-in-the-loop, the digital computer driving the simulator must solve the aircraft equations of motion in ‘real-time’. Solutions for meeting the high computing power required by today's modern flight simulators are elaborated. The physical similarity between aircraft and simulator in cockpit layout, flight instruments, flying controls, etc., is discussed, based on the equipment and environmental cue fidelity required for training and research simulators. Visual systems play an increasingly important role in piloted flight simulation. The visual systems now available and most widely used are described, where image generators and display devices are distinguished. The characteristics of out-of-the-window visual simulation systems pertaining to the perceptual capabilities of human vision are discussed. Faithful reproduction of aircraft motion requires large travel, velocity and acceleration capabilities of the motion system. Different types and applications of motion systems in e.g. airline training and research are described. The principles of motion cue generation, based on the characteristics of the non-visual human motion sensors, are described. The complete motion system, consisting of the hardware and the motion drive software, is discussed. The principles of mathematical modelling of the aerodynamic, flight control, propulsion, landing gear and environmental characteristics of the aircraft are reviewed. An example of the identification of an aircraft mathematical model, based on flight and taxi tests, is presented. Finally, the paper deals with the hardware and software integration of the flight simulator components and the testing and acceptance of the complete flight simulator. Examples of the so-called ‘Computer Generated Checkout’ and ‘Proof of Match’ are presented. The concluding remarks briefly summarize the status of flight simulator technology and consider possibilities for future research.

  1. Visual information for judging temporal range

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Mowafy, Lyn

    1993-01-01

    Work in our laboratory suggests that pilots can extract temporal range information (i.e., the time to pass a given waypoint) directly from out-the-window motion information. This extraction does not require the use of velocity or distance, but rather operates solely on a 2-D motion cue. In this paper, we present the mathematical derivation of this information, psychophysical evidence of human observers' sensitivity, and possible advantages and limitations of basing vehicle control on this parameter.

  2. Speed and direction changes induce the perception of animacy in 7-month-old infants

    PubMed Central

    Träuble, Birgit; Pauen, Sabina; Poulin-Dubois, Diane

    2014-01-01

    A large body of research has documented infants’ ability to classify animate and inanimate objects based on static or dynamic information. It has been shown that infants less than 1 year of age transfer animacy-specific expectations from dynamic point-light displays to static images. The present study examined whether basic motion cues that typically trigger judgments of perceptual animacy in older children and adults lead 7-month-olds to infer an ambiguous object’s identity from dynamic information. Infants were tested with a novel paradigm that required inferring the animacy status of an ambiguous moving shape. An ambiguous shape emerged from behind a screen and its identity could only be inferred from its motion. Its motion pattern varied distinctively between scenes: it either changed speed and direction in an animate way, or it moved along a straight path at a constant speed (i.e., in an inanimate way). At test, the identity of the shape was revealed and it was either consistent or inconsistent with its motion pattern. Infants looked longer on trials with the inconsistent outcome. We conclude that 7-month-olds’ representations of animates and inanimates include category-specific associations between static and dynamic attributes. Moreover, these associations seem to hold for simple dynamic cues that are considered minimal conditions for animacy perception. PMID:25346712

  3. Analysis of facial motion patterns during speech using a matrix factorization algorithm

    PubMed Central

    Lucero, Jorge C.; Munhall, Kevin G.

    2008-01-01

    This paper presents an analysis of facial motion during speech to identify linearly independent kinematic regions. The data consists of three-dimensional displacement records of a set of markers located on a subject’s face while producing speech. A QR factorization with column pivoting algorithm selects a subset of markers with independent motion patterns. The subset is used as a basis to fit the motion of the other facial markers, which determines facial regions of influence of each of the linearly independent markers. Those regions constitute kinematic “eigenregions” whose combined motion produces the total motion of the face. Facial animations may be generated by driving the independent markers with collected displacement records. PMID:19062866
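
    The marker-selection idea can be sketched with SciPy's column-pivoted QR. In this sketch the data matrix is random and each coordinate channel is treated as an independent column, both simplifying assumptions relative to the paper's per-marker analysis:

    ```python
    import numpy as np
    from scipy.linalg import qr, lstsq

    # rows = time samples, columns = stacked (x, y, z) displacements of each marker
    n_frames, n_markers = 500, 38
    data = np.random.randn(n_frames, 3 * n_markers)   # stand-in for recordings

    # QR with column pivoting ranks channels by how much new (independent) motion
    # each adds; the first k pivots index the most independent channels
    _, R, piv = qr(data, mode="economic", pivoting=True)
    k = 8
    basis_cols = piv[:k]

    # fit the remaining channels as linear combinations of the selected ones
    coeffs, *_ = lstsq(data[:, basis_cols], data)
    reconstruction = data[:, basis_cols] @ coeffs
    ```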

  4. Proceedings, 13th Annual Conference on Manual Control

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Theoretical aspects of manual control theory are discussed. Specific topics covered include: tracking; performance, attention allocation, and mental load; surface vehicle control; monitoring behavior and supervisory control; manipulators and prosthetics; aerospace vehicle control; motion and visual cues; and displays and controls.

  5. Virtual-reality techniques resolve the visual cues used by fruit flies to evaluate object distances.

    PubMed

    Schuster, Stefan; Strauss, Roland; Götz, Karl G

    2002-09-17

    Insects can estimate distance or time-to-contact of surrounding objects from locomotion-induced changes in their retinal position and/or size. Freely walking fruit flies (Drosophila melanogaster) use the received mixture of different distance cues to select the nearest objects for subsequent visits. Conventional methods of behavioral analysis fail to elucidate the underlying data extraction. Here we demonstrate first comprehensive solutions of this problem by substituting virtual for real objects; a tracker-controlled 360 degrees panorama converts a fruit fly's changing coordinates into object illusions that require the perception of specific cues to appear at preselected distances up to infinity. An application reveals the following: (1) en-route sampling of retinal-image changes accounts for distance discrimination within a surprising range of at least 8-80 body lengths (20-200 mm). Stereopsis and peering are not involved. (2) Distance from image translation in the expected direction (motion parallax) outweighs distance from image expansion, which accounts for impact-avoiding flight reactions to looming objects. (3) The ability to discriminate distances is robust to artificially delayed updating of image translation. Fruit flies appear to interrelate self-motion and its visual feedback within a surprisingly long time window of about 2 s. The comparative distance inspection practiced in the small fruit fly deserves utilization in self-moving robots.

  6. Con-Text: Text Detection for Fine-grained Object Classification.

    PubMed

    Karaoglu, Sezer; Tao, Ran; van Gemert, Jan C; Gevers, Theo

    2017-05-24

    This work focuses on fine-grained object classification using recognized scene text in natural images. While the state-of-the-art relies on visual cues only, this paper is the first work which proposes to combine textual and visual cues. Another novelty is the textual cue extraction. Unlike the state-of-the-art text detection methods, we focus more on the background instead of text regions. Once text regions are detected, they are further processed by two methods to perform text recognition i.e. ABBYY commercial OCR engine and a state-of-the-art character recognition algorithm. Then, to perform textual cue encoding, bi- and trigrams are formed between the recognized characters by considering the proposed spatial pairwise constraints. Finally, extracted visual and textual cues are combined for fine-grained classification. The proposed method is validated on four publicly available datasets: ICDAR03, ICDAR13, Con-Text and Flickr-logo. We improve the state-of-the-art end-to-end character recognition by a large margin of 15% on ICDAR03. We show that textual cues are useful in addition to visual cues for fine-grained classification. We show that textual cues are also useful for logo retrieval. Adding textual cues outperforms visual- and textual-only in fine-grained classification (70.7% to 60.3%) and logo retrieval (57.4% to 54.8%).

  7. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    NASA Astrophysics Data System (ADS)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
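
    A minimal MLEM (Richardson-Lucy) deconvolution loop for a known blur kernel, in the spirit of the method described; constructing the kernel from the tracked motion and any regularization are omitted, and the function is a generic sketch rather than the authors' implementation:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, n_iter=50, eps=1e-12):
        """MLEM / Richardson-Lucy deconvolution of a motion-blurred image.

        blurred : non-negative blurred image
        psf     : blur kernel, e.g. the average of the known per-frame shifts
        """
        blurred = blurred.astype(float)
        psf = psf.astype(float)
        psf /= psf.sum()
        psf_flip = psf[::-1, ::-1]
        estimate = np.full_like(blurred, blurred.mean())
        for _ in range(n_iter):
            reblurred = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / np.maximum(reblurred, eps)
            estimate *= fftconvolve(ratio, psf_flip, mode="same")
        return estimate
    ```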

  8. The effect of contextual cues on the encoding of motor memories.

    PubMed

    Howard, Ian S; Wolpert, Daniel M; Franklin, David W

    2013-05-01

    Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.

  9. Eyes only? Perceiving eye contact is neither sufficient nor necessary for attentional capture by face direction.

    PubMed

    Böckler, Anne; van der Wel, Robrecht P R D; Welsh, Timothy N

    2015-09-01

    Direct eye contact and motion onset both constitute powerful cues that capture attention. Recent research suggests that (social) gaze and (non-social) motion onset influence information processing in parallel, even when combined as sudden onset direct gaze cues (i.e., faces suddenly establishing eye contact). The present study investigated the role of eye visibility for attention capture by these sudden onset face cues. To this end, face direction was manipulated (away or towards onlooker) while faces had closed eyes (eliminating visibility of eyes, Experiment 1), wore sunglasses (eliminating visible eyes, but allowing for the expectation of eyes to be open, Experiment 2), and were inverted with visible eyes (disrupting the integration of eyes and faces, Experiment 3). Participants classified targets appearing on one of four faces. Initially, two faces were oriented towards participants and two faces were oriented away from participants. Simultaneous to target presentation, one averted face became directed and one directed face became averted. Attention capture by face direction (i.e., facilitation for faces directed towards participants) was absent when eyes were closed, but present when faces wore sunglasses. Sudden onset direct faces can, hence, induce attentional capture, even when lacking eye cues. Inverted faces, by contrast, did not elicit attentional capture. Thus, when eyes cannot be integrated into a holistic face representation they are not sufficient to capture attention. Overall, the results suggest that visibility of eyes is neither necessary nor sufficient for the sudden direct face effect. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Numerical simulation of human orientation perception during lunar landing

    NASA Astrophysics Data System (ADS)

    Clark, Torin K.; Young, Laurence R.; Stimpson, Alexander J.; Duda, Kevin R.; Oman, Charles M.

    2011-09-01

    In lunar landing it is necessary to select a suitable landing point and then control a stable descent to the surface. In manned landings, astronauts will play a critical role in monitoring systems and adjusting the descent trajectory through either supervisory control and landing point designations, or by direct manual control. For the astronauts to ensure vehicle performance and safety, they will have to accurately perceive vehicle orientation. A numerical model for human spatial orientation perception was simulated using input motions from lunar landing trajectories to predict the potential for misperceptions. Three representative trajectories were studied: an automated trajectory, a landing point designation trajectory, and a challenging manual control trajectory. These trajectories were studied under three cases with different cues activated in the model to study the importance of vestibular cues, visual cues, and the effect of the descent engine thruster creating dust blowback. The model predicts that spatial misperceptions are likely to occur as a result of the lunar landing motions, particularly with limited or incomplete visual cues. The powered descent acceleration profile creates a somatogravic illusion causing the astronauts to falsely perceive themselves and the vehicle as upright, even when the vehicle has a large pitch or roll angle. When visual pathways were activated within the model these illusions were mostly suppressed. Dust blowback, obscuring the visual scene out the window, was also found to create disorientation. These orientation illusions are likely to interfere with the astronauts' ability to effectively control the vehicle, potentially degrading performance and safety. Therefore suitable countermeasures, including disorientation training and advanced displays, are recommended.

  11. Stereomotion speed perception: contributions from both changing disparity and interocular velocity difference over a range of relative disparities

    NASA Technical Reports Server (NTRS)

    Brooks, Kevin R.; Stone, Leland S.

    2004-01-01

    The role of two binocular cues to motion in depth, changing disparity (CD) and interocular velocity difference (IOVD), was investigated by measuring stereomotion speed discrimination and static disparity discrimination performance (stereoacuity). Speed discrimination thresholds were assessed both for random dot stereograms (RDS) and for their temporally uncorrelated equivalents, dynamic random dot stereograms (DRDS), at relative disparity pedestals of -19, 0, and +19 arcmin. While RDS stimuli contain both CD and IOVD cues, DRDS stimuli carry only CD information. On average, thresholds were a factor of 1.7 higher for DRDS than for RDS stimuli with no clear effect of relative disparity pedestal. Results were similar for approaching and receding targets. Variations in stimulus duration had no significant effect on thresholds, and there was no observed correlation between stimulus displacement and perceived speed, confirming that subjects responded to stimulus speed in each condition. Stereoacuity was equally good for our RDS and DRDS stimuli, showing that the difference in stereomotion speed discrimination performance for these stimuli was not due to any difference in the precision of the disparity cue. In addition, when we altered stereomotion stimulus trajectory by independently manipulating the speeds and directions of its monocular half-images, perceived stereomotion speed remained accurate. This finding is inconsistent with response strategies based on properties of either monocular half-image motion, or any ad hoc combination of the monocular speeds. We conclude that although subjects are able to discriminate stereomotion speed reliably on the basis of CD information alone, IOVD provides a precise additional cue to stereomotion speed perception.

  12. A nowcasting technique based on application of the particle filter blending algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

    To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; next, an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm were used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method is shown to be superior to the traditional forecasting methods and it can be used to enhance the ability of nowcasting in operational weather forecasts.
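
    The echo-tracking step can be sketched with OpenCV's Shi-Tomasi corners and pyramidal Lucas-Kanade flow. The constant-vector advection below is only a stand-in for the record's particle-filter blending and semi-Lagrangian extrapolation, and the parameter values are illustrative assumptions:

    ```python
    import cv2
    import numpy as np

    def echo_motion_vectors(prev_frame, next_frame):
        """Track echoes between consecutive reflectivity mosaics (8-bit grayscale)
        using Shi-Tomasi corners and pyramidal Lucas-Kanade optical flow."""
        pts = cv2.goodFeaturesToTrack(prev_frame, maxCorners=500,
                                      qualityLevel=0.01, minDistance=5)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame, pts, None)
        good = status.ravel() == 1
        return pts[good].reshape(-1, 2), (new_pts[good] - pts[good]).reshape(-1, 2)

    def extrapolate(frame, mean_motion, n_steps):
        """Very simple constant-vector advection of the latest frame."""
        dx, dy = mean_motion
        M = np.float32([[1, 0, dx * n_steps], [0, 1, dy * n_steps]])
        return cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))
    ```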

  13. A rain pixel recovery algorithm for videos with highly dynamic scenes.

    PubMed

    Jie Chen; Lap-Pui Chau

    2014-03-01

    Rain removal is a very useful and important technique in applications such as security surveillance and movie editing. Several rain removal algorithms have been proposed in recent years, in which photometric, chromatic, and probabilistic properties of rain have been exploited to detect and remove the rain effect. Current methods generally work well with light rain and relatively static scenes; when dealing with heavier rainfall in dynamic scenes, they give very poor visual results. The proposed algorithm is based on motion segmentation of the dynamic scene. After applying photometric and chromatic constraints for rain detection, rain removal filters are applied to pixels such that their dynamic properties as well as motion occlusion clues are considered; both spatial and temporal information is then adaptively exploited during rain pixel recovery. Results show that the proposed algorithm performs much better on rainy scenes with large motion than existing algorithms.
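
    A toy frame-triplet sketch of the photometric detection and temporal/spatial recovery idea. The thresholds are arbitrary and the motion mask is a crude frame difference, unlike the paper's segmentation-based method:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def remove_rain(prev, cur, nxt, spike=20, motion=15):
        """Toy rain-pixel recovery for three consecutive grayscale frames (uint8)."""
        p, c, n = (f.astype(np.int16) for f in (prev, cur, nxt))
        rain = (c - p > spike) & (c - n > spike)        # brief brightness spikes
        moving = np.abs(n - p) > motion                 # crude motion mask
        out = c.copy()
        out[rain & ~moving] = ((p + n) // 2)[rain & ~moving]            # temporal fill
        out[rain & moving] = median_filter(c, size=3)[rain & moving]    # spatial fill
        return out.clip(0, 255).astype(np.uint8)
    ```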

  14. Models for the Effects of G-seat Cuing on Roll-axis Tracking Performance

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Mcmillan, G. R.; Martin, E. A.

    1984-01-01

    Including whole-body motion in a flight simulator improves performance for a variety of tasks requiring a pilot to compensate for the effects of unexpected disturbances. A possible mechanism for this improvement is that whole-body motion provides high-derivative vehicle state information which allows the pilot to generate more lead in responding to the external disturbances. During development of motion simulation algorithms for an advanced g-seat cuing system, it was discovered that an algorithm based on aircraft roll acceleration produced little or no performance improvement. On the other hand, algorithms based on roll position or roll velocity produced performance equivalent to whole-body motion. The analysis and modeling conducted at both the sensory system and manual control performance levels to explain the above results are described.

  15. Using Passive Sensing to Estimate Relative Energy Expenditure for Eldercare Monitoring

    PubMed Central

    2012-01-01

    This paper describes ongoing work in analyzing sensor data logged in the homes of seniors. An estimation of relative energy expenditure is computed using motion density from passive infrared motion sensors mounted in the environment. We introduce a new algorithm for detecting visitors in the home using motion sensor data and a set of fuzzy rules. The visitor algorithm, as well as a previous algorithm for identifying time-away-from-home (TAFH), are used to filter the logged motion sensor data. Thus, the energy expenditure estimate uses data collected only when the resident is home alone. Case studies are included from TigerPlace, an Aging in Place community, to illustrate how the relative energy expenditure estimate can be used to track health conditions over time. PMID:25266777

  16. Monocular Depth Perception and Robotic Grasping of Novel Objects

    DTIC Science & Technology

    2009-06-01

    resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Reinforcement learning ... learning still make sense in these settings? Since many of the cues that are useful for estimating depth can be re-created in synthetic images, we...supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features. We show

  17. Exogenous attention influences visual short-term memory in infants.

    PubMed

    Ross-Sheehy, Shannon; Oakes, Lisa M; Luck, Steven J

    2011-05-01

    Two experiments examined the hypothesis that developing visual attentional mechanisms influence infants' Visual Short-Term Memory (VSTM) in the context of multiple items. Five- and 10-month-old infants (N = 76) received a change detection task in which arrays of three differently colored squares appeared and disappeared. On each trial one square changed color and one square was cued; sometimes the cued item was the changing item, and sometimes the changing item was not the cued item. Ten-month-old infants exhibited enhanced memory for the cued item when the cue was a spatial pre-cue (Experiment 1) and 5-month-old infants exhibited enhanced memory for the cued item when the cue was relative motion (Experiment 2). These results demonstrate for the first time that infants younger than 6 months can encode information in VSTM about individual items in multiple-object arrays, and that attention-directing cues influence both perceptual and VSTM encoding of stimuli in infants as in adults.

  18. Modeling human perception and estimation of kinematic responses during aircraft landing

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.; Silk, Anthony B.

    1988-01-01

    The thrust of this research is to determine estimation accuracy of aircraft responses based on observed cues. By developing the geometric relationships between the outside visual scene and the kinematics during landing, visual and kinesthetic cues available to the pilot were modeled. Both foveal and peripheral vision were examined. The objective was first to determine estimation accuracy in a variety of flight conditions, and second to ascertain which parameters are most important and lead to the best achievable accuracy in estimating the actual vehicle response. It was found that altitude estimation was very sensitive to the field of view (FOV). For this model the motion cue of perceived vertical acceleration was shown to be less important than the visual cues. The inclusion of runway geometry in the visual scene increased estimation accuracy in most cases. Finally, it was shown that for this model, if the pilot has an incorrect internal model of the system kinematics, the choice of observations thought to be 'optimal' may in fact be suboptimal.

  19. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex

    PubMed Central

    Sunkara, Adhira

    2015-01-01

    As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417

  20. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with inbuilt cameras. In this paper, we compare the performances of three feature or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted in two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare accuracy in tracking, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.

  1. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

    A novel algorithm for motion-blurred image restoration based on half-blind PSF estimation with the Hough transform is introduced, building on a full analysis of the TDICCD camera principle and addressing the problem that using a vertical uniform linear motion estimate as the initial PSF in the IBD algorithm leads to distortion of the restored image. Firstly, a mathematical model of image degradation is established from the prior information of multi-frame images, and the two parameters that crucially influence the PSF estimate (motion blur length and angle) are set accordingly. Finally, the restored image is obtained through multiple iterations in the Fourier domain, starting from the initial PSF estimate gained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detailed characteristics of the original image.
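
    The two PSF parameters the record estimates (blur length and angle) fully determine a linear motion PSF. The sketch below builds such a PSF and applies a single non-blind Wiener restoration step, rather than the paper's iterative half-blind scheme, and the noise-to-signal ratio k is an illustrative assumption:

    ```python
    import numpy as np

    def motion_psf(length, angle_deg, size=32):
        """Linear motion-blur PSF parameterized by blur length (pixels) and angle.
        Assumes length is comfortably smaller than size."""
        psf = np.zeros((size, size))
        c = size // 2
        theta = np.deg2rad(angle_deg)
        for t in np.linspace(-length / 2, length / 2, 4 * int(length) + 1):
            row = int(round(c + t * np.sin(theta)))
            col = int(round(c + t * np.cos(theta)))
            psf[row, col] = 1.0
        return psf / psf.sum()

    def wiener_deblur(blurred, psf, k=0.01):
        """Frequency-domain Wiener restoration with noise-to-signal ratio k."""
        pad = np.zeros_like(blurred, dtype=float)
        pad[:psf.shape[0], :psf.shape[1]] = psf
        # shift the PSF centre to the origin so the output is not translated
        pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
        H = np.fft.fft2(pad)
        G = np.fft.fft2(blurred)
        return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))
    ```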

  2. Vestibular signals in primate cortex for self-motion perception.

    PubMed

    Gu, Yong

    2018-04-21

    The vestibular peripheral organs in our inner ears detect transient motion of the head in everyday life. This information is sent to the central nervous system for automatic processes such as vestibulo-ocular reflexes, balance and postural control, and higher cognitive functions including perception of self-motion and spatial orientation. Recent neurophysiological studies have discovered a prominent vestibular network in the primate cerebral cortex. Many of the areas involved are multisensory: their neurons are modulated by both vestibular signals and visual optic flow, potentially facilitating more robust heading estimation through cue integration. Combining psychophysics, computation, physiological recording and causal manipulation techniques, recent work has addressed both the encoding and decoding of vestibular signals for self-motion perception. Copyright © 2018. Published by Elsevier Ltd.

  3. Nonrigid Image Registration in Digital Subtraction Angiography Using Multilevel B-Spline

    PubMed Central

    2013-01-01

    We address the problem of motion artifact reduction in digital subtraction angiography (DSA) using image registration techniques. Most registration algorithms proposed for DSA have been designed for peripheral and cerebral angiography images, in which the motion is mainly global and rigid. These algorithms do not yield good results when applied to coronary angiography images because of the complex nonrigid motions present in this type of angiography. Multiresolution and iterative algorithms have been proposed to cope with this problem, but they carry a high computational cost that makes them unsuitable for real-time clinical applications. In this paper we propose a nonrigid image registration algorithm for coronary angiography images that is significantly faster than multiresolution and iterative blocking methods and outperforms competing algorithms evaluated on the same data sets. The algorithm is based on a sparse set of matched feature point pairs, and the elastic registration is performed by means of multilevel B-spline image warping. Experimental results with several clinical data sets demonstrate the effectiveness of our approach. PMID:23971026

  4. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323

  5. Rocking or Rolling – Perception of Ambiguous Motion after Returning from Space

    PubMed Central

    Clément, Gilles; Wood, Scott J.

    2014-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz), where the linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained in eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered within 1–2 days. During dynamic linear acceleration (0.15–0.6 Hz, ±1.7 m/s2) perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore–aft translation. These results suggest that there was a shift in the frequency dynamics of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as will be the case for human asteroid and Mars missions. PMID:25354042

  6. Rocking or rolling--perception of ambiguous motion after returning from space.

    PubMed

    Clément, Gilles; Wood, Scott J

    2014-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Adaptive changes during spaceflight in how the brain integrates vestibular cues with other sensory information can lead to impaired movement coordination, vertigo, spatial disorientation, and perceptual illusions after return to Earth. The purpose of this study was to compare tilt and translation motion perception in astronauts before and after returning from spaceflight. We hypothesized that these stimuli would be the most ambiguous in the low-frequency range (i.e., at about 0.3 Hz), where the linear acceleration can be interpreted either as a translation or as a tilt relative to gravity. Verbal reports were obtained in eleven astronauts tested using a motion-based tilt-translation device and a variable radius centrifuge before and after flying for two weeks on board the Space Shuttle. Consistent with previous studies, roll tilt perception was overestimated shortly after spaceflight and then recovered within 1-2 days. During dynamic linear acceleration (0.15-0.6 Hz, ±1.7 m/s2) perception of translation was also overestimated immediately after flight. Recovery to baseline was observed after 2 days for lateral translation and 8 days for fore-aft translation. These results suggest that there was a shift in the frequency dynamics of tilt-translation motion perception after adaptation to weightlessness. These results have implications for manual control during landing of a space vehicle after exposure to microgravity, as will be the case for human asteroid and Mars missions.

  7. Matching cue size and task properties in exogenous attention.

    PubMed

    Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet

    2013-01-01

    Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.

  8. Differential processing of binocular and monocular gloss cues in human visual cortex

    PubMed Central

    Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.

    2016-01-01

    The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596
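
    The cross-cue transfer test described above can be pictured as training a pattern classifier on responses to glossy versus matte stimuli defined by one cue and testing it on the other cue. The sketch below is a generic version with simulated voxel patterns; array shapes and names are assumptions, not the authors' pipeline.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X_monocular = rng.normal(size=(40, 120))    # 40 trials x 120 voxels (simulated)
      y_monocular = rng.integers(0, 2, 40)        # 0 = matte, 1 = glossy
      X_binocular = rng.normal(size=(40, 120))
      y_binocular = rng.integers(0, 2, 40)

      clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
      clf.fit(X_monocular, y_monocular)                    # train on monocular-cue patterns
      transfer_acc = clf.score(X_binocular, y_binocular)   # test on binocular-cue patterns
      print(f"monocular -> binocular transfer accuracy: {transfer_acc:.2f}")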

  9. Impact of respiratory-correlated CT sorting algorithms on the choice of margin definition for free-breathing lung radiotherapy treatments.

    PubMed

    Thengumpallil, Sheeba; Germond, Jean-François; Bourhis, Jean; Bochud, François; Moeckli, Raphaël

    2016-06-01

    The aim of this study was to investigate the impact of Toshiba phase- and amplitude-sorting algorithms on margin strategies for free-breathing lung radiotherapy treatments in the presence of breathing variations. 4D CT of a sphere inside a dynamic thorax phantom was acquired. The 4D CT was reconstructed according to the phase- and amplitude-sorting algorithms. The phantom was moved by reproducing amplitude, frequency, and a mix of amplitude and frequency variations. Artefact analysis was performed for Mid-Ventilation and ITV-based strategies on the images reconstructed by the phase- and amplitude-sorting algorithms. The target volume deviation was assessed by comparing the target volume acquired during irregular motion to the volume acquired during regular motion. The amplitude-sorting algorithm shows reduced artefacts only for amplitude variations, while the phase-sorting algorithm does so only for frequency variations. For combined amplitude and frequency variations, both algorithms perform similarly. Most of the artefacts are blurring and incomplete structures. We found larger artefacts and volume differences for the Mid-Ventilation strategy than for the ITV strategy, resulting in a higher relative difference of the surface distortion value, which ranges between a maximum of 14.6% and a minimum of 4.1%. The amplitude-sorting algorithm is superior to phase sorting in reducing motion artefacts for amplitude variations, while phase sorting is superior for frequency variations. A proper choice of 4D CT sorting algorithm is important in order to reduce motion artefacts, especially if the Mid-Ventilation strategy is used. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
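
    The difference between the two sorting strategies can be illustrated on a toy respiratory trace: phase sorting bins each sample by its fractional position within the breathing cycle, whereas amplitude sorting bins it by signal level. This is a simplified sketch, not the scanner vendor's implementation.

      import numpy as np
      from scipy.signal import find_peaks

      t = np.linspace(0, 20, 2000)
      resp = np.sin(2 * np.pi * 0.25 * t) * (1 + 0.2 * np.sin(2 * np.pi * 0.05 * t))

      # Phase sorting: fractional position between successive peaks, split into 10 bins.
      peaks, _ = find_peaks(resp)
      phase = np.zeros_like(resp)
      for a, b in zip(peaks[:-1], peaks[1:]):
          phase[a:b] = np.linspace(0, 1, b - a, endpoint=False)
      phase_bin = np.minimum((phase * 10).astype(int), 9)

      # Amplitude sorting: normalized signal level, split into 10 bins.
      amp = (resp - resp.min()) / (resp.max() - resp.min())
      amplitude_bin = np.minimum((amp * 10).astype(int), 9)

      print(np.bincount(phase_bin), np.bincount(amplitude_bin))   # samples per bin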

  10. Motion-seeded object-based attention for dynamic visual imagery

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Kim, Kyungnam

    2017-05-01

    This paper describes a novel system that finds and segments "objects of interest" from dynamic imagery (video) that (1) processes each frame using an advanced motion algorithm that pulls out regions that exhibit anomalous motion, and (2) extracts the boundary of each object of interest using a biologically-inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out by the system in a very short time, and can be used as a front-end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, which represents a significant improvement over detection using a baseline attention algorithm.

  11. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew E.; Matthies, Larry H.

    1998-01-01

    Space science and solar system exploration are driving NASA to develop an array of small body missions ranging in scope from near body flybys to complete sample return. This paper presents an algorithm for onboard motion estimation that will enable the precision guidance necessary for autonomous small body landing. Our techniques are based on automatic feature tracking between a pair of descent camera images followed by two frame motion estimation and scale recovery using laser altimetry data. The output of our algorithm is an estimate of rigid motion (attitude and position) and motion covariance between frames. This motion estimate can be passed directly to the spacecraft guidance and control system to enable rapid execution of safe and precise trajectories.
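
    The two-frame motion step described above can be sketched with standard epipolar-geometry routines: recover a rotation and a unit translation from tracked features, then fix the translation scale from the altimeter. The intrinsics, tracks and altimeter readings below are placeholders, and the simple scale rule assumes motion roughly along the camera boresight; the flight algorithm is more careful than this.

      import numpy as np
      import cv2

      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
      pts0 = np.random.rand(100, 2) * [640, 480]                    # placeholder tracks;
      pts1 = pts0 + np.random.rand(100, 2) * 2.0                    # use a real tracker here

      E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC, threshold=1.0)
      _, R, t_unit, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)

      # Two-frame monocular motion is known only up to scale; laser altimetry resolves it.
      range_prev, range_curr = 1200.0, 1188.0     # hypothetical altimeter readings (m)
      scale = abs(range_prev - range_curr)        # crude along-boresight scale estimate
      t = t_unit * scale                          # metric translation between frames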

  12. Orientation Preferences and Motion Sickness Induced in a Virtual Reality Environment.

    PubMed

    Chen, Wei; Chao, Jian-Gang; Zhang, Yan; Wang, Jin-Kun; Chen, Xue-Wen; Tan, Cheng

    2017-10-01

    Astronauts' orientation preferences tend to correlate with their susceptibility to space motion sickness (SMS). Orientation preferences appear universally, since variable sensory cue priorities are used between individuals. However, SMS susceptibility changes after proper training, while orientation preferences seem to be intrinsic proclivities. The present study was conducted to investigate whether orientation preferences change if susceptibility is reduced after repeated exposure to a virtual reality (VR) stimulus environment that induces SMS. A horizontal supine posture was chosen to create a sensory context similar to weightlessness, and two VR devices were used to produce a highly immersive virtual scene. Subjects were randomly allocated to an experimental group (trained through exposure to a provocative rotating virtual scene) and a control group (untrained). All subjects' orientation preferences were measured twice with the same interval, but the experimental group was trained three times during the interval, while the control group was not. Trained subjects were less susceptible to SMS, with symptom scores reduced by 40%. Compared with untrained subjects, trained subjects' orientation preferences were significantly different between pre- and posttraining assessments. Trained subjects depended less on visual cues, whereas few subjects demonstrated the opposite tendency. Results suggest that visual information may be inefficient and unreliable for body orientation and stabilization in a rotating visual scene, while reprioritizing preferences for different sensory cues was dynamic and asymmetric between individuals. The present findings should facilitate customization of efficient and proper training for astronauts with different sensory prioritization preferences and dynamic characteristics.Chen W, Chao J-G, Zhang Y, Wang J-K, Chen X-W, Tan C. Orientation preferences and motion sickness induced in a virtual reality environment. Aerosp Med Hum Perform. 2017; 88(10):903-910.

  13. Validation of an image registration and segmentation method to measure stent graft motion on ECG-gated CT using a physical dynamic stent graft model

    NASA Astrophysics Data System (ADS)

    Koenrades, Maaike A.; Struijs, Ella M.; Klein, Almar; Kuipers, Henny; Geelkerken, Robert H.; Slump, Cornelis H.

    2017-03-01

    The application of endovascular aortic aneurysm repair has expanded over the last decade. However, the long-term performance of stent grafts, in particular durable fixation and sealing to the aortic wall, remains the main concern of this treatment. The sealing and fixation are challenged at every heartbeat due to downward and radial pulsatile forces. Yet knowledge on cardiac-induced dynamics of implanted stent grafts is sparse, as it is not measured in routine clinical follow-up. Such knowledge is particularly relevant to perform fatigue tests, to predict failure in the individual patient and to improve stent graft designs. Using a physical dynamic stent graft model in an anthropomorphic phantom, we have evaluated the performance of our previously proposed segmentation and registration algorithm to detect periodic motion of stent grafts on ECG-gated (3D+t) CT data. Abdominal aortic motion profiles were simulated in two series of Gaussian-based patterns with different amplitudes and frequencies. Experiments were performed on a 64-slice CT scanner with a helical scan protocol and retrospective gating. Motion patterns as estimated by our algorithm were compared to motion patterns obtained from optical camera recordings of the physical stent graft model in motion. Absolute errors of the patterns' amplitude were smaller than 0.28 mm. Even the motion pattern with an amplitude of 0.23 mm was measured, although the amplitude of motion was overestimated by the algorithm by 43%. We conclude that the algorithm performs well for measurement of stent graft motion in the mm and sub-mm range. This ultimately is expected to aid patient-specific risk assessment and the improvement of stent graft designs.

  14. The Impact of Older Age and Sex on Motion Discrimination.

    PubMed

    Conlon, Elizabeth G; Power, Garry F; Hine, Trevor J; Rahaley, Nicole

    2017-01-01

    Background/Study Context: Reports of age-related differences on motion discrimination tasks have produced inconsistent findings concerning the influence of sex. Some studies have reported that older women have higher thresholds than older men, with others finding that women have higher motion thresholds regardless of age group. Reports of the age at which declines in motion discrimination first occur also differ, with some studies reporting declines only in groups aged over 70 years and others reporting that age-related decline occurs at a younger age. The current study aimed to determine whether the sex differences found occur because, relative to men, women have greater difficulty extracting motion signals from noise (Experiment 1) or have greater difficulty making use of the available motion cues (Experiment 2) in these complex moving stimuli. In addition, the influence of these manipulations on groups aged under and over 70 years was explored. Motion discrimination measures were obtained using 39 older adults aged between 60 and 85 years (21 women) and 40 younger adults aged between 20 and 45 years (20 women). In Experiment 1, coherent motion and relative motion displacement thresholds were obtained. In Experiment 2, coherent motion thresholds were obtained for stimuli containing either 150 or 600 dots. In Experiment 1, the older group had significantly higher thresholds on the relative motion displacement and coherent motion tasks than the younger group. No differences in motion sensitivity were found between the older groups aged under and over 70 years. Women, regardless of age group, had significantly higher thresholds than men on both tasks. In Experiment 2, the older group had higher coherence thresholds than the younger group, and the number of dots presented had no influence on thresholds for the older group or for older women specifically. In the younger group, women had higher coherence thresholds than men with presentation of 150 but not 600 dots. Fifty-one percent of the older group showed evidence of age-related decline on all the motion coherence tasks conducted, with half of these in each of the groups aged under and over 70 years. Difficulties with noise exclusion failed to explain the sex differences found. The increased number of motion cues present when a larger number of dots were included was sufficient to reduce coherence thresholds in younger women but not in older men or women. In addition to age, developmental history and sex may provide further predictors of decline on measures of motion discrimination in older individuals.

  15. Constrained motion model of mobile robots and its applications.

    PubMed

    Zhang, Fei; Xi, Yugeng; Lin, Zongli; Chen, Weidong

    2009-06-01

    Target detecting and dynamic coverage are fundamental tasks in mobile robotics and represent two important features of mobile robots: mobility and perceptivity. This paper establishes the constrained motion model and sensor model of a mobile robot to represent these two features and defines the k-step reachable region to describe the states that the robot may reach. We show that the calculation of the k-step reachable region can be reduced from that of 2^k reachable regions with fixed motion styles to k + 1 such regions, and provide an algorithm for its calculation. Based on the constrained motion model and the k-step reachable region, the problems associated with target detecting and dynamic coverage are formulated and solved. For target detecting, the k-step detectable region is used to describe the area that the robot may detect, and an algorithm for detecting a target and planning the optimal path is proposed. For dynamic coverage, the k-step detected region is used to represent the area that the robot has detected during its motion, and the dynamic-coverage strategy and algorithm are proposed. Simulation results demonstrate the efficiency of the coverage algorithm in both convex and concave environments.
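
    A toy version of the k-step reachable region can be computed by propagating (position, heading) states under a motion constraint. The grid, the "no reversing" rule and the 90-degree turn set below are assumptions chosen for illustration, not the paper's motion model.

      MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}
      ALLOWED = {"N": "NEW", "E": "NES", "S": "ESW", "W": "NSW"}   # no 180-degree turns

      def k_step_reachable(k, start=(0, 0), heading="N"):
          """Cells reachable in exactly k constrained steps on a unit grid."""
          states = {(start, heading)}
          for _ in range(k):
              nxt = set()
              for (x, y), h in states:
                  for h2 in ALLOWED[h]:
                      dx, dy = MOVES[h2]
                      nxt.add(((x + dx, y + dy), h2))
              states = nxt
          return {pos for pos, _ in states}

      print(len(k_step_reachable(3)))   # size of the 3-step reachable region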

  16. Auditorily-induced illusory self-motion: a review.

    PubMed

    Väljamäe, Aleksander

    2009-10-01

    The aim of this paper is to provide a first review of studies related to auditorily-induced self-motion (vection). These studies have been scarce and scattered over the years and over several research communities including clinical audiology, multisensory perception of self-motion and its neural correlates, ergonomics, and virtual reality. The reviewed studies provide evidence that auditorily-induced vection has behavioral, physiological and neural correlates. Although the sound contribution to self-motion perception appears to be weaker than the visual modality, specific acoustic cues appear to be instrumental for a number of domains including posture prosthesis, navigation in unusual gravitoinertial environments (in the air, in space, or underwater), non-visual navigation, and multisensory integration during self-motion. A number of open research questions are highlighted opening avenue for more active and systematic studies in this area.

  17. Vestibular selection criteria development. [assessing susceptibility to motion sickness during orbital space flight]

    NASA Technical Reports Server (NTRS)

    Lackner, J. R.

    1981-01-01

    The experimental elicitation of motion sickness using a short arm centrifuge or a rotating chair surrounded by a striped cylindrical enclosure failed to reveal any systematic group or consistent individual relationship between changes in heart rate, blood pressure, and body temperature and the appearance of symptoms of motion sickness. A study of the influence of vision on susceptibility to motion sickness during sudden stop simulation shows that having the eyes open during any part of the sudden stop assessment is more stressful than having them closed throughout the test. Subjects were found to be highly susceptible to motion sickness when tested in free fall and in high force phases of flight. The effects of touch and pressure cues on body orientation during rotation and in parabolic flight are considered, along with sensory as well as motor adaptation.

  18. Restoration of motion blurred image with Lucy-Richardson algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jing; Liu, Zhao Hui; Zhou, Liang

    2015-10-01

    Images will be blurred by relative motion between the camera and the object of interest. In this paper, we analyzed the formation of motion-blurred images and demonstrated a restoration method based on the Lucy-Richardson algorithm. The blur extent and angle can be estimated by a Radon transform algorithm and the auto-correlation function, respectively, and then the point spread function (PSF) of the motion-blurred image can be obtained. With the help of the obtained PSF, the Lucy-Richardson restoration algorithm is used for experimental analysis on motion-blurred images that have different blur extents, spatial resolutions and signal-to-noise ratios (SNRs). Its effectiveness is also evaluated by structural similarity (SSIM). Further studies show that, first, for an image with a spatial frequency of 0.2 cycles per pixel, the modulation transfer function (MTF) of the restored images remains above 0.7 when the blur extent is no larger than 13 pixels. That means the method compensates the low-frequency information of the image while attenuating the high-frequency information. Second, we found that the method is more effective when the product of the blur extent and spatial frequency is smaller than 3.75. Finally, the Lucy-Richardson algorithm is found to be insensitive to Gaussian noise (with variance no larger than 0.1), as shown by calculating the MTF of the restored image.
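
    A minimal sketch of the restoration step, assuming the PSF is already known (here a horizontal 9-pixel motion blur); the Radon-transform and autocorrelation estimation of blur angle and extent described above is omitted.

      import numpy as np
      from scipy.signal import fftconvolve
      from skimage import data, restoration

      image = data.camera().astype(float) / 255.0
      psf = np.zeros((9, 9))
      psf[4, :] = 1.0 / 9.0                                         # horizontal motion-blur PSF
      blurred = fftconvolve(image, psf, mode="same")
      blurred += 0.001 * np.random.standard_normal(blurred.shape)   # mild sensor noise

      restored = restoration.richardson_lucy(blurred, psf, 30)      # 30 RL iterations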

  19. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.

    PubMed

    Ci, Wenyan; Huang, Yingping

    2016-10-17

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade-Lucas-Tomasi (KLT) algorithm. A circle matching step follows to remove the outliers caused by mismatching in the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
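
    The front end of such a pipeline (corner detection, KLT tracking and RANSAC outlier rejection) can be sketched with standard OpenCV calls. The frame files are placeholders, and the circle matching and space-position constraint described above are omitted.

      import cv2

      prev_gray = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
      curr_gray = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

      # KLT: detect corners in the previous frame and track them into the current frame.
      p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01, minDistance=7)
      p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
      good0 = p0[status.ravel() == 1].reshape(-1, 2)
      good1 = p1[status.ravel() == 1].reshape(-1, 2)

      # RANSAC on the epipolar geometry rejects remaining outliers before motion fitting.
      F, inlier_mask = cv2.findFundamentalMat(good0, good1, cv2.FM_RANSAC, 1.0, 0.99)
      inliers0 = good0[inlier_mask.ravel() == 1]
      inliers1 = good1[inlier_mask.ravel() == 1]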

  20. A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

    PubMed Central

    Ci, Wenyan; Huang, Yingping

    2016-01-01

    Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth and camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function, using the iterative Levenberg–Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected by using the Kanade–Lucas–Tomasi (KLT) algorithm. A circle matching step follows to remove the outliers caused by mismatching in the KLT algorithm. A space position constraint is imposed to filter out the moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method. PMID:27763508

  1. Pose and motion recovery from feature correspondences and a digital terrain map.

    PubMed

    Lerner, Ronen; Rivlin, Ehud; Rotstein, Héctor P

    2006-09-01

    A novel algorithm for pose and motion estimation using corresponding features and a Digital Terrain Map is proposed. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables the elimination of the ambiguity present in vision-based algorithms for motion recovery. As a consequence, the absolute position and orientation of a camera can be recovered with respect to the external reference frame. In order to do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. Explicit reconstruction of the 3D world is not required. When considering a number of feature points, the resulting constraints can be solved using nonlinear optimization in terms of position, orientation, and motion. Such a procedure requires an initial guess of these parameters, which can be obtained from dead-reckoning or any other source. The feasibility of the algorithm is established through extensive experimentation. Performance is compared with a state-of-the-art alternative algorithm, which reconstructs the 3D structure as an intermediate step and then registers it to the DTM. A clear advantage for the novel algorithm is demonstrated in a variety of scenarios.

  2. List-mode reconstruction for the Biograph mCT with physics modeling and event-by-event motion correction

    NASA Astrophysics Data System (ADS)

    Jin, Xiao; Chan, Chung; Mulnix, Tim; Panin, Vladimir; Casey, Michael E.; Liu, Chi; Carson, Richard E.

    2013-08-01

    Whole-body PET/CT scanners are important clinical and research tools to study tracer distribution throughout the body. In whole-body studies, respiratory motion results in image artifacts. We have previously demonstrated for brain imaging that, when provided with accurate motion data, event-by-event correction has better accuracy than frame-based methods. Therefore, the goal of this work was to develop a list-mode reconstruction with novel physics modeling for the Siemens Biograph mCT with event-by-event motion correction, based on the MOLAR platform (Motion-compensation OSEM List-mode Algorithm for Resolution-Recovery Reconstruction). Application of MOLAR for the mCT required two algorithmic developments. First, in routine studies, the mCT collects list-mode data in 32 bit packets, where averaging of lines-of-response (LORs) by axial span and angular mashing reduced the number of LORs so that 32 bits are sufficient to address all sinogram bins. This degrades spatial resolution. In this work, we proposed a probabilistic LOR (pLOR) position technique that addresses axial and transaxial LOR grouping in 32 bit data. Second, two simplified approaches for 3D time-of-flight (TOF) scatter estimation were developed to accelerate the computationally intensive calculation without compromising accuracy. The proposed list-mode reconstruction algorithm was compared to the manufacturer's point spread function + TOF (PSF+TOF) algorithm. Phantom, animal, and human studies demonstrated that MOLAR with pLOR gives slightly faster contrast recovery than the PSF+TOF algorithm that uses the average 32 bit LOR sinogram positioning. Moving phantom and a whole-body human study suggested that event-by-event motion correction reduces image blurring caused by respiratory motion. We conclude that list-mode reconstruction with pLOR positioning provides a platform to generate high quality images for the mCT, and to recover fine structures in whole-body PET scans through event-by-event motion correction.

  3. List-mode Reconstruction for the Biograph mCT with Physics Modeling and Event-by-Event Motion Correction

    PubMed Central

    Jin, Xiao; Chan, Chung; Mulnix, Tim; Panin, Vladimir; Casey, Michael E.; Liu, Chi; Carson, Richard E.

    2013-01-01

    Whole-body PET/CT scanners are important clinical and research tools to study tracer distribution throughout the body. In whole-body studies, respiratory motion results in image artifacts. We have previously demonstrated for brain imaging that, when provided accurate motion data, event-by-event correction has better accuracy than frame-based methods. Therefore, the goal of this work was to develop a list-mode reconstruction with novel physics modeling for the Siemens Biograph mCT with event-by-event motion correction, based on the MOLAR platform (Motion-compensation OSEM List-mode Algorithm for Resolution-Recovery Reconstruction). Application of MOLAR for the mCT required two algorithmic developments. First, in routine studies, the mCT collects list-mode data in 32-bit packets, where averaging of lines of response (LORs) by axial span and angular mashing reduced the number of LORs so that 32 bits are sufficient to address all sinogram bins. This degrades spatial resolution. In this work, we proposed a probabilistic assignment of LOR positions (pLOR) that addresses axial and transaxial LOR grouping in 32-bit data. Second, two simplified approaches for 3D TOF scatter estimation were developed to accelerate the computationally intensive calculation without compromising accuracy. The proposed list-mode reconstruction algorithm was compared to the manufacturer's point spread function + time-of-flight (PSF+TOF) algorithm. Phantom, animal, and human studies demonstrated that MOLAR with pLOR gives slightly faster contrast recovery than the PSF+TOF algorithm that uses the average 32-bit LOR sinogram positioning. Moving phantom and a whole-body human study suggested that event-by-event motion correction reduces image blurring caused by respiratory motion. We conclude that list-mode reconstruction with pLOR positioning provides a platform to generate high quality images for the mCT, and to recover fine structures in whole-body PET scans through event-by-event motion correction. PMID:23892635

  4. WE-AB-204-09: Respiratory Motion Correction in 4D-PET by Simultaneous Motion Estimation and Image Reconstruction (SMEIR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalantari, F; Wang, J; Li, T

    2015-06-15

    Purpose: In conventional 4D-PET, images from different frames are reconstructed individually and aligned by registration methods. Two issues with these approaches are: 1) reconstruction algorithms do not make full use of the projection statistics; and 2) image registration between noisy images can result in poor alignment. In this study we investigated the use of the simultaneous motion estimation and image reconstruction (SMEIR) method, developed for cone beam CT, for motion estimation/correction in 4D-PET. Methods: A modified ordered-subset expectation maximization algorithm coupled with total variation minimization (OSEM-TV) is used to obtain a primary motion-compensated PET (pmc-PET) from all projection data, using Demons-derived deformation vector fields (DVFs) as the initial estimate. A motion model update is performed to obtain an optimal set of DVFs between the pmc-PET and the other phases by matching the forward projection of the deformed pmc-PET to the measured projections of the other phases. Using the updated DVFs, the OSEM-TV image reconstruction is repeated and new DVFs are estimated based on the updated images. A 4D XCAT phantom with a typical FDG biodistribution and a 10 mm diameter tumor was used to evaluate the performance of the SMEIR algorithm. Results: Image quality of 4D-PET is greatly improved by the SMEIR algorithm. When all projections are used to reconstruct a 3D-PET, motion blurring artifacts are present, leading to a more than five-fold overestimation of the tumor size and a 54% underestimation of the tumor-to-lung contrast ratio. This error is reduced to 37% and 20% for post-reconstruction registration methods and SMEIR, respectively. Conclusion: The SMEIR method can be used for motion estimation/correction in 4D-PET. The statistics are greatly improved since all projection data are combined to update the image. The performance of the SMEIR algorithm for 4D-PET is sensitive to the smoothness control parameters in the DVF estimation step.

  5. Object instance recognition using motion cues and instance specific appearance models

    NASA Astrophysics Data System (ADS)

    Schumann, Arne

    2014-03-01

    In this paper we present an object instance retrieval approach. The baseline approach consists of a pool of image features which are computed on the bounding boxes of a query object track and compared to a database of tracks in order to find additional appearances of the same object instance. We improve over this simple baseline approach in multiple ways: 1) we include motion cues to achieve improved robustness to viewpoint and rotation changes, 2) we include operator feedback to iteratively re-rank the resulting retrieval lists and 3) we use operator feedback and location constraints to train classifiers and learn an instance specific appearance model. We use these classifiers to further improve the retrieval results. The approach is evaluated on two popular public datasets for two different applications. We evaluate person re-identification on the CAVIAR shopping mall surveillance dataset and vehicle instance recognition on the VIVID aerial dataset and achieve significant improvements over our baseline results.

  6. Binocular disparities, motion parallax, and geometric perspective in Patrick Hughes's 'reverspectives': theoretical analysis and empirical findings.

    PubMed

    Rogers, Brian; Gyani, Alex

    2010-01-01

    Patrick Hughes's 'reverspective' artworks provide a novel way of investigating the effectiveness of different sources of 3-D information for the human visual system. Our empirical findings show that the converging lines of simple linear perspective can be as effective as the rich array of 3-D cues present in natural scenes in determining what we see, even when these cues are in conflict with binocular disparities. Theoretical considerations reveal that, once the information provided by motion parallax transformations is correctly understood, there is no need to invoke higher-level processes or an interpretation based on familiarity or past experience in order to explain either the 'reversed' depth or the apparent, concomitant rotation of a reverspective artwork as the observer moves from side to side. What we see in reverspectives is the most likely real-world scenario (distal stimulus) that could have created the perspective and parallax transformations (proximal stimulus) that stimulate our visual systems.

  7. Neural basis of the cognitive map: path integration does not require hippocampus or entorhinal cortex.

    PubMed

    Shrager, Yael; Kirwan, C Brock; Squire, Larry R

    2008-08-19

    The hippocampus and entorhinal cortex have been linked to both memory functions and to spatial cognition, but it has been unclear how these ideas relate to each other. An important part of spatial cognition is the ability to keep track of a reference location using self-motion cues (sometimes referred to as path integration), and it has been suggested that the hippocampus or entorhinal cortex is essential for this ability. Patients with hippocampal lesions or larger lesions that also included entorhinal cortex were led on paths while blindfolded (up to 15 m in length) and were asked to actively maintain the path in mind. Patients pointed to and estimated their distance from the start location as accurately as controls. A rotation condition confirmed that performance was based on self-motion cues. When demands on long-term memory were increased, patients were impaired. Thus, in humans, the hippocampus and entorhinal cortex are not essential for path integration.

  8. Activation of the Human MT Complex by Motion in Depth Induced by a Moving Cast Shadow

    PubMed Central

    Katsuyama, Narumi; Usui, Nobuo; Taira, Masato

    2016-01-01

    A moving cast shadow is a powerful monocular depth cue for motion perception in depth. For example, when a cast shadow moves away from or toward an object in a two-dimensional plane, the object appears to move toward or away from the observer in depth, respectively, whereas the size and position of the object are constant. Although the cortical mechanisms underlying motion perception in depth by cast shadow are unknown, the human MT complex (hMT+) is likely involved in the process, as it is sensitive to motion in depth represented by binocular depth cues. In the present study, we examined this possibility by using a functional magnetic resonance imaging (fMRI) technique. First, we identified the cortical regions sensitive to the motion of a square in depth represented via binocular disparity. Consistent with previous studies, we observed significant activation in the bilateral hMT+, and defined functional regions of interest (ROIs) there. We then investigated the activity of the ROIs during observation of the following stimuli: 1) a central square that appeared to move back and forth via a moving cast shadow (mCS); 2) a segmented and scrambled cast shadow presented beside the square (sCS); and 3) no cast shadow (nCS). Participants perceived motion of the square in depth in the mCS condition only. The activity of the hMT+ was significantly higher in the mCS compared with the sCS and nCS conditions. Moreover, the hMT+ was activated equally in both hemispheres in the mCS condition, despite presentation of the cast shadow in the bottom-right quadrant of the stimulus. Perception of the square moving in depth across visual hemifields may be reflected in the bilateral activation of the hMT+. We concluded that the hMT+ is involved in motion perception in depth induced by moving cast shadow and by binocular disparity. PMID:27597999

  9. Typical use of inverse dynamics in perceiving motion in autistic adults: Exploring computational principles of perception and action.

    PubMed

    Takamuku, Shinya; Forbes, Paul A G; Hamilton, Antonia F de C; Gomi, Hiroaki

    2018-05-07

    There is increasing evidence for motor difficulties in many people with autism spectrum condition (ASC). These difficulties could be linked to differences in the use of internal models which represent relations between motions and forces/efforts. The use of these internal models may be dependent on the cerebellum, which has been shown to be abnormal in autism. Several studies have examined internal computations of forward dynamics (motion from force information) in autism, but few have tested the inverse dynamics computation, that is, the determination of force-related information from motion information. Here, we examined this ability in autistic adults by measuring two perceptual biases which depend on the inverse computation. First, we asked participants whether they experienced a feeling of resistance when moving a delayed cursor, which corresponds to the inertial force of the cursor implied by its motion; both typical and ASC participants reported similar feelings of resistance. Second, participants completed a psychophysical task in which they judged the velocity of a moving hand with or without a visual cue implying inertial force. Both typical and ASC participants perceived the hand moving with the inertial cue to be slower than the hand without it. In both cases, the magnitude of the effects did not differ between the two groups. Our results suggest that the neural systems engaged in the inverse dynamics computation are preserved in ASC, at least in the observed conditions. In summary, we tested the ability to estimate force information from motion information, which arises from a specific "inverse dynamics" computation. Autistic adults and a matched control group reported feeling a resistive sensation when moving a delayed cursor and also judged a moving hand to be slower when it was pulling a load. These findings both suggest that the ability to estimate force information from motion information is intact in autism. © 2018 International Society for Autism Research, Wiley Periodicals, Inc.

  10. Sensorimotor Model of Obstacle Avoidance in Echolocating Bats

    PubMed Central

    Vanderelst, Dieter; Holderied, Marc W.; Peremans, Herbert

    2015-01-01

    Bat echolocation is an ability consisting of many subtasks such as navigation, prey detection and object recognition. Understanding the echolocation capabilities of bats comes down to isolating the minimal set of acoustic cues needed to complete each task. For some tasks, the minimal cues have already been identified. However, while a number of possible cues have been suggested, little is known about the minimal cues supporting obstacle avoidance in echolocating bats. In this paper, we propose that the Interaural Intensity Difference (IID) and travel time of the first millisecond of the echo train are sufficient cues for obstacle avoidance. We describe a simple control algorithm based on the use of these cues in combination with alternating ear positions modeled after the constant frequency bat Rhinolophus rouxii. Using spatial simulations (2D and 3D), we show that simple phonotaxis can steer a bat clear from obstacles without performing a reconstruction of the 3D layout of the scene. As such, this paper presents the first computationally explicit explanation for obstacle avoidance validated in complex simulated environments. Based on additional simulations modelling the FM bat Phyllostomus discolor, we conjecture that the proposed cues can be exploited by constant frequency (CF) bats and frequency modulated (FM) bats alike. We hypothesize that using a low level yet robust cue for obstacle avoidance allows bats to comply with the hard real-time constraints of this basic behaviour. PMID:26502063

  11. Exact Fan-Beam Reconstruction With Arbitrary Object Translations and Truncated Projections

    NASA Astrophysics Data System (ADS)

    Hoskovec, Jan; Clackdoyle, Rolf; Desbat, Laurent; Rit, Simon

    2016-06-01

    This article proposes a new method for reconstructing two-dimensional (2D) computed tomography (CT) images from truncated and motion-contaminated sinograms. The type of motion considered here is a sequence of rigid translations which are assumed to be known. The algorithm first identifies the sufficiency of angular coverage at each 2D point of the CT image to calculate the Hilbert transform from the local “virtual” trajectory, which accounts for the motion and the truncation. By taking advantage of data redundancy in the full circular scan, our method expands the reconstructible region beyond the one obtained with chord-based methods. The proposed direct reconstruction algorithm is based on the Differentiated Back-Projection with Hilbert filtering (DBP-H). The motion is taken into account during backprojection, which is the first step of our direct reconstruction, before taking the derivatives and inverting the finite Hilbert transform. The algorithm has been tested in a proof-of-concept study on Shepp-Logan phantom simulations with several motion cases and detector sizes.

  12. An error-based micro-sensor capture system for real-time motion estimation

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li

    2017-10-01

    A wearable micro-sensor motion capture system with 16 IMUs and an error-compensatory complementary filter algorithm for real-time motion estimation has been developed to acquire accurate 3D orientation and displacement during real-life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been a challenge in micro-sensor motion capture. An adaptive gait phase detection algorithm has been developed to accommodate accurate displacement estimation in different types of activities. The performance of this system is benchmarked against a VICON optical capture system. The experimental results demonstrate the effectiveness of the system in tracking daily activities, with an estimation error of 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).
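
    The core idea of a complementary filter (fast gyro integration corrected by a slow accelerometer reference) can be shown for a single tilt angle. This is a minimal sketch of the general technique, not the 16-IMU error-compensatory filter described above; the signals and gain are assumptions.

      import numpy as np

      def complementary_filter(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
          """Fuse integrated gyro rate (short term) with accelerometer tilt (long term)."""
          angle = accel_angle[0]
          est = []
          for w, a in zip(gyro_rate, accel_angle):
              angle = alpha * (angle + w * dt) + (1.0 - alpha) * a   # drift correction
              est.append(angle)
          return np.array(est)

      t = np.arange(0, 5, 0.01)
      true_angle = 0.5 * np.sin(t)
      gyro = np.gradient(true_angle, 0.01) + 0.02           # rate with a constant bias
      accel = true_angle + 0.05 * np.random.randn(t.size)   # noisy tilt from accelerometer
      print(complementary_filter(gyro, accel)[-5:])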

  13. Bounded Kalman filter method for motion-robust, non-contact heart rate estimation

    PubMed Central

    Prakash, Sakthi Kumar Arul; Tucker, Conrad S.

    2018-01-01

    The authors of this work present a real-time measurement of heart rate across different lighting conditions and motion categories. This is an advancement over existing remote photoplethysmography (rPPG) methods that require a static, controlled environment for heart rate detection, making them impractical for real-world scenarios wherein a patient may be in motion or remotely connected to a healthcare provider through telehealth technologies. The algorithm aims to minimize motion artifacts such as blurring and noise due to head movements (uniform, random) by employing i) a blur identification and denoising algorithm for each frame and ii) a bounded Kalman filter technique for motion estimation and feature tracking. A case study is presented that demonstrates the feasibility of the algorithm in non-contact estimation of the pulse rate of subjects performing everyday head and body movements. The method in this paper outperforms state-of-the-art rPPG methods in heart rate detection, as revealed by the benchmarked results. PMID:29552419
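
    One plausible reading of a "bounded" Kalman filter is a standard constant-velocity filter whose innovation is clipped to a physically plausible per-frame motion, so that motion artifacts cannot drag the state estimate. The sketch below follows that reading; the authors' exact bounding rule may differ.

      import numpy as np

      F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
      H = np.array([[1.0, 0.0]])                # only position is observed
      Q = np.eye(2) * 1e-3                      # process noise
      R = np.array([[0.5]])                     # measurement noise
      MAX_STEP = 3.0                            # bound on plausible per-frame displacement
      x, P = np.array([0.0, 0.0]), np.eye(2)

      def step(x, P, z):
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          innov = np.clip(z - H @ x_pred, -MAX_STEP, MAX_STEP)   # bounded innovation
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + (K @ innov).ravel()
          P_new = (np.eye(2) - K @ H) @ P_pred
          return x_new, P_new

      for z in [0.2, 0.5, 9.0, 1.1]:            # 9.0 mimics a motion-artifact outlier
          x, P = step(x, P, np.array([z]))
          print(x)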

  14. Spacelab experiments on space motion sickness

    NASA Technical Reports Server (NTRS)

    Oman, C. M.

    1987-01-01

    Recent research results from ground and flight experiments on motion sickness and space sickness conducted by the Man Vehicle Laboratory are reviewed. New tools developed include a mathematical model for motion sickness, a method for quantitative measurements of skin pallor and blush in ambulatory subjects, and a magnitude estimation technique for ratio scaling of nausea or discomfort. These have been used to experimentally study the time course of skin pallor and subjective symptoms in laboratory motion sickness. In prolonged sickness, subjects become hypersensitive to nauseogenic stimuli. Results of a Spacelab-1 flight experiment are described in which four observers documented the stimulus factors for and the symptoms/signs of space sickness. The clinical character of space sickness differs somewhat from acute laboratory motion sickness. However SL-1 findings support the view that space sickness is fundamentally a motion sickness. Symptoms were subjectively alleviated by head movement restriction, maintenance of a familiar orientation with respect to the visual environment, and wedging between or strapping onto surfaces which provided broad contact cues confirming the absence of body motion.

  15. Spacelab experiments on space motion sickness

    NASA Technical Reports Server (NTRS)

    Oman, C. M.

    1985-01-01

    Recent research results from ground and flight experiments on motion sickness and space sickness conducted by the Man Vehicle Laboratory are reviewed. New tools developed include a mathematical model for motion sickness, a method for quantitative measurement of skin pallor and blush in ambulatory subjects, and a magnitude estimation technique for ratio scaling of nausea or discomfort. These have been used to experimentally study the time course of skin pallor and subjective symptoms in laboratory motion sickness. In prolonged sickness, subjects become hypersensitive to nauseogenic stimuli. Results of a Spacelab-1 flight experiment are described in which 4 observers documented the stimulus factors for and the symptoms/signs of space sickness. The clinical character of space sickness differs somewhat from acute laboratory motion sickness. However SL-1 findings support the view that space sickness is fundamentally a motion sickness. Symptoms were subjectively alleviated by head movement restriction, maintenance of a familiar orientation with respect to the visual environment, and wedging between or strapping onto surfaces which provided broad contact cues confirming the absence of body motion.

  16. Spacelab experiments on space motion sickness.

    PubMed

    Oman, C M

    1987-01-01

    Recent research results from ground and flight experiments on motion sickness and space sickness conducted by the Man Vehicle Laboratory are reviewed. New tools developed include a mathematical model for motion sickness, a method for quantitative measurements of skin pallor and blush in ambulatory subjects, and a magnitude estimation technique for ratio scaling of nausea or discomfort. These have been used to experimentally study the time course of skin pallor and subjective symptoms in laboratory motion sickness. In prolonged sickness, subjects become hypersensitive to nauseogenic stimuli. Results of a Spacelab-1 flight experiment are described in which four observers documented the stimulus factors for and the symptoms/signs of space sickness. The clinical character of space sickness differs somewhat from acute laboratory motion sickness. However SL-1 findings support the view that space sickness is fundamentally a motion sickness. Symptoms were subjectively alleviated by head movement restriction, maintenance of a familiar orientation with respect to the visual environment, and wedging between or strapping onto surfaces which provided broad contact cues confirming the absence of body motion.

  17. An accurate algorithm to calculate the Hurst exponent of self-similar processes

    NASA Astrophysics Data System (ADS)

    Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.; Román-Sánchez, I. M.

    2014-06-01

    In this paper, we introduce a new approach which generalizes the GM2 algorithm (introduced in Sánchez-Granero et al. (2008) [52]) as well as the fractal dimension algorithms FD1, FD2 and FD3 (which first appeared in Sánchez-Granero et al. (2012) [51]), providing an accurate algorithm to calculate the Hurst exponent of self-similar processes. We prove that this algorithm performs properly in the case of short time series when fractional Brownian motions and Lévy stable motions are considered. We conclude the paper with a dynamic study of the Hurst exponent evolution of the S&P500 index stocks.
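
    For reference, the classical rescaled-range (R/S) estimator sketched below illustrates what a Hurst-exponent estimate of a self-similar series looks like; it is not the GM2 or FD algorithms introduced in the paper.

      import numpy as np

      def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128, 256)):
          """Estimate the Hurst exponent of a 1D series by the rescaled-range method."""
          rs = []
          for n in window_sizes:
              vals = []
              for i in range(0, len(x) - n + 1, n):
                  c = x[i:i + n]
                  dev = np.cumsum(c - c.mean())
                  s = c.std(ddof=1)
                  if s > 0:
                      vals.append((dev.max() - dev.min()) / s)
              rs.append(np.mean(vals))
          slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)   # log-log fit
          return slope

      rng = np.random.default_rng(1)
      increments = rng.standard_normal(4096)   # Gaussian white noise (Brownian increments)
      print(hurst_rs(increments))              # expected to be close to 0.5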

  18. Experimental verification of a 4D MLEM reconstruction algorithm used for in-beam PET measurements in particle therapy

    NASA Astrophysics Data System (ADS)

    Stützer, K.; Bert, C.; Enghardt, W.; Helmbrecht, S.; Parodi, K.; Priegnitz, M.; Saito, N.; Fiedler, F.

    2013-08-01

    In-beam positron emission tomography (PET) has been proven to be a reliable technique in ion beam radiotherapy for the in situ and non-invasive evaluation of the correct dose deposition in static tumour entities. In the presence of intra-fractional target motion an appropriate time-resolved (four-dimensional, 4D) reconstruction algorithm has to be used to avoid reconstructed activity distributions suffering from motion-related blurring artefacts and to allow for a dedicated dose monitoring. Four-dimensional reconstruction algorithms from diagnostic PET imaging that can properly handle the typically low counting statistics of in-beam PET data have been adapted and optimized for the characteristics of the double-head PET scanner BASTEI installed at GSI Helmholtzzentrum Darmstadt, Germany (GSI). Systematic investigations with moving radioactive sources demonstrate the more effective reduction of motion artefacts by applying a 4D maximum likelihood expectation maximization (MLEM) algorithm instead of the retrospective co-registration of phasewise reconstructed quasi-static activity distributions. Further 4D MLEM results are presented from in-beam PET measurements of irradiated moving phantoms which verify the accessibility of relevant parameters for the dose monitoring of intra-fractionally moving targets. From in-beam PET listmode data sets acquired together with a motion surrogate signal, valuable images can be generated by the 4D MLEM reconstruction for different motion patterns and motion-compensated beam delivery techniques.
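
    For orientation, the sketch below shows the basic multiplicative MLEM update on a made-up static 1D system; the 4D algorithm evaluated above extends this update with gate-dependent system information and motion handling, which is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.random((50, 20))                 # toy system matrix: 50 LORs x 20 voxels
      x_true = rng.random(20)
      y = rng.poisson(A @ x_true * 100) / 100  # noisy measured counts

      x = np.ones(20)                          # uniform initial activity estimate
      sens = A.sum(axis=0)                     # sensitivity image A^T 1
      for _ in range(50):
          proj = A @ x                         # forward projection
          ratio = y / np.maximum(proj, 1e-12)  # measured / estimated counts
          x = x / sens * (A.T @ ratio)         # multiplicative MLEM update
      print(x[:5])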

  19. On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.

    PubMed

    Shao, Zhanpeng; Li, Youfu

    2016-02-01

    Motion trajectories tracked from the motions of human, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segment of noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure the trajectory similarity to find similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they can capture the motion cues in trajectory matching and sign recognition satisfactorily.

  20. Live Speech Driven Head-and-Eye Motion Generators.

    PubMed

    Le, Binh H; Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and gradient descent optimization algorithm are employed to generate head motion from speech features; 2) Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features, and 3) nonnegative linear regression is used to model voluntary eye lid motion and log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.
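
    As a small illustration of one ingredient mentioned above, the sketch below samples involuntary blink onset times from a log-normal inter-blink interval model; the median gap and log-standard deviation are assumed values, not parameters fitted in the cited work.

```python
import numpy as np

def sample_blink_times(duration_s, median_gap_s=4.0, sigma=0.5, seed=None):
    """Sample involuntary blink onset times by drawing inter-blink intervals
    from a log-normal distribution (parameter values are assumptions)."""
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        t += rng.lognormal(mean=np.log(median_gap_s), sigma=sigma)
        if t > duration_s:
            return np.array(times)
        times.append(t)

print(sample_blink_times(30.0, seed=0).round(1))  # blink onsets over 30 s
```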

  1. Joint attention enhances visual working memory.

    PubMed

    Gregory, Samantha E A; Jackson, Margaret C

    2017-02-01

    Joint attention, the mutual focus of 2 individuals on an item, speeds detection and discrimination of target information. However, what happens to that information beyond the initial perceptual episode? To fully comprehend and engage with our immediate environment also requires working memory (WM), which integrates information from second to second to create a coherent and fluid picture of our world. Yet, no research exists at present that examines how joint attention directly impacts WM. To investigate this, we created a unique paradigm that combines gaze cues with a traditional visual WM task. A central, direct gaze 'cue' face looked left or right, followed 500 ms later by 4, 6, or 8 colored squares presented on one side of the face for encoding. Crucially, the cue face either looked at the squares (valid cue) or looked away from them (invalid cue). A no shift (direct gaze) condition served as a baseline. After a blank 1,000 ms maintenance interval, participants stated whether a single test square color was present or not in the preceding display. WM accuracy was significantly greater for colors encoded in the valid versus invalid and direct conditions. Further experiments showed that an arrow cue and a low-level motion cue, both shown to reliably orient attention, did not reliably modulate WM, indicating that social cues are more powerful. This study provides the first direct evidence that sharing the focus of another individual establishes a point of reference from which information is advantageously encoded into WM.

  2. A tyre slip-based integrated chassis control of front/rear traction distribution and four-wheel independent brake from moderate driving to limit handling

    NASA Astrophysics Data System (ADS)

    Joa, Eunhyek; Park, Kwanwoo; Koh, Youngil; Yi, Kyongsu; Kim, Kilsoo

    2018-04-01

    This paper presents a tyre slip-based integrated chassis control of front/rear traction distribution and four-wheel braking for enhanced performance from moderate driving to limit handling. The proposed algorithm adopts a hierarchical structure: supervisor, desired motion tracking controller, and optimisation-based control allocation. In the supervisor, the desired vehicle motion is calculated by considering transient cornering characteristics. In the desired motion tracking controller, a virtual control input is determined in the manner of sliding mode control in order to track the desired vehicle motion. In the control allocation, the virtual control input is allocated so as to minimise a cost function. The cost function consists of two major parts. The first part is a slip-based quantification of tyre friction utilisation, which does not require tyre force estimation. The second part is an allocation guideline, which guides the optimally allocated inputs towards a predefined solution. The proposed algorithm has been investigated via simulation of scenarios ranging from moderate driving to limit handling. Compared to the Base and direct yaw moment control systems, the proposed algorithm effectively reduces tyre dissipation energy in moderate driving and enhances limit handling performance. In addition, the proposed algorithm was compared with a control algorithm based on known tyre force information; the results show that its performance is similar to that of the algorithm with known tyre force information.
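
    For readers unfamiliar with optimisation-based control allocation, the sketch below shows a generic weighted least-squares allocator that maps a virtual control input (total longitudinal force and yaw moment) onto four wheel-level inputs. It is not the slip-based cost of the cited paper; the effectiveness matrix, weights and geometry are assumptions.

```python
import numpy as np

def allocate(B, v, W):
    """Weighted least-squares allocation: find wheel inputs u minimising
    u^T W u subject to reproducing the virtual control input v = B u."""
    Wi = np.diag(1.0 / np.diag(W))
    # Minimum-weighted-norm solution: u = W^-1 B^T (B W^-1 B^T)^-1 v
    return Wi @ B.T @ np.linalg.solve(B @ Wi @ B.T, v)

# Toy example: four longitudinal wheel forces produce a total force and a yaw moment.
half_track = 0.8                        # [m], assumed geometry
B = np.array([[1.0, 1.0, 1.0, 1.0],
              [-half_track, half_track, -half_track, half_track]])
v = np.array([4000.0, 300.0])           # desired Fx [N] and yaw moment [Nm]
u = allocate(B, v, W=np.eye(4))
print(u.round(1), (B @ u).round(1))     # allocated forces reproduce v
```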

  3. LCD motion blur reduction: a signal processing approach.

    PubMed

    Har-Noy, Shay; Nguyen, Truong Q

    2008-02-01

    Liquid crystal displays (LCDs) have shown great promise in the consumer market for their use as both computer and television displays. Despite their many advantages, the inherent sample-and-hold nature of LCD image formation results in a phenomenon known as motion blur. In this work, we develop a method for motion blur reduction using the Richardson-Lucy deconvolution algorithm in concert with motion vector information from the scene. We further refine our approach by introducing a perceptual significance metric that allows us to weight the amount of processing performed on different regions in the image. In addition, we analyze the role of motion vector errors in the quality of our resulting image. Perceptual tests indicate that our algorithm reduces the amount of perceivable motion blur in LCDs.
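
    As a simplified illustration of the deconvolution step, the sketch below applies Richardson-Lucy iterations to a 1-D signal blurred by a box (hold-type) kernel; in the cited approach the kernel would be derived from per-region motion vectors, and the perceptual weighting is not represented here.

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, n_iter=30, eps=1e-12):
    """Richardson-Lucy deconvolution of a 1-D signal with a known PSF."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        reblurred = np.convolve(estimate, psf, mode="same") + eps
        ratio = blurred / reblurred
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy example: a step signal blurred by a 9-tap box kernel (hold-type motion blur).
signal = np.repeat([0.1, 1.0, 0.3], 40)
psf = np.ones(9)
blurred = np.convolve(signal, psf / psf.sum(), mode="same")
restored = richardson_lucy_1d(blurred, psf)
print(np.abs(restored - signal).mean() < np.abs(blurred - signal).mean())
```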

  4. Algorithm and code development for unsteady three-dimensional Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru

    1991-01-01

    A streamwise upwind algorithm for solving the unsteady 3-D Navier-Stokes equations was extended to handle a moving grid system. The finite volume concept is essential to this extension, and the resulting algorithm is conservative for any motion of the coordinate system. Two extensions to an implicit method were considered, and the implicit extension that makes the algorithm computationally efficient was implemented into Ames's aeroelasticity code, ENSAERO. The new flow solver has been validated through the solution of test problems, including three-dimensional problems with fixed and moving grids. The first test case is an unsteady viscous flow over an F-5 wing, while the second considers the motion of the leading-edge vortex as well as the motion of the shock wave for a clipped delta wing. The upwind version yields higher accuracy in both steady and unsteady computations than the previously used central-difference method, with only a small increase in computational time.

  5. Differential processing of binocular and monocular gloss cues in human visual cortex.

    PubMed

    Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E

    2016-06-01

    The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues.

  6. Self-referential forces are sufficient to explain different dendritic morphologies

    PubMed Central

    Memelli, Heraldo; Torben-Nielsen, Benjamin; Kozloski, James

    2013-01-01

    Dendritic morphology constrains brain activity, as it determines first which neuronal circuits are possible and second which dendritic computations can be performed over a neuron's inputs. It is known that a range of chemical cues can influence the final shape of dendrites during development. Here, we investigate the extent to which self-referential influences, cues generated by the neuron itself, might influence morphology. To this end, we developed a phenomenological model and algorithm to generate virtual morphologies, which are then compared to experimentally reconstructed morphologies. In the model, branching probability follows a Galton–Watson process, while the geometry is determined by “homotypic forces” exerting influence on the direction of random growth in a constrained space. We model three such homotypic forces, namely an inertial force based on membrane stiffness, a soma-oriented tropism, and a force of self-avoidance, as directional biases in the growth algorithm. With computer simulations we explored how each bias shapes neuronal morphologies. We show that based on these principles, we can generate realistic morphologies of several distinct neuronal types. We discuss the extent to which homotypic forces might influence real dendritic morphologies, and speculate about the influence of other environmental cues on neuronal shape and circuitry. PMID:23386828
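
    A toy version of such biased random growth is sketched below: each growth step blends a random direction with the previous heading (membrane-stiffness inertia) and a soma-referenced tropism. The self-avoidance force and the Galton-Watson branching of the cited model are omitted, and all weights are assumed values.

```python
import numpy as np

def grow_neurite(n_steps=200, w_inertia=0.7, w_tropism=0.2, step=1.0, seed=0):
    """Grow one 2-D neurite as a biased random walk with inertia and a tropism
    pointing away from the soma at the origin (an assumed direction)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps + 1, 2))
    heading = np.array([1.0, 0.0])
    w_rand = 1.0 - w_inertia - w_tropism
    for i in range(n_steps):
        rand_dir = rng.normal(size=2)
        rand_dir /= np.linalg.norm(rand_dir)
        radial = pos[i]
        norm = np.linalg.norm(radial)
        tropism = radial / norm if norm > 0 else np.zeros(2)
        d = w_rand * rand_dir + w_inertia * heading + w_tropism * tropism
        heading = d / np.linalg.norm(d)
        pos[i + 1] = pos[i] + step * heading
    return pos

path = grow_neurite()
print(np.round(path[-1], 1))  # endpoint; higher w_inertia gives straighter branches
```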

  7. Visual Displays and Contextual Presentations in Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Park, Ok-choon

    1998-01-01

    Investigates the effects of two instructional strategies, visual display (animation, and static graphics with and without motion cues) and contextual presentation, in the acquisition of electronic troubleshooting skills using computer-based instruction. Study concludes that use of visual displays and contextual presentation be based on the…

  8. Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.

    PubMed

    Kangasmaa, Tuija S; Sohlberg, Antti O

    2014-07-01

    Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction techniques were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) that was used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
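
    Because mutual information is reported above as the most effective cost function, a minimal histogram-based version is sketched below for comparing two equally sized projection images; the bin count and the toy test are choices made here, not values from the cited study.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two equally sized images,
    usable as a similarity cost between measured and reprojected projections."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# A shifted copy of an image shares less information with it than the original does.
rng = np.random.default_rng(2)
img = rng.random((64, 64))
print(mutual_information(img, img) > mutual_information(img, np.roll(img, 5, axis=0)))
```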

  9. Model-free iterative control of repetitive dynamics for high-speed scanning in atomic force microscopy.

    PubMed

    Li, Yang; Bechhoefer, John

    2009-01-01

    We introduce an algorithm for calculating, offline or in real time and with no explicit system characterization, the feedforward input required for repetitive motions of a system. The algorithm is based on the secant method of numerical analysis and gives accurate motion at frequencies limited only by the signal-to-noise ratio and the actuator power and range. We illustrate the secant-solver algorithm on a stage used for atomic force microscopy.
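
    The secant-solver idea can be illustrated on a scalar toy problem: repeat the motion, measure the error, and let the secant rule propose the next feedforward input without any system model. The 'plant' below is a stand-in nonlinearity, not the AFM stage of the cited work.

```python
def secant_feedforward(plant, target, u0=0.0, u1=1.0, n_iter=8):
    """Model-free secant iteration for a repetitive-motion feedforward input:
    each repetition applies u, measures e(u) = plant(u) - target, and the
    secant rule proposes the next input."""
    e0, e1 = plant(u0) - target, plant(u1) - target
    for _ in range(n_iter):
        if e1 == e0:              # converged (or degenerate); stop updating
            break
        u0, u1 = u1, u1 - e1 * (u1 - u0) / (e1 - e0)
        e0, e1 = e1, plant(u1) - target
    return u1

# Unknown (to the controller) nonlinear actuator response.
plant = lambda u: 0.8 * u + 0.05 * u ** 3
u_ff = secant_feedforward(plant, target=2.0)
print(round(u_ff, 3), round(plant(u_ff), 3))  # plant output converges to the target
```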

  10. Auditory perception of a human walker.

    PubMed

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  11. Modeling human pilot cue utilization with applications to simulator fidelity assessment.

    PubMed

    Zeyada, Y; Hess, R A

    2000-01-01

    An analytical investigation to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator was undertaken. Data from a NASA Ames Research Center vertical motion simulator study of a simple, single-degree-of-freedom rotorcraft bob-up/down maneuver were employed in the investigation. The study was part of a larger research effort that has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system that included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle, and the motion system. With the exception of time delays that accrued in visual scene production in the simulator, visual scene effects were not included in this study. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity that occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots who participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to identify changes in simulator fidelity for the task examined.

  12. Algorithm architecture co-design for ultra low-power image sensor

    NASA Astrophysics Data System (ADS)

    Laforest, T.; Dupret, A.; Verdant, A.; Lattard, D.; Villard, P.

    2012-03-01

    In a context of embedded video surveillance, stand-alone, left-behind image sensors are used to detect events with a high level of confidence, but also with a very low power consumption. With a steady camera, motion detection algorithms based on background estimation to find regions in movement are simple to implement and computationally efficient. To reduce power consumption, the background is estimated using a down-sampled image formed of macropixels. In order to extend the class of moving objects to be detected, we propose an original mixed-mode architecture developed through an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized in order to implement motion detection algorithms with a dedicated set of 42 instructions. Defining delta modulation as a calculation primitive has allowed algorithms to be implemented in a very compact way. As a result, a 1920x1080@25fps CMOS image sensor performing integrated motion detection is proposed, with an estimated power consumption of 1.8 mW.
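
    A software analogue of the macropixel background-estimation step is sketched below: frames are averaged into blocks, a running-average background is maintained, and blocks deviating beyond a threshold are flagged as motion. Block size, learning rate and threshold are illustrative values, not those of the cited sensor.

```python
import numpy as np

def to_macropixels(frame, block=16):
    """Down-sample a grayscale frame by averaging non-overlapping block x block tiles."""
    h, w = frame.shape
    return frame[: h - h % block, : w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def detect_motion(frame, background, alpha=0.05, threshold=12.0, block=16):
    """Flag macropixels deviating from the running-average background, then
    fold the new frame into the background estimate."""
    macro = to_macropixels(frame.astype(float), block)
    mask = np.abs(macro - background) > threshold
    background = (1 - alpha) * background + alpha * macro
    return mask, background

# Toy usage: a static scene with one bright object entering a single macropixel.
rng = np.random.default_rng(3)
scene = rng.integers(90, 110, size=(128, 128)).astype(float)
background = to_macropixels(scene)
frame = scene.copy()
frame[32:48, 64:80] += 60
mask, background = detect_motion(frame, background)
print(mask.sum(), mask[2, 4])  # exactly one macropixel flagged
```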

  13. Collision detection for spacecraft proximity operations

    NASA Technical Reports Server (NTRS)

    Vaughan, Robin M.; Bergmann, Edward V.; Walker, Bruce K.

    1991-01-01

    A new collision detection algorithm has been developed for use when two spacecraft are operating in the same vicinity. The two spacecraft are modeled as unions of convex polyhedra, where the resulting polyhedron may be either convex or nonconvex. The relative motion of the two spacecraft is assumed to be such that one vehicle is moving with constant linear and angular velocity with respect to the other. Contacts between the vertices, faces, and edges of the polyhedra representing the two spacecraft are shown to occur when the value of one or more of a set of functions is zero. The collision detection algorithm is then formulated as a search for the zeros (roots) of these functions. Special properties of the functions for the assumed relative trajectory are exploited to expedite the zero search. The new algorithm is the first algorithm that can solve the collision detection problem exactly for relative motion with constant angular velocity. This is a significant improvement over models of rotational motion used in previous collision detection algorithms.
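
    The root-search formulation can be illustrated with a placeholder contact function: the sketch below coarsely samples a separation-versus-time function and bisects the first sign-changing interval. The sphere-sphere separation used here merely stands in for the vertex/face/edge contact functions of the cited algorithm.

```python
import numpy as np

def first_zero(f, t0, t1, n_samples=200, tol=1e-8):
    """Locate the earliest zero of f on [t0, t1]: coarse sampling to bracket a
    sign change, then bisection to refine it."""
    ts = np.linspace(t0, t1, n_samples)
    vals = np.array([f(t) for t in ts])
    for a, b, fa, fb in zip(ts[:-1], ts[1:], vals[:-1], vals[1:]):
        if fa == 0.0:
            return a
        if fa * fb < 0:
            while b - a > tol:
                m = 0.5 * (a + b)
                if fa * f(m) <= 0:
                    b = m
                else:
                    a, fa = m, f(m)
            return 0.5 * (a + b)
    return None

# Placeholder contact function: separation of two spheres on straight-line relative motion.
rel_pos = lambda t: np.array([10.0 - 3.0 * t, 2.0, 0.0])   # relative position [m]
separation = lambda t: np.linalg.norm(rel_pos(t)) - 2.5     # minus summed radii
print(first_zero(separation, 0.0, 5.0))                     # first contact time [s]
```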

  14. The effects of attractive vs. repulsive instructional cuing on balance performance.

    PubMed

    Kinnaird, Catherine; Lee, Jaehong; Carender, Wendy J; Kabeto, Mohammed; Martin, Bernard; Sienko, Kathleen H

    2016-03-16

    Torso-based vibrotactile feedback has been shown to improve postural performance during quiet and perturbed stance in healthy young and older adults and individuals with balance impairments. These systems typically include tactors distributed around the torso that are activated when body motion exceeds a predefined threshold. Users are instructed to "move away from the vibration". However, recent studies have shown that in the absence of instructions, vibrotactile stimulation induces small (~1°) non-volitional responses in the direction of its application location. It was hypothesized that an attractive cuing strategy (i.e., "move toward the vibration") could improve postural performance by leveraging this natural tendency. Eight healthy older adults participated in two non-consecutive days of computerized dynamic posturography testing while wearing a vibrotactile feedback system comprised of an inertial measurement unit and four tactors that were activated in pairs when body motion exceeded 1° anteriorly or posteriorly. A crossover design was used. On each day participants performed 24 repetitions of Sensory Organization Test condition 5 (SOT5), three repetitions each of SOT 1-6, three repetitions of the Motor Control Test, and five repetitions of the Adaptation Test. Performance metrics included A/P RMS, Time-in-zone and 95 % CI Ellipse. Performance improved with both cuing strategies but participants performed better when using repulsive cues. However, the rate of improvement was greater for attractive versus repulsive cuing. The results suggest that when the cutaneous signal is interpreted as an alarm, cognition overrides sensory information. Furthermore, although repulsive cues resulted in better performance, attractive cues may be as good, if not better, than repulsive cues following extended training.

  15. New inverse synthetic aperture radar algorithm for translational motion compensation

    NASA Astrophysics Data System (ADS)

    Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.

    1991-10-01

    Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must be first accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.

  16. A comparative evaluation of adaptive noise cancellation algorithms for minimizing motion artifacts in a forehead-mounted wearable pulse oximeter.

    PubMed

    Comtois, Gary; Mendelson, Yitzhak; Ramuka, Piyush

    2007-01-01

    Wearable physiological monitoring using a pulse oximeter would enable field medics to monitor multiple injuries simultaneously, thereby prioritizing medical intervention when resources are limited. However, a primary factor limiting the accuracy of pulse oximetry is poor signal-to-noise ratio since photoplethysmographic (PPG) signals, from which arterial oxygen saturation (SpO2) and heart rate (HR) measurements are derived, are compromised by movement artifacts. This study was undertaken to quantify SpO2 and HR errors induced by certain motion artifacts utilizing accelerometry-based adaptive noise cancellation (ANC). Since the fingers are generally more vulnerable to motion artifacts, measurements were performed using a custom forehead-mounted wearable pulse oximeter developed for real-time remote physiological monitoring and triage applications. This study revealed that processing motion-corrupted PPG signals by least mean squares (LMS) and recursive least squares (RLS) algorithms can be effective to reduce SpO2 and HR errors during jogging, but the degree of improvement depends on filter order. Although both algorithms produced similar improvements, implementing the adaptive LMS algorithm is advantageous since it requires significantly less operations.
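
    A minimal LMS noise canceller of the kind evaluated above is sketched below, using a synthetic motion reference in place of real accelerometer data; the filter order, step size and signals are assumptions for illustration.

```python
import numpy as np

def lms_cancel(primary, reference, order=8, mu=0.01):
    """LMS adaptive noise cancellation: 'primary' is the artifact-corrupted signal,
    'reference' the motion (accelerometer) reference; the error is the cleaned signal."""
    w = np.zeros(order)
    cleaned = np.zeros_like(primary)
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]        # most recent reference samples
        artifact_estimate = w @ x
        e = primary[n] - artifact_estimate      # cleaned sample
        w += 2 * mu * e * x                     # LMS weight update
        cleaned[n] = e
    return cleaned

# Synthetic test: 1.2 Hz "pulse" plus a motion artifact correlated with the reference.
t = np.arange(0, 20, 0.01)                      # 100 Hz sampling
pulse = np.sin(2 * np.pi * 1.2 * t)
motion = np.sin(2 * np.pi * 2.5 * t) + 0.3 * np.sin(2 * np.pi * 5.0 * t)
primary = pulse + 0.8 * motion
cleaned = lms_cancel(primary, motion)
print(np.std(cleaned[500:] - pulse[500:]) < np.std(primary[500:] - pulse[500:]))
```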

  17. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew Edie; Matthies, Larry H.

    2000-01-01

    We have developed and tested a software algorithm that enables onboard autonomous motion estimation near small bodies using descent camera imagery and laser altimetry. Through simulation and testing, we have shown that visual feature tracking can decrease uncertainty in spacecraft motion to a level that makes landing on small, irregularly shaped, bodies feasible. Possible future work will include qualification of the algorithm as a flight experiment for the Deep Space 4/Champollion comet lander mission currently under study at the Jet Propulsion Laboratory.

  18. Processing Ultra Wide Band Synthetic Aperture Radar Data with Motion Detectors

    NASA Technical Reports Server (NTRS)

    Madsen, Soren Norvang

    1996-01-01

    Several issues make the processing of ultra wide band (UWB) SAR data acquired from an airborne platform difficult. The character of UWB data invalidates many of the usual SAR batch processing techniques, leading to the application of wavenumber domain type processors... This paper will suggest and evaluate an algorithm which combines a wavenumber domain processing algorithm with a motion compensation procedure which enables motion compensation to be applied as a function of target range and the azimuth angle.

  19. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    NASA Astrophysics Data System (ADS)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.

    2012-02-01

    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
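
    The curvature-of-a-B-spline step can be illustrated with SciPy: fit a periodic spline to a closed contour and evaluate signed curvature from its first and second derivatives. The elliptical toy contour and smoothing value below are assumptions; the cited system operates on extracted human silhouettes.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def contour_curvature(x, y, smoothing=0.0, n_eval=400):
    """Fit a periodic B-spline to a closed 2-D contour and return the signed
    curvature along it; curvature extrema are candidate cut points when
    decomposing a silhouette into parts."""
    tck, _ = splprep([x, y], s=smoothing, per=True)
    u = np.linspace(0, 1, n_eval)
    dx, dy = splev(u, tck, der=1)
    ddx, ddy = splev(u, tck, der=2)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

# Toy contour: a closed ellipse, whose curvature peaks at the ends of the major axis.
theta = np.linspace(0, 2 * np.pi, 101)   # last point repeats the first
x, y = 3.0 * np.cos(theta), 1.0 * np.sin(theta)
kappa = contour_curvature(x, y)
print(round(kappa.max(), 2))   # analytic maximum for this ellipse is a/b**2 = 3.0
```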

  20. A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising

    PubMed Central

    Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin

    2015-01-01

    Initial alignment is always a key topic and difficult to achieve in an inertial navigation system (INS). In this paper a novel self-alignment algorithm is proposed that uses gravitational apparent motion vectors at three different moments together with vector operations. Simulation and analysis showed that this method easily suffers from the random noise contained in accelerometer measurements, which are used to construct the apparent motion directly. Aiming to resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed, and a novel reconstruction method for apparent motion is designed to avoid collinearity among the vectors participating in the alignment solution. Simulation, turntable tests and vehicle tests indicate that the proposed alignment algorithm can fulfill initial alignment of strapdown INS (SINS) under both static and swinging conditions. The accuracy can either reach or approach the theoretical values determined by sensor precision under static or swinging conditions. PMID:25923932
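
    As a minimal illustration of the denoising stage, the sketch below runs a scalar random-walk Kalman filter over a noisy accelerometer channel whose slow drift mimics gravitational apparent motion; the noise variances and the synthetic signal are assumed, not taken from the cited paper.

```python
import numpy as np

def kalman_denoise(measurements, process_var=1e-4, meas_var=1e-2):
    """Scalar Kalman filter treating the true accelerometer output as a slowly
    varying (random-walk) state observed in white noise."""
    x, p = measurements[0], 1.0
    out = np.empty_like(measurements)
    for i, z in enumerate(measurements):
        p += process_var                 # predict (x_k = x_{k-1} + w)
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update with measurement z
        p *= (1 - k)
        out[i] = x
    return out

# Synthetic test: a gravity component rotating at Earth rate, buried in sensor noise.
rng = np.random.default_rng(4)
t = np.arange(0, 60, 0.01)
truth = 9.81 * np.cos(2 * np.pi * t / 86164)
noisy = truth + rng.normal(scale=0.1, size=t.size)
denoised = kalman_denoise(noisy)
print(np.std(denoised - truth) < np.std(noisy - truth))
```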
