Sample records for visual motion simulator

  1. Motion/visual cueing requirements for vortex encounters during simulated transport visual approach and landing

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Bowles, R. L.

    1983-01-01

    This paper addresses the issues of motion/visual cueing fidelity requirements for vortex encounters during simulated transport visual approaches and landings. Four simulator configurations were utilized to provide objective performance measures during simulated vortex penetrations, and subjective comments from pilots were collected. The configurations used were as follows: fixed base with visual degradation (delay), fixed base with no visual degradation, moving base with visual degradation (delay), and moving base with no visual degradation. The statistical comparisons of the objective measures and the subjective pilot opinions indicated that although both minimum visual delay and motion cueing are recommended for the vortex penetration task, the visual-scene delay characteristics were not as significant a fidelity factor as was the presence of motion cues. However, this indication was applicable to a restricted task, and to transport aircraft. Although they were statistically significant, the effects of visual delay and motion cueing on the touchdown-related measures were considered to be of no practical consequence.

  2. Effects of visual and motion simulation cueing systems on pilot performance during takeoffs with engine failures

    NASA Technical Reports Server (NTRS)

    Parris, B. L.; Cook, A. M.

    1978-01-01

    Data are presented that show the effects of visual and motion cueing on pilot performance during takeoffs with engine failures. Four groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The most basic of these systems was of the instrument-only type. Visual scene simulation and/or motion simulation was added to produce the other systems. Learning curves, mean performance, and subjective data are examined. The results show that the addition of visual cueing results in significant improvement in pilot performance, but the combined use of visual and motion cueing results in far better performance.

  3. Effects of simulator motion and visual characteristics on rotorcraft handling qualities evaluations

    NASA Technical Reports Server (NTRS)

    Mitchell, David G.; Hart, Daniel C.

    1993-01-01

    The pilot's perceptions of aircraft handling qualities are influenced by a combination of the aircraft dynamics, the task, and the environment under which the evaluation is performed. When the evaluation is performed in a ground-based simulator, the characteristics of the simulation facility also come into play. Two studies were conducted on NASA Ames Research Center's Vertical Motion Simulator to determine the effects of simulator characteristics on perceived handling qualities. Most evaluations were conducted with a baseline set of rotorcraft dynamics, using a simple transfer-function model of an uncoupled helicopter, under different conditions of visual time delays and motion command washout filters. Differences in pilot opinion were found as the visual and motion parameters were changed, reflecting a change in the pilots' perceptions of handling qualities, rather than changes in the aircraft model itself. The results indicate a need for tailoring the motion washout dynamics to suit the task. Visual-delay data are inconclusive but suggest that it may be better to allow some time delay in the visual path to minimize the mismatch between visual and motion cues, rather than eliminate the visual delay entirely through lead compensation.

  4. The contribution of visual and proprioceptive information to the perception of leaning in a dynamic motorcycle simulator.

    PubMed

    Lobjois, Régis; Dagonneau, Virginie; Isableu, Brice

    2016-11-01

    Compared with driving or flight simulation, little is known about self-motion perception in riding simulation. The goal of this study was to examine whether or not continuous roll motion supports the sensation of leaning into bends in dynamic motorcycle simulation. To this end, riders were able to freely tune the visual scene and/or motorcycle simulator roll angle to find a pattern that matched their prior knowledge. Our results revealed idiosyncrasy in the combination of visual and proprioceptive information. Some subjects relied more on the visual dimension, but reported increased sickness symptoms with the visual roll angle. Others relied more on proprioceptive information, tuning the direction of the visual scenery to match three possible patterns. Our findings also showed that these two subgroups tuned the motorcycle simulator roll angle in a similar way. This suggests that sustained inertially specified roll motion may have contributed to the sensation of leaning in spite of the occurrence of unexpected gravito-inertial stimulation during the tilt. Several hypotheses are discussed. Practitioner Summary: Self-motion perception in motorcycle simulation is a relatively new research area. We examined how participants combined visual and proprioceptive information. Findings revealed individual differences in the visual dimension. However, participants tuned the simulator roll angle similarly, supporting the hypothesis that sustained inertially specified roll motion contributes to a leaning sensation.

  5. Use of a Computer Simulation To Develop Mental Simulations for Understanding Relative Motion Concepts.

    ERIC Educational Resources Information Center

    Monaghan, James M.; Clement, John

    1999-01-01

    Presents evidence for students' qualitative and quantitative difficulties with apparently simple one-dimensional relative-motion problems, students' spontaneous visualization of relative-motion problems, the visualizations facilitating solution of these problems, and students' memories of the online computer simulation used as a framework for…

  6. Helicopter flight simulation motion platform requirements

    NASA Astrophysics Data System (ADS)

    Schroeder, Jeffery Allyn

    Flight simulators attempt to reproduce in-flight pilot-vehicle behavior on the ground. This reproduction is challenging for helicopter simulators, as the pilot is often inextricably dependent on external cues for pilot-vehicle stabilization. One important simulator cue is platform motion; however, its required fidelity is unknown. To determine the required motion fidelity, several unique experiments were performed. A large displacement motion platform was used that allowed pilots to fly tasks with matched motion and visual cues. Then, the platform motion was modified to give cues varying from full motion to no motion. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositionings. This refutes the view that pilots estimate altitude and altitude rate in simulation solely from visual cues. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.

  7. The Shuttle Mission Simulator computer generated imagery

    NASA Technical Reports Server (NTRS)

    Henderson, T. H.

    1984-01-01

    Equipment available in the primary training facility for the Space Transportation System (STS) flight crews includes the Fixed Base Simulator, the Motion Base Simulator, the Spacelab Simulator, and the Guidance and Navigation Simulator. The Shuttle Mission Simulator (SMS) consists of the Fixed Base Simulator and the Motion Base Simulator. The SMS utilizes four visual Computer Generated Image (CGI) systems. The Motion Base Simulator has a forward crew station with six-degree-of-freedom motion simulation. Operation of the Spacelab Simulator is planned for the spring of 1983. The Guidance and Navigation Simulator went into operation in 1982. Aspects of orbital visual simulation are discussed, taking into account the earth scene, payload simulation, the generation and display of 1079 stars, the simulation of sun glare, and Reaction Control System jet firing plumes. Attention is also given to landing site visual simulation, and night launch and landing simulation.

  8. Validation of the Passenger Ride Quality Apparatus (PRQA) for simulation of aircraft motions for ride-quality research

    NASA Technical Reports Server (NTRS)

    Bigler, W. B., II

    1977-01-01

    The NASA passenger ride quality apparatus (PRQA), a ground-based motion simulator, was compared to the Total In-Flight Simulator (TIFS). Tests were made on PRQA with varying stimuli: motions only; motions and noise; motions, noise, and visual; and motions and visual. Regression equations for the tests were obtained, and subsequent t-testing of the slopes indicated that ground-based simulator tests produced comfort change rates similar to actual flight data. It was recommended that PRQA be used in the ride quality program for aircraft and that it be validated for other transportation modes.

  9. Visual and motion cueing in helicopter simulation

    NASA Technical Reports Server (NTRS)

    Bray, R. S.

    1985-01-01

    Early experience in fixed-cockpit simulators, with limited field of view, demonstrated the basic difficulties of simulating helicopter flight at the level of subjective fidelity required for confident evaluation of vehicle characteristics. More recent programs, utilizing large-amplitude cockpit motion and a multiwindow visual-simulation system, have received a much higher degree of pilot acceptance. However, none of these simulations has presented critical visual-flight tasks that have been accepted by the pilots as the full equivalent of flight. In this paper, the visual cues presented in the simulator are compared with those of flight in an attempt to identify deficiencies that contribute significantly to these assessments. For the low-amplitude maneuvering tasks normally associated with the hover mode, the unique motion capabilities of the Vertical Motion Simulator (VMS) at Ames Research Center permit nearly a full representation of vehicle motion. Especially appreciated in these tasks are the vertical-acceleration responses to collective control. For larger-amplitude maneuvering, motion fidelity must suffer diminution through direct attenuation, through high-pass ("washout") filtering of the computed cockpit accelerations, or both. Experiments were conducted in an attempt to determine the effects of these distortions on pilot performance of height-control tasks.
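    The attenuation-plus-washout scheme described in this record can be sketched in a few lines. This is a hedged illustration, assuming a simple discrete first-order high-pass filter; the gain and time constant below are arbitrary illustrative values, not parameters from the VMS experiments.

```python
def washout(accelerations, dt, gain=0.5, tau=2.0):
    """Direct attenuation (gain) followed by a first-order high-pass
    ("washout") filter: onset cues are transmitted, while sustained
    accelerations decay so the cockpit drifts back toward neutral."""
    alpha = tau / (tau + dt)  # discrete high-pass coefficient
    out, prev_in, prev_out = [], 0.0, 0.0
    for a in accelerations:
        scaled = gain * a  # direct attenuation
        prev_out = alpha * (prev_out + scaled - prev_in)
        prev_in = scaled
        out.append(prev_out)
    return out
```

    A sustained step input is initially transmitted (scaled by the gain) and then washed out toward zero; this is the kind of distortion whose effect on height-control tasks the experiments probed.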

  10. Integration of visual and motion cues for simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Practical tools which can extend the state of the art of moving-base flight simulation for research and training are developed. Main approaches to this research effort include: (1) application of the vestibular model for perception of orientation based on motion cues, including optimum simulator motion controls; and (2) visual cues in landing.

  11. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human factor aspects of high fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.
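    The forced-choice, signal-detectability framework mentioned in this record is commonly summarized by the sensitivity index d′. A minimal sketch of the standard textbook formula (not code from the report):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index from signal detection theory: the z-transformed
    hit rate minus the z-transformed false-alarm rate."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)
```

    For example, a hit rate of 0.84 against a false-alarm rate of 0.16 gives d′ of roughly 2, while equal hit and false-alarm rates give d′ = 0 (no sensitivity to the acceleration cue).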

  12. Visualizing the ground motions of the 1906 San Francisco earthquake

    USGS Publications Warehouse

    Chourasia, A.; Cutchin, S.; Aagaard, Brad T.

    2008-01-01

    With advances in computational capabilities and refinement of seismic wave-propagation models in the past decade, large three-dimensional simulations of earthquake ground motion have become possible. The resulting datasets from these simulations are multivariate, temporal, and multi-terabyte in size. Past visual representations of results from seismic studies have been largely confined to static two-dimensional maps. New visual representations provide scientists with alternate ways of viewing and interacting with these results, potentially leading to new and significant insight into the physical phenomena. Visualizations can also be used for pedagogic and general dissemination purposes. We present a workflow for visual representation of the data from a ground motion simulation of the great 1906 San Francisco earthquake. We have employed state-of-the-art animation tools for visualization of the ground motions with a high degree of accuracy and visual realism. © 2008 Elsevier Ltd.

  13. Effects of Spatio-Temporal Aliasing on Out-the-Window Visual Systems

    NASA Technical Reports Server (NTRS)

    Sweet, Barbara T.; Stone, Leland S.; Liston, Dorion B.; Hebert, Tim M.

    2014-01-01

    Designers of out-the-window visual systems face a challenge when attempting to simulate the outside world as viewed from a cockpit. Many methodologies have been developed and adopted to aid in the depiction of particular scene features, or levels of static image detail. However, because aircraft move, it is necessary to also consider the quality of the motion in the simulated visual scene. When motion is introduced in the simulated visual scene, perceptual artifacts can become apparent. A particular artifact related to image motion, spatio-temporal aliasing, will be addressed. The causes of spatio-temporal aliasing will be discussed, and current knowledge regarding the impact of these artifacts on both motion perception and simulator task performance will be reviewed. Methods of reducing the impact of this artifact are also addressed.

  14. Integration of visual and motion cues for flight simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Investigations for the improvement of flight simulators are reported. Topics include: visual cues in landing, comparison of linear and nonlinear washout filters using a model of the vestibular system, and visual vestibular interactions (yaw axis). An abstract is given for a thesis on the applications of human dynamic orientation models to motion simulation.

  15. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley Differential Maneuvering Simulator and Visual Motion Simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, a wide field-of-view horizon, and a motion-base platform.

  16. Relationship Between Optimal Gain and Coherence Zone in Flight Simulation

    NASA Technical Reports Server (NTRS)

    Gracio, Bruno Jorge Correia; Pais, Ana Rita Valente; vanPaassen, M. M.; Mulder, Max; Kely, Lon C.; Houck, Jacob A.

    2011-01-01

    In motion simulation the inertial information generated by the motion platform is most of the time different from the visual information in the simulator displays. This occurs due to the physical limits of the motion platform. However, for small motions that are within the physical limits of the motion platform, one-to-one motion, i.e. visual information equal to inertial information, is possible. It has been shown in previous studies that one-to-one motion is often judged as too strong, causing researchers to lower the inertial amplitude. When trying to measure the optimal inertial gain for a visual amplitude, we found a zone of optimal gains instead of a single value. Such a result seems related to the coherence zones that have been measured in flight simulation studies. However, the optimal gain results were never directly related to the coherence zones. In this study we investigated whether the optimal gain measurements are the same as the coherence zone measurements. We also try to infer whether the results obtained from the two measurements can be used to differentiate between simulators with different configurations. An experiment was conducted at the NASA Langley Research Center which used both the Cockpit Motion Facility and the Visual Motion Simulator. The results show that the inertial gains obtained with the optimal gain are different from the ones obtained with the coherence zone measurements. The optimal gain is within the coherence zone. The point of mean optimal gain was lower and further away from the one-to-one line than the point of mean coherence. The zone width obtained for the coherence zone measurements was dependent on the visual amplitude and frequency. For the optimal gain, the zone width remained constant when the visual amplitude and frequency were varied. We found no effect of the simulator configuration on either the coherence zone or the optimal gain measurements.
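    The relationship this record describes can be made concrete with a toy sketch. The zone bounds and amplitudes below are invented for illustration only; the study's actual measurements are not reproduced here.

```python
def inertial_gain(inertial_amplitude, visual_amplitude):
    """Gain of platform (inertial) motion relative to visual motion;
    a gain of 1.0 corresponds to one-to-one motion."""
    return inertial_amplitude / visual_amplitude

def in_coherence_zone(gain, lower, upper):
    """True if the gain falls inside the zone where inertial and visual
    cues are still judged as belonging together (bounds illustrative)."""
    return lower <= gain <= upper
```

    The pattern the abstract reports is that the mean optimal gain sits below one-to-one motion (gain < 1.0) yet still inside the coherence zone.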

  17. The search for instantaneous vection: An oscillating visual prime reduces vection onset latency.

    PubMed

    Palmisano, Stephen; Riecke, Bernhard E

    2018-01-01

    Typically it takes up to 10 seconds or more to induce a visual illusion of self-motion ("vection"). However, for this vection to be most useful in virtual reality and vehicle simulation, it needs to be induced quickly, if not immediately. This study examined whether vection onset latency could be reduced towards zero using visual display manipulations alone. In the main experiments, visual self-motion simulations were presented to observers via either a large external display or a head-mounted display (HMD). Priming observers with visually simulated viewpoint oscillation for just ten seconds before the main self-motion display was found to markedly reduce vection onset latencies (and also increase ratings of vection strength) in both experiments. As in earlier studies, incorporating this simulated viewpoint oscillation into the self-motion displays themselves was also found to improve vection. Average onset latencies were reduced from 8-9 s in the no-oscillation control condition to as little as 4.6 s (for external displays) or 1.7 s (for HMDs) in the combined oscillation condition (when both the visual prime and the main self-motion display were oscillating). As these display manipulations did not appear to increase the likelihood or severity of motion sickness in the current study, they could possibly be used to enhance computer generated simulation experiences and training in the future, at no additional cost.

  18. The search for instantaneous vection: An oscillating visual prime reduces vection onset latency

    PubMed Central

    Riecke, Bernhard E.

    2018-01-01

    Typically it takes up to 10 seconds or more to induce a visual illusion of self-motion ("vection"). However, for this vection to be most useful in virtual reality and vehicle simulation, it needs to be induced quickly, if not immediately. This study examined whether vection onset latency could be reduced towards zero using visual display manipulations alone. In the main experiments, visual self-motion simulations were presented to observers via either a large external display or a head-mounted display (HMD). Priming observers with visually simulated viewpoint oscillation for just ten seconds before the main self-motion display was found to markedly reduce vection onset latencies (and also increase ratings of vection strength) in both experiments. As in earlier studies, incorporating this simulated viewpoint oscillation into the self-motion displays themselves was also found to improve vection. Average onset latencies were reduced from 8-9 s in the no-oscillation control condition to as little as 4.6 s (for external displays) or 1.7 s (for HMDs) in the combined oscillation condition (when both the visual prime and the main self-motion display were oscillating). As these display manipulations did not appear to increase the likelihood or severity of motion sickness in the current study, they could possibly be used to enhance computer generated simulation experiences and training in the future, at no additional cost. PMID:29791445

  19. Can walking motions improve visually induced rotational self-motion illusions in virtual reality?

    PubMed

    Riecke, Bernhard E; Freiberg, Jacob B; Grechkin, Timofey Y

    2015-02-04

    Illusions of self-motion (vection) can provide compelling sensations of moving through virtual environments without the need for complex motion simulators or large tracked physical walking spaces. Here we explore the interaction between biomechanical cues (stepping along a rotating circular treadmill) and visual cues (viewing simulated self-rotation) for providing stationary users a compelling sensation of rotational self-motion (circular vection). When tested individually, biomechanical and visual cues were similarly effective in eliciting self-motion illusions. However, in combination they yielded significantly more intense self-motion illusions. These findings provide the first compelling evidence that walking motions can be used to significantly enhance visually induced rotational self-motion perception in virtual environments (and vice versa) without having to provide for physical self-motion or motion platforms. This is noteworthy, as linear treadmills have been found to actually impair visually induced translational self-motion perception (Ash, Palmisano, Apthorp, & Allison, 2013). Given the predominant focus on linear walking interfaces for virtual-reality locomotion, our findings suggest that investigating circular and curvilinear walking interfaces offers a promising direction for future research and development and can help to enhance self-motion illusions, presence and immersion in virtual-reality systems. © 2015 ARVO.

  20. A review of flight simulation techniques

    NASA Astrophysics Data System (ADS)

    Baarspul, Max

    After a brief historical review of the evolution of flight simulation techniques, this paper first deals with the main areas of flight simulator applications. Next, it describes the main components of a piloted flight simulator. Because of the presence of the pilot-in-the-loop, the digital computer driving the simulator must solve the aircraft equations of motion in ‘real-time’. Solutions that meet the high computing power demanded by today's modern flight simulators are elaborated. The physical similarity between aircraft and simulator in cockpit layout, flight instruments, flying controls, etc., is discussed, based on the equipment and environmental cue fidelity required for training and research simulators. Visual systems play an increasingly important role in piloted flight simulation. The visual systems now available and most widely used are described, distinguishing between image generators and display devices. The characteristics of out-of-the-window visual simulation systems pertaining to the perceptual capabilities of human vision are discussed. Faithful reproduction of aircraft motion requires large travel, velocity, and acceleration capabilities of the motion system. Different types and applications of motion systems in, e.g., airline training and research are described. The principles of motion cue generation, based on the characteristics of the non-visual human motion sensors, are described. The complete motion system, consisting of the hardware and the motion drive software, is discussed. The principles of mathematical modelling of the aerodynamic, flight control, propulsion, landing gear and environmental characteristics of the aircraft are reviewed. An example of the identification of an aircraft mathematical model, based on flight and taxi tests, is presented. Finally, the paper deals with the hardware and software integration of the flight simulator components and the testing and acceptance of the complete flight simulator. Examples of the so-called ‘Computer Generated Checkout’ and ‘Proof of Match’ are presented. The concluding remarks briefly summarize the status of flight simulator technology and consider possibilities for future research.
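    The ‘real-time’ constraint mentioned in this review boils down to integrating the equations of motion once per frame at a fixed step. A minimal one-dimensional point-mass sketch, assuming semi-implicit Euler integration; real simulators integrate full six-degree-of-freedom models:

```python
def integrate_step(pos, vel, accel, dt):
    """One semi-implicit Euler step of a 1-D point-mass equation of motion:
    update velocity from acceleration, then position from the new velocity."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

def run(duration, dt, accel):
    """Advance the state with a fixed step, as a real-time loop would
    between successive display frames."""
    pos, vel = 0.0, 0.0
    for _ in range(int(round(duration / dt))):
        pos, vel = integrate_step(pos, vel, accel, dt)
    return pos, vel
```

    In an actual pilot-in-the-loop simulator, each such step must complete within the frame time, which is the source of the computing-power demands the review discusses.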

  21. Conceptual design study of a visual system for a rotorcraft simulator and some advances in platform motion utilization

    NASA Technical Reports Server (NTRS)

    Sinacori, J. B.

    1980-01-01

    A conceptual design of a visual system for a rotorcraft flight simulator is presented. Also, drive logic elements for a coupled motion base for such a simulator are given. The design is the result of an assessment of many potential arrangements of electro-optical elements and is a concept considered feasible for the application. The motion drive elements represent an example logic for a coupled motion base and are essentially an appeal to the designers of such logic to combine their washout and braking functions.

  22. Visualization of 3D elbow kinematics using reconstructed bony surfaces

    NASA Astrophysics Data System (ADS)

    Lalone, Emily A.; McDonald, Colin P.; Ferreira, Louis M.; Peters, Terry M.; King, Graham J. W.; Johnson, James A.

    2010-02-01

    An approach for direct visualization of continuous three-dimensional elbow kinematics using reconstructed surfaces has been developed. Simulation of valgus motion was achieved in five cadaveric specimens using an upper arm simulator. Direct visualization of the motion of the ulna and humerus at the ulnohumeral joint was obtained using a contact-based registration technique. Employing fiducial markers, the rendered humerus and ulna were positioned according to the simulated motion. The specific aim of this study was to investigate the effect of radial head arthroplasty on restoring elbow joint stability after radial head excision. The position of the ulna and humerus was visualized for the intact elbow and following radial head excision and replacement. Visualization of the registered humerus/ulna indicated an increase in valgus angulation of the ulna with respect to the humerus after radial head excision. This increase in valgus angulation was restored to that of an elbow with a native radial head following radial head arthroplasty. These findings were consistent with previous studies investigating elbow joint stability following radial head excision and arthroplasty. The current technique was able to visualize a change in ulnar position in a single degree of freedom (DoF). Using this approach, the coupled motion of the ulna in all six degrees of freedom can also be visualized.

  23. Anticipating the effects of visual gravity during simulated self-motion: estimates of time-to-passage along vertical and horizontal paths.

    PubMed

    Indovina, Iole; Maffei, Vincenzo; Lacquaniti, Francesco

    2013-09-01

    By simulating self-motion on a virtual rollercoaster, we investigated whether acceleration cued by the optic flow affected the estimate of time-to-passage (TTP) to a target. In particular, we studied the role of a visual acceleration (1 g = 9.8 m/s²) simulating the effects of gravity in the scene, by manipulating motion law (accelerated or decelerated at 1 g, constant speed) and motion orientation (vertical, horizontal). Thus, 1-g-accelerated motion in the downward direction or decelerated motion in the upward direction was congruent with the effects of visual gravity. We found that acceleration (positive or negative) is taken into account but is overestimated in magnitude in the calculation of TTP, independently of orientation. In addition, participants signaled TTP earlier when the rollercoaster accelerated downward at 1 g (as during free fall), with respect to when the same acceleration occurred along the horizontal orientation. This time shift indicates an influence of the orientation relative to visual gravity on response timing that could be attributed to the anticipation of the effects of visual gravity on self-motion along the vertical, but not the horizontal, orientation. Finally, precision in TTP estimates was higher during vertical fall than when traveling at constant speed along the vertical orientation, consistent with higher noise in TTP estimates when the motion violates gravity constraints.
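    Time-to-passage under the motion laws used in this study follows from constant-acceleration kinematics. A hedged sketch of the kinematic formula (not the authors' analysis code):

```python
import math

def time_to_passage(distance, speed, accel=0.0):
    """Positive solution of distance = speed*t + 0.5*accel*t**2.
    accel > 0 models visually accelerated (falling) motion; for
    decelerating motion the target may never be reached, in which
    case the discriminant goes negative and sqrt raises an error."""
    if abs(accel) < 1e-12:
        return distance / speed  # constant-speed case
    return (-speed + math.sqrt(speed**2 + 2.0 * accel * distance)) / accel
```

    Plugging in a larger acceleration shortens the computed TTP, which is consistent with the earlier responses the study reports for downward 1 g motion when acceleration is overestimated.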

  24. A model for the pilot's use of motion cues in roll-axis tracking tasks

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Junker, A. M.

    1977-01-01

    Simulated target-following and disturbance-regulation tasks were explored with subjects using visual-only and combined visual and motion cues. The effects of motion cues on task performance and pilot response behavior were appreciably different for the two task configurations and were consistent with data reported in earlier studies for similar task configurations. The optimal-control model for pilot/vehicle systems provided a task-independent framework for accounting for the pilot's use of motion cues. Specifically, the availability of motion cues was modeled by augmenting the set of perceptual variables to include position, rate, acceleration, and acceleration rate of the motion simulator, and results were consistent with the hypothesis of attention-sharing between visual and motion variables. This straightforward informational model allowed accurate model predictions of the effects of motion cues on a variety of response measures for both the target-following and disturbance-regulation tasks.

  25. Visual Occlusion Decreases Motion Sickness in a Flight Simulator.

    PubMed

    Ishak, Shaziela; Bubka, Andrea; Bonato, Frederick

    2018-05-01

    Sensory conflict theories of motion sickness (MS) assert that symptoms may result when incoming sensory inputs (e.g., visual and vestibular) contradict each other. Logic suggests that attenuating input from one sense may reduce conflict and hence lessen MS symptoms. In the current study, it was hypothesized that attenuating visual input by blocking light entering the eye would reduce MS symptoms in a motion provocative environment. Participants sat inside an aircraft cockpit mounted onto a motion platform that simultaneously pitched, rolled, and heaved in two conditions. In the occluded condition, participants wore "blackout" goggles and closed their eyes to block light. In the control condition, participants opened their eyes and had full view of the cockpit's interior. Participants completed separate Simulator Sickness Questionnaires before and after each condition. The posttreatment total Simulator Sickness Questionnaires and subscores for nausea, oculomotor, and disorientation in the control condition were significantly higher than those in the occluded condition. These results suggest that under some conditions attenuating visual input may delay the onset of MS or weaken the severity of symptoms. Eliminating visual input may reduce visual/nonvisual sensory conflict by weakening the influence of the visual channel, which is consistent with the sensory conflict theory of MS.

  6. The effect of visual-motion time delays on pilot performance in a pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1976-01-01

    A study has been made to determine the effect of visual-motion time delays on pilot performance of a simulated pursuit tracking task. Three interrelated major effects have been identified: task difficulty, motion cues, and time delays. As task difficulty, determined by airplane handling qualities or target frequency, increases, the amount of acceptable time delay decreases. However, when relatively complete motion cues are included in the simulation, the pilot can maintain his performance for considerably longer time delays. In addition, the number of degrees of freedom of motion employed is a significant factor.
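    The interaction between feedback delay and tracking performance can be illustrated with a toy delayed-feedback loop; the gain, target frequency, and delay below are invented for illustration and are not the study's configuration.

```python
import math

# Toy pursuit-tracking loop (not the NASA simulation): a proportional
# "pilot" drives a velocity-command plant toward a sinusoidal target,
# acting on an error signal delayed by `delay_s` seconds, as a delay in
# the visual/motion feedback loop would impose.
def rms_tracking_error(delay_s, gain=2.0, dt=0.02, t_end=30.0, omega=1.0):
    n_delay = int(round(delay_s / dt))
    error_buffer = [0.0] * (n_delay + 1)  # FIFO of past error samples
    x = 0.0                               # plant (aircraft) position
    sq_sum, steps = 0.0, int(t_end / dt)
    for k in range(steps):
        target = math.sin(omega * k * dt)
        error = target - x
        error_buffer.append(error)
        delayed_error = error_buffer.pop(0)
        x += gain * delayed_error * dt    # velocity-command dynamics
        sq_sum += error * error
    return math.sqrt(sq_sum / steps)

rms_no_delay = rms_tracking_error(0.0)
rms_delayed = rms_tracking_error(0.3)     # 300 ms visual-motion delay
```

The extra phase lag introduced by the delay visibly degrades tracking, consistent with the trend the abstract describes.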

  7. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
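    The flow-parsing idea, i.e. subtracting the estimated self-motion component of optic flow to expose independent object motion, can be sketched in a few lines; the radial-flow model and all values are illustrative assumptions, not the study's stimuli.

```python
# Minimal flow-parsing sketch: each object's retinal motion is the sum of
# the optic-flow component caused by observer translation plus any
# independent object motion; subtracting the estimated self-motion flow
# recovers the object's scene-relative motion. Values are illustrative.

def self_motion_flow(x, y, inv_depth, forward_speed):
    """Radial expansion flow at image point (x, y) for forward translation."""
    return (forward_speed * inv_depth * x, forward_speed * inv_depth * y)

def parse_object_motion(retinal_flow, x, y, inv_depth, est_speed):
    """Subtract the estimated self-motion component from the retinal flow."""
    fx, fy = self_motion_flow(x, y, inv_depth, est_speed)
    return (retinal_flow[0] - fx, retinal_flow[1] - fy)

# A static distractor at (0.2, 0.1): its retinal flow is pure expansion,
# so parsing leaves (approximately) zero residual motion.
static_flow = self_motion_flow(0.2, 0.1, inv_depth=0.5, forward_speed=1.0)
residual_static = parse_object_motion(static_flow, 0.2, 0.1, 0.5, est_speed=1.0)

# The target carries extra motion of its own, which survives the
# subtraction and is what observers must detect.
target_flow = (static_flow[0] + 0.05, static_flow[1])
residual_target = parse_object_motion(target_flow, 0.2, 0.1, 0.5, est_speed=1.0)
```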

  8. Helicopter Flight Simulation Motion Platform Requirements

    NASA Technical Reports Server (NTRS)

    Schroeder, Jeffery Allyn

    1999-01-01

    To determine motion fidelity requirements, a series of piloted simulations was performed. Several key results were found. First, lateral and vertical translational platform cues had significant effects on fidelity. Their presence improved performance and reduced pilot workload. Second, yaw and roll rotational platform cues were not as important as the translational platform cues. In particular, the yaw rotational motion platform cue did not appear at all useful in improving performance or reducing workload. Third, when the lateral translational platform cue was combined with visual yaw rotational cues, pilots believed the platform was rotating when it was not. Thus, simulator systems can be made more efficient by proper combination of platform and visual cues. Fourth, motion fidelity specifications were revised that now provide simulator users with a better prediction of motion fidelity based upon the frequency responses of their motion control laws. Fifth, vertical platform motion affected pilot estimates of steady-state altitude during altitude repositioning. Finally, the combined results led to a general method for configuring helicopter motion systems and for developing simulator tasks that more likely represent actual flight. The overall results can serve as a guide to future simulator designers and to today's operators.

  9. Future directions in flight simulation: A user perspective

    NASA Technical Reports Server (NTRS)

    Jackson, Bruce

    1993-01-01

    Langley Research Center was an early leader in simulation technology, including a special emphasis in space vehicle simulations such as the rendezvous and docking simulator for the Gemini program and the lunar landing simulator used before Apollo. In more recent times, Langley operated the first synergistic six degree of freedom motion platform (the Visual Motion Simulator, or VMS) and developed the first dual-dome air combat simulator, the Differential Maneuvering Simulator (DMS). Each Langley simulator was developed more or less independently of the others, with different programming support. At present, the various simulation cockpits, while supported by the same host computer system, run dissimilar software. The majority of recent investments in Langley's simulation facilities have been hardware procurements: host processors, visual systems, and most recently, an improved motion system. Investments in software improvements, however, have not been of the same order.

  10. Driving with visual field loss : an exploratory simulation study

    DOT National Transportation Integrated Search

    2009-01-01

    The goal of this study was to identify the influence of peripheral visual field loss (VFL) on driving performance in a motion-based driving simulator. Sixteen drivers (6 with VFL and 10 with normal visual fields) completed a 14 km simulated drive. Th...

  11. Coherent modulation of stimulus colour can affect visually induced self-motion perception.

    PubMed

    Nakamura, Shinji; Seno, Takeharu; Ito, Hiroyuki; Sunaga, Shoji

    2010-01-01

    The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination impairs the reliability of visual information in determining self-motion.

  12. Comparison of Flight Simulators Based on Human Motion Perception Metrics

    NASA Technical Reports Server (NTRS)

    Valente Pais, Ana R.; Correia Gracio, Bruno J.; Kelly, Lon C.; Houck, Jacob A.

    2015-01-01

    In flight simulation, motion filters are used to transform aircraft motion into simulator motion. When looking for the best match between visual and inertial amplitude in a simulator, researchers have found that there is a range of inertial amplitudes, rather than a single inertial value, that is perceived by subjects as optimal. This zone, hereafter referred to as the optimal zone, appears to correlate with the perceptual coherence zones measured in flight simulators. However, no studies were found in which these two zones were compared. This study investigates the relation between the optimal and the coherence zone measurements within and between different simulators. Results show that for the sway axis, the optimal zone lies within the lower part of the coherence zone. In addition, it was found that, whereas the width of the coherence zone depends on the visual amplitude and frequency, the width of the optimal zone remains constant.

  13. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation.

  14. The effect of visual-motion time-delays on pilot performance in a simulated pursuit tracking task

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1977-01-01

    An experimental study was made to determine the effect on pilot performance of time delays in the visual and motion feedback loops of a simulated pursuit tracking task. Three major interrelated factors were identified: task difficulty either in the form of airplane handling qualities or target frequency, the amount and type of motion cues, and time delay itself. In general, the greater the task difficulty, the smaller the time delay that could exist without degrading pilot performance. Conversely, the greater the motion fidelity, the greater the time delay that could be tolerated. The effect of motion was, however, pilot dependent.

  15. Experimental and Analytic Evaluation of the Effects of Visual and Motion Simulation in SH-3 Helicopter Training. Technical Report 85-002.

    ERIC Educational Resources Information Center

    Pfeiffer, Mark G.; Scott, Paul G.

    A fly-only group (N=16) of Navy replacement pilots undergoing fleet readiness training in the SH-3 helicopter was compared with groups pre-trained on Device 2F64C with: (1) visual only (N=13); (2) no visual/no motion (N=14); and (3) one visual plus motion group (N=19). Groups were compared for their SH-3 helicopter performance in the transition…

  16. Visual-Vestibular Conflict Detection Depends on Fixation.

    PubMed

    Garzorz, Isabelle T; MacNeilage, Paul R

    2017-09-25

    Visual and vestibular signals are the primary sources of sensory information for self-motion. Conflict among these signals can be seriously debilitating, resulting in vertigo [1], inappropriate postural responses [2], and motion, simulator, or cyber sickness [3-8]. Despite this significance, the mechanisms mediating conflict detection are poorly understood. Here we model conflict detection simply as crossmodal discrimination with benchmark performance limited by variabilities of the signals being compared. In a series of psychophysical experiments conducted in a virtual reality motion simulator, we measure these variabilities and assess conflict detection relative to this benchmark. We also examine the impact of eye movements on visual-vestibular conflict detection. In one condition, observers fixate a point that is stationary in the simulated visual environment by rotating the eyes opposite head rotation, thereby nulling retinal image motion. In another condition, eye movement is artificially minimized via fixation of a head-fixed fixation point, thereby maximizing retinal image motion. Visual-vestibular integration performance is also measured, similar to previous studies [9-12]. We observe that there is a tradeoff between integration and conflict detection that is mediated by eye movements. Minimizing eye movements by fixating a head-fixed target leads to optimal integration but highly impaired conflict detection. Minimizing retinal motion by fixating a scene-fixed target improves conflict detection at the cost of impaired integration performance. The common tendency to fixate scene-fixed targets during self-motion [13] may indicate that conflict detection is typically a higher priority than the increase in precision of self-motion estimation that is obtained through integration.
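    The benchmark logic is compact enough to sketch: crossmodal discrimination is limited by the sum of the two signal variances, while maximum-likelihood integration combines inverse variances into a fused estimate more precise than either cue alone. The noise values below are illustrative, not the paper's measurements.

```python
import math

# Back-of-envelope sketch of the benchmark described above. Standard
# deviations are illustrative assumptions, not measured values.
sigma_visual = 2.0      # visual rotation-estimate noise (deg/s, assumed)
sigma_vestibular = 3.0  # vestibular rotation-estimate noise (deg/s, assumed)

# Conflict detection as crossmodal discrimination: the variances of the
# two independent signals add, limiting best-case sensitivity.
sigma_conflict = math.sqrt(sigma_visual**2 + sigma_vestibular**2)

# Optimal (maximum-likelihood) integration: inverse variances add, so the
# fused estimate is more precise than either single cue.
var_integrated = (sigma_visual**2 * sigma_vestibular**2) / (
    sigma_visual**2 + sigma_vestibular**2)
sigma_integrated = math.sqrt(var_integrated)
```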

  17. A novel role for visual perspective cues in the neural computation of depth.

    PubMed

    Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C

    2015-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.

  18. The Effects of Various Fidelity Factors on Simulated Helicopter Hover

    DTIC Science & Technology

    1981-01-01

    The report's contents include sections on the visual display, auditory cues, and the ship motion model, and it cites prior evaluations of visual, auditory, and motion cues for helicopter simulation (Parrish, Houck, and Martin, 1977). On acceleration cueing, it notes that because platform tilt should be supplied subliminally, a forward/aft translation must be used to cue the acceleration's onset.

  19. Dynamic and predictive links between touch and vision.

    PubMed

    Gray, Rob; Tan, Hong Z

    2002-07-01

    We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.

  20. A novel role for visual perspective cues in the neural computation of depth

    PubMed Central

    Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.

    2014-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667

  1. Visually guided control of movement in the context of multimodal stimulation

    NASA Technical Reports Server (NTRS)

    Riccio, Gary E.

    1991-01-01

    Flight simulation has been almost exclusively concerned with simulating the motions of the aircraft. Physically distinct subsystems are often combined to simulate the varieties of aircraft motion. Visual display systems simulate the motion of the aircraft relative to remote objects and surfaces (e.g., other aircraft and the terrain). 'Motion platform' simulators recreate aircraft motion relative to the gravitoinertial vector (i.e., correlated rotation and tilt as opposed to the 'coordinated turn' in flight). 'Control loaders' attempt to simulate the resistance of the aerodynamic medium to aircraft motion. However, there are few operational systems that attempt to simulate the motion of the pilot relative to the aircraft and the gravitoinertial vector. The design and use of all simulators is limited by poor understanding of postural control in the aircraft and its effect on the perception and control of flight. Analysis of the perception and control of flight (real or simulated) must consider that: (1) the pilot is not rigidly attached to the aircraft; and (2) the pilot actively monitors and adjusts body orientation and configuration in the aircraft. It is argued that this more complete approach to flight simulation requires that multimodal perception be considered as the rule rather than the exception. Moreover, the necessity of multimodal perception is revealed by emphasizing the complementarity rather than the redundancy among perceptual systems. Finally, an outline is presented for an experiment to be conducted at NASA ARC. The experiment explicitly considers possible consequences of coordination between postural and vehicular control.

  2. Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers

    DTIC Science & Technology

    2013-09-01

    Gaze horizontal position is plotted along the y-axis, and a red bar indicates a visual nystagmus event detected by the filter. Experimental conditions were chosen to simulate testing cognitively impaired observers. The authors developed a new stimulus for visual nystagmus to test visual motion processing in the presence of incoherent motion noise.

  3. Simulator Sickness During Emergency Procedures Training in a Helicopter Simulator: Age, Flight Experience, and Amount Learned

    DTIC Science & Technology

    2007-09-01

    Aircrew Training Research Division, Human Resources Directorate. Cited works include: Smart, L. J., Stoffregen, T. A., & Bardy, B. G. (2002). Visually induced motion sickness... Aviation, Space, and Environmental Medicine, 60, 1043-1048; Benson, A. J. (1978). Motion sickness. In G. Dhenin & J. Ernsting (Eds.), Aviation Medicine (pp. 468-493). London: Tri-Med Books; Benson, A. J. (1988). Aetiological factors in simulator sickness. In AGARD, Motion cues in flight simulation and...

  4. Simulation and visualization of face seal motion stability by means of computer generated movies

    NASA Technical Reports Server (NTRS)

    Etsion, I.; Auer, B. M.

    1980-01-01

    A computer aided design method for mechanical face seals is described. Based on computer simulation, the actual motion of the flexibly mounted element of the seal can be visualized. This is achieved by solving the equations of motion of this element, calculating the displacements in its various degrees of freedom vs. time, and displaying the transient behavior in the form of a motion picture. Incorporating such a method in the design phase allows one to detect instabilities and to correct undesirable behavior of the seal. A theoretical background is presented. Details of the motion display technique are described, and the usefulness of the method is demonstrated by an example of a noncontacting conical face seal.

  5. Simulation and visualization of face seal motion stability by means of computer generated movies

    NASA Technical Reports Server (NTRS)

    Etsion, I.; Auer, B. M.

    1981-01-01

    A computer aided design method for mechanical face seals is described. Based on computer simulation, the actual motion of the flexibly mounted element of the seal can be visualized. This is achieved by solving the equations of motion of this element, calculating the displacements in its various degrees of freedom vs. time, and displaying the transient behavior in the form of a motion picture. Incorporating such a method in the design phase allows one to detect instabilities and to correct undesirable behavior of the seal. A theoretical background is presented. Details of the motion display technique are described, and the usefulness of the method is demonstrated by an example of a noncontacting conical face seal.
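    The simulation step of such a method can be sketched with a one-degree-of-freedom stand-in for the flexibly mounted element; the coefficients below are invented, and a real seal model couples several degrees of freedom.

```python
# Toy version of the approach described above (not the authors' seal
# model): integrate a one-degree-of-freedom equation of motion for the
# flexibly mounted element and flag instability when the transient
# envelope grows instead of decaying. Coefficients are illustrative.
def transient(m, c, k, x0=1e-3, dt=1e-4, t_end=1.0):
    """Displacement history of m*x'' + c*x' + k*x = 0 (semi-implicit Euler)."""
    x, v = x0, 0.0
    xs = []
    for _ in range(int(t_end / dt)):
        a = -(c * v + k * x) / m
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

def is_unstable(xs):
    """A growing envelope over the run indicates an unstable design."""
    return max(abs(x) for x in xs[-100:]) > max(abs(x) for x in xs[:100])

stable_run = transient(m=0.1, c=2.0, k=1e4)      # positive damping: decays
unstable_run = transient(m=0.1, c=-0.5, k=1e4)   # net destabilizing force: grows
```

Plotting `xs` against time is the "motion picture" view the report describes; the envelope test is the numerical analogue of watching the transient diverge.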

  6. Empirical comparison of a fixed-base and a moving-base simulation of a helicopter engaged in visually conducted slalom runs

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Houck, J. A.; Martin, D. J., Jr.

    1977-01-01

    Combined visual, motion, and aural cues for a helicopter engaged in visually conducted slalom runs at low altitude were studied. The evaluation of the visual and aural cues was subjective, whereas the motion cues were evaluated both subjectively and objectively. Subjective and objective results coincided in the area of control activity. Generally, less control activity is present under motion conditions than under fixed-base conditions, a fact attributed subjectively to the feeling of realistic limitations of a machine (helicopter) given by the addition of motion cues. The objective data also revealed that the slalom runs were conducted at significantly higher altitudes under motion conditions than under fixed-base conditions.

  7. Vestibular-visual interactions in flight simulators

    NASA Technical Reports Server (NTRS)

    Clark, B.

    1977-01-01

    The following research work is reported: (1) vestibular-visual interactions; (2) flight management and crew system interactions; (3) peripheral cue utilization in simulation technology; (4) control of signs and symptoms of motion sickness; (5) auditory cue utilization in flight simulators, and (6) vestibular function: Animal experiments.

  8. The determination of some requirements for a helicopter flight research simulation facility

    NASA Technical Reports Server (NTRS)

    Sinacori, J. B.

    1977-01-01

    Important requirements were defined for a flight simulation facility to support Army helicopter development. In particular, requirements associated with the visual and motion subsystems of the planned simulator were studied. The method used in the motion requirements study is presented together with the underlying assumptions and a description of the supporting data. Results are given in a form suitable for use in a preliminary design. Visual requirements associated with a television camera/model concept are also reported. The important parameters are described together with substantiating data and assumptions. Research recommendations are given.

  9. Analytical evaluation of two motion washout techniques

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1977-01-01

    Practical tools were developed which extend the state of the art of moving base flight simulation for research and training purposes. The use of visual and vestibular cues to minimize the actual motion of the simulator itself was a primary consideration. The investigation consisted of optimum programming of motion cues based on a physiological model of the vestibular system to yield 'ideal washout logic' for any given simulator constraints.
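    The washout principle itself, returning the platform to neutral slowly enough that the motion is imperceptible, can be sketched with a simple first-order high-pass filter; the time constant and input are illustrative, and the paper's optimal washout logic is considerably more elaborate.

```python
# Minimal first-order washout sketch (a stand-in for the "ideal washout
# logic" described above): the platform reproduces onset acceleration but
# washes out sustained acceleration, so the simulator drifts back toward
# neutral within its travel limits.
def washout_response(accel_cmd, tau=2.0, dt=0.01):
    """High-pass the command: a_platform = a_cmd - lowpass(a_cmd)."""
    lowpass = 0.0
    out = []
    for a in accel_cmd:
        lowpass += (a - lowpass) * dt / tau
        out.append(a - lowpass)
    return out

# Sustained 1 m/s^2 step: the platform cues the onset, then washes out.
step = [1.0] * 2000                 # 20 s at 100 Hz
platform = washout_response(step)
onset_cue, steady_state = platform[0], platform[-1]
```

A slower time constant `tau` preserves more of the sustained cue but demands more platform travel, which is exactly the trade the washout design must balance.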

  10. Human comfort response to random motions with a dominant vertical motion

    NASA Technical Reports Server (NTRS)

    Stone, R. W., Jr.

    1975-01-01

    Subjective ride comfort response ratings were measured on the Langley Visual Motion Simulator with vertical acceleration inputs with various power spectra shapes and magnitudes. The data obtained are presented.

  11. Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator

    NASA Astrophysics Data System (ADS)

    Rehmatullah, Faizan

    In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to formulate a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform and, by using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of simulator workspace for every maneuver while reproducing the motion perceived in the vehicle. With its ability to handle nonlinear constraints, model predictive control also allows us to incorporate workspace limitations.
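    A heavily simplified sketch of the receding-horizon idea follows: a constant-move grid search rather than the thesis' MPC formulation, with all parameters invented for illustration.

```python
# Toy receding-horizon motion cueing: at each step, pick the platform
# acceleration that best tracks the demanded acceleration while a
# position penalty and a hard workspace limit keep the platform inside
# its travel. Grid search with a constant move over the horizon stands
# in for a real constrained optimizer.
def mpc_step(pos, vel, desired_accel, horizon=10, dt=0.05, limit=0.5):
    best_a, best_cost = 0.0, float("inf")
    for a in [i * 0.05 - 1.0 for i in range(41)]:   # candidates, -1..1 m/s^2
        p, v, cost = pos, vel, 0.0
        for _ in range(horizon):                    # predict ahead
            v += a * dt
            p += v * dt
            cost += (a - desired_accel) ** 2 + 5.0 * p * p
            if abs(p) > limit:                      # workspace violation
                cost += 1e6
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a

# Sustained demand of 1 m/s^2: the controller cues the onset, then backs
# off rather than drive the platform out of its +/-0.5 m workspace.
pos = vel = 0.0
history = []
for _ in range(100):
    a = mpc_step(pos, vel, desired_accel=1.0)
    vel += a * 0.05
    pos += vel * 0.05
    history.append(pos)
max_excursion = max(abs(p) for p in history)
```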

  12. Simulator study of the effect of visual-motion time delays on pilot tracking performance with an audio side task

    NASA Technical Reports Server (NTRS)

    Riley, D. R.; Miller, G. K., Jr.

    1978-01-01

    The effect of time delays in the visual and motion cues of a flight simulator on pilot performance was determined for tracking a target aircraft that was oscillating sinusoidally in altitude only. An audio side task was used to ensure that the subject was fully occupied at all times. The results indicate that, within the test grid employed, about the same acceptable time delay (250 msec) was obtained for a single aircraft (fighter type) by each of two subjects for both fixed-base and motion-base conditions. Acceptable time delay is defined as the largest amount of delay that can be inserted simultaneously into the visual and motion cues before performance degradation occurs. A statistical analysis of the data was made to establish this value of time delay. The audio side task provided quantitative data that documented the subject's work level.

  13. Analysis, simulation and visualization of 1D tapping via reduced dynamical models

    NASA Astrophysics Data System (ADS)

    Blackmore, Denis; Rosato, Anthony; Tricoche, Xavier; Urban, Kevin; Zou, Luo

    2014-04-01

    A low-dimensional center-of-mass dynamical model is devised as a simplified means of approximately predicting some important aspects of the motion of a vertical column comprised of a large number of particles subjected to gravity and periodic vertical tapping. This model is investigated first as a continuous dynamical system using analytical, simulation and visualization techniques. Then, by employing an approach analogous to that used to approximate the dynamics of a bouncing ball on an oscillating flat plate, it is modeled as a discrete dynamical system and analyzed to determine bifurcations and transitions to chaotic motion along with other properties. The predictions of the analysis are then compared-primarily qualitatively-with visualization and simulation results of the reduced continuous model, and ultimately with simulations of the complete system dynamics.
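    The bouncing-ball analogy invoked above can be reproduced with a direct time-stepping simulation of a ball impacting a sinusoidally vibrating plate; the parameters and restitution law below are illustrative, not those of the tapping model.

```python
import math

# Time-stepping sketch of a ball under gravity bouncing on a vertically
# oscillating plate, the system whose impact map is analogous to the
# tapped-column model above. Parameters are illustrative.
G, E = 9.81, 0.8          # gravity (m/s^2), coefficient of restitution
A, OMEGA = 0.01, 30.0     # plate amplitude (m) and frequency (rad/s)
DT = 1e-4                 # integration step (s)

def simulate(t_end=5.0):
    z, v = 0.05, 0.0      # ball height (m) and velocity (m/s)
    impacts = 0
    for k in range(int(t_end / DT)):
        t = k * DT
        plate = A * math.sin(OMEGA * t)
        plate_v = A * OMEGA * math.cos(OMEGA * t)
        v -= G * DT
        z += v * DT
        if z <= plate and v < plate_v:        # approaching contact
            v = plate_v - E * (v - plate_v)   # restitution impact law
            z = plate
            impacts += 1
    return impacts

n_impacts = simulate()
```

Recording the impact phases and velocities instead of just counting them yields exactly the discrete map whose bifurcations the paper analyzes.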

  14. Conceptual design study for an advanced cab and visual system, volume 2

    NASA Technical Reports Server (NTRS)

    Rue, R. J.; Cyrus, M. L.; Garnett, T. A.; Nachbor, J. W.; Seery, J. A.; Starr, R. L.

    1980-01-01

    The performance, design, construction and testing requirements are defined for developing an advanced cab and visual system. The rotorcraft system integration simulator is composed of the advanced cab and visual system and the rotorcraft system motion generator, and is part of an existing simulation facility. User's applications for the simulator include rotorcraft design development, product improvement, threat assessment, and accident investigation.

  15. Modeling human pilot cue utilization with applications to simulator fidelity assessment.

    PubMed

    Zeyada, Y; Hess, R A

    2000-01-01

    An analytical investigation to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator was undertaken. Data from a NASA Ames Research Center vertical motion simulator study of a simple, single-degree-of-freedom rotorcraft bob-up/down maneuver were employed in the investigation. The study was part of a larger research effort that has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system that included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle, and the motion system. With the exception of time delays that accrued in visual scene production in the simulator, visual scene effects were not included in this study. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity that occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots who participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to identify changes in simulator fidelity for the task examined.
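    To give a flavor of what fuzzy-inference identification involves, the sketch below fuzzifies a single pilot-response feature with triangular membership functions and fires invented rules to produce a crisp fidelity score; the study's actual rule base and features differ.

```python
# Illustrative fuzzy-inference sketch (not the authors' rule base):
# triangular membership functions fuzzify a pilot/vehicle response
# feature, and weighted rule firing yields a crisp 0-1 fidelity score.
def tri(x, left, peak, right):
    """Triangular membership function on [left, right] peaking at `peak`."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fidelity_rating(phase_error_deg):
    """Map a pilot/vehicle phase-error feature to a 0-1 fidelity score."""
    low = tri(phase_error_deg, -20.0, 0.0, 20.0)     # "error is small"
    med = tri(phase_error_deg, 10.0, 30.0, 50.0)     # "error is moderate"
    high = tri(phase_error_deg, 40.0, 60.0, 200.0)   # "error is large"
    # Invented rule consequents: small error -> fidelity 1.0, etc.
    weights = low + med + high
    return (low * 1.0 + med * 0.5 + high * 0.0) / weights if weights else 0.0

good = fidelity_rating(5.0)    # small mismatch between motion configurations
poor = fidelity_rating(55.0)   # large mismatch
```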

  16. A Methodology for Evaluating the Fidelity of Ground-Based Flight Simulators

    NASA Technical Reports Server (NTRS)

    Zeyada, Y.; Hess, R. A.

    1999-01-01

    An analytical and experimental investigation was undertaken to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator. The study was part of a larger research effort which has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system which included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle and the motion system. With the exception of time delays which accrued in visual scene production in the simulator, visual scene effects were not included in this study. The NASA Ames Vertical Motion Simulator was used in a simple, single-degree of freedom rotorcraft bob-up/down maneuver. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity which occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots who participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to reflect changes in simulator fidelity for the task examined.

  18. Simulator certification methods and the vertical motion simulator

    NASA Technical Reports Server (NTRS)

    Showalter, T. W.

    1981-01-01

    The vertical motion simulator (VMS) is designed to simulate a variety of experimental helicopter and STOL/VTOL aircraft, as well as other kinds of aircraft with special pitch- and Z-axis characteristics. The VMS includes a large motion base with extensive vertical and lateral travel capabilities, a computer-generated-image visual system, and a high-speed CDC 7600 computer system, which performs aero-model calculations. Guidelines on how to measure and evaluate VMS performance were developed. A survey of simulation users was conducted to ascertain how they evaluated and certified simulators for use. The results are presented.

  19. Pilot/vehicle model analysis of visual and motion cue requirements in flight simulation. [helicopter hovering

    NASA Technical Reports Server (NTRS)

    Baron, S.; Lancraft, R.; Zacharias, G.

    1980-01-01

    The optimal control model (OCM) of the human operator is used to predict the effect of simulator characteristics on pilot performance and workload. The piloting task studied is helicopter hover. Among the simulator characteristics considered were (computer-generated) visual display resolution, field of view, and time delay.

  20. Using flight simulators aboard ships: human side effects of an optimal scenario with smooth seas.

    PubMed

    Muth, Eric R; Lawson, Ben

    2003-05-01

    The U.S. Navy is considering placing flight simulators aboard ships. It is known that certain types of flight simulators can elicit motion adaptation syndrome (MAS), and also that certain types of ship motion can cause MAS. The goal of this study was to determine if using a flight simulator during ship motion would cause MAS, even when the simulator stimulus and the ship motion were both very mild. All participants in this study completed three conditions. Condition 1 (Sim) entailed "flying" a personal computer-based flight simulator situated on land. Condition 2 (Ship) involved riding aboard a U.S. Navy Yard Patrol boat. Condition 3 (ShipSim) entailed "flying" a personal computer-based flight simulator while riding aboard a Yard Patrol boat. Before and after each condition, participants' balance and dynamic visual acuity were assessed. After each condition, participants filled out the Nausea Profile and the Simulator Sickness Questionnaire. Following exposure to a flight simulator aboard a ship, participants reported negligible symptoms of nausea and simulator sickness. However, participants exhibited a decrease in dynamic visual acuity after exposure to the flight simulator aboard ship (T[25] = 3.61, p < 0.05). Balance results were confounded by significant learning and, therefore, not interpretable. This study suggests that flight simulators can be used aboard ship. As a minimal safety precaution, these simulators should be used according to current safety practices for land-based simulators. Optimally, these simulators should be designed to minimize MAS, located near the ship's center of rotation and used when ship motion is not provocative.
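    The reported acuity decrease rests on a paired t-statistic (T[25] = 3.61). A minimal sketch of how such a repeated-measures statistic is computed, using invented pre/post acuity values rather than the study's data:

```python
# Paired (repeated-measures) t-test of the kind reported for the
# dynamic-visual-acuity change. The data below are illustrative only.
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """t = mean(differences) / standard error, with df = n - 1."""
    d = [a - b for a, b in zip(pre, post)]
    n = len(d)
    return mean(d) / (stdev(d) / sqrt(n))

pre  = [1.00, 0.95, 1.10, 0.90, 1.05, 0.98]   # hypothetical acuity scores
post = [0.92, 0.93, 1.01, 0.85, 0.99, 0.94]
t = paired_t(pre, post)
```

    The resulting t is then compared against the Student t distribution with n - 1 degrees of freedom to obtain the p-value.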

  1. Anisotropic Rotational Diffusion Studied by Nuclear Spin Relaxation and Molecular Dynamics Simulation: An Undergraduate Physical Chemistry Laboratory

    ERIC Educational Resources Information Center

    Fuson, Michael M.

    2017-01-01

    Laboratories studying the anisotropic rotational diffusion of bromobenzene using nuclear spin relaxation and molecular dynamics simulations are described. For many undergraduates, visualizing molecular motion is challenging. Undergraduates rarely encounter laboratories that directly assess molecular motion, and so the concept remains an…

  2. Integrating a Motion Base into a CAVE Automatic Virtual Environment: Phase 1

    DTIC Science & Technology

    2001-07-01

    this, a CAVE system must perform well in the following motion-related areas: visual gaze stability, simulator sickness, realism (or face validity...and performance validity. Visual Gaze Stability Visual gaze stability, the ability to maintain eye fixation on a particular target, depends upon human...reflexes such as the vestibulo-ocular reflex (VOR) and the optokinetic nystagmus (OKN). VOR is a reflex that counter-rotates the eye relative to the

  3. Visual information transfer. 1: Assessment of specific information needs. 2: The effects of degraded motion feedback. 3: Parameters of appropriate instrument scanning behavior

    NASA Technical Reports Server (NTRS)

    Comstock, J. R., Jr.; Kirby, R. H.; Coates, G. D.

    1984-01-01

    Pilot and flight crew assessment of visually displayed information is examined, as well as the effects of degraded and uncorrected motion feedback and the efficiency of the pilot's instrument scanning. Computerized flight simulation and appropriate physiological measurements are used to collect data for standardization.

  4. A Visual Tool for Computer Supported Learning: The Robot Motion Planning Example

    ERIC Educational Resources Information Center

    Elnagar, Ashraf; Lulu, Leena

    2007-01-01

    We introduce an effective computer aided learning visual tool (CALVT) to teach graph-based applications. We present the robot motion planning problem as an example of such applications. The proposed tool can be used to simulate and/or further to implement practical systems in different areas of computer science such as graphics, computational…

  5. A study of the comparative effects of various means of motion cueing during a simulated compensatory tracking task

    NASA Technical Reports Server (NTRS)

    Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.; Martin, D. J., Jr.

    1980-01-01

    NASA's Langley Research Center conducted a simulation experiment to ascertain the comparative effects of motion cues (combinations of platform motion and g-seat normal acceleration cues) on compensatory tracking performance. In the experiment, a full six-degree-of-freedom YF-16 model was used as the simulated pursuit aircraft. The Langley Visual Motion Simulator (with in-house-developed washout) and a Langley-developed g-seat were principal components of the simulation. The results of the experiment were examined utilizing univariate and multivariate techniques. The statistical analyses demonstrate that the platform motion and g-seat cues provide additional information to the pilot that allows substantial reduction of lateral tracking error. Also, the analyses show that the g-seat cue helps reduce vertical error.

  6. Human comfort response to dominant random motions in longitudinal modes of aircraft motion

    NASA Technical Reports Server (NTRS)

    Stone, R. W., Jr.

    1980-01-01

    The effects of random vertical and longitudinal accelerations and pitching velocity on passenger ride comfort responses were examined on the NASA Langley Visual Motion Simulator. Effects of power spectral density shape were studied for motions where the peak was between 0 and 2 Hz. The subjective rating data and the physical motion data obtained are presented without interpretation or detailed analysis. In addition to the particular pair of longitudinal airplane motions studied, motions existed in all other degrees of freedom. These unwanted motions, caused by the characteristics of the simulator, may have introduced some interactive effects on passenger responses.

  7. Curvilinear approach to an intersection and visual detection of a collision.

    PubMed

    Berthelon, C; Mestre, D

    1993-09-01

    Visual motion perception plays a fundamental role in vehicle control. Recent studies have shown that the pattern of optical flow resulting from the observer's self-motion through a stable environment is used by the observer to accurately control his or her movements. However, little is known about the perception of another vehicle during self-motion--for instance, when a car driver approaches an intersection with traffic. In a series of experiments using visual simulations of car driving, we show that observers are able to detect the presence of a moving object during self-motion. However, the perception of the other car's trajectory appears to be strongly dependent on environmental factors, such as the presence of a road sign near the intersection or the shape of the road. These results suggest that local and global visual factors determine the perception of a car's trajectory during self-motion.

  8. Effects of motion base and g-seat cueing on simulator pilot performance

    NASA Technical Reports Server (NTRS)

    Ashworth, B. R.; Mckissick, B. T.; Parrish, R. V.

    1984-01-01

    In order to measure and analyze the effects of a motion plus g-seat cueing system, a manned-flight-simulation experiment was conducted utilizing a pursuit tracking task and an F-16 simulation model in the NASA Langley visual/motion simulator. This experiment provided the information necessary to determine whether motion and g-seat cues have an additive effect on the performance of this task. With respect to the lateral tracking error and roll-control stick force, the answer is affirmative. It is shown that presenting the two cues simultaneously caused significant reductions in lateral tracking error and that using the g-seat and motion base separately provided essentially equal reductions in the pilot's lateral tracking error.

  9. MPI CyberMotion Simulator: implementation of a novel motion simulator to investigate multisensory path integration in three dimensions.

    PubMed

    Barnett-Cowan, Michael; Meilinger, Tobias; Vidal, Manuel; Teufel, Harald; Bülthoff, Heinrich H

    2012-05-10

    Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. 16 observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s(2) peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. 
In the frontal plane observers were more likely to overestimate angle size while there was no such bias in the sagittal plane. Finally, observers responded slower when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes respectively, suggests that the neural representation of self-motion through space is non-symmetrical which may relate to the fact that humans experience movement mostly within the horizontal plane.

  10. Accuracy of System Step Response Roll Magnitude Estimation from Central and Peripheral Visual Displays and Simulator Cockpit Motion

    NASA Technical Reports Server (NTRS)

    Hosman, R. J. A. W.; Vandervaart, J. C.

    1984-01-01

    An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button of a keyboard device. The differing time histories of roll angle, roll rate, and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, the peripheral visual field, and the vestibular organs in different, yet exactly known, ways. Experiments with either of the visual displays or cockpit motion, and some combinations of these, were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.

  11. Design Definition Study Report. Full Crew Interaction Simulator-Laboratory Model (FCIS-LM) (Device X17B7). Volume II. Requirements.

    DTIC Science & Technology

    1978-06-01

    stimulate at least three levels of crew function. At the most complex level, visual cues are used to discriminate the presence or activities of...limited to motion onset cues washed out at subliminal levels... Because of the cues they provide the driver, gunner, and commander, and the dis...motion, i.e., which physiological receptors are affected, how they function, and how they may be stimulated by a simulator motion system. Motion is

  12. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task

    PubMed Central

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.

    2016-01-01

    Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829

  13. Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.

    2012-01-01

    Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response, when the augmented channel is in place.
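    The motivation for an augmented turbulence channel can be seen in how a classical washout filter treats its input. The sketch below, with invented break frequency and gains and not NASA's algorithm, shows a first-order high-pass washout attenuating a sustained vertical cue; a parallel channel added to its output could pass turbulence content through unattenuated:

```python
# Sketch of why a separate turbulence channel helps (illustrative only):
# a classical first-order high-pass washout filter attenuates a sustained
# acceleration command toward zero, so turbulence content routed through
# it loses energy. A parallel channel can add that content back:
#   cue = washout_output + k_turb * turbulence   (k_turb hypothetical)

def highpass_step(y_prev, u, u_prev, dt, tau):
    """One update of a discrete first-order high-pass (washout) filter."""
    a = tau / (tau + dt)
    return a * (y_prev + u - u_prev)

def simulate(inputs, dt=0.01, tau=0.5):
    y, u_prev, out = 0.0, 0.0, []
    for u in inputs:
        y = highpass_step(y, u, u_prev, dt, tau)
        u_prev = u
        out.append(y)
    return out

# A sustained 1 m/s^2 command passes initially, then washes out to zero.
step = simulate([1.0] * 1000)
```

    Tuning the washout break frequency (tau) trades cue fidelity against the platform's limited travel, which is exactly the compromise the augmented channel is designed to avoid for high-frequency turbulence.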

  14. Research on integration of visual and motion cues for flight simulation and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.; Oman, C. M.; Curry, R. E.

    1977-01-01

    Vestibular perception and the integration of several sensory inputs in simulation were studied. The relationship between tilt sensations induced by moving fields and those produced by actual body tilt is discussed. Linearvection studies were included, and the application of the vestibular model for perception of orientation based on motion cues is presented. Other areas of examination include visual cues in the approach to landing, and a comparison of linear and nonlinear washout filters using a model of the human vestibular system is given.

  15. Multimodal Pilot Behavior in Multi-Axis Tracking Tasks with Time-Varying Motion Cueing Gains

    NASA Technical Reports Server (NTRS)

    Zaal, P. M. T.; Pool, D. M.

    2014-01-01

    In a large number of motion-base simulators, adaptive motion filters are utilized to maximize the use of the available motion envelope of the motion system. However, not much is known about how the time-varying characteristics of such adaptive filters affect pilots when performing manual aircraft control. This paper presents the results of a study investigating the effects of time-varying motion filter gains on pilot control behavior and performance. An experiment was performed in a motion-base simulator where participants performed a simultaneous roll and pitch tracking task, while the roll and/or pitch motion filter gains changed over time. Results indicate that performance increases over time with increasing motion gains. This increase is a result of a time-varying adaptation of pilots' equalization dynamics, characterized by increased visual and motion response gains and decreased visual lead time constants. Opposite trends are found for decreasing motion filter gains. Even though the trends in both controlled axes are found to be largely the same, effects are less significant in roll. In addition, results indicate minor cross-coupling effects between pitch and roll, where a cueing variation in one axis affects the behavior adopted in the other axis.

  16. Human comfort response to random motions with a dominant pitching motion

    NASA Technical Reports Server (NTRS)

    Stone, R. W., Jr.

    1980-01-01

    The effects of random pitching velocities on passenger ride comfort response were examined on the NASA Langley Visual Motion Simulator. The effects of power spectral density shape and frequency ranges from 0 to 2 Hz were studied. The subjective rating data and the physical motion data obtained are presented. No attempt at interpretation or detailed analysis of the data is made. Motions in all degrees of freedom existed as well as the intended pitching motion, because of the characteristics of the simulator. These unwanted motions may have introduced some interactive effects on passenger responses which should be considered in any analysis of the data.

  17. Visual stimuli induced by self-motion and object-motion modify odour-guided flight of male moths (Manduca sexta L.).

    PubMed

    Verspui, Remko; Gray, John R

    2009-10-01

    Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.

  18. Trend-Centric Motion Visualization: Designing and Applying a New Strategy for Analyzing Scientific Motion Collections.

    PubMed

    Schroeder, David; Korsakov, Fedor; Knipe, Carissa Mai-Ping; Thorson, Lauren; Ellingson, Arin M; Nuckley, David; Carlis, John; Keefe, Daniel F

    2014-12-01

    In biomechanics studies, researchers collect, via experiments or simulations, datasets with hundreds or thousands of trials, each describing the same type of motion (e.g., a neck flexion-extension exercise) but under different conditions (e.g., different patients, different disease states, pre- and post-treatment). Analyzing similarities and differences across all of the trials in these collections is a major challenge. Visualizing a single trial at a time does not work, and the typical alternative of juxtaposing multiple trials in a single visual display leads to complex, difficult-to-interpret visualizations. We address this problem via a new strategy that organizes the analysis around motion trends rather than trials. This new strategy matches the cognitive approach that scientists would like to take when analyzing motion collections. We introduce several technical innovations making trend-centric motion visualization possible. First, an algorithm detects a motion collection's trends via time-dependent clustering. Second, a 2D graphical technique visualizes how trials leave and join trends. Third, a 3D graphical technique, using a median 3D motion plus a visual variance indicator, visualizes the biomechanics of the set of trials within each trend. These innovations are combined to create an interactive exploratory visualization tool, which we designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer. We report on insights generated during this design process and demonstrate the tool's effectiveness via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers who used the tool to analyze the effects of disc degeneration on human spinal kinematics.
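    The trend-detection step, time-dependent clustering, can be sketched in miniature. The gap-based one-dimensional split and the toy trial data below are invented simplifications of the paper's algorithm:

```python
# Toy trend detection by per-time-step clustering (a simplification of the
# paper's approach; the gap threshold and data are invented). Each trial is
# a time series; at each time step the trials are clustered, and a trial's
# cluster sequence shows when it leaves or joins a trend.

def split_two(values, ids, gap=1.0):
    """Split trial ids into two groups at the largest value gap,
    if that gap exceeds `gap`; otherwise keep a single group."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    gaps = [values[order[i + 1]] - values[order[i]] for i in range(len(order) - 1)]
    if not gaps or max(gaps) < gap:
        return [set(ids)]
    cut = gaps.index(max(gaps)) + 1
    return [set(ids[i] for i in order[:cut]), set(ids[i] for i in order[cut:])]

# Three trials: trial 2 diverges from the other two after t = 2.
trials = [
    [0.0, 0.1, 0.2, 0.3, 0.4],
    [0.1, 0.2, 0.2, 0.4, 0.5],
    [0.0, 0.1, 0.3, 2.0, 3.0],
]
clusters_per_step = [
    split_two([tr[t] for tr in trials], ids=[0, 1, 2]) for t in range(5)
]
```

    The cluster sequence makes the trend structure explicit: all three trials share one trend until t = 3, after which trial 2 forms its own.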

  19. New human-centered linear and nonlinear motion cueing algorithms for control of simulator motion systems

    NASA Astrophysics Data System (ADS)

    Telban, Robert J.

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. To address this, new human-centered motion cueing algorithms were developed. A revised "optimal algorithm" uses time-invariant filters developed by optimal control, incorporating human vestibular system models. The "nonlinear algorithm" is a novel approach that is also formulated by optimal control, but can also be updated in real time. It incorporates a new integrated visual-vestibular perception model that includes both visual and vestibular sensation and the interaction between the stimuli. A time-varying control law requires the matrix Riccati equation to be solved in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. As a result of unsatisfactory sensation, an augmented turbulence cue was added to the vertical mode for both the optimal and nonlinear algorithms. The relative effectiveness of the algorithms, in simulating aircraft maneuvers, was assessed with an eleven-subject piloted performance test conducted on the NASA Langley Visual Motion Simulator (VMS). Two methods, the quasi-objective NASA Task Load Index (TLX), and power spectral density analysis of pilot control, were used to assess pilot workload. TLX analysis reveals, in most cases, less workload and variation among pilots with the nonlinear algorithm. 
Control input analysis shows pilot-induced oscillations on a straight-in approach are less prevalent compared to the optimal algorithm. The augmented turbulence cues increased workload on an offset approach that the pilots deemed more realistic compared to the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm, with the least rudder pedal activity for the optimal algorithm.
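    The time-varying control law above requires the matrix Riccati equation to be solved in real time. A scalar sketch of the idea, marching the Riccati differential equation forward to its stabilizing solution by plain integration rather than the dissertation's neurocomputing approach:

```python
# Scalar illustration of solving a Riccati equation by forward integration
# (a stand-in for the real-time neurocomputing solution; gains invented).
# For plant dx/dt = a*x + b*u with cost weights q, r, the algebraic
# Riccati equation is  2*a*p - (b**2 / r) * p**2 + q = 0.
from math import sqrt

def riccati_march(a, b, q, r, dt=1e-3, steps=20000):
    """Integrate dp/dt = 2*a*p - (b^2/r)*p^2 + q until it settles."""
    p = 0.0
    for _ in range(steps):
        p += dt * (2 * a * p - (b * b / r) * p * p + q)
    return p

a, b, q, r = -1.0, 1.0, 1.0, 1.0
p_marched = riccati_march(a, b, q, r)
p_exact = (a * r + sqrt(a * a * r * r + q * b * b * r)) / (b * b)  # stabilizing root
```

    In the matrix case the same marching idea applies term by term, which is what makes an online (time-varying) solution feasible at simulator frame rates.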

  20. The Effectiveness of Simulator Motion in the Transfer of Performance on a Tracking Task Is Influenced by Vision and Motion Disturbance Cues.

    PubMed

    Grundy, John G; Nazar, Stefan; O'Malley, Shannon; Mohrenshildt, Martin V; Shedden, Judith M

    2016-06-01

    To examine the importance of platform motion to the transfer of performance in motion simulators. The importance of platform motion in simulators for pilot training is strongly debated. We hypothesized that the type of motion (e.g., disturbance) contributes significantly to performance differences. Participants used a joystick to perform a target tracking task in a pod on top of a MOOG Stewart motion platform. Five conditions compared training without motion, with correlated motion, with disturbance motion, with disturbance motion isolated to the visual display, and with both correlated and disturbance motion. The test condition involved the full motion model with both correlated and disturbance motion. We analyzed speed and accuracy across training and test as well as strategic differences in joystick control. Training with disturbance cues produced critical behavioral differences compared to training without disturbance; motion itself was less important. Incorporation of disturbance cues is a potentially important source of variance between studies that do or do not show a benefit of motion platforms in the transfer of performance in simulators. Potential applications of this research include the assessment of the importance of motion platforms in flight simulators, with a focus on the efficacy of incorporating disturbance cues during training. © 2016, Human Factors and Ergonomics Society.

  1. Simulated self-motion in a visual gravity field: sensitivity to vertical and horizontal heading in the human brain.

    PubMed

    Indovina, Iole; Maffei, Vincenzo; Pauwels, Karl; Macaluso, Emiliano; Orban, Guy A; Lacquaniti, Francesco

    2013-05-01

    Multiple visual signals are relevant to perception of heading direction. While the role of optic flow and depth cues has been studied extensively, little is known about the visual effects of gravity on heading perception. We used fMRI to investigate the contribution of gravity-related visual cues on the processing of vertical versus horizontal apparent self-motion. Participants experienced virtual roller-coaster rides in different scenarios, at constant speed or 1g-acceleration/deceleration. Imaging results showed that vertical self-motion coherent with gravity engaged the posterior insula and other brain regions that have been previously associated with vertical object motion under gravity. This selective pattern of activation was also found in a second experiment that included rectilinear motion in tunnels, whose direction was cued by the preceding open-air curves only. We argue that the posterior insula might perform high-order computations on visual motion patterns, combining different sensory cues and prior information about the effects of gravity. Medial-temporal regions including para-hippocampus and hippocampus were more activated by horizontal motion, preferably at constant speed, consistent with a role in inertial navigation. Overall, the results suggest partially distinct neural representations of the cardinal axes of self-motion (horizontal and vertical). Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Perceived change in orientation from optic flow in the central visual field

    NASA Technical Reports Server (NTRS)

    Dyre, Brian P.; Andersen, George J.

    1988-01-01

    The effects of internal depth within a simulation display on perceived changes in orientation have been studied. Subjects monocularly viewed displays simulating observer motion within a volume of randomly positioned points through a window which limited the field of view to 15 deg. Changes in perceived spatial orientation were measured by changes in posture. The extent of internal depth within the display, the presence or absence of visual information specifying change in orientation, and the frequency of motion supplied by the display were examined. It was found that increased sway occurred at frequencies equal to or below 0.375 Hz when motion at these frequencies was displayed. The extent of internal depth had no effect on the perception of changing orientation.
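The sway analysis implied here, measuring postural response power at the frequencies driven by the display, can be sketched numerically. This is an illustrative reconstruction, not the authors' method; the sampling rate, duration, and synthetic signal are invented for the example.

```python
import numpy as np

def sway_power_at(freq, signal, fs):
    """Power of `signal` (sampled at fs Hz) in the FFT bin nearest `freq` Hz."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))**2 / n
    freqs = np.fft.rfftfreq(n, d=1.0/fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Synthetic sway record: a strong 0.375 Hz component plus weak broadband noise.
fs = 20.0
t = np.arange(0, 64, 1/fs)
sway = np.sin(2*np.pi*0.375*t) + 0.05*np.random.default_rng(0).normal(size=t.size)
```

With this setup, `sway_power_at(0.375, sway, fs)` dominates the power at unrelated frequencies, mirroring the finding that sway increased only at the displayed frequencies.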

  3. Training Effectiveness of Visual and Motion Simulation

    DTIC Science & Technology

    1981-01-01

and checkride scores. No statistical differences between the two groups were found. Creelman (1959) reported that students trained in the SNJ Link with...simulated and aircraft hours or sorties (Dricisom & Burger, 1976; Brown, Matheny, & Flexman, 1951; Creelman, 1959; Gray et al., 1969; Payne et al., 1976...relationship between flight simulator motion and training requirements. Human Factors, 1979, 21, 493-501. Creelman, J.A. Evaluation of approach

  4. Evaluation of several secondary tasks in the determination of permissible time delays in simulator visual and motion cues

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Riley, D. R.

    1978-01-01

The effect of secondary tasks in determining permissible time delays in visual-motion simulation of a pursuit tracking task was examined. A single subject, a single set of aircraft handling qualities, and a single motion condition in tracking a target aircraft that oscillates sinusoidally in altitude were used. The results indicate that, in addition to the basic simulator delays, the permissible time delay is about 250 msec for either a tapping task, an adding task, or an audio task; this is approximately 125 msec less than when no secondary task is involved. The magnitudes of the primary-task performance measures, however, differ only for the tapping task. A power spectral-density analysis largely confirms the results obtained by comparing the root-mean-square performance measures. For all three secondary tasks, the total pilot workload was quite high.

  5. Objective Assessment of Laparoscopic Force and Psychomotor Skills in a Novel Virtual Reality-Based Haptic Simulator.

    PubMed

    Prasad, M S Raghu; Manivannan, Muniyandi; Manoharan, Govindan; Chandramohan, S M

    2016-01-01

Most of the commercially available virtual reality-based laparoscopic simulators do not effectively evaluate combined psychomotor and force-based laparoscopic skills. Consequently, the lack of training on these critical skills leads to intraoperative errors. To assess the effectiveness of the novel virtual reality-based simulator, this study analyzed the combined psychomotor (i.e., motion or movement) and force skills of residents and expert surgeons. The study also examined the effectiveness of real-time visual feedback on force and tool motion during training. Bimanual fundamental tasks (i.e., probing, pulling, sweeping, grasping, and twisting) and complex tasks (i.e., tissue dissection) were evaluated. In both tasks, visual feedback on applied force and tool motion was provided. The skills of the participants while performing these tasks were assessed with and without visual feedback. Participants performed 5 repetitions of the fundamental and complex tasks. Reaction force and instrument acceleration were used as metrics. The study settings were Surgical Gastroenterology, Government Stanley Medical College and Hospital, and the Institute of Surgical Gastroenterology, Madras Medical College and Rajiv Gandhi Government General Hospital. Participants were residents (N = 25; postgraduates and surgeons with <2 years of laparoscopic surgery) and expert surgeons (N = 25; surgeons with >4 and ≤10 years of laparoscopic surgery). Residents applied larger forces than expert surgeons and performed abrupt tool movements (p < 0.001). However, visual + haptic feedback improved the performance of residents (p < 0.001). In complex tasks, visual + haptic feedback did not influence the applied force of expert surgeons, but influenced their tool motion (p < 0.001). Furthermore, in the complex tissue sweeping task, expert surgeons applied more force, but were within the tissue damage limits.
In both groups, exertion of large forces and abrupt tool motion were observed during grasping, probing or pulling, and tissue sweeping maneuvers (p < 0.001). Modern day curriculum-based training should evaluate the skills of residents with robust force and psychomotor-based exercises for proficient laparoscopy. Visual feedback on force and motion during training has the potential to enhance the learning curve of residents. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  6. Implementation of Motion Simulation Software and Visual-Auditory Electronics for Use in a Low Gravity Robotic Testbed

    NASA Technical Reports Server (NTRS)

    Martin, William Campbell

    2011-01-01

The Jet Propulsion Laboratory (JPL) is developing the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) to assist in manned space missions. One of the proposed targets for this robotic vehicle is a near-Earth asteroid (NEA), which typically exhibits a surface gravity of only a few micro-g. In order to properly test ATHLETE in such an environment, the development team has constructed an inverted Stewart platform testbed that acts as a robotic motion simulator. This project focused on creating physical simulation software that is able to predict how ATHLETE will function on and around a NEA. The corresponding platform configurations are calculated and then passed to the testbed to control ATHLETE's motion. In addition, imitation attitude control thrusters were designed and fabricated for use on ATHLETE. These utilize a combination of high-power LEDs and audio amplifiers to provide visual and auditory cues that correspond to the physics simulation.

  7. Developments in Human Centered Cueing Algorithms for Control of Flight Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A.; Telban, Robert J.; Cardullo, Frank M.

    1997-01-01

The authors conducted further research with cueing algorithms for control of flight simulator motion systems. A variation of the so-called optimal algorithm was formulated using simulated aircraft angular velocity input as a basis. Models of the human vestibular sensation system, i.e., the semicircular canals and otoliths, are incorporated within the algorithm. Comparisons of angular velocity cueing responses showed a significant improvement over a formulation using angular acceleration input. Results also compared favorably with the coordinated adaptive washout algorithm, yielding similar results for angular velocity cues while eliminating false cues and reducing the tilt rate for longitudinal cues. These results were confirmed in piloted tests on the current motion system at NASA Langley, the Visual Motion Simulator (VMS). Proposed future developments in cueing algorithms are outlined. The new motion system, the Cockpit Motion Facility (CMF), where the final evaluation of the cueing algorithms will be conducted, is also described.
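The abstract contrasts the optimal algorithm with the coordinated adaptive washout algorithm. The common core of classical washout is a high-pass filter that passes acceleration onsets to the platform while washing out sustained commands so the platform returns to neutral. A minimal sketch, with hypothetical break frequency and damping and simple Euler integration (not the paper's algorithm):

```python
import numpy as np

def highpass_washout(accel_cmd, fs, wn=1.0, zeta=1.0):
    """Pass accel_cmd through the 2nd-order high-pass washout
    s^2 / (s^2 + 2*zeta*wn*s + wn^2), integrated with forward Euler.
    wn (rad/s) and zeta are placeholder tuning values."""
    dt = 1.0 / fs
    pos = vel = 0.0                          # filter states
    out = []
    for u in accel_cmd:
        a = u - 2*zeta*wn*vel - wn*wn*pos    # washed-out acceleration command
        out.append(a)
        vel += a * dt
        pos += vel * dt
    return np.array(out)

# A sustained 1 m/s^2 step: the onset cue is passed, the steady part washes out.
response = highpass_washout(np.ones(5000), fs=100.0)
```

The initial sample passes through at full strength while the tail decays toward zero, which is exactly the onset-cue/washout trade-off such filters are tuned around.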

  8. Trend-Centric Motion Visualization: Designing and Applying a new Strategy for Analyzing Scientific Motion Collections

    PubMed Central

    Schroeder, David; Korsakov, Fedor; Knipe, Carissa Mai-Ping; Thorson, Lauren; Ellingson, Arin M.; Nuckley, David; Carlis, John; Keefe, Daniel F

    2017-01-01

    In biomechanics studies, researchers collect, via experiments or simulations, datasets with hundreds or thousands of trials, each describing the same type of motion (e.g., a neck flexion-extension exercise) but under different conditions (e.g., different patients, different disease states, pre- and post-treatment). Analyzing similarities and differences across all of the trials in these collections is a major challenge. Visualizing a single trial at a time does not work, and the typical alternative of juxtaposing multiple trials in a single visual display leads to complex, difficult-to-interpret visualizations. We address this problem via a new strategy that organizes the analysis around motion trends rather than trials. This new strategy matches the cognitive approach that scientists would like to take when analyzing motion collections. We introduce several technical innovations making trend-centric motion visualization possible. First, an algorithm detects a motion collection’s trends via time-dependent clustering. Second, a 2D graphical technique visualizes how trials leave and join trends. Third, a 3D graphical technique, using a median 3D motion plus a visual variance indicator, visualizes the biomechanics of the set of trials within each trend. These innovations are combined to create an interactive exploratory visualization tool, which we designed through an iterative process in collaboration with both domain scientists and a traditionally-trained graphic designer. We report on insights generated during this design process and demonstrate the tool’s effectiveness via a validation study with synthetic data and feedback from expert musculoskeletal biomechanics researchers who used the tool to analyze the effects of disc degeneration on human spinal kinematics. PMID:26356978
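The trend-detection algorithm itself is only named in the abstract ("time-dependent clustering"); one minimal way to realize that idea is to cluster trials frame-by-frame so that a trial's trend membership can change over time, which is what lets the 2D view show trials leaving and joining trends. Everything below (a scalar motion measure per frame, 1-D k-means, the parameter values) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def framewise_trends(trials, n_trends=2, iters=20, seed=0):
    """trials: (n_trials, n_frames) array of a scalar motion measure per frame.
    Returns integer trend labels per trial and frame via 1-D k-means run
    independently at each frame; labels are ordered by cluster center so a
    trial can switch trends over time."""
    rng = np.random.default_rng(seed)
    n_trials, n_frames = trials.shape
    labels = np.zeros((n_trials, n_frames), dtype=int)
    for f in range(n_frames):
        x = trials[:, f]
        centers = rng.choice(x, n_trends, replace=False)
        for _ in range(iters):
            lab = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            for k in range(n_trends):
                if np.any(lab == k):
                    centers[k] = x[lab == k].mean()
        rank = np.argsort(np.argsort(centers))  # make trend ids comparable across frames
        labels[:, f] = rank[lab]
    return labels

# Six synthetic trials: three hover near 0 and three near 10 at every frame.
rng = np.random.default_rng(1)
trials = np.vstack([np.zeros((3, 6)), 10*np.ones((3, 6))]) + 0.1*rng.normal(size=(6, 6))
labels = framewise_trends(trials)
```

Ordering the labels by cluster center at every frame is what makes the labels stable enough to draw trend bands across time.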

  9. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
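The observer-motion estimate described here combines image registration with Kalman filtering. Assuming registration has already produced noisy frame-to-frame offsets, a constant-velocity Kalman filter over those offsets might look like the following sketch; the state model and noise parameters are assumptions for illustration, not the paper's values:

```python
import numpy as np

def kalman_track(offsets, dt=1.0, q=0.01, r=1.0):
    """Constant-velocity Kalman filter over noisy 1-D registration offsets.
    Returns the smoothed position estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                     # only position is measured
    Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])
    R = np.array([[r]])
    x = np.array([[offsets[0]], [0.0]])
    P = np.eye(2)
    est = []
    for z in offsets:
        x = F @ x
        P = F @ P @ F.T + Q                        # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)      # update with the new offset
        P = (np.eye(2) - K @ H) @ P
        est.append(float(x[0, 0]))
    return np.array(est)

# Camera drifting at 0.5 px/frame, observed through noisy registration.
rng = np.random.default_rng(0)
true = 0.5 * np.arange(60.0)
meas = true + rng.normal(0.0, 1.0, true.size)
est = kalman_track(meas)
```

After a short burn-in the smoothed track is substantially closer to the true drift than the raw registration offsets, which is the point of filtering before subtracting observer motion.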

  10. A neural model of motion processing and visual navigation by cortical area MST.

    PubMed

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
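The model computes heading neurally, but the underlying geometric fact, that for pure translation the flow field radiates from a focus of expansion whose image position gives the heading, can be illustrated with a least-squares estimate. This is a geometric sketch of that fact, not the MSTd model itself:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus of expansion: for pure observer translation each
    flow vector is collinear with the ray from the FoE to its image point,
    i.e. (p - f) x v = 0 for every point p with flow v."""
    A = np.column_stack([-flows[:, 1], flows[:, 0]])
    b = points[:, 1]*flows[:, 0] - points[:, 0]*flows[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic expansion flow radiating from a heading point at (2, 3).
rng = np.random.default_rng(2)
pts = rng.uniform(-5, 5, size=(30, 2))
foe_true = np.array([2.0, 3.0])
flows = 0.1 * (pts - foe_true)
foe = focus_of_expansion(pts, flows)
```

With noise-free expansion flow the recovery is exact; the model's contribution is showing how a population of MSTd-like cells approximates this without explicit templates.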

  11. Space motion sickness preflight adaptation training: preliminary studies with prototype trainers

    NASA Technical Reports Server (NTRS)

    Parker, D. E.; Rock, J. C.; von Gierke, H. E.; Ouyang, L.; Reschke, M. F.; Arrott, A. P.

    1987-01-01

    Preflight training frequently has been proposed as a potential solution to the problem of space motion sickness. The paper considers successively the otolith reinterpretation, the concept for a preflight adaptation trainer and the research with the Miami University Seesaw, the Wright Patterson Air-Force Base Dynamic Environment Simulator and the Visually Coupled Airborne Systems Simulator prototype adaptation trainers.

  12. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.
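The multi-window hardware is specific to the University of Tokyo processor, but the core idea, computing a motion cue only inside a small movable window and re-centering the window on the strongest response, can be sketched in a few lines. The window size, search radius, and frame-difference cue are illustrative choices, not the system's actual operators:

```python
import numpy as np

def window_motion(prev, curr, center, size):
    """Mean absolute frame difference inside a square window (a crude motion cue)."""
    r, c = center
    h = size // 2
    a = prev[r-h:r+h+1, c-h:c+h+1].astype(float)
    b = curr[r-h:r+h+1, c-h:c+h+1].astype(float)
    return np.abs(b - a).mean()

def track_window(prev, curr, center, size=3, search=2):
    """Re-center the window on the nearby position with the most motion energy."""
    best, best_center = -1.0, center
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = (center[0] + dr, center[1] + dc)
            m = window_motion(prev, curr, cand, size)
            if m > best:
                best, best_center = m, cand
    return best_center

# A single bright pixel moves from row 10 to row 12; the window follows it down.
prev = np.zeros((20, 20)); prev[10, 10] = 1.0
curr = np.zeros((20, 20)); curr[12, 10] = 1.0
new_center = track_window(prev, curr, (10, 10))
```

Because only a handful of pixels are ever touched per frame, this style of local processing is what made the system fast enough for the pong and driving applications.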

  13. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    PubMed

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  14. Visual cueing aids for rotorcraft landings

    NASA Technical Reports Server (NTRS)

    Johnson, Walter W.; Andre, Anthony D.

    1993-01-01

    The present study used a rotorcraft simulator to examine descents-to-hover at landing pads with one of three approach lighting configurations. The impact of simulator platform motion upon descents to hover was also examined. The results showed that the configuration with the most useful optical information led to the slowest final approach speeds, and that pilots found this configuration, together with the presence of simulator platform motion, most desirable. The results also showed that platform motion led to higher rates of approach to the landing pad in some cases. Implications of the results for the design of vertiport approach paths are discussed.

  15. Determination of prospective displacement-based gate threshold for respiratory-gated radiation delivery from retrospective phase-based gate threshold selected at 4D CT simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedam, S.; Archambault, L.; Starkschall, G.

    2007-11-15

Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of the tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory-gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case.
Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For the patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8 ± 11% and 14 ± 21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined from the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, the corresponding differences are 4 ± 7% and 8 ± 15% with and without audio-visual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined from the maximum respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between the retrospective simulation gate threshold and the prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.
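The duty-cycle matching step can be made concrete. Assuming the beam is gated on when the external-monitor displacement is at or below a threshold (i.e., gating near end-exhale, which is an assumption of this sketch, not a detail from the abstract), choosing the threshold as the corresponding quantile of the displacement trace reproduces a target duty cycle; the breathing trace below is synthetic:

```python
import numpy as np

def delivery_gate_threshold(trace, duty_cycle):
    """Displacement threshold whose beam-on region (samples at or below it)
    covers the requested fraction of the external-monitor trace."""
    return float(np.quantile(trace, duty_cycle))

def measured_duty_cycle(trace, threshold):
    return float(np.mean(trace <= threshold))

# Synthetic regular breathing (~4 s period) with a little measurement noise.
rng = np.random.default_rng(1)
t = np.arange(0, 120, 0.1)
trace = 0.5*(1 - np.cos(2*np.pi*t/4)) + 0.001*rng.normal(size=t.size)
thr = delivery_gate_threshold(trace, 0.4)
```

The quantile is effectively the fixed point of the iterative search the paper describes: raising or lowering the threshold until the observed beam-on fraction matches the duty cycle estimated at simulation.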

  16. Human comfort response to random motions with a dominant transverse motion

    NASA Technical Reports Server (NTRS)

    Stone, R. W., Jr.

    1975-01-01

Subjective ride comfort response ratings were measured on the Langley Visual Motion Simulator with transverse acceleration inputs having various power-spectrum shapes and magnitudes. The results show little influence of spectrum shape on comfort response. The effects of magnitude on comfort response indicate the applicability of psychophysical precepts for comfort modeling.

  17. Human comfort response to random motions with a dominant longitudinal motion

    NASA Technical Reports Server (NTRS)

    Stone, R. W., Jr.

    1975-01-01

Subjective ride comfort response ratings were measured on the Langley Visual Motion Simulator with longitudinal acceleration inputs having various power-spectrum shapes and magnitudes. The results show little influence of spectrum shape on comfort response. The effects of magnitude on comfort response indicate the applicability of psychophysical precepts for comfort modeling.

18. Human comfort response to random motions with a dominant rolling motion

    NASA Technical Reports Server (NTRS)

    Stone, R. W., Jr.

    1975-01-01

Subjective ride comfort response ratings were measured on a visual motion simulator with rolling velocity inputs having various power-spectrum shapes and magnitudes. The results show little influence of spectrum shape on comfort response. The effects of magnitude on comfort response indicate the applicability of psychophysical precepts for comfort modeling.

  19. Neural dynamics of motion processing and speed discrimination.

    PubMed

    Chey, J; Grossberg, S; Mingolla, E

    1998-09-01

A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning that realizes a size-speed correlation can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density, and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.

  20. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive fields of neurons in V1 for the motion perception of the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises a motion perception quality index and a spatial quality index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from a difference-of-Gaussian filter bank, which produces the motion perception quality index, and a gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
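The spatial index described here rests on a gradient-similarity measure. A common form of such a measure (the exact formulation and constant below are assumptions, not the paper's) compares gradient magnitudes pointwise between the reference and distorted frames:

```python
import numpy as np

def gradient_similarity(ref, dist, c=1e-4):
    """Mean pointwise similarity of gradient magnitudes between two frames;
    1.0 means identical local structure.  c stabilizes flat regions."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.sqrt(gx**2 + gy**2)
    g1, g2 = grad_mag(ref), grad_mag(dist)
    return float(((2*g1*g2 + c) / (g1**2 + g2**2 + c)).mean())

ref = np.outer(np.arange(16.0), np.ones(16))          # smooth ramp image
noisy = ref + np.random.default_rng(0).normal(0, 0.5, ref.shape)
```

An undistorted frame scores exactly 1.0 and any gradient distortion pulls the score below 1, giving a spatial quality index that can be pooled with the motion-energy-based temporal index.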

  1. Simulation System Fidelity Assessment at the Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Beard, Steven D.; Reardon, Scott E.; Tobias, Eric L.; Aponso, Bimal L.

    2013-01-01

Fidelity is a word that is often used but rarely understood when talking about ground-based simulation. Assessing the cueing fidelity of a ground-based flight simulator requires a comparison to actual flight data, either directly or indirectly. Two experiments were conducted at the Vertical Motion Simulator using the GenHel UH-60A Black Hawk helicopter math model, which was directly compared to flight data. Prior to the experiment, the simulator's motion and visual system frequency responses were measured, the aircraft math model was adjusted to account for the simulator motion system delays, and the motion system gains and washouts were tuned for the individual tasks. The tuned motion system fidelity was then assessed against the modified Sinacori criteria. The first experiment showed similar handling qualities ratings (HQRs) to actual flight for bob-up and sidestep maneuvers. The second experiment showed equivalent HQRs between flight and simulation for the ADS-33 slalom maneuver for the two pilot participants. The ADS-33 vertical maneuver HQRs were mixed, with one pilot rating the flight and simulation the same while the second pilot rated the simulation worse. In addition to recording HQRs in the second experiment, an experimental Simulation Fidelity Rating (SFR) scale developed by the University of Liverpool was tested for applicability to engineering simulators. A discussion of the SFR scale for use on the Vertical Motion Simulator is included in this paper.

  2. Rotorcraft Research at the NASA Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Aponso, Bimal Lalith; Tran, Duc T.; Schroeder, Jeffrey A.

    2009-01-01

In the 1970s the role of the military helicopter evolved to encompass more demanding missions, including low-level nap-of-the-earth flight and operation in severely degraded visual environments. The Vertical Motion Simulator (VMS) at the NASA Ames Research Center was built to provide a high-fidelity simulation capability for researching new rotorcraft concepts and technologies that could satisfy these mission requirements. The VMS combines a high-fidelity large-amplitude motion system with an adaptable simulation environment, including interchangeable and configurable cockpits. In almost 30 years of operation, rotorcraft research on the VMS has contributed significantly to the knowledge base on rotorcraft performance, handling qualities, flight control, and guidance and displays. These contributions have directly benefited current rotorcraft programs and flight safety. The high-fidelity motion system in the VMS was also used to research simulation fidelity. This research provided a fundamental understanding of pilot cueing modalities and their effect on simulation fidelity.

  3. Modeling a space-variant cortical representation for apparent motion.

    PubMed

    Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash

    2013-08-06

    Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
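The linear growth of Dmax with eccentricity follows directly if a fixed cortical distance is mapped back through an inverse-linear magnification function M(e) = M0/(1 + e/E2), a standard cortical-magnification assumption; the constants below are placeholders for illustration, not the model's fitted values:

```python
def dmax(eccentricity_deg, d_cortex_mm=1.0, M0=8.0, E2=2.5):
    """Visual-field span (deg) of a fixed cortical distance under inverse-linear
    cortical magnification M(e) = M0 / (1 + e/E2), in mm of cortex per degree.
    All constants are hypothetical placeholders."""
    M = M0 / (1 + eccentricity_deg / E2)
    return d_cortex_mm / M

# Equal eccentricity steps give equal Dmax steps, i.e. linear growth.
vals = [dmax(e) for e in (0, 5, 10)]
```

Since d/M(e) = d*(1 + e/E2)/M0 is affine in e, a constant cortical separation corresponds to a visual-field separation that grows linearly with eccentricity, matching the qualitative Dmax trend the model reproduces.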

  4. Destabilizing effects of visual environment motions simulating eye movements or head movements

    NASA Technical Reports Server (NTRS)

    White, Keith D.; Shuman, D.; Krantz, J. H.; Woods, C. B.; Kuntz, L. A.

    1991-01-01

In the present paper, we explore the effects on the human user of exposure to a visual virtual environment that has been slaved to simulate the user's head movements or eye movements. Specifically, we have studied the capacity of our experimental subjects to maintain stable spatial orientation while their entire visible surroundings were moved using the parameters of the subjects' natural movements. Our index of the subjects' spatial orientation was the extent of involuntary sway of the body while attempting to stand still, as measured by translations and rotations of the head. We also observed, informally, their symptoms of motion sickness.

  5. The Vestibular System and Human Dynamic Space Orientation

    NASA Technical Reports Server (NTRS)

    Meiry, J. L.

    1966-01-01

The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed-loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors, and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in a simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion, and combined. Motion cues, sensed by the vestibular system and through tactile sensation, enable the operator to generate more lead compensation than in fixed-base simulation with only visual input.
The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.
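    The lag-lead description of the neck-driven compensatory eye movements can be illustrated with a minimal discrete-time simulation of a lag-lead transfer function; the time constants below are hypothetical, not Meiry's fitted values:

    ```python
    def lag_lead_step_response(t_lead, t_lag, duration=5.0, dt=0.001):
        """Unit-step response of H(s) = (t_lead*s + 1)/(t_lag*s + 1),
        simulated by decomposing H into a direct feedthrough of gain
        t_lead/t_lag plus a first-order lag, then Euler-integrating the
        lag state. The response jumps to t_lead/t_lag immediately and
        settles toward 1."""
        k = t_lead / t_lag
        z = 0.0  # first-order lag state
        out = []
        for _ in range(int(duration / dt)):
            y = k * 1.0 + z          # output = feedthrough + lag state
            out.append(y)
            z += dt * ((1.0 - k) * 1.0 - z) / t_lag
        return out
    ```

    With a lag-dominant pair (e.g. t_lead = 0.1 s, t_lag = 0.5 s) the step response starts at 0.2 and converges to 1, the qualitative signature of a lag-lead element.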

  6. Integration of visual and motion cues for simulator requirements and ride quality investigation. [computerized simulation of aircraft landing, visual perception of aircraft pilots]

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1975-01-01

    Preliminary tests and evaluation are presented of pilot performance during landing (flight paths) using computer-generated images (video tapes). Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.

  7. Visual perception of axes of head rotation

    PubMed Central

    Arnoldussen, D. M.; Goossens, J.; van den Berg, A. V.

    2013-01-01

    Registration of ego-motion is important to accurately navigate through space. Movements of the head and eye relative to space are registered through the vestibular system and optical flow, respectively. Here, we address three questions concerning the visual registration of self-rotation. (1) Eye-in-head movements provide a link between the motion signals received by sensors in the moving eye and sensors in the moving head. How are these signals combined into an ego-rotation percept? We combined optic flow of simulated forward and rotational motion of the eye with different levels of eye-in-head rotation for a stationary head. We dissociated simulated gaze rotation and head rotation by different levels of eye-in-head pursuit. We found that perceived rotation matches simulated head rotation, not gaze rotation. This rejects a model for perceived self-rotation that relies on the rotation of the gaze line. Rather, eye-in-head signals serve to transform the optic flow's rotation information, which specifies rotation of the scene relative to the eye, into a rotation relative to the head. This suggests that transformed visual self-rotation signals may combine with vestibular signals. (2) Do transformed visual self-rotation signals reflect the arrangement of the semicircular canals (SCC)? Previously, we found sub-regions within MST and V6+ that respond to the speed of the simulated head rotation. Here, we re-analyzed those blood oxygenation level-dependent (BOLD) signals for the presence of a spatial dissociation related to the axes of visually simulated head rotation, such as has been found in sub-cortical regions of various animals. On the contrary, we found a rather uniform BOLD response to simulated rotation about the three SCC axes. (3) We investigated whether subjects' sensitivity to the direction of the head rotation axis shows SCC-axis specificity.
We found that sensitivity to head rotation is rather uniformly distributed, suggesting that in human cortex, visuo-vestibular integration is not arranged into the SCC frame. PMID:23919087

  8. ANOPP/VMS HSCT ground contour system

    NASA Technical Reports Server (NTRS)

    Rawls, John, Jr.; Glaab, Lou

    1992-01-01

    This viewgraph shows the integration of the Visual Motion Simulator with ANOPP. ANOPP is an acronym for the Aircraft NOise Prediction Program. It is a computer code consisting of dedicated noise prediction modules for jet, propeller, and rotor powered aircraft along with flight support and noise propagation modules, all executed under the control of an executive system. The Visual Motion Simulator (VMS) is a ground based motion simulator with six degrees of freedom. The transport-type cockpit is equipped with conventional flight and engine-thrust controls and with flight instrument displays. Control forces on the wheel, column, and rudder pedals are provided by a hydraulic system coupled with an analog computer. The simulator provides variable-feel characteristics of stiffness, damping, coulomb friction, breakout forces, and inertia. The VMS provides a wide range of realistic flight trajectories necessary for computing accurate ground contours. The NASA VMS will be discussed in detail later in this presentation. An equally important part of the system for both ANOPP and VMS is the engine performance. This will also be discussed in the presentation.
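    The six-degree-of-freedom motion base described above is a hexapod: commanding a cockpit pose reduces to solving for each actuator's length. A generic inverse-kinematics sketch follows; the attachment geometry and the restriction to translation-plus-yaw are illustrative choices, not the VMS's actual layout:

    ```python
    import numpy as np

    def leg_lengths(base_pts, platform_pts, translation, yaw):
        """Inverse kinematics of a six-legged (Stewart) motion platform:
        each actuator length is the distance from its base attachment
        point to the corresponding platform attachment point after the
        commanded pose (translation plus yaw only here, for brevity) is
        applied to the platform."""
        c, s = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[c, -s, 0.0],
                       [s,  c, 0.0],
                       [0.0, 0.0, 1.0]])        # yaw rotation matrix
        moved = platform_pts @ Rz.T + np.asarray(translation)
        return np.linalg.norm(moved - base_pts, axis=1)
    ```

    A full six-DOF command would use the complete roll-pitch-yaw rotation matrix in place of Rz; the actuator servos then track the six resulting lengths.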

  9. Visual preference for isochronic movement does not necessarily emerge from movement kinematics: a challenge for the motor simulation theory.

    PubMed

    Bidet-Ildei, Christel; Méary, David; Orliaguet, Jean-Pierre

    2008-01-17

    The aim of this experiment was to show that the visual preference for isochronic movements does not necessarily imply a motor simulation and, therefore, does not depend on the kinematics of the perceived movement. To demonstrate this point, the participants' task was to adjust the velocity (the period) of a dot that traced an elliptic motion with different perimeters (from 3 to 60 cm). The velocity profile of the movement either conformed ("natural motions") or did not conform ("unnatural motions") to the law of covariation between velocity and curvature (the two-thirds power law), which is usually observed in the production of elliptic movements. For each condition, we evaluated the isochrony principle, i.e., the tendency to prefer constant durations of movement irrespective of changes in the trajectory perimeter. Our findings indicate that the isochrony principle was observed whatever the kinematics of the movement (natural or unnatural). They therefore suggest that the perceptive preference for isochronic movements does not systematically imply a motor simulation.
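    The two-thirds power law referenced above states that tangential velocity scales with curvature to the power -1/3 (equivalently, angular velocity scales with curvature to the power 2/3). A minimal sketch of how a "natural" elliptic stimulus could be generated (the gain K is arbitrary here):

    ```python
    import numpy as np

    def two_thirds_velocity(curvature, K=1.0):
        """Tangential velocity predicted by the two-thirds power law:
        v = K * curvature**(-1/3). High-curvature arc segments are
        traversed slowly, flat segments quickly."""
        return K * np.power(curvature, -1.0 / 3.0)

    def ellipse_curvature(a, b, t):
        """Curvature of the ellipse x = a*cos(t), y = b*sin(t) at
        parameter t: kappa = a*b / (a^2 sin^2 t + b^2 cos^2 t)^(3/2)."""
        return (a * b) / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5
    ```

    For a > b, curvature peaks at the ends of the major axis (t = 0), so the law predicts the dot slows there and speeds up along the flatter sides, which is the kinematic signature the "natural" condition reproduced.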

  10. Effects of Different Heave Motion Components on Pilot Pitch Control Behavior

    NASA Technical Reports Server (NTRS)

    Zaal, Petrus M. T.; Zavala, Melinda A.

    2016-01-01

    The study described in this paper had two objectives. The first objective was to investigate if a different weighting of heave motion components decomposed at the center of gravity, allowing for a higher fidelity of individual components, would result in pilot manual pitch control behavior and performance closer to that observed with full aircraft motion. The second objective was to investigate if decomposing the heave components at the aircraft's instantaneous center of rotation rather than at the center of gravity could result in additional improvements in heave motion fidelity. Twenty-one general aviation pilots performed a pitch attitude control task in an experiment conducted on the Vertical Motion Simulator at NASA Ames under different hexapod motion conditions. The large motion capability of the Vertical Motion Simulator also allowed for a full aircraft motion condition, which served as a baseline. The controlled dynamics were of a transport category aircraft trimmed close to the stall point. When the ratio of center of gravity pitch heave to center of gravity heave increased in the hexapod motion conditions, pilot manual control behavior and performance became increasingly more similar to what is observed with full aircraft motion. Pilot visual and motion gains significantly increased, while the visual lead time constant decreased. The pilot visual and motion time delays remained approximately constant and decreased, respectively. The neuromuscular damping and frequency both decreased, with their values more similar to what is observed with real aircraft motion when there was an equal weighting of the heave of the center of gravity and heave due to rotations about the center of gravity. In terms of open-loop performance, the disturbance and target crossover frequency increased and decreased, respectively, and their corresponding phase margins remained constant and increased, respectively.
The decomposition point of the heave components only had limited effects on pilot manual control behavior and performance.
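    The open-loop crossover frequencies and phase margins reported above are the standard metrics of the McRuer crossover model of pilot-vehicle control. A sketch of how they follow from a crossover gain and an effective time delay (the numeric values used below are illustrative, not the experiment's estimates):

    ```python
    import math

    def crossover_metrics(omega_c, tau):
        """For the crossover-model open loop
        Y_ol(s) = omega_c * exp(-tau*s) / s,
        the magnitude is omega_c / w, so gain crossover sits at
        w = omega_c. The phase there is -pi/2 - omega_c*tau rad, giving
        a phase margin of pi/2 - omega_c*tau (returned in degrees)."""
        crossover_freq = omega_c
        phase_margin_deg = math.degrees(math.pi / 2 - omega_c * tau)
        return crossover_freq, phase_margin_deg
    ```

    The model makes the paper's trade-offs easy to read: a larger effective delay eats directly into phase margin, while raising pilot gain moves crossover up at the cost of margin.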

  11. SOCR "Motion Charts": An Efficient, Open-Source, Interactive and Dynamic Applet for Visualizing Longitudinal Multivariate Data

    ERIC Educational Resources Information Center

    Al-Aziz, Jameel; Christou, Nicolas; Dinov, Ivo D.

    2010-01-01

    The amount, complexity and provenance of data have dramatically increased in the past five years. Visualization of observed and simulated data is a critical component of any social, environmental, biomedical or scientific quest. Dynamic, exploratory and interactive visualization of multivariate data, without preprocessing by dimensionality…

  12. Comparison of the visual perception of a runway model in pilots and nonpilots during simulated night landing approaches.

    DOT National Transportation Integrated Search

    1978-03-01

    At night, reduced visual cues may promote illusions and a dangerous tendency for pilots to fly low during approaches to landing. Relative motion parallax (a difference in rate of apparent movement of objects in the visual field), a cue that can contr...

  13. Enhanced Ultrasound Visualization of Brachytherapy Seeds by a Novel Magnetically Induced Motion Imaging Method

    DTIC Science & Technology

    2008-04-01

    We report our progress in developing Magnetically Induced Motion Imaging (MIMI) for unambiguous identification and localization of brachytherapy seeds in ultrasound images. Coupled finite element and ultrasound imaging simulations have been performed to demonstrate that seeds are detectable with MIMI.

  14. An investigation of motion base cueing and G-seat cueing on pilot performance in a simulator

    NASA Technical Reports Server (NTRS)

    Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.

    1983-01-01

    The effect of G-seat cueing (GSC) and motion-base cueing (MBC) on performance of a pursuit-tracking task is studied using the visual motion simulator (VMS) at Langley Research Center. The G-seat, the six-degree-of-freedom synergistic platform motion system, the visual display, the cockpit hardware, and the F-16 aircraft mathematical model are characterized. Each of 8 active F-15 pilots performed the 2-min-43-sec task 10 times for each experimental mode: no cue, GSC, MBC, and GSC + MBC; the results were analyzed statistically in terms of the RMS values of vertical and lateral tracking error. It is shown that lateral error is significantly reduced by either GSC or MBC, and that the combination of cues produces a further, significant decrease. Vertical error is significantly decreased by GSC with or without MBC, whereas MBC effects vary for different pilots. The pattern of these findings is roughly duplicated in measurements of stick force applied for roll and pitch correction.

  15. Comparisons of Kinematics and Dynamics Simulation Software Tools

    NASA Technical Reports Server (NTRS)

    Shiue, Yeu-Sheng Paul

    2002-01-01

    Kinematic and dynamic analyses for moving bodies are essential to system engineers and designers in the process of design and validation. 3D visualization and motion simulation plus finite element analysis (FEA) give engineers a better way to present ideas and results. Marshall Space Flight Center (MSFC) system engineering researchers are currently using IGRIP from DELMIA Inc. as a kinematic simulation tool for discrete-body motion simulations. Although IGRIP is an excellent tool for kinematic simulation with some dynamic analysis capabilities in robotic control, explorations of other alternatives with more powerful dynamic analysis and FEA capabilities are necessary. Kinematic analysis only examines the displacement, velocity, and acceleration of the mechanism without considering effects from the masses of components. With dynamic analysis and FEA, effects such as the forces or torques at the joints due to the mass and inertia of components can be identified. With keen market competition, ALGOR Mechanical Event Simulation (MES), MSC visualNastran 4D, Unigraphics Motion+, and Pro/MECHANICA were chosen for exploration. In this study, comparisons between software tools were presented in terms of the following categories: graphical user interface (GUI), import capability, tutorial availability, ease of use, kinematic simulation capability, dynamic simulation capability, FEA capability, graphical output, technical support, and cost. The Propulsion Test Article (PTA) with Fastrac engine model exported from IGRIP and an office chair mechanism were used as examples for simulations.

  16. Rotary acceleration of a subject inhibits choice reaction time to motion in peripheral vision

    NASA Technical Reports Server (NTRS)

    Borkenhagen, J. M.

    1974-01-01

    Twelve pilots were tested in a rotation device with visual simulation, alone and in combination with rotary stimulation, in experiments with variable levels of acceleration and variable viewing angles, in a study of the effect of S's rotary acceleration on the choice reaction time for an accelerating target in peripheral vision. The pilots responded to the direction of the visual motion by moving a hand controller to the right or left. Visual-plus-rotary stimulation required a longer choice reaction time, which was inversely related to the level of acceleration and directly proportional to the viewing angle.

  17. Virgil Gus Grissom's Visit to LaRC

    NASA Image and Video Library

    1963-02-22

    Astronaut Virgil "Gus" Grissom at the controls of the Visual Docking Simulator. From A.W. Vogeley, "Piloted Space-Flight Simulation at Langley Research Center," Paper presented at the American Society of Mechanical Engineers 1966 Winter Meeting, New York, NY, November 27-December 1, 1966. "This facility was [later known as the Visual-Optical Simulator]. It presents to the pilot an out-the-window view of his target in correct 6 degrees of freedom motion. The scene is obtained by a television camera pick-up viewing a small-scale gimbaled model of the target." "For docking studies, the docking target picture was projected onto the surface of a 20-foot-diameter sphere and the pilot could, effectively, maneuver into contact. This facility was used in a comparison study with the Rendezvous Docking Simulator - one of the few comparison experiments in which conditions were carefully controlled and a reasonable sample of pilots used. All pilots preferred the more realistic RDS visual scene. The pilots generally liked the RDS angular motion cues, although some objected to the false gravity cues that these motions introduced. Training time was shorter on the RDS, but final performance on both simulators was essentially equal." "For station-keeping studies, since close approach is not required, the target was presented to the pilot through a virtual-image system which projects his view to infinity, providing a more realistic effect. In addition to the target, the system also projects a star and horizon background."

  18. Breaking cover: neural responses to slow and fast camouflage-breaking motion.

    PubMed

    Yin, Jiapeng; Gong, Hongliang; An, Xu; Chen, Zheyuan; Lu, Yiliang; Andolina, Ian M; McLoughlin, Niall; Wang, Wei

    2015-08-22

    Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high speed camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that the initial processing of camouflaged motion at low and high speeds is encoded as direction and motion-streak signals in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflage-breaking animals. © 2015 The Authors.

  19. Breaking cover: neural responses to slow and fast camouflage-breaking motion

    PubMed Central

    Yin, Jiapeng; Gong, Hongliang; An, Xu; Chen, Zheyuan; Lu, Yiliang; Andolina, Ian M.; McLoughlin, Niall; Wang, Wei

    2015-01-01

    Primates need to detect and recognize camouflaged animals in natural environments. Camouflage-breaking movements are often the only visual cue available to accomplish this. Specifically, sudden movements are often detected before full recognition of the camouflaged animal is made, suggesting that initial processing of motion precedes the recognition of motion-defined contours or shapes. What are the neuronal mechanisms underlying this initial processing of camouflaged motion in the primate visual brain? We investigated this question using intrinsic-signal optical imaging of macaque V1, V2 and V4, along with computer simulations of the neural population responses. We found that camouflaged motion at low speed was processed as a direction signal by both direction- and orientation-selective neurons, whereas at high speed camouflaged motion was encoded as a motion-streak signal primarily by orientation-selective neurons. No population responses were found to be invariant to the camouflage contours. These results suggest that the initial processing of camouflaged motion at low and high speeds is encoded as direction and motion-streak signals in primate early visual cortices. These processes are consistent with a spatio-temporal filter mechanism that provides for fast processing of motion signals, prior to full recognition of camouflage-breaking animals. PMID:26269500

  20. The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices

    PubMed Central

    An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei

    2014-01-01

    All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that, the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using vector maximum but not vector summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033
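    The direction-to-axis transition speeds above admit a back-of-the-envelope reading: a moving dot begins to paint an oriented "motion streak" once it crosses roughly one receptive-field width within a single temporal integration window. A sketch of that heuristic (the receptive-field widths and integration time below are illustrative, not the paper's fitted parameters):

    ```python
    def streak_transition_speed(rf_width_deg, integration_time_s):
        """Speed above which a moving dot smears into an oriented streak
        within one temporal integration window: v* ~ rf_width / t_int,
        in degrees of visual angle per second. Larger receptive fields
        (e.g. area 18 vs. area 17) imply a higher transition speed."""
        return rf_width_deg / integration_time_s
    ```

    Under this reading, the higher transition speed in area 18 than in area 17 would follow directly from its larger receptive fields; the paper's spatio-temporal energy model makes the same prediction quantitatively.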

  1. Translation and Rotation Trade Off in Human Visual Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    We have previously shown that, during simulated curvilinear motion, humans can make reasonably accurate and precise heading judgments from optic flow without either oculomotor or static-depth cues about rotation. We now systematically investigate the effect of varying the parameters of self-motion. We visually simulated 400 ms of self-motion along curved paths (constant rotation and translation rates, fixed retinocentric heading) towards two planes of random dots at 10.3 m and 22.3 m at mid-trial. Retinocentric heading judgments of 4 observers (2 naive) were measured for 12 different combinations of translation (T between 4 and 16 m/s) and rotation (R either 8 or 16 deg/s). In the range tested, heading bias and uncertainty decrease quasilinearly with T/R, but the bias also appears to depend on R. If depth is held constant, the ratio T/R can account for much of the variation in the accuracy and precision of human visual heading estimation, although further experiments are needed to resolve whether absolute rotation rate, total flow rate, or some other factor can account for the observed -2 deg shift between the bias curves.
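    The T/R dependence has a simple geometric reading: for scene points at depth Z, adding yaw rotation shifts the flow field's singular point away from the true heading by an amount that scales with R·Z/T. A hedged small-angle sketch (our own illustration, not the authors' model of the observers' judgments):

    ```python
    def apparent_foe_shift(T, R, Z):
        """Small-angle pinhole sketch: the horizontal flow of a point at
        image position x on a frontoparallel plane at depth Z, under
        forward speed T plus yaw rate R, is approximately
        u(x) = x*T/Z - R. The apparent focus of expansion (u = 0)
        therefore sits at x = R*Z/T, so the displacement of the flow
        singularity from the true heading shrinks as T/R grows."""
        return R * Z / T
    ```

    This is one way to see why heading bias should fall quasilinearly with T/R at fixed depth, while still leaving room for a residual dependence on R alone, as the abstract notes.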

  2. A Document Visualization Tool Customized to Explore DRDC Reports (Un outil de visualisation de document concu precisement pour explorer les rapports de RDDC)

    DTIC Science & Technology

    2011-08-01

    Suppose a commander at CFB Shearwater wanted to find out more about how he/she can best deal with issues of pilots' motion sickness in the flight simulator on base. As a first step, one would enter "motion sickness" as a query in HanDles, and get the relevant documents returned.

  3. Effects of aging on perception of motion

    NASA Astrophysics Data System (ADS)

    Kaur, Manpreet; Wilder, Joseph; Hung, George; Julesz, Bela

    1997-09-01

    Driving requires two basic visual components: 'visual sensory function' and 'higher order skills.' Among the elderly, it has been observed that when attention must be divided in the presence of multiple objects, attentional skills and relational processes are markedly impaired, along with basic visual sensory function. A high-frame-rate imaging system was developed to assess the elderly driver's ability to locate and distinguish computer-generated images of vehicles and to determine their direction of motion in a simulated intersection. Preliminary experiments were performed at varying target speeds and angular displacements to study the effect of these parameters on motion perception. Results for subjects in four different age groups, ranging from mid-twenties to mid-sixties, show significantly better performance for the younger subjects as compared to the older ones.

  4. Correction of respiratory motion for IMRT using aperture adaptive technique and visual guidance: A feasibility study

    NASA Astrophysics Data System (ADS)

    Chen, Ho-Hsing; Wu, Jay; Chuang, Keh-Shih; Kuo, Hsiang-Chi

    2007-07-01

    Intensity-modulated radiation therapy (IMRT) utilizes nonuniform beam profiles to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that after appropriate training, the amplitude and duration of a volunteer's breathing can be well controlled by the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative without compromising delivery efficiency in clinical practice.
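    The visual guide and the breathing-synchronization check can be sketched in a few lines; the raised-cosine target pattern and the tolerance gate below are our illustrative choices, not the paper's implementation:

    ```python
    import math

    def guide_amplitude(t, period, amplitude):
        """Cyclic target pattern shown to the patient: a raised cosine
        rising from 0 to `amplitude` and back once per `period` seconds."""
        return amplitude * 0.5 * (1.0 - math.cos(2.0 * math.pi * t / period))

    def in_sync(measured, t, period, amplitude, tolerance):
        """Synchronization check: the leaf sequence is only advanced while
        the measured breathing amplitude tracks the guide pattern to
        within the given tolerance."""
        return abs(measured - guide_amplitude(t, period, amplitude)) <= tolerance
    ```

    With a regularized breathing trace, the precomputed leaf sequence can be phase-locked to `guide_amplitude`, which is the synchronization the abstract describes.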

  5. The role of the research simulator in the systems development of rotorcraft

    NASA Technical Reports Server (NTRS)

    Statler, I. C.; Deel, A.

    1981-01-01

    The potential application of the research simulator to future rotorcraft systems design, development, product improvement evaluations, and safety analysis is examined. Current simulation capabilities for fixed-wing aircraft are reviewed and the requirements of a rotorcraft simulator are defined. The visual system components, vertical motion simulator, cab, and computation system for a research simulator under development are described.

  6. SOCR Motion Charts: An Efficient, Open-Source, Interactive and Dynamic Applet for Visualizing Longitudinal Multivariate Data

    PubMed Central

    Al-Aziz, Jameel; Christou, Nicolas; Dinov, Ivo D.

    2011-01-01

    The amount, complexity and provenance of data have dramatically increased in the past five years. Visualization of observed and simulated data is a critical component of any social, environmental, biomedical or scientific quest. Dynamic, exploratory and interactive visualization of multivariate data, without preprocessing by dimensionality reduction, remains a nearly insurmountable challenge. The Statistics Online Computational Resource (www.SOCR.ucla.edu) provides portable online aids for probability and statistics education, technology-based instruction and statistical computing. We have developed a new Java-based infrastructure, SOCR Motion Charts, for discovery-based exploratory analysis of multivariate data. This interactive data visualization tool enables the visualization of high-dimensional longitudinal data. SOCR Motion Charts allows mapping of ordinal, nominal and quantitative variables onto time, 2D axes, size, colors, glyphs and appearance characteristics, which facilitates the interactive display of multidimensional data. We validated this new visualization paradigm using several publicly available multivariate datasets including Ice-Thickness, Housing Prices, Consumer Price Index, and California Ozone Data. SOCR Motion Charts is designed using object-oriented programming, implemented as a Java Web-applet and is available to the entire community on the web at www.socr.ucla.edu/SOCR_MotionCharts. It can be used as an instructional tool for rendering and interrogating high-dimensional data in the classroom, as well as a research tool for exploratory data analysis. PMID:21479108
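    The channel mapping Motion Charts performs can be caricatured in a few lines: group multivariate records by time, then map each entity onto the animated visual channels. The record fields and channel choices below are our own illustration, not SOCR's API:

    ```python
    def motion_chart_frames(records):
        """Group multivariate records into per-time 'frames', each mapping
        an entity name to the visual channels (x, y, size) that a motion
        chart animates frame by frame."""
        frames = {}
        for rec in records:
            frame = frames.setdefault(rec["time"], {})
            frame[rec["name"]] = (rec["x"], rec["y"], rec["size"])
        return frames
    ```

    A renderer would then interpolate each entity's channel tuple between consecutive frames to produce the smooth animation over the time variable.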

  7. Simulation validation of the XV-15 tilt-rotor research aircraft

    NASA Technical Reports Server (NTRS)

    Ferguson, S. W.; Hanson, G. D.; Churchill, G. B.

    1984-01-01

    The results of a simulation validation program of the XV-15 tilt-rotor research aircraft are detailed, covering such simulation aspects as the mathematical model, visual system, motion system, cab aural system, cab control loader system, pilot perceptual fidelity, and generic tilt rotor applications. Simulation validation was performed for the hover, low-speed, and sideward flight modes, with consideration of the in-ground rotor effect. Several deficiencies of the mathematical model and the simulation systems were identified in the course of the simulation validation project, and some were corrected. It is noted that NASA's Vertical Motion Simulator used in the program is an excellent tool for tilt-rotor and rotorcraft design, development, and pilot training.

  8. Multiple Teaching Approaches, Teaching Sequence and Concept Retention in High School Physics Education

    ERIC Educational Resources Information Center

    Fogarty, Ian; Geelan, David

    2013-01-01

    Students in 4 Canadian high school physics classes completed instructional sequences in two key physics topics related to motion--Straight Line Motion and Newton's First Law. Different sequences of laboratory investigation, teacher explanation (lecture) and the use of computer-based scientific visualizations (animations and simulations) were…

  9. Laser spectroscopic visualization of hydrogen bond motions in liquid water

    NASA Astrophysics Data System (ADS)

    Bratos, S.; Leicknam, J.-Cl.; Pommeret, S.; Gallot, G.

    2004-12-01

    Ultrafast pump-probe experiments are described permitting a visualization of molecular motions in diluted HDO/D2O solutions. The experiments were realized in the mid-infrared spectral region with a time resolution of 150 fs. They were interpreted by a careful theoretical analysis, based on the correlation function approach of statistical mechanics. Combining experiment and theory, stretching motions of the OH⋯O bonds as well as HDO rotations were 'filmed' in real time. It was found that molecular rotations are the principal agent of hydrogen bond breaking and making in water. Recent literature covering the subject, including molecular dynamics simulations, is reviewed in detail.

  10. Differential Responses to a Visual Self-Motion Signal in Human Medial Cortical Regions Revealed by Wide-View Stimulation

    PubMed Central

    Wada, Atsushi; Sakano, Yuichi; Ando, Hiroshi

    2016-01-01

    Vision is important for estimating self-motion, which is thought to involve optic-flow processing. Here, we investigated the fMRI response profiles in visual area V6, the precuneus motion area (PcM), and the cingulate sulcus visual area (CSv)—three medial brain regions recently shown to be sensitive to optic-flow. We used wide-view stereoscopic stimulation to induce robust self-motion processing. Stimuli included static, randomly moving, and coherently moving dots (simulating forward self-motion). We varied the stimulus size and the presence of stereoscopic information. A combination of univariate and multi-voxel pattern analyses (MVPA) revealed that fMRI responses in the three regions differed from each other. The univariate analysis identified optic-flow selectivity and an effect of stimulus size in V6, PcM, and CSv, among which only CSv showed a significantly lower response to random motion stimuli compared with static conditions. Furthermore, MVPA revealed an optic-flow specific multi-voxel pattern in the PcM and CSv, where the discrimination of coherent motion from both random motion and static conditions showed above-chance prediction accuracy, but that of random motion from static conditions did not. Additionally, while area V6 successfully classified different stimulus sizes regardless of motion pattern, this classification was only partial in PcM and was absent in CSv. This may reflect the known retinotopic representation in V6 and the absence of such clear visuospatial representation in CSv. We also found significant correlations between the strength of subjective self-motion and univariate activation in all examined regions except for primary visual cortex (V1). This neuro-perceptual correlation was significantly higher for V6, PcM, and CSv when compared with V1, and higher for CSv when compared with the visual motion area hMT+. 
Our convergent results suggest the significant involvement of CSv in self-motion processing, which may give rise to its percept. PMID:26973588

  11. Spatial perception predicts laparoscopic skills on virtual reality laparoscopy simulator.

    PubMed

    Hassan, I; Gerdes, B; Koller, M; Dick, B; Hellwig, D; Rothmund, M; Zielke, A

    2007-06-01

    This study evaluates the influence of visual-spatial perception on the laparoscopic performance of novices using a virtual reality simulator (LapSim(R)). Twenty-four novices completed standardized tests of visual-spatial perception (Lameris Toegepaste Natuurwetenschappelijk Onderzoek [TNO] Test(R) and Stumpf-Fay Cube Perspectives Test(R)), and their laparoscopic skills were assessed objectively while performing 1-h practice sessions on the LapSim(R), comprising coordination, cutting, and clip application tasks. Outcome variables included time to complete the tasks, economy of motion, and total error scores. The degree of visual-spatial perception correlated significantly with laparoscopic performance on the LapSim(R). Participants with a high degree of spatial perception (Group A) performed the tasks faster than those with a low degree of spatial perception (Group B) (p = 0.001). Individuals with a high degree of spatial perception also scored better for economy of motion (p = 0.021), tissue damage (p = 0.009), and total error (p = 0.007). Among novices, visual-spatial perception is associated with manual skills performed on a virtual reality simulator. This result may be important for educators developing training programs that can be individually adapted.

  12. On-chip visual perception of motion: a bio-inspired connectionist model on FPGA.

    PubMed

    Torres-Huitzil, César; Girau, Bernard; Castellanos-Sánchez, Claudio

    2005-01-01

    Visual motion provides useful information for understanding the dynamics of a scene, allowing intelligent systems to interact with their environment. Motion computation is usually constrained by real-time requirements that call for the design and implementation of specific hardware architectures. In this paper, the design of a hardware architecture for a bio-inspired neural model for motion estimation is presented. The motion estimation is based on a strongly localized bio-inspired connectionist model with a particular adaptation of spatio-temporal Gabor-like filtering. The architecture consists of three main modules that perform spatial, temporal, and excitatory-inhibitory connectionist processing. The biomimetic architecture is modeled, simulated, and validated in VHDL. Synthesis results on a Field Programmable Gate Array (FPGA) device show that real-time performance is achievable at an affordable silicon area.
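The spatio-temporal Gabor-like filtering mentioned in this record can be sketched in software as a Gaussian envelope multiplied by a drifting cosine carrier; this is a minimal illustration of the general technique, and the parameter values (frequencies, envelope widths) are assumptions, not those used in the paper's hardware implementation.

```python
import math

def st_gabor(x, t, fx=0.1, ft=0.2, sx=2.0, st=1.0):
    """Spatio-temporal Gabor filter value at spatial position x and time t.

    Gaussian envelope times a drifting cosine grating; such a filter
    responds best to image motion at velocity v = -ft/fx. All parameter
    values here are illustrative assumptions.
    """
    envelope = math.exp(-(x * x / (2 * sx * sx) + t * t / (2 * st * st)))
    carrier = math.cos(2 * math.pi * (fx * x + ft * t))
    return envelope * carrier

# Sample the filter on a small space-time grid, as a convolution
# kernel applied to an image sequence would be.
kernel = [[st_gabor(x, t) for x in range(-4, 5)] for t in range(-2, 3)]
```

In a hardware realization the same kernel would be decomposed into the separable spatial and temporal stages that the abstract describes as distinct modules.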

  13. Event processing in the visual world: Projected motion paths during spoken sentence comprehension.

    PubMed

    Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue

    2016-05-01

    Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. Ethylene glycol revisited: Molecular dynamics simulations and visualization of the liquid and its hydrogen-bond network

    PubMed Central

    Kaiser, Alexander; Ismailova, Oksana; Koskela, Antti; Huber, Stefan E.; Ritter, Marcel; Cosenza, Biagio; Benger, Werner; Nazmutdinov, Renat; Probst, Michael

    2014-01-01

    Molecular dynamics simulations of liquid ethylene glycol, described by the OPLS-AA force field, were performed to gain insight into its hydrogen-bond structure. We use the population correlation function as a statistical measure of the hydrogen-bond lifetime. To better understand the complicated hydrogen bonding, we developed new molecular visualization tools within the Vish Visualization shell and used them to visualize the life of each individual hydrogen bond. With this tool, hydrogen-bond formation and breaking, as well as clustering and chain formation in hydrogen-bonded liquids, can be observed directly. Liquid ethylene glycol at room temperature does not show significant clustering or chain building. Hydrogen bonds break frequently due to the rotational and vibrational motions of the molecules, leading to an H-bond half-life of approximately 1.5 ps. However, most of the H-bonds re-form, so that after 50 ps only 40% of them are irreversibly broken due to diffusional motion. The hydrogen-bond half-life due to diffusional motion is 80.3 ps. The work was preceded by a careful check of various OPLS-based force fields used in the literature; it was found that they lead to quite different angular and H-bond distributions. PMID:24748697
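The population correlation function used above as a lifetime measure can be sketched for binary bond trajectories h(t) (1 if a bond exists in a frame, 0 otherwise). The intermittent form C(t) = ⟨h(0)h(t)⟩/⟨h⟩ is a standard definition in the H-bond dynamics literature; the toy trajectories below are purely illustrative and not the authors' data or implementation.

```python
def population_correlation(h, max_lag):
    """Intermittent H-bond population correlation C(t) = <h(0)h(t)> / <h>.

    h: list of 0/1 bond-existence trajectories, one list per bond.
    Averages over all bonds and all time origins. Illustrative toy code,
    not the authors' implementation.
    """
    c = []
    for lag in range(max_lag + 1):
        num = den = 0
        for traj in h:
            for t0 in range(len(traj) - lag):
                num += traj[t0] * traj[t0 + lag]  # bond present at both times
                den += traj[t0]                   # bond present at the origin
        c.append(num / den if den else 0.0)
    return c

# Toy trajectories for two bonds: the second breaks and re-forms,
# which the intermittent definition counts as the same bond.
bonds = [[1, 1, 1, 1, 1, 0, 0, 0], [1, 0, 1, 1, 0, 1, 1, 0]]
C = population_correlation(bonds, 3)  # C[0] is 1 by construction
```

A half-life such as the 1.5 ps quoted above corresponds to the lag at which C(t) first drops to 0.5.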

  15. Feasibility and concept study to convert the NASA/AMES vertical motion simulator to a helicopter simulator

    NASA Technical Reports Server (NTRS)

    Belsterling, C. A.; Chou, R. C.; Davies, E. G.; Tsui, K. C.

    1978-01-01

    The conceptual design for converting the vertical motion simulator (VMS) to a multi-purpose aircraft and helicopter simulator is presented. A unique, high performance four degrees of freedom (DOF) motion system was developed to permanently replace the present six DOF synergistic system. The new four DOF system has the following outstanding features: (1) will integrate with the two large VMS translational modes and their associated subsystems; (2) can be converted from helicopter to fixed-wing aircraft simulation through software changes only; (3) interfaces with an advanced cab/visual display system of large dimensions; (4) makes maximum use of proven techniques, convenient materials and off-the-shelf components; (5) will operate within the existing building envelope without modifications; (6) can be built within the specified weight limit and avoid compromising VMS performance; (7) provides maximum performance with a minimum of power consumption; (8) simple design minimizes coupling between motions and maximizes reliability; and (9) can be built within existing budgetary figures.

  16. Effects of visual, seat, and platform motion during flight simulator air transport pilot training and evaluation

    DOT National Transportation Integrated Search

    2009-04-27

    Access to affordable and effective flight-simulation training devices (FSTDs) is critical to safely train airline crews in aviating, navigating, communicating, making decisions, and managing flight-deck and crew resources. This paper provides an over...

  17. The Role of Visual and Nonvisual Information in the Control of Locomotion

    ERIC Educational Resources Information Center

    Wilkie, Richard M.; Wann, John P.

    2005-01-01

    During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled…

  18. Rigid Body Motion in Stereo 3D Simulation

    ERIC Educational Resources Information Center

    Zabunov, Svetoslav

    2010-01-01

    This paper addresses the difficulties experienced by first-year students studying rigid body motion at Sofia University. Most quantities describing the rigid body are in relations that the students find hard to visualize and understand. They also lose the notion of cause-result relations between vector quantities, such as the relation between…

  19. Blurred digital mammography images: an analysis of technical recall and observer detection performance.

    PubMed

    Ma, Wang Kei; Borgen, Rita; Kelly, Judith; Millington, Sara; Hilton, Beverley; Aspin, Rob; Lança, Carla; Hogg, Peter

    2017-03-01

    Blurred images in full-field digital mammography are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on the lower resolution monitors used for review. This study assesses the visual detection of blurring on a 2.3-MP monitor and a 5-MP reporting grade monitor and proposes an observer standard for the visual detection of blurring on a 5-MP reporting grade monitor. 28 observers assessed 120 images for blurring; 20 images had no blurring present, whereas 100 images had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. The technical recall rate for both monitors and the angular size at each level of motion were calculated. χ² tests were used to test whether significant differences in blurring detection existed between the 2.3- and 5-MP monitors. The technical recall rates for the 2.3- and 5-MP monitors were 20.3% and 9.1%, respectively. The angular size for 0.2- to 1-mm motion varied from 55 to 275 arc s. The minimum amount of motion for visual detection of blurring in this study was 0.4 mm. For 0.2-mm simulated motion, there was no significant difference [χ²(1, N = 1095) = 1.61, p = 0.20] in blurring detection between the 2.3- and 5-MP monitors. According to this study, monitors ≤2.3 MP are not suitable for technical review of full-field digital mammography images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring.
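The reported angular sizes (55 to 275 arc s for 0.2 to 1 mm of motion) are consistent with the small-angle approximation at a viewing distance of about 750 mm; that distance is an inference from the reported numbers, not a value stated in the abstract.

```python
import math

def blur_angular_size_arcsec(blur_mm, viewing_distance_mm=750.0):
    """Visual angle subtended by a blur of blur_mm at a given viewing distance.

    Small-angle approximation: angle (rad) = size / distance, converted
    to arc seconds. The 750 mm default viewing distance is an assumption
    inferred from the reported 55-275 arc s range.
    """
    angle_rad = blur_mm / viewing_distance_mm
    return math.degrees(angle_rad) * 3600.0

# 0.2 mm of motion subtends about 55 arc s; 1.0 mm about 275 arc s.
sizes = {mm: blur_angular_size_arcsec(mm) for mm in (0.2, 0.4, 0.6, 0.8, 1.0)}
```

With a different assumed viewing distance the angles scale inversely, so the 750 mm figure should be treated only as the value that reproduces the published range.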

  20. Blurred digital mammography images: an analysis of technical recall and observer detection performance

    PubMed Central

    Borgen, Rita; Kelly, Judith; Millington, Sara; Hilton, Beverley; Aspin, Rob; Lança, Carla; Hogg, Peter

    2017-01-01

    Objective: Blurred images in full-field digital mammography are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on the lower resolution monitors used for review. This study assesses the visual detection of blurring on a 2.3-MP monitor and a 5-MP reporting grade monitor and proposes an observer standard for the visual detection of blurring on a 5-MP reporting grade monitor. Methods: 28 observers assessed 120 images for blurring; 20 images had no blurring present, whereas 100 images had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. The technical recall rate for both monitors and the angular size at each level of motion were calculated. χ² tests were used to test whether significant differences in blurring detection existed between the 2.3- and 5-MP monitors. Results: The technical recall rates for the 2.3- and 5-MP monitors were 20.3% and 9.1%, respectively. The angular size for 0.2- to 1-mm motion varied from 55 to 275 arc s. The minimum amount of motion for visual detection of blurring in this study was 0.4 mm. For 0.2-mm simulated motion, there was no significant difference [χ²(1, N = 1095) = 1.61, p = 0.20] in blurring detection between the 2.3- and 5-MP monitors. Conclusion: According to this study, monitors ≤2.3 MP are not suitable for technical review of full-field digital mammography images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring. PMID:28134567

  1. How to avoid simulation sickness in virtual environments during user displacement

    NASA Astrophysics Data System (ADS)

    Kemeny, A.; Colombet, F.; Denoual, T.

    2015-03-01

    Driving simulation (DS) and virtual reality (VR) share the same technologies for visualization and 3D vision and may use the same techniques for head-movement tracking. They also experience similar difficulties when rendering the displacements of the observer in virtual environments, especially when these displacements are carried out using driver commands, including steering wheels, joysticks, and nomad devices. High values of transport delay (the time lag between an action and the corresponding rendering cues) and/or visual-vestibular conflict (due to the discrepancies perceived by the human visual and vestibular systems when driving or displacing using a control device) induce so-called simulation sickness. While the visual transport delay can be efficiently reduced using a high frame rate, the visual-vestibular conflict is inherent to VR when motion platforms are not used. In order to study the impact of displacements on simulation sickness, we tested various driving scenarios in Renault's 5-sided ultra-high-resolution CAVE. First results indicate that low-speed displacements with longitudinal and lateral accelerations under given perception thresholds are well accepted by a large number of users; relatively high values are only accepted by experienced users and induce VR-induced symptoms and effects (VRISE) for novice users, with the worst-case scenario corresponding to rotational displacements. These results will be used in optimization techniques at Arts et Métiers ParisTech for motion-sickness reduction in virtual environments for industrial, research, educational, or gaming applications.

  2. Motion-base simulator results of advanced supersonic transport handling qualities with active controls

    NASA Technical Reports Server (NTRS)

    Feather, J. B.; Joshi, D. S.

    1981-01-01

    Handling qualities of the unaugmented advanced supersonic transport (AST) are deficient in the low-speed, landing-approach regime. Consequently, improved handling with active control augmentation systems has been achieved using implicit model-following techniques. Extensive fixed-base simulator evaluations were used to validate these systems prior to tests with full motion and visual capabilities on a six-axis motion-base simulator (MBS). These tests compared the handling qualities of the unaugmented AST with several augmented configurations to ascertain the effectiveness of these systems. Cooper-Harper ratings, tracking errors, and control-activity data from the MBS tests were analyzed statistically. The results show that the handling qualities of the fully augmented AST have been improved to an acceptable level.

  3. Correction for human head motion in helical x-ray CT

    NASA Astrophysics Data System (ADS)

    Kim, J.-H.; Sun, T.; Alcheikh, A. R.; Kuncic, Z.; Nuyts, J.; Fulton, R.

    2016-02-01

    Correction for rigid object motion in helical CT can be achieved by reconstructing from a modified source-detector orbit, determined by the object motion during the scan. This ensures that all projections are consistent, but it does not guarantee that the projections are complete in the sense of being sufficient for exact reconstruction. We have previously shown with phantom measurements that motion-corrected helical CT scans can suffer from data-insufficiency, in particular for severe motions and at high pitch. To study whether such data-insufficiency artefacts could also affect the motion-corrected CT images of patients undergoing head CT scans, we used an optical motion tracking system to record the head movements of 10 healthy volunteers while they executed each of the 4 different types of motion (‘no’, slight, moderate and severe) for 60 s. From these data we simulated 354 motion-affected CT scans of a voxelized human head phantom and reconstructed them with and without motion correction. For each simulation, motion-corrected (MC) images were compared with the motion-free reference, by visual inspection and with quantitative similarity metrics. Motion correction improved similarity metrics in all simulations. Of the 270 simulations performed with moderate or less motion, only 2 resulted in visible residual artefacts in the MC images. The maximum range of motion in these simulations would encompass that encountered in the vast majority of clinical scans. With severe motion, residual artefacts were observed in about 60% of the simulations. We also evaluated a new method of mapping local data sufficiency based on the degree to which Tuy’s condition is locally satisfied, and observed that areas with high Tuy values corresponded to the locations of residual artefacts in the MC images. 
We conclude that our method can provide accurate and artefact-free MC images with most types of head motion likely to be encountered in CT imaging, provided that the motion can be accurately determined.

  4. Flight Simulation.

    DTIC Science & Technology

    1986-09-01

    TECHNICAL EVALUATION REPORT OF THE SYMPOSIUM ON "FLIGHT SIMULATION", A. M. Cook, NASA Ames Research Center. 1. INTRODUCTION: This report evaluates the 67th... John C. Dusterberry, NASA Ames Research Center, Moffett Field, California 94035, U.S.A. SUMMARY: Early AGARD papers on manned flight simulation... and development simulators. VISUAL AND MOTION CUEING IN HELICOPTER SIMULATION, Richard S. Bray, NASA Ames Research Center, Moffett Field, California

  5. Pilot Comments for High Speed Research Cycle 3 Simulations Study (LaRC.1)

    NASA Technical Reports Server (NTRS)

    Bailey, Melvin L. (Editor); Jackson, E. Bruce (Technical Monitor)

    2000-01-01

    This is a compilation of pilot comments from the Boeing High Speed Research Aircraft, Cycle 3 Simulation Study (LaRC.1) conducted from January to March 1997 at NASA Langley Research Center. This simulation study was conducted using the Visual Motion Simulator. The comments are direct tape transcriptions and have been edited for spelling only.

  6. Apollo Docking with the LEM Target

    NASA Image and Video Library

    2012-09-07

    Originally, the Rendezvous Docking Simulator was used by astronauts preparing for Gemini missions. It was then modified and used to develop docking techniques for the Apollo program. This picture shows a later configuration of the Apollo docking with the LEM target. A.W. Vogeley described the simulator as follows: The Rendezvous Docking Simulator and also the Lunar Landing Research Facility are both rather large moving-base simulators. It should be noted, however, that neither was built primarily because of its motion characteristics. The main reason they were built was to provide a realistic visual scene. A secondary reason was that they would provide correct angular motion cues (important in control of vehicle short-period motions) even though the linear acceleration cues would be incorrect. -- Published in A.W. Vogeley, Piloted Space-Flight Simulation at Langley Research Center, Paper presented at the American Society of Mechanical Engineers, 1966 Winter Meeting, New York, NY, November 27 - December 1, 1966.

  7. Motion cue effects on human pilot dynamics in manual control

    NASA Technical Reports Server (NTRS)

    Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.

    1977-01-01

    Two experiments were conducted to study the effects of motion cues on human pilots during tracking tasks. The moving-base simulator of the National Aerospace Laboratory was employed as the motion cue device, and the attitude director indicator or the projected visual field was employed as the visual cue device. The chosen controlled elements were second-order unstable systems. It was confirmed that with the aid of motion cues the pilot workload was lessened and, consequently, the human controllability limits were enlarged. In order to clarify the mechanism of these effects, the describing functions of the human pilots were identified using spectral and time-domain analyses. The results of these analyses suggest that the sensory system for motion cues can effectively extract the derivative information of the signal, which coincides with existing knowledge in the physiological area.

  8. Performance, physiological, and oculometer evaluation of VTOL landing displays

    NASA Technical Reports Server (NTRS)

    North, R. A.; Stackhouse, S. P.; Graffunder, K.

    1979-01-01

    A methodological approach to measuring workload was investigated for the evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer-graphic and a conventional flight director), three crosswind amplitudes, and two motion-base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance discriminant showed maximum differentiation between crosswind conditions. The visual response discriminant maximized differences between fixed- vs. moving-base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations agreed with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.

  9. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras

    PubMed Central

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-01

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots. PMID:24431144

  10. Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras.

    PubMed

    Kane, Suzanne Amador; Zamani, Marjon

    2014-01-15

    This study reports on experiments on falcons wearing miniature videocameras mounted on their backs or heads while pursuing flying prey. Videos of hunts by a gyrfalcon (Falco rusticolus), gyrfalcon (F. rusticolus)/Saker falcon (F. cherrug) hybrids and peregrine falcons (F. peregrinus) were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data were then interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. These results should prove relevant for understanding the co-evolution of pursuit and evasion, as well as the development of computer models of predation and the integration of sensory and locomotion systems in biomimetic robots.

  11. Status of NASA/Army rotorcraft research and development piloted flight simulation

    NASA Technical Reports Server (NTRS)

    Condon, Gregory W.; Gossett, Terrence D.

    1988-01-01

    The status of the major NASA/Army capabilities in piloted rotorcraft flight simulation is reviewed. The requirements for research and development piloted simulation are addressed, as well as the capabilities and technologies that are currently available or being developed by NASA and the Army at Ames. The application of revolutionary advances (in visual scene, electronic cockpits, motion, and modelling of interactive mission environments and/or vehicle systems) to the NASA/Army facilities is also addressed. Particular attention is devoted to the major advances made in integrating these individual capabilities into fully integrated simulation environments that have been or are being applied to new rotorcraft mission requirements. The specific simulators discussed are the Vertical Motion Simulator and the Crew Station Research and Development Facility.

  12. Eye Movements Reveal the Dynamic Simulation of Speed in Language

    ERIC Educational Resources Information Center

    Speed, Laura J.; Vigliocco, Gabriella

    2014-01-01

    This study investigates how speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., "The lion ambled/dashed to the balloon"). Results showed that looking time to relevant objects in the visual scene was affected…

  13. The effects of motion and g-seat cues on pilot simulator performance of three piloting tasks

    NASA Technical Reports Server (NTRS)

    Showalter, T. W.; Parris, B. L.

    1980-01-01

    Data are presented that show the effects of motion system cues, g-seat cues, and pilot experience on pilot performance during takeoffs with engine failures, during in-flight precision turns, and during landings with wind shear. Eight groups of USAF pilots flew a simulated KC-135 using four different cueing systems. The basic cueing system was a fixed-base type (no-motion cueing) with visual cueing. The other three systems were produced by the presence of either a motion system or a g-seat, or both. Extensive statistical analysis of the data was performed and representative performance means were examined. These data show that the addition of motion system cueing results in significant improvement in pilot performance for all three tasks; however, the use of g-seat cueing, either alone or in conjunction with the motion system, provides little if any performance improvement for these tasks and for this aircraft type.

  14. Effects of False Tilt Cues on the Training of Manual Roll Control Skills

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Popovici, Alexandru; Zavala, Melinda A.

    2015-01-01

    This paper describes a transfer-of-training study performed in the NASA Ames Vertical Motion Simulator. The purpose of the study was to investigate the effect of false tilt cues on training and transfer of training of manual roll control skills. Of specific interest were the skills needed to control the unstable roll dynamics of a mid-size transport aircraft close to the stall point. Nineteen general aviation pilots trained on a roll control task with one of three motion conditions: no motion, roll motion only, or reduced coordinated roll motion. All pilots transferred to full coordinated roll motion in the transfer session. A novel multimodal pilot model identification technique was successfully applied to characterize how pilots' use of visual and motion cues changed over the course of training and after transfer. Pilots who trained with uncoordinated roll motion had significantly higher performance during training and after transfer, even though they experienced the false tilt cues. Furthermore, pilot control behavior changed significantly during the two sessions, as indicated by increasing visual and motion gains and decreasing lead time constants. Pilots training without motion showed higher learning rates after transfer to the full coordinated roll motion case.

  15. Visualization in Mechanics: The Dynamics of an Unbalanced Roller

    ERIC Educational Resources Information Center

    Cumber, Peter S.

    2017-01-01

    It is well known that mechanical engineering students often find mechanics a difficult area to grasp. This article describes a system of equations describing the motion of a balanced and an unbalanced roller constrained by a pivot arm. A wide range of dynamics can be simulated with the model. The equations of motion are embedded in a graphical…

  16. Changes in the dark focus of accommodation associated with simulator sickness

    NASA Technical Reports Server (NTRS)

    Fowlkes, Jennifer E.; Kennedy, Robert S.; Hettinger, Lawrence J.; Harm, Deborah L.

    1993-01-01

    The relationship between the dark focus of accommodation and simulator sickness, a form of motion sickness, was examined in three experiments. In Experiment 1, dark focus was measured in 18 college students in a laboratory setting before and after they viewed a projected motion scene depicting low altitude helicopter flight. In Experiments 2 and 3, dark focus was measured in pilots (N = 16 and 23, respectively) before and after they 'flew' in moving-base helicopter flight simulators with optical infinity CRT visual systems. The results showed that individuals who experienced simulator sickness had either an inward (myopic) change in dark focus (Experiments 1 and 3) or attenuated outward shifts in dark focus (Experiment 2) relative to participants who did not get sick. These results are consonant with the hypothesis that parasympathetic activity, which may be associated with simulator sickness, should result in changes in dark focus that are in a myopic direction. Night vision goggles, virtual environments, extended periods in microgravity, and heads-up displays all produce related visual symptomatology. Changes in dark focus may occur in these conditions, as well, and should be measured.

  17. Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking

    PubMed Central

    Saunders, Jeffrey A.

    2014-01-01

    Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
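The optimal-integration prediction referenced in this record can be sketched with standard inverse-variance cue weighting. The nonvisual heading variability (2.4° RMS) is taken from the abstract; the visual-only value below is an illustrative assumption chosen so that the combined prediction falls in the reported 1.9°–2.1° range, not a number from the study.

```python
def optimal_combination(sigma_visual, sigma_nonvisual):
    """Inverse-variance (maximum-likelihood) cue combination.

    Returns the optimal weight on the visual cue and the predicted
    standard deviation of the combined heading estimate. These are the
    standard cue-integration formulas; the inputs used below are partly
    assumed rather than taken from the study.
    """
    p_vis = 1.0 / sigma_visual**2       # precision of the visual cue
    p_non = 1.0 / sigma_nonvisual**2    # precision of nonvisual cues
    w_vis = p_vis / (p_vis + p_non)     # optimal weight on the visual cue
    sigma_combined = (1.0 / (p_vis + p_non)) ** 0.5
    return w_vis, sigma_combined

# 2.4 deg RMS for nonvisual cues is reported; 3.6 deg for the visual
# cue alone is an illustrative assumption.
w_vis, sigma_c = optimal_combination(3.6, 2.4)
```

With these inputs the predicted visual weight is about 31%, inside the 16%–34% range inferred from the cue-conflict conditions, and the predicted combined variability is just under 2.0°, illustrating why the small observed variance reduction is consistent with optimal integration.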

  18. Future challenges for vection research: definitions, functional significance, measures, and neural bases

    PubMed Central

    Palmisano, Stephen; Allison, Robert S.; Schira, Mark M.; Barry, Robert J.

    2015-01-01

    This paper discusses four major challenges facing modern vection research. Challenge 1 (Defining Vection) outlines the different ways that vection has been defined in the literature and discusses their theoretical and experimental ramifications. The term vection is most often used to refer to visual illusions of self-motion induced in stationary observers (by moving, or simulating the motion of, the surrounding environment). However, vection is increasingly being used to also refer to non-visual illusions of self-motion, visually mediated self-motion perceptions, and even general subjective experiences (i.e., “feelings”) of self-motion. The common thread in all of these definitions is the conscious subjective experience of self-motion. Thus, Challenge 2 (Significance of Vection) tackles the crucial issue of whether such conscious experiences actually serve functional roles during self-motion (e.g., in terms of controlling or guiding the self-motion). After more than 100 years of vection research there has been surprisingly little investigation into its functional significance. Challenge 3 (Vection Measures) discusses the difficulties with existing subjective self-report measures of vection (particularly in the context of contemporary research), and proposes several more objective measures of vection based on recent empirical findings. Finally, Challenge 4 (Neural Basis) reviews the recent neuroimaging literature examining the neural basis of vection and discusses the hurdles still facing these investigations. PMID:25774143

  19. Determination of prospective displacement-based gate threshold for respiratory-gated radiation delivery from retrospective phase-based gate threshold selected at 4D CT simulation.

    PubMed

    Vedam, S; Archambault, L; Starkschall, G; Mohan, R; Beddar, S

    2007-11-01

    Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. 
Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8 +/- 11% and 14 +/- 21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined based on the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, corresponding differences are 4 +/- 7% and 8 +/- 15% with and without audiovisual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined based on the maximum average respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between retrospective simulation gate threshold and prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.
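    The duty-cycle matching step can be sketched as a simple root search: pick the external-monitor displacement threshold whose gated fraction of samples equals the duty cycle chosen from the 4D CT phase interval. This is a toy sketch under strong assumptions (synthetic regular breathing, gating at low displacement near end-exhale, invented function names), not the authors' implementation:

```python
import math

def duty_cycle(trace, threshold):
    """Fraction of samples gated on (displacement at or below the threshold)."""
    return sum(1 for x in trace if x <= threshold) / len(trace)

def delivery_threshold(trace, target_duty):
    """Bisect for the external-monitor threshold whose duty cycle matches the
    duty cycle selected from the 4D CT gating phase interval."""
    lo, hi = min(trace), max(trace)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if duty_cycle(trace, mid) < target_duty:
            lo = mid
        else:
            hi = mid
    return hi

# synthetic regular trace: 1 cm peak-to-peak, minimum displacement at end-exhale
trace = [0.5 - 0.5 * math.cos(2 * math.pi * t / 100) for t in range(4000)]
threshold = delivery_threshold(trace, target_duty=0.40)   # 40% duty cycle
```

    For this idealized sinusoidal trace the threshold converges to about 0.35 cm; with irregular patient traces the same matching idea applies, which is why the abstract reports threshold differences as a fraction of the motion range.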

  20. Dynamic registration of an optical see-through HMD into a wide field-of-view rotorcraft flight simulation environment

    NASA Astrophysics Data System (ADS)

    Viertler, Franz; Hajek, Manfred

    2015-05-01

    To overcome the challenge of helicopter flight in degraded visual environments, current research considers head-mounted displays (HMDs) with 3D-conformal (scene-linked) visual cues the most promising display technology. For pilot-in-the-loop simulations with HMDs, highly accurate registration of the augmented visual system is required. In rotorcraft flight simulators the outside visual cues are usually provided by a dome projection system, since the required wide field-of-view (e.g., horizontally > 200° and vertically > 80°) can hardly be achieved with collimated viewing systems. However, the focus of most optical see-through HMDs does not match the distance from the pilot's eye-point to the curved screen, a distance that also depends on head motion. Hence, a dynamic vergence correction has been implemented to avoid binocular disparity. In addition, the parallax error induced by even small translational head motions is corrected using a head-tracking system so that the imagery is adjusted onto the projected screen. For this purpose, two options are presented. The correction can be achieved by rendering the view with yaw and pitch offset angles that depend on the deviation of the head position from the design eye-point of the spherical projection system. Alternatively, it can be solved by implementing a dynamic eye-point in the multi-channel projection system for the outside visual cues. Both options have been investigated for the integration of a binocular HMD into the Rotorcraft Simulation Environment (ROSIE) at the Technische Universitaet Muenchen. Pros and cons of both approaches with regard to integration issues and usability in flight simulations are discussed.
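    The head-position-dependent yaw and pitch offsets can be illustrated with simple geometry: the corrected line of sight runs from the displaced head to the fixed point on the dome, and the offsets are its angular difference from the design eye-point's line of sight. A hypothetical sketch (coordinate frame, dome radius, and function names are assumptions, not taken from the paper):

```python
import math

def parallax_offsets(screen_point, head_offset):
    """Yaw/pitch offsets (deg) of the line of sight from a displaced head
    position to a dome point, relative to the design eye-point at the origin."""
    def angles(v):
        x, y, z = v
        yaw = math.degrees(math.atan2(y, x))
        pitch = math.degrees(math.atan2(z, math.hypot(x, y)))
        return yaw, pitch
    design = angles(screen_point)
    shifted = angles(tuple(s - h for s, h in zip(screen_point, head_offset)))
    return shifted[0] - design[0], shifted[1] - design[1]

# dome point 3 m straight ahead; head displaced 10 cm laterally (+y)
yaw, pitch = parallax_offsets((3.0, 0.0, 0.0), (0.0, 0.10, 0.0))
```

    Even a 10 cm head translation produces a nearly 2° line-of-sight offset at a 3 m screen distance, which illustrates why the parallax correction matters for conformal HMD symbology.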

  1. Rendezvous Docking Simulator

    NASA Image and Video Library

    1964-10-29

    Originally, the Rendezvous Docking Simulator was used by astronauts preparing for Gemini missions. It was then modified and used to develop docking techniques for the Apollo program. "The LEM pilot's compartment, with overhead window and the docking ring (idealized since the pilot cannot see it during the maneuvers), is shown docked with the full-scale Apollo Command Module." A.W. Vogeley described the simulator as follows: "The Rendezvous Docking Simulator and also the Lunar Landing Research Facility are both rather large moving-base simulators. It should be noted, however, that neither was built primarily because of its motion characteristics. The main reason they were built was to provide a realistic visual scene. A secondary reason was that they would provide correct angular motion cues (important in control of vehicle short-period motions) even though the linear acceleration cues would be incorrect." -- Published in A.W. Vogeley, "Piloted Space-Flight Simulation at Langley Research Center," paper presented at the American Society of Mechanical Engineers, 1966 Winter Meeting, New York, NY, November 27 - December 1, 1966.

  2. Visualization of Dynamic Vortex Structures in Magnetic Films with Uniaxial Anisotropy (Micromagnetic Simulation)

    NASA Astrophysics Data System (ADS)

    Zverev, V. V.; Izmozherov, I. M.; Filippov, B. N.

    2018-02-01

    Three-dimensional computer simulation of dynamic processes in a moving domain boundary separating domains in a soft magnetic uniaxial film with planar anisotropy is performed by numerical solution of Landau-Lifshitz-Gilbert equations. The developed visualization methods are used to establish the connection between the motion of surface vortices and antivortices, singular (Bloch) points, and core lines of intrafilm vortex structures. A relation between the character of magnetization dynamics and the film thickness is found. The analytical models of spatial vortex structures for imitation of topological properties of the structures observed in micromagnetic simulation are constructed.

  3. Feedback and Elaboration within a Computer-Based Simulation: A Dual Coding Perspective.

    ERIC Educational Resources Information Center

    Rieber, Lloyd P.; And Others

    The purpose of this study was to explore how adult users interact and learn during a computer-based simulation given visual and verbal forms of feedback coupled with embedded elaborations of the content. A total of 52 college students interacted with a computer-based simulation of Newton's laws of motion in which they had control over the motion…

  4. ARC-2008-ACD08-0157-005

    NASA Image and Video Library

    2008-07-28

    NASA Associate Administrator for Aeronautics Jaiwon Shin visits Ames Research Center and tours the Vertical Motion Simulator (VMS, T-cab). Hangar 1 at Moffett Field is shown in the VMS visual scene.

  5. A Hypothetical Perspective on the Relative Contributions of Strategic and Adaptive Control Mechanisms in Plastic Recalibration of Locomotor Heading Direction

    NASA Technical Reports Server (NTRS)

    Richards, J. T.; Mulavara, A. P.; Ruttley, T.; Peters, B. T.; Warren, L. E.; Bloomberg, J. J.

    2006-01-01

    We have previously shown that viewing simulated rotary self-motion during treadmill locomotion causes adaptive modification of the control of position and trajectory during over-ground locomotion, which functionally reflects adaptive changes in the sensorimotor integration of visual, vestibular, and proprioceptive cues (Mulavara et al., 2005). The objective of this study was to investigate how strategic changes in torso control during exposure to simulated rotary self-motion during treadmill walking influences adaptive modification of locomotor heading direction during over-ground stepping.

  6. Simulation of cooperating robot manipulators on a mobile platform

    NASA Technical Reports Server (NTRS)

    Murphy, Steve H.; Wen, John T.; Saridis, George N.

    1990-01-01

    The dynamic equations of motion for two manipulators holding a common object on a freely moving mobile platform are developed. The full dynamic interactions from arms to platform and arm-tip to arm-tip are included in the formulation. The development of the closed chain dynamics allows for the use of any solution for the open topological tree of base and manipulator links. In particular, because the system has 18 degrees of freedom, recursive solutions for the dynamic simulation become more promising for efficient calculations of the motion. Simulation of the system is accomplished through a MATLAB program, and the response is visualized graphically using the SILMA Cimstation.

  7. High-power graphic computers for visual simulation: a real-time rendering revolution

    NASA Technical Reports Server (NTRS)

    Kaiser, M. K.

    1996-01-01

    Advances in high-end graphics computers in the past decade have made it possible to render visual scenes of incredible complexity and realism in real time. These new capabilities make it possible to manipulate and investigate the interactions of observers with their visual world in ways once only dreamed of. This paper reviews how these developments have affected two preexisting domains of behavioral research (flight simulation and motion perception) and have created a new domain (virtual environment research) that provides tools and challenges for the perceptual psychologist. Finally, the current limitations of these technologies are considered, with an eye toward how perceptual psychologists might shape future developments.

  8. Self-Management of Patient Body Position, Pose, and Motion Using Wide-Field, Real-Time Optical Measurement Feedback: Results of a Volunteer Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parkhurst, James M.; Price, Gareth J., E-mail: gareth.price@christie.nhs.uk; Faculty of Medical and Human Sciences, Manchester Academic Health Sciences Centre, University of Manchester, Manchester

    2013-12-01

    Purpose: We present the results of a clinical feasibility study, performed in 10 healthy volunteers undergoing a simulated treatment over 3 sessions, to investigate the use of a wide-field visual feedback technique intended to help patients control their pose while reducing motion during radiation therapy treatment. Methods and Materials: An optical surface sensor is used to capture wide-area measurements of a subject's body surface, with visualizations of these data displayed back to them in real time. In this study we hypothesize that this active feedback mechanism will enable patients to control their motion and help them maintain their setup pose and position. A capability hierarchy of 3 different level-of-detail abstractions of the measured surface data is systematically compared. Results: Use of the device enabled volunteers to increase their conformance to a reference surface, as measured by decreased variability across their body surfaces. The use of visual feedback also enabled volunteers to reduce their respiratory motion amplitude to 1.7 ± 0.6 mm, compared with 2.7 ± 1.4 mm without visual feedback. Conclusions: The use of live feedback of their optically measured body surfaces enabled a set of volunteers to better manage their pose and motion when compared with free breathing. The method is suitable to be taken forward to patient studies.

  9. A preliminary study of MR sickness evaluation using visual motion aftereffect for advanced driver assistance systems.

    PubMed

    Nakajima, Sawako; Ino, Shuichi; Ifukube, Tohru

    2007-01-01

    Mixed Reality (MR) technologies have recently been explored in many areas of the Human-Machine Interface (HMI), such as medicine, manufacturing, entertainment, and education. However, MR sickness, a kind of motion sickness, is caused by sensory conflicts between the real world and the virtual world. The purpose of this paper is to develop a new evaluation method for motion and MR sickness. The paper investigates the relationship between whole-body vibration related to MR technologies and the motion aftereffect (MAE) phenomenon in the human visual system. The MR environment is modeled after advanced driver assistance systems in near-future vehicles. Seated subjects in the MR simulator were shaken in the pitch direction at frequencies ranging from 0.1 to 2.0 Hz. Results show that the MAE is useful for evaluating the incidence of MR sickness. In addition, a method to reduce MR sickness by auditory stimulation is proposed.

  10. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew Edie; Matthies, Larry H.

    2000-01-01

    We have developed and tested a software algorithm that enables onboard autonomous motion estimation near small bodies using descent camera imagery and laser altimetry. Through simulation and testing, we have shown that visual feature tracking can decrease uncertainty in spacecraft motion to a level that makes landing on small, irregularly shaped, bodies feasible. Possible future work will include qualification of the algorithm as a flight experiment for the Deep Space 4/Champollion comet lander mission currently under study at the Jet Propulsion Laboratory.

  11. The LEAP™ Gesture Interface Device and Take-Home Laparoscopic Simulators: A Study of Construct and Concurrent Validity.

    PubMed

    Partridge, Roland W; Brown, Fraser S; Brennan, Paul M; Hennessey, Iain A M; Hughes, Mark A

    2016-02-01

    To assess the potential of the LEAP™ infrared motion tracking device to map laparoscopic instrument movement in a simulated environment. Simulator training is optimized when augmented by objective performance feedback. We explore the potential LEAP has to provide this in a way compatible with affordable take-home simulators. LEAP and the previously validated InsTrac visual tracking tool mapped expert and novice performances of a standardized simulated laparoscopic task. Ability to distinguish between the 2 groups (construct validity) and correlation between techniques (concurrent validity) were the primary outcome measures. Forty-three expert and 38 novice performances demonstrated significant differences in LEAP-derived metrics for instrument path distance (P < .001), speed (P = .002), acceleration (P < .001), motion smoothness (P < .001), and distance between the instruments (P = .019). Only instrument path distance demonstrated a correlation between LEAP and InsTrac tracking methods (novices: r = .663, P < .001; experts: r = .536, P < .001). Consistency of LEAP tracking was poor (average % time hands not tracked: 31.9%). The LEAP motion device is able to track the movement of hands using instruments in a laparoscopic box simulator. Construct validity is demonstrated by its ability to distinguish novice from expert performances. Only time and instrument path distance demonstrated concurrent validity with an existing tracking method however. A number of limitations to the tracking method used by LEAP have been identified. These need to be addressed before it can be considered an alternative to visual tracking for the delivery of objective performance metrics in take-home laparoscopic simulators. © The Author(s) 2015.
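    Motion metrics such as instrument path distance and mean speed follow directly from sampled 3-D positions; a generic sketch of the two simplest metrics (not the LEAP or InsTrac API; names, sampling rate, and units are assumptions):

```python
import math

def path_metrics(positions, dt):
    """Total path distance and mean speed from 3-D positions sampled every dt seconds."""
    distance = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    duration = dt * (len(positions) - 1)
    return distance, distance / duration

# a straight 2-unit move sampled at 2 Hz
dist, speed = path_metrics([(0, 0, 0), (1, 0, 0), (2, 0, 0)], dt=0.5)
```

    Higher-order metrics such as acceleration and motion smoothness are computed analogously from successive differences of the speed signal, which is why tracking dropouts (the 31.9% untracked time reported above) degrade them disproportionately.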

  12. Does Visual Salience of Action Affect Gesture Production?

    ERIC Educational Resources Information Center

    Yeo, Amelia; Alibali, Martha W.

    2018-01-01

    Past research suggests that speakers gesture more when motor simulations are more strongly activated. We investigate whether simulations of a perceptual nature also influence gesture production. Participants viewed animations of a spider moving with a manner of motion that was either highly salient (n = 29) or less salient (n = 31) and then…

  13. Simulation and animation of sensor-driven robots.

    PubMed

    Chen, C; Trivedi, M M; Bidlack, C R

    1994-10-01

    Most simulation and animation systems used in robotics simulate the robot and its environment without simulating sensors. These systems have difficulty handling robots that use sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system helps users visualize the motion and reaction of a sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation packages lack capabilities for simulating robot sensing; this paper describes a system designed to overcome that deficiency.

  14. Audio aided electro-tactile perception training for finger posture biofeedback.

    PubMed

    Vargas, Jose Gonzalez; Yu, Wenwei

    2008-01-01

    Visual information is a prerequisite for most biofeedback studies. The aim of this study is to explore how audio-aided training helps in the learning of dynamic electro-tactile perception without any visual feedback. In this research, electrical stimulation patterns associated with the experimenter's finger postures and motions were presented to the subjects. Along with the electrical stimulation patterns, two different types of information on finger postures and motions, verbal and audio, were presented to the verbal training group (group 1) and the audio training group (group 2), respectively. The results showed an improvement in the ability to distinguish and memorize electrical stimulation patterns corresponding to finger postures and motions without visual feedback; with the aid of audio tones, learning was faster and perception became more precise after training. Thus, this study clarified that, as a substitute for visual presentation, auditory information can effectively support the formation of electro-tactile perception. Further research is needed to clarify the differences between visually guided and audio-aided training in terms of information compilation, post-training effects, and robustness of the perception.

  15. Virtual reality aided visualization of fluid flow simulations with application in medical education and diagnostics.

    PubMed

    Djukic, Tijana; Mandic, Vesna; Filipovic, Nenad

    2013-12-01

    Medical education, training and preoperative diagnostics can be drastically improved with advanced technologies, such as virtual reality. The method proposed in this paper enables medical doctors and students to visualize and manipulate three-dimensional models created from CT or MRI scans, and also to analyze the results of fluid flow simulations. Simulation of fluid flow using the finite element method is performed, in order to compute the shear stress on the artery walls. The simulation of motion through the artery is also enabled. The virtual reality system proposed here could shorten the length of training programs and make the education process more effective. © 2013 Published by Elsevier Ltd.

  16. Apollo Rendezvous Docking Simulator

    NASA Image and Video Library

    1964-11-02

    Originally, the Rendezvous Docking Simulator was used by astronauts preparing for Gemini missions. It was then modified and used to develop docking techniques for the Apollo program. The pilot is shown maneuvering the LEM into position for docking with a full-scale Apollo Command Module. From A.W. Vogeley, "Piloted Space-Flight Simulation at Langley Research Center," paper presented at the American Society of Mechanical Engineers, 1966 Winter Meeting, New York, NY, November 27 - December 1, 1966: "The Rendezvous Docking Simulator and also the Lunar Landing Research Facility are both rather large moving-base simulators. It should be noted, however, that neither was built primarily because of its motion characteristics. The main reason they were built was to provide a realistic visual scene. A secondary reason was that they would provide correct angular motion cues (important in control of vehicle short-period motions) even though the linear acceleration cues would be incorrect." Langley's Rendezvous Docking Simulator was developed by NASA scientists to study the complex task of docking the Lunar Excursion Module with the Command Module in lunar orbit.

  17. The effect of force feedback on student reasoning about gravity, mass, force and motion

    NASA Astrophysics Data System (ADS)

    Bussell, Linda

    The purpose of this study was to examine whether force feedback within a computer simulation had an effect on reasoning by fifth grade students about gravity, mass, force, and motion, concepts which can be difficult for learners to grasp. Few studies have been done on cognitive learning and haptic feedback, particularly with young learners, but there is an extensive base of literature on children's conceptions of science and a number of studies focus specifically on children's conceptions of force and motion. This case study used a computer-based paddleball simulation with guided inquiry as the primary stimulus. Within the simulation, the learner could adjust the mass of the ball and the gravitational force. The experimental group used the simulation with visual and force feedback; the control group used the simulation with visual feedback but without force feedback. The proposition was that there would be differences in reasoning between the experimental and control groups, with force feedback being helpful with concepts that are more obvious when felt. Participants were 34 fifth-grade students from three schools. Students completed a modal (visual, auditory, and haptic) learning preference assessment and a pretest. The sessions, including participant experimentation and interviews, were audio recorded and observed. The interviews were followed by a written posttest. These data were analyzed to determine whether there were differences based on treatment, learning style, demographics, prior gaming experience, force feedback experience, or prior knowledge. Work with the simulation, regardless of group, was found to increase students' understanding of key concepts. The experimental group appeared to benefit from the supplementary help that force feedback provided. Those in the experimental group scored higher on the posttest than those in the control group. 
The greatest difference between mean group scores was on a question concerning the effects of increased gravitational force.

  18. The Experience of Force: The Role of Haptic Experience of Forces in Visual Perception of Object Motion and Interactions, Mental Simulation, and Motion-Related Judgments

    ERIC Educational Resources Information Center

    White, Peter A.

    2012-01-01

    Forces are experienced in actions on objects. The mechanoreceptor system is stimulated by proximal forces in interactions with objects, and experiences of force occur in a context of information yielded by other sensory modalities, principally vision. These experiences are registered and stored as episodic traces in the brain. These stored…

  19. Dropout during a driving simulator study: A survival analysis.

    PubMed

    Matas, Nicole A; Nettelbeck, Ted; Burns, Nicholas R

    2015-12-01

    Simulator sickness is the occurrence of motion-sickness like symptoms that can occur during use of simulators and virtual reality technologies. This study investigated individual factors that contributed to simulator sickness and dropout while using a desktop driving simulator. Eighty-eight older adult drivers (mean age 72.82±5.42years) attempted a practice drive and two test drives. Participants also completed a battery of cognitive and visual assessments, provided information on their health and driving habits, and reported their experience of simulator sickness symptoms throughout the study. Fifty-two participants dropped out before completing the driving tasks. A time-dependent Cox Proportional Hazards model showed that female gender (HR=2.02), prior motion sickness history (HR=2.22), and Mini-SSQ score (HR=1.55) were associated with dropout. There were no differences between dropouts and completers on any of the cognitive abilities tests. Older adults are a high-risk group for simulator sickness. Within this group, female gender and prior motion sickness history are related to simulator dropout. Higher reported experience of symptoms of simulator sickness increased rates of dropout. The results highlight the importance of screening and monitoring of participants in driving simulation studies. Older adults, females, and those with a prior history of motion sickness may be especially at risk. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
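    Under the Cox model's proportional-hazards assumption, the reported hazard ratios combine multiplicatively. A hypothetical example (the scenario is invented; only the hazard ratios come from the abstract): a female participant with a motion-sickness history who scores one Mini-SSQ unit above reference would face roughly 2.02 × 2.22 × 1.55 ≈ 7 times the baseline dropout hazard:

```python
# Hazard ratios from the abstract; the combined scenario is hypothetical and
# relies on the Cox model's multiplicative (proportional) hazards assumption.
hr_female, hr_history, hr_ssq_per_unit = 2.02, 2.22, 1.55
combined_hr = hr_female * hr_history * hr_ssq_per_unit  # one Mini-SSQ unit above reference
```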

  20. An interactive driving simulation for driver control and decision-making research

    NASA Technical Reports Server (NTRS)

    Allen, R. W.; Hogge, J. R.; Schwartz, S. H.

    1975-01-01

    Display techniques and equations of motion for a relatively simple fixed base car simulation are described. The vehicle dynamics include simplified lateral (steering) and longitudinal (speed) degrees of freedom. Several simulator tasks are described which require a combination of operator control and decision making, including response to wind gust inputs, curved roads, traffic signal lights, and obstacles. Logic circuits are used to detect speeding, running red lights, and crashes. A variety of visual and auditory cues are used to give the driver appropriate performance feedback. The simulated equations of motion are reviewed and the technique for generating the line drawing CRT roadway display is discussed. On-line measurement capabilities and experimenter control features are presented, along with previous and current research results demonstrating simulation capabilities and applications.

  1. Pedestrian simulation and distribution in urban space based on visibility analysis and agent simulation

    NASA Astrophysics Data System (ADS)

    Ying, Shen; Li, Lin; Gao, Yurong

    2009-10-01

    Spatial visibility analysis is an important approach to studying pedestrian behavior, because visual perception of space is the most direct way people acquire environmental information and guide their actions. Based on agent modeling and a top-down method, this paper develops a framework for analyzing visibility-dependent pedestrian flow. We use viewsheds in the visibility analysis and impose the resulting parameters on agent simulation to direct agents' motion in urban space. We analyze pedestrian behavior at the micro- and macro-scales of urban open space. At the micro-scale, an individual agent uses visual affordance to determine its direction of motion along an urban street or within a district. At the macro-scale, we compare the distribution of pedestrian flow with the urban configuration and mine the relationship between pedestrian flow and the distribution of urban facilities and functions. The paper first computes visibility at vantage points in urban open space, such as the street network, and quantifies the visibility parameters. The agents then use these parameters to decide their directions of motion, and through multi-agent simulation the pedestrian flow reaches a stable state in the urban environment. Finally, the morphology of the visibility parameters and the pedestrian distribution are compared with the layout of urban functions and facilities to confirm the consistency between them, which can support decision-making in urban design.
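    As a toy illustration of the visual-affordance rule (not the paper's model; the grid, names, and greedy choice are assumptions), an agent on an occupancy grid can pick the direction with the longest unobstructed line of sight:

```python
def line_of_sight(grid, pos, d, max_range=10):
    """Count open cells visible from pos along direction d (simple ray cast)."""
    r, c = pos
    seen = 0
    for step in range(1, max_range + 1):
        rr, cc = r + step * d[0], c + step * d[1]
        if not (0 <= rr < len(grid) and 0 <= cc < len(grid[0])) or grid[rr][cc]:
            break  # blocked by an obstacle or the grid boundary
        seen += 1
    return seen

def choose_direction(grid, pos):
    """Greedy visual-affordance rule: head where the open view is longest."""
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return max(dirs, key=lambda d: line_of_sight(grid, pos, d))

grid = [[0] * 6 for _ in range(3)]  # 0 = open, 1 = wall
grid[1][4] = 1                      # obstacle a few cells east of the agent
direction = choose_direction(grid, (1, 1))
```

    Aggregating many such agents over time yields the stable pedestrian-flow distribution that the paper compares against the urban configuration.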

  2. Fidelity assessment of a UH-60A simulation on the NASA Ames vertical motion simulator

    NASA Technical Reports Server (NTRS)

    Atencio, Adolph, Jr.

    1993-01-01

    Helicopter handling qualities research requires that a ground-based simulation be a high-fidelity representation of the actual helicopter, especially over the frequency range of the investigation. This experiment was performed to assess the current capability to simulate the UH-60A Black Hawk helicopter on the Vertical Motion Simulator (VMS) at NASA Ames, to develop a methodology for assessing the fidelity of a simulation, and to find the causes for lack of fidelity. The approach used was to compare the simulation to the flight vehicle for a series of tasks performed in flight and in the simulator. The results show that subjective handling qualities ratings from flight to simulator overlap, and the mathematical model matches the UH-60A helicopter very well over the range of frequencies critical to handling qualities evaluation. Pilot comments, however, indicate a need for improvement in the perceptual fidelity of the simulation in the areas of motion and visual cuing. The methodology used to make the fidelity assessment proved useful in showing differences in pilot work load and strategy, but additional work is needed to refine objective methods for determining causes of lack of fidelity.

  3. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
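    The "factoring out" idea above can be stated in one line: subtract the flow component attributed to self-motion from the retinal (optic flow) velocity, with a gain below one modeling only partial use of visual self-motion information. A hedged sketch, not the authors' model (the function name and gain parameter are illustrative):

```python
def object_motion_component(retinal_velocity, self_motion_flow, visual_gain=1.0):
    """Recover world-relative object motion by subtracting the flow that the
    observer's own movement is estimated to produce at the object's location.
    visual_gain < 1 models partial reliance on visual self-motion information."""
    return tuple(r - visual_gain * s
                 for r, s in zip(retinal_velocity, self_motion_flow))
```

    With full compensation (gain 1.0) a 3 deg/s retinal drift during 1 deg/s self-motion flow yields 2 deg/s of perceived object motion; the study's finding of an effect smaller than full visual compensation corresponds to a gain between 0 and 1.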

  4. Pleasant music as a countermeasure against visually induced motion sickness.

    PubMed

    Keshavarz, Behrang; Hecht, Heiko

    2014-05-01

    Visually induced motion sickness (VIMS) is a well-known side-effect in virtual environments or simulators. However, effective behavioral countermeasures against VIMS are still sparse. In this study, we tested whether music can reduce the severity of VIMS. Ninety-three volunteers were immersed in an approximately 14-minute-long video taken during a bicycle ride. Participants were randomly assigned to one of four experimental groups, either including relaxing music, neutral music, stressful music, or no music. Sickness scores were collected using the Fast Motion Sickness Scale and the Simulator Sickness Questionnaire. Results showed an overall trend for relaxing music to reduce the severity of VIMS. When factoring in the subjective pleasantness of the music, a significant reduction of VIMS occurred only when the presented music was perceived as pleasant, regardless of the music type. In addition, we found a gender effect with women reporting more sickness than men. We assume that the presentation of pleasant music can be an effective, low-cost, and easy-to-administer method to reduce VIMS. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Motion Cues in Flight Simulation and Simulator Induced Sickness

    DTIC Science & Technology

    1988-06-01

    assessed in a driving simulator by means of a response surface methodology central-composite design. The most salient finding of the study was that visual...across treatment conditions. For an orthogonal response surface methodology (RSM) design with only two independent variables, it can be readily shown that...J. E. Fowlkes 8 SESSION III - ETIOLOGICAL FACTORS IN SIMULATOR-INDUCED AFTEREFFECTS THE USE OF VESTIBULAR MODELS FOR DESIGN AND EVALUATION OF FLIGHT

  6. Motion Simulator

    NASA Technical Reports Server (NTRS)

    1993-01-01

    MOOG, Inc. supplies hydraulic actuators for the Space Shuttle. When MOOG learned NASA was interested in electric actuators for possible future use, the company designed them with assistance from Marshall Space Flight Center. They also decided to pursue the system's commercial potential. This led to partnership with InterActive Simulation, Inc. for production of cabin flight simulators for museums, expositions, etc. The resulting products, the Magic Motion Simulator 30 Series, are the first electric powered simulators. Movements are computer-guided, including free fall to heighten the sense of moving through space. A projection system provides visual effects, and the 11 speakers of a digital laser based sound system add to the realism. The electric actuators are easier to install, have lower operating costs, noise, heat and staff requirements. The U.S. Space & Rocket Center and several other organizations have purchased the simulators.

  7. Differential effect of visual motion adaptation upon visual cortical excitability.

    PubMed

    Lubeck, Astrid J A; Van Ombergen, Angelique; Ahmad, Hena; Bos, Jelte E; Wuyts, Floris L; Bronstein, Adolfo M; Arshad, Qadeer

    2017-03-01

    The objectives of this study were 1) to probe the effects of visual motion adaptation on early visual and V5/MT cortical excitability and 2) to investigate whether changes in cortical excitability following visual motion adaptation are related to the degree of visual dependency, i.e., an overreliance on visual cues compared with vestibular or proprioceptive cues. Participants were exposed to a roll motion visual stimulus before, during, and after visual motion adaptation. At these stages, 20 transcranial magnetic stimulation (TMS) pulses at phosphene threshold values were applied over early visual and V5/MT cortical areas from which the probability of eliciting a phosphene was calculated. Before and after adaptation, participants aligned the subjective visual vertical in front of the roll motion stimulus as a marker of visual dependency. During adaptation, early visual cortex excitability decreased whereas V5/MT excitability increased. After adaptation, both early visual and V5/MT excitability were increased. The roll motion-induced tilt of the subjective visual vertical (visual dependence) was not influenced by visual motion adaptation and did not correlate with phosphene threshold or visual cortex excitability. We conclude that early visual and V5/MT cortical excitability is differentially affected by visual motion adaptation. Furthermore, excitability in the early or late visual cortex is not associated with an increase in visual reliance during spatial orientation. Our findings complement earlier studies that have probed visual cortical excitability following motion adaptation and highlight the differential role of the early visual cortex and V5/MT in visual motion processing. NEW & NOTEWORTHY We examined the influence of visual motion adaptation on visual cortex excitability and found a differential effect in V1/V2 compared with V5/MT.
Changes in visual excitability following motion adaptation were not related to the degree of an individual's visual dependency. Copyright © 2017 the American Physiological Society.

  8. Optokinetic motion sickness - Attenuation of visually-induced apparent self-rotation by passive head movements

    NASA Technical Reports Server (NTRS)

    Teixeira, R. A.; Lackner, J. R.

    1979-01-01

    An experimental study was conducted on seven normal subjects to evaluate the effectiveness of passive head movements in suppressing the optokinetically-induced illusory self-rotation. Visual simulation was provided by a servo-controlled optokinetic drum. Each subject participated in two experimental sessions. In one condition, the subject's head remained stationary while he gazed passively at a moving stripe pattern. In the other, he gazed passively and relaxed his neck muscles while his head was rotated from side to side. It appears that suppression of optokinetically-induced illusory self-rotation with passive head movements results from the operation of a spatial constancy mechanism interrelating visual, vestibular, and kinesthetic information on ongoing body orientation. The results support the view that optokinetic 'motion sickness' is related, at least in part, to an oculomotor disturbance rather than a visually triggered disturbance of specifically vestibular etiology.

  9. Comprehensive Modeling and Visualization of Cardiac Anatomy and Physiology from CT Imaging and Computer Simulations

    PubMed Central

    Sun, Peng; Zhou, Haoyin; Ha, Seongmin; Hartaigh, Bríain ó; Truong, Quynh A.; Min, James K.

    2016-01-01

    In clinical cardiology, both anatomy and physiology are needed to diagnose cardiac pathologies. CT imaging and computer simulations provide valuable and complementary data for this purpose. However, it remains challenging to gain useful information from the large amount of high-dimensional diverse data. The current tools are not adequately integrated to visualize anatomic and physiologic data from a complete yet focused perspective. We introduce a new computer-aided diagnosis framework, which allows for comprehensive modeling and visualization of cardiac anatomy and physiology from CT imaging data and computer simulations, with a primary focus on ischemic heart disease. The following visual information is presented: (1) Anatomy from CT imaging: geometric modeling and visualization of cardiac anatomy, including four heart chambers, left and right ventricular outflow tracts, and coronary arteries; (2) Function from CT imaging: motion modeling, strain calculation, and visualization of four heart chambers; (3) Physiology from CT imaging: quantification and visualization of myocardial perfusion and contextual integration with coronary artery anatomy; (4) Physiology from computer simulation: computation and visualization of hemodynamics (e.g., coronary blood velocity, pressure, shear stress, and fluid forces on the vessel wall). Feedback from cardiologists has confirmed the practical utility of integrating these features for computer-aided diagnosis of ischemic heart disease. PMID:26863663

  10. Use of Linear Perspective Scene Cues in a Simulated Height Regulation Task

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Warren, R.

    1984-01-01

    As part of a long-term effort to quantify the effects of visual scene cuing and non-visual motion cuing in flight simulators, an experimental study of the pilot's use of linear perspective cues in a simulated height-regulation task was conducted. Six test subjects performed a fixed-base tracking task with a visual display consisting of a simulated horizon and a perspective view of a straight, infinitely-long roadway of constant width. Experimental parameters were (1) the central angle formed by the roadway perspective and (2) the display gain. The subject controlled only the pitch/height axis; airspeed, bank angle, and lateral track were fixed in the simulation. The average RMS height error score for the least effective display configuration was about 25% greater than the score for the most effective configuration. Overall, larger and more highly significant effects were observed for the pitch and control scores. Model analysis was performed with the optimal control pilot model to characterize the pilot's use of visual scene cues, with the goal of obtaining a consistent set of independent model parameters to account for display effects.

  11. Towards photorealistic and immersive virtual-reality environments for simulated prosthetic vision: integrating recent breakthroughs in consumer hardware and software.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J

    2014-01-01

    Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures for task performance. Kinect motion tracking resulted in lower performance as compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling the virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity to established input methods.

  12. Evaluation of g seat augmentation of fixed-base/moving base simulation for transport landings under two visually imposed runway width conditions

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Steinmetz, G. G.

    1983-01-01

    Vertical-motion cues supplied by a g-seat to augment platform motion cues in the other five degrees of freedom were evaluated in terms of their effect on objective performance measures obtained during simulated transport landings under visual conditions. In addition to evaluating the effects of the vertical cueing, runway width and magnification effects were investigated. The g-seat was evaluated during fixed-base and moving-base operations. Although g-seat-only operation improved performance slightly over fixed-base operation, combined g-seat/platform operation showed no improvement over platform-only operation. When one runway width at one magnification factor was compared with another width at a different factor, the visual results indicated that the runway width probably had no effect on pilot-vehicle performance. The performance differences that were detected may be more readily attributed to the extant (existing throughout) increase in vertical velocity induced by the magnification factor used to change the runway width, rather than to the width itself.

  13. Pitch body orientation influences the perception of self-motion direction induced by optic flow.

    PubMed

    Bourrelly, A; Vercher, J-L; Bringoux, L

    2010-10-04

    We studied the effect of static pitch body tilts on the perception of self-motion direction induced by a visual stimulus. Subjects were seated in front of a screen on which was projected a 3D cluster of moving dots visually simulating a forward motion of the observer with upward or downward directional biases (relative to a true earth horizontal direction). The subjects were tilted at various angles relative to gravity and were asked to estimate the direction of the perceived motion (nose-up, as during take-off or nose-down, as during landing). The data showed that body orientation proportionally affected the amount of error in the reported perceived direction (by 40% of body tilt magnitude in a range of +/-20 degrees) and these errors were systematically recorded in the direction of body tilt. As a consequence, a same visual stimulus was differently interpreted depending on body orientation. While the subjects were required to perform the task in a geocentric reference frame (i.e., relative to a gravity-related direction), they were obviously influenced by egocentric references. These results suggest that the perception of self-motion is not elaborated within an exclusive reference frame (either egocentric or geocentric) but rather results from the combined influence of both. (c) 2010 Elsevier Ireland Ltd. All rights reserved.
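    The reported proportional bias, roughly 40% of body tilt within the tested range, can be written as a simple descriptive model (a sketch of the abstract's summary statistic, not the authors' fitted model; the function name is illustrative):

```python
def reported_direction(true_direction_deg, body_tilt_deg, tilt_gain=0.4):
    """Perceived self-motion direction (degrees, nose-up positive) under a
    static pitch body tilt. The report is biased toward the body tilt by
    ~40% of its magnitude, per the abstract's +/-20 degree range."""
    return true_direction_deg + tilt_gain * body_tilt_deg
```

    For example, a truly horizontal visual motion viewed at 20 degrees nose-up tilt would be reported as about 8 degrees nose-up under this model.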

  14. 6 DOF Nonlinear AUV Simulation Toolbox

    DTIC Science & Technology

    1997-01-01

    is to supply a flexible 3D-simulation platform for motion visualization, in-lab debugging and testing of mission-specific strategies as well as those...Explorer are modularly designed [Smith] in order to cut time and cost for vehicle reconfiguration. A flexible 3D-simulation platform is desired to... 3D models. Currently implemented modules include a nonlinear dynamic model for the OEX, shared memory and semaphore manager tools, shared memory monitor

  15. Visualization in mechanics: the dynamics of an unbalanced roller

    NASA Astrophysics Data System (ADS)

    Cumber, Peter S.

    2017-04-01

    It is well known that mechanical engineering students often find mechanics a difficult area to grasp. This article describes a system of equations describing the motion of a balanced and an unbalanced roller constrained by a pivot arm. A wide range of dynamics can be simulated with the model. The equations of motion are embedded in a graphical user interface for its numerical solution in MATLAB. This allows a student's focus to be on the influence of different parameters on the system dynamics. The simulation tool can be used as a dynamics demonstrator in a lecture or as an educational tool driven by the imagination of the student. By way of demonstration the simulation tool has been applied to a range of roller-pivot arm configurations. In addition, approximations to the equations of motion are explored and a second-order model is shown to be accurate for a limited range of parameters.

  16. Influence of Visual Motion, Suggestion, and Illusory Motion on Self-Motion Perception in the Horizontal Plane.

    PubMed

    Rosenblatt, Steven David; Crane, Benjamin Thomas

    2015-01-01

    A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
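    The shift measure described above, the inertial stimulus level at which both response directions are equally likely (the point of subjective equality), can be estimated from response proportions, for instance by linear interpolation between the bracketing points. A minimal sketch, not the authors' analysis code:

```python
def pse(stimulus_levels, prop_rightward):
    """Estimate the stimulus level (e.g. cm/s) at which 'rightward' responses
    reach 50%, by linear interpolation between the two bracketing points.
    Assumes levels are sorted and proportions pass through 0.5 monotonically."""
    pairs = list(zip(stimulus_levels, prop_rightward))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("0.5 is not bracketed by the data")
```

    A visual stimulus that shifts this 50% point away from zero, as the moving star fields did, indicates a bias in self-motion perception; studies of this kind typically fit a full psychometric function rather than interpolating, which this sketch simplifies.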

  17. Fixed base simulator study of an externally blown flap STOL transport airplane during approach and landing

    NASA Technical Reports Server (NTRS)

    Grantham, W. D.; Nguyen, L. T.; Patton, J. M., Jr.; Deal, P. L.; Champine, R. A.; Carter, C. R.

    1972-01-01

    A fixed-base simulator study was conducted to determine the flight characteristics of a representative STOL transport having a high wing and equipped with an external-flow jet flap in combination with four high-bypass-ratio fan-jet engines during the approach and landing. Real-time digital simulation techniques were used. The computer was programed with equations of motion for six degrees of freedom and the aerodynamic inputs were based on measured wind-tunnel data. A visual display of a STOL airport was provided for simulation of the flare and touchdown characteristics. The primary piloting task was an instrument approach to a breakout at a 200-ft ceiling with a visual landing.

  18. Virtual- and real-world operation of mobile robotic manipulators: integrated simulation, visualization, and control environment

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.

    1992-03-01

    This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  19. Self-motion perception in autism is compromised by visual noise but integrated optimally across multiple senses

    PubMed Central

    Zaidel, Adam; Goin-Kochel, Robin P.; Angelaki, Dora E.

    2015-01-01

    Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD, more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual–vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information. PMID:25941373

  20. An evaluation of data-driven motion estimation in comparison to the usage of external-surrogates in cardiac SPECT imaging

    PubMed Central

    Mukherjee, Joyeeta Mitra; Hutton, Brian F; Johnson, Karen L; Pretorius, P Hendrik; King, Michael A

    2014-01-01

    Motion estimation methods in single photon emission computed tomography (SPECT) can be classified into methods which depend on just the emission data (data-driven), or those that use some other source of information such as an external surrogate. The surrogate-based methods estimate the motion exhibited externally which may not correlate exactly with the movement of organs inside the body. The accuracy of data-driven strategies on the other hand is affected by the type and timing of motion occurrence during acquisition, the source distribution, and various degrading factors such as attenuation, scatter, and system spatial resolution. The goal of this paper is to investigate the performance of two data-driven motion estimation schemes based on the rigid-body registration of projections of motion-transformed source distributions to the acquired projection data for cardiac SPECT studies. Comparison is also made of six intensity based registration metrics to an external surrogate-based method. In the data-driven schemes, a partially reconstructed heart is used as the initial source distribution. The partially-reconstructed heart has inaccuracies due to limited angle artifacts resulting from using only a part of the SPECT projections acquired while the patient maintained the same pose. The performance of different cost functions in quantifying consistency with the SPECT projection data in the data-driven schemes was compared for clinically realistic patient motion occurring as discrete pose changes, one or two times during acquisition. The six intensity-based metrics studied were mean-squared difference (MSD), mutual information (MI), normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation (NCC) and entropy of the difference (EDI). 
Quantitative and qualitative analysis of the performance is reported using Monte-Carlo simulations of a realistic heart phantom including degradation factors such as attenuation, scatter and system spatial resolution. Further, the visual appearance of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in patient studies. Pattern intensity and normalized mutual information cost functions were observed to have the best performance in terms of lowest average position error and stability with degradation of image quality of the partial reconstruction in simulations. In all patients, the visual quality of PI-based estimation was either significantly better than or comparable to NMI-based estimation. Best visual quality was obtained with PI-based estimation in 1 of the 5 patient studies, and with external-surrogate based correction in 3 out of 5 patients. In the remaining patient study there was little motion and all methods yielded similar visual image quality. PMID:24107647
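    Two of the six intensity-based metrics compared above have simple closed forms; a pure-Python sketch of mean-squared difference (MSD) and normalized cross-correlation (NCC) on flattened intensity lists (illustrative only, not the study's implementation):

```python
import math

def msd(a, b):
    """Mean-squared difference between two equal-length intensity lists;
    0 for identical images, larger for worse alignment."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def ncc(a, b):
    """Normalized cross-correlation: +1 for images identical up to a
    positive affine intensity change, -1 for inverted contrast."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den
```

    In registration, a candidate motion transform is scored by evaluating such a metric between the acquired projections and projections of the transformed source distribution; MI, NMI, PI, and EDI play the same role with different robustness trade-offs.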

  1. The role of human ventral visual cortex in motion perception

    PubMed Central

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  2. Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.

    PubMed

    Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz

    2015-01-01

    This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
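    The haptic-side mapping described above, translating the device position into the space of the reference CT using the displacement field, can be sketched as follows. This toy version simply subtracts the local displacement and ignores that an exact mapping requires inverting the warp; all names are hypothetical:

```python
def to_reference_space(device_pos, displacement_at):
    """Map a haptic device position in the animated (warped) patient model
    back into reference-CT coordinates by subtracting the local displacement.
    `displacement_at` returns the displacement vector at a world position,
    e.g. sampled from the breathing model at the current respiratory phase."""
    d = displacement_at(device_pos)
    return tuple(p - di for p, di in zip(device_pos, d))
```

    Because the displacement field is re-evaluated every haptic frame (the article reports ~2,000 Hz update rates), the lookup must be cheap, which is why the breathing model is precomputed and only sampled at run-time.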

  3. Efficient spiking neural network model of pattern motion selectivity in visual cortex.

    PubMed

    Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L

    2014-07-01

    Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.

  4. Treb-Bot: Development and Use of a Trebuchet Simulator

    NASA Astrophysics Data System (ADS)

    Constans, Eric; Constans, Aileen

    2015-09-01

    The trebuchet has quickly become a favorite project for physics and engineering teachers seeking to provide students with a simple, but spectacular, hands-on design project that can be applied to the study of projectile motion, rotational motion, and the law of conservation of energy. While there have been free trebuchet simulators and range calculators available online for several years, these have been limited to simple designs. Other simulators are available for a fee, precluding practical use in introductory courses. With this in mind, one of the authors developed a free web-based trebuchet simulation that can be found at http://www.benchtophybrid.com/TB_index.html. This simulation, named Treb-Bot, is designed to be visually appealing to high school students and includes simulations of trebuchet designs that are unavailable elsewhere on the web. The website was successfully field-tested by a group of Advanced Placement Physics 1 students.

  5. Ground motion simulations in Marmara (Turkey) region from 3D finite difference method

    NASA Astrophysics Data System (ADS)

    Aochi, Hideo; Ulrich, Thomas; Douglas, John

    2016-04-01

    In the framework of the European project MARSite (2012-2016), one of the main contributions from our research team was to provide ground-motion simulations for the Marmara region for various earthquake source scenarios. We adopted a 3D finite difference code, taking into account the 3D structure around the Sea of Marmara (including the bathymetry) and the sea layer. We simulated two moderate earthquakes (about Mw4.5) and found that the 3D structure significantly improves the simulated waveforms compared to a 1D layered model. Simulations were carried out for different earthquakes (moderate point sources and large finite sources) in order to provide shake maps (Aochi and Ulrich, BSSA, 2015), to study the variability of ground-motion parameters (Douglas & Aochi, BSSA, 2016), and to provide synthetic seismograms for blind inversion tests (Diao et al., GJI, 2016). The results are also planned to be integrated into broadband ground-motion simulations, tsunami generation, and simulations of triggered landslides (in progress by different partners). The simulations are freely shared among the partners via the internet, and visualizations of the results are published on the project's homepage. All these simulations should be seen as a reference for this region, as they are based on the latest knowledge obtained during the MARSite project, although refining and validating the model parameters and simulations remains an ongoing research task that relies on continuing observations. The numerical code used, the models, and the simulations are available on demand.

  6. Simulation and animation of sensor-driven robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, C.; Trivedi, M.M.; Bidlack, C.R.

    1994-10-01

    Most simulation and animation systems utilized in robotics are concerned with simulation of the robot and its environment without simulation of sensors. These systems have difficulty handling robots that utilize sensory feedback in their operation. In this paper, a new design of an environment for simulation, animation, and visualization of sensor-driven robots is presented. As sensor technology advances, increasing numbers of robots are equipped with various types of sophisticated sensors. The main goal of creating the visualization environment is to aid the automatic robot programming and off-line programming capabilities of sensor-driven robots. The software system helps users visualize the motion and reaction of the sensor-driven robot under their control program. Therefore, the efficiency of software development is increased, the reliability of the software and the operational safety of the robot are ensured, and the cost of new software development is reduced. Conventional computer-graphics-based robot simulation and animation software packages lack capabilities for robot sensing simulation. This paper describes a system designed to overcome this deficiency.

  7. In-vehicle group activity modeling and simulation in sensor-based virtual environment

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir; Telagamsetti, Durga; Poshtyar, Azin; Chan, Alex; Hu, Shuowen

    2016-05-01

    Human group activity recognition is a very complex and challenging task, especially for Partially Observable Group Activities (POGA) that occur in confined spaces with limited visual observability and often under severe occlusion. In this paper, we present the IRIS Virtual Environment Simulation Model (VESM) for the modeling and simulation of dynamic POGA. More specifically, we address sensor-based modeling and simulation of a specific category of POGA, called In-Vehicle Group Activities (IVGA). In VESM, human-like animated characters, called humanoids, are employed to simulate complex in-vehicle group activities within the confined space of a modeled vehicle. Each articulated humanoid is kinematically modeled with physical attributes and appearances comparable to its human counterpart. Each humanoid exhibits harmonious full-body motion, simulating human-like gestures and postures, facial expressions, and hand motions for coordinated dexterity. VESM facilitates the creation of interactive scenarios consisting of multiple humanoids with different personalities and intentions, which are capable of performing complicated human activities within the confined space inside a typical vehicle. In this paper, we demonstrate the efficiency and effectiveness of VESM in terms of its capability to seamlessly generate time-synchronized, multi-source, and correlated imagery datasets of IVGA, which are useful for the training and testing of multi-source full-motion video processing and annotation. Furthermore, we demonstrate full-motion video processing of such simulated scenarios under different operational contextual constraints.

  8. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented-reality applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness, and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing manipulative factors such as image scaling and frame skipping.

  9. Local motion adaptation enhances the representation of spatial structure at EMD arrays

    PubMed Central

    Lindemann, Jens P.; Egelhaaf, Martin

    2017-01-01

    Neuronal representation and extraction of spatial information are essential for behavioral control. For flying insects, a plausible way to gain spatial information is to exploit the distance-dependent optic flow that is generated during translational self-motion. Optic flow is computed by arrays of local motion detectors retinotopically arranged in the second neuropil layer of the insect visual system. These motion detectors have adaptive response characteristics, i.e., their responses to motion with a constant or only slowly changing velocity decrease, while their sensitivity to rapid velocity changes is maintained or even increases. We analyzed, by a modeling approach, how motion adaptation affects signal representation at the output of arrays of motion detectors during simulated flight in artificial and natural 3D environments. We focused on translational flight, because spatial information is only contained in the optic flow induced by translational locomotion. Indeed, flies, bees and other insects segregate their flight into relatively long intersaccadic translational flight sections interspersed with brief and rapid saccadic turns, presumably to maximize periods of translation (80% of the flight). With a novel adaptive model of the insect visual motion pathway we showed that the motion detector responses to background structures of cluttered environments are largely attenuated as a consequence of motion adaptation, while responses to foreground objects stay constant or even increase. This conclusion holds even under the dynamic flight conditions of insects. PMID:29281631
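
    The local motion detectors referred to above are commonly modeled as correlation-type elementary motion detectors (Reichardt EMDs). A minimal, non-adaptive sketch is shown below; the paper's adaptive variant additionally adjusts detector sensitivity over time, and the filter parameters here are purely illustrative.

```python
import numpy as np

def emd_array(signal, tau=2.0, dt=1.0):
    """Correlation-type elementary motion detector (Reichardt EMD)
    array on a 1D image sequence signal[t, x]: each input is low-pass
    filtered (delayed) and correlated with its un-delayed neighbor;
    the opponent subtraction yields a signed local motion estimate."""
    alpha = dt / (tau + dt)                  # first-order low-pass gain
    lp = np.zeros_like(signal, dtype=float)
    for t in range(1, signal.shape[0]):
        lp[t] = lp[t - 1] + alpha * (signal[t] - lp[t - 1])
    # mirror-symmetric half-detectors, then the opponent stage
    right = lp[:, :-1] * signal[:, 1:]       # tuned to rightward motion
    left = signal[:, :-1] * lp[:, 1:]        # tuned to leftward motion
    return right - left
```

    Fed a rightward-drifting grating, the array's mean output is positive; reversing the motion flips the sign, which is the opponency the detector relies on.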

  10. Snow rendering for interactive snowplow simulation : supporting safety in snowplow design.

    DOT National Transportation Integrated Search

    2011-02-01

    During a snowfall, following a snowplow can be extremely dangerous. This danger comes from the human visual system's inability to accurately perceive the speed and motion of the snowplow, often resulting in rear-end collisions. For this project...

  11. Instructor and student pilots' subjective evaluation of a general aviation simulator with a terrain visual system

    NASA Technical Reports Server (NTRS)

    Kiteley, G. W.; Harris, R. L., Sr.

    1978-01-01

    Ten student pilots were given a 1-hour training session in the NASA Langley Research Center's General Aviation Simulator by a certified flight instructor, and a follow-up flight evaluation was performed by each student's own flight instructor, who had also flown the simulator. The students and instructors generally felt that the simulator session had a positive effect on the students. They recommended that a simulator with a visual scene and a motion base would be useful for such maneuvers as landing approaches, level flight, climbs, dives, turns, instrument work, and radio navigation, and that the simulator would be an efficient means of introducing the student to new maneuvers before doing them in flight. The students and instructors estimated that about 8 hours of simulator time could be profitably devoted to private pilot training.

  12. Characteristics of Reduction Gear in Electric Agricultural Vehicle

    NASA Astrophysics Data System (ADS)

    Choi, W. S.; Pratama, P. S.; Supeno, D.; Jeong, S. W.; Byun, J. Y.; Woo, J. H.; Lee, E. S.; Park, C. S.

    2018-03-01

    In an electric agricultural machine, a reduction gear is needed to convert the high-speed rotation generated by the DC motor to the lower-speed rotation used by the vehicle. The reduction gear consists of several spur gears. Spur gears are the most easily visualized gears that transmit motion between two parallel shafts, and they are easy to produce. Modelling and simulation of the spur gears in the DC motor reduction gear is important for predicting the actual motion behaviour. A pair of spur gear teeth in action is generally subjected to two types of cyclic stress: contact stress and bending stress. These stresses may not attain their maximum values at the same point of contact. The resulting failure modes can be minimized by analyzing the problem during the design stage and creating a proper tooth surface profile with proper manufacturing methods. To improve life expectancy, in this study modal and stress analyses of the reduction gear are simulated using ANSYS Workbench, based on the finite element method (FEM). The modal analysis was done to understand the deformation behaviour of the reduction gear when vibration occurs. An FEM static stress analysis is also performed on the reduction gear to simulate the bending stress and contact stress behaviour of the gear teeth.
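
    Two quantities behind this analysis, the overall reduction of a spur-gear train and the bending stress at a tooth root, have simple closed forms that are useful for sanity-checking an FEM result. The sketch below uses the metric Lewis bending formula; all values and names are illustrative, and the paper's actual ANSYS Workbench analysis resolves full contact and modal behavior rather than these hand formulas.

```python
def reduction_ratio(teeth_pairs):
    """Overall reduction of a gear train, given (driver, driven) tooth
    counts for each meshing spur-gear pair; ratios multiply stage by
    stage."""
    ratio = 1.0
    for driver, driven in teeth_pairs:
        ratio *= driven / driver
    return ratio

def lewis_bending_stress(tangential_load, face_width, module, form_factor):
    """Lewis formula for bending stress at a spur-gear tooth root
    (metric form): sigma = W_t / (b * m * Y), with W_t in N, face
    width b and module m in mm, and Y the dimensionless Lewis form
    factor, giving sigma in N/mm^2 (MPa)."""
    return tangential_load / (face_width * module * form_factor)
```

    For example, a two-stage train with 15:45 and 20:60 meshes gives a 9:1 reduction, so a motor spinning at 3000 rpm would drive the output shaft at about 333 rpm.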

  13. A Review of Motion Sickness with Special Reference to Simulator Sickness

    DTIC Science & Technology

    1985-04-15

    Harris & Graybiel, 1964). Notable exceptions are the loss in visual acuity and tracking problems associated with vestibular nystagmus when the visual... Dilated pupils during emesis. Small pupils. Nystagmus. Adapted from Nicogossian & Parker, 1982. NAVTRAEQUIPCEN 81-C-0105-16 probably will... E., Crampton, W. E., & Posner, J. B. Effects of mental activity on vestibular nystagmus and the electroencephalogram. Nature, 1961, 190, 194-195

  14. Rocinante, a virtual collaborative visualizer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, M.J.; Ice, L.G.

    1996-12-31

    With the goal of improving the ability of people around the world to share the development and use of intelligent systems, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing new Virtual Collaborative Engineering (VCE) and Virtual Collaborative Control (VCC) technologies. A key area of VCE and VCC research is shared visualization of virtual environments. This paper describes a Virtual Collaborative Visualizer (VCV), named Rocinante, that Sandia developed for VCE and VCC applications. Rocinante allows multiple participants to simultaneously view dynamic geometrically-defined environments. Each viewer can exclude extraneous detail or include additional information in the scene as desired. Shared information can be saved and later replayed in a stand-alone mode. Rocinante automatically scales visualization requirements with computer system capabilities. Models with 30,000 polygons and 4 Megabytes of texture display at 12 to 15 frames per second (fps) on an SGI Onyx and at 3 to 8 fps (without texture) on Indigo 2 Extreme computers. In its networked mode, Rocinante synchronizes its local geometric model with remote simulators and sensory systems by monitoring data transmitted through UDP packets. Rocinante's scalability and performance make it an ideal VCC tool. Users throughout the country can monitor robot motions and the thinking behind their motion planners and simulators.

  15. Lie group model neuromorphic geometric engine for real-time terrain reconstruction from stereoscopic aerial photos

    NASA Astrophysics Data System (ADS)

    Tsao, Thomas R.; Tsao, Doris

    1997-04-01

    In the 1980s, neurobiologists suggested a simple mechanism in primate visual cortex for maintaining a stable and invariant representation of a moving object: the receptive fields of visual neurons transform in real time in response to motion. When the visual stimulus changes due to motion, the geometric transform of the stimulus triggers a dual transform of the receptive field, which compensates for the geometric variation in the stimulus. This process can be modelled using a Lie group method. A massive array of affine-parameter sensing circuits functions as a smart sensor tightly coupled to the passive imaging sensor (retina). The neural geometric engine is a neuromorphic computing device simulating our Lie group model of spatial perception in the primate primary visual cortex. We have developed a computer simulation, experimented on realistic and synthetic image data, and performed preliminary research on using analog VLSI technology to implement the neural geometric engine. We have benchmark-tested the engine on DMA terrain data against their results and have built an analog integrated circuit to verify the computational structure of the engine. When fully implemented on an analog VLSI chip, it will be able to accurately reconstruct a 3D terrain surface in real time from stereoscopic imagery.

  16. Effects of Motion Cues on the Training of Multi-Axis Manual Control Skills

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Mobertz, Xander R. I.

    2017-01-01

    The study described in this paper investigated the effects of two different hexapod motion configurations on the training and transfer of training of a simultaneous roll and pitch control task. Pilots were divided between two groups that trained under either a baseline hexapod motion condition, with motion typically provided by current training simulators, or an optimized hexapod motion condition, with increased fidelity of the motion cues most relevant for the task. All pilots transferred to the same full-motion condition, representing motion experienced in flight. A cybernetic approach was used that gave insights into the development of pilots' use of visual and motion cues over the course of training and after transfer. Based on the current results, neither of the hexapod motion conditions can unambiguously be chosen as providing the best motion for training and transfer of training of the multi-axis control task used. However, the optimized hexapod motion condition did allow pilots to generate less visual lead, control with higher gains, and achieve better disturbance-rejection performance at the end of the training session compared to the baseline hexapod motion condition. Significant adaptations in control behavior still occurred in the transfer phase under the full-motion condition for both groups. Pilots behaved less linearly compared to previous single-axis control-task experiments; however, this did not result in smaller motion or learning effects. Motion and learning effects were more pronounced in pitch than in roll. Finally, valuable lessons were learned that allow us to improve the adopted approach for future transfer-of-training studies.

  17. Oculo-vestibular recoupling using galvanic vestibular stimulation to mitigate simulator sickness.

    PubMed

    Cevette, Michael J; Stepanek, Jan; Cocco, Daniela; Galea, Anna M; Pradhan, Gaurav N; Wagner, Linsey S; Oakley, Sarah R; Smith, Benn E; Zapala, David A; Brookler, Kenneth H

    2012-06-01

    Despite improvement in the computational capabilities of visual displays in flight simulators, intersensory visual-vestibular conflict remains the leading cause of simulator sickness (SS). By using galvanic vestibular stimulation (GVS), the vestibular system can be synchronized with a moving visual field in order to lessen the mismatch of sensory inputs thought to result in SS. A multisite electrode array was used to deliver combinations of GVS in 21 normal subjects. Optimal electrode combinations were identified and used to establish GVS dose-response predictions for the perception of roll, pitch, and yaw. Based on these data, an algorithm was then implemented in flight simulator hardware in order to synchronize visual and GVS-induced vestibular sensations (oculo-vestibular-recoupled or OVR simulation). Subjects were then randomly exposed to flight simulation either with or without OVR simulation. A self-report SS checklist was administered to all subjects after each session. An overall SS score was calculated for each category of symptoms for both groups. The analysis of GVS stimulation data yielded six unique combinations of electrode positions inducing motion perceptions in the three rotational axes. This provided the algorithm used for OVR simulation. The overall SS scores for gastrointestinal, central, and peripheral categories were 17%, 22.4%, and 20% for the Control group and 6.3%, 20%, and 8% for the OVR group, respectively. When virtual head signals produced by GVS are synchronized to the speed and direction of a moving visual field, manifestations of induced SS in a cockpit flight simulator are significantly reduced.

  18. Software Tools for Developing and Simulating the NASA LaRC CMF Motion Base

    NASA Technical Reports Server (NTRS)

    Bryant, Richard B., Jr.; Carrelli, David J.

    2006-01-01

    The NASA Langley Research Center (LaRC) Cockpit Motion Facility (CMF) motion base has provided many design and analysis challenges. In the process of addressing these challenges, a comprehensive suite of software tools was developed. The software tools development began with a detailed MATLAB/Simulink model of the motion base, which was used primarily for safety loads prediction, design of the closed-loop compensator, and development of the motion base safety systems. A Simulink model of the digital control law, from which a portion of the embedded code is directly generated, was later added to this model to form a closed-loop system model. Concurrently, software that runs on a PC was created to display and record motion base parameters. It includes a user interface for controlling time-history displays, strip-chart displays, data storage, and initialization of function generators used during motion base testing. Finally, a software tool was developed for kinematic analysis and prediction of mechanical clearances for the motion system. These tools work together in an integrated package to support normal operations of the motion base and to simulate the end-to-end operation of the motion base system, providing facilities for software-in-the-loop testing, mechanical geometry and sensor data visualization, and function generator setup and evaluation.

  19. Simulation and evaluation of the Sh-2F helicopter in a shipboard environment using the interchangeable cab system

    NASA Technical Reports Server (NTRS)

    Paulk, C. H., Jr.; Astill, D. L.; Donley, S. T.

    1983-01-01

    The operation of the SH-2F helicopter from the decks of small ships in adverse weather was simulated using a large amplitude vertical motion simulator, a wide angle computer generated imagery visual system, and an interchangeable cab (ICAB). The simulation facility, the mathematical programs, and the validation method used to ensure simulation fidelity are described. The results show the simulator to be a useful tool in simulating the ship-landing problem. Characteristics of the ICAB system and ways in which the simulation can be improved are presented.

  20. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    PubMed

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to efficiently simulate intensive and finely detailed fluids such as smoke with rapidly increasing numbers of vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports a trade-off between simulation speed and scale of detail. After the full velocity field is computed, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of the proposed scheme for visually plausible smoke simulation with macroscopic vortex structures.
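
    The velocity that vortex filaments induce on surrounding smoke particles in schemes like this is typically evaluated with a regularized Biot-Savart law; the grid in the authors' scheme exists to accelerate exactly this kind of evaluation. Below is a direct, unaccelerated per-segment sketch, with an illustrative core-smoothing parameter; it is not the paper's implementation.

```python
import numpy as np

def filament_velocity(point, filament, circulation=1.0, eps=0.1):
    """Velocity induced at `point` by a closed vortex filament (a
    polyline of 3D vertices) via a regularized Biot-Savart sum over
    segments: v = Gamma/(4*pi) * sum(dl x r / (|r|^2 + eps^2)^(3/2)),
    where eps softens the singularity near the vortex core."""
    v = np.zeros(3)
    n = len(filament)
    for i in range(n):
        a = filament[i]
        b = filament[(i + 1) % n]        # wrap around: closed loop
        mid = 0.5 * (a + b)
        dl = b - a                       # segment direction element
        r = point - mid                  # from segment to query point
        dist2 = r @ r + eps ** 2
        v += np.cross(dl, r) / dist2 ** 1.5
    return circulation / (4.0 * np.pi) * v
```

    As a sanity check, a unit-circulation ring of radius R induces a velocity of magnitude Gamma/(2R) along its axis at the center, which the discretized sum reproduces closely.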

  1. Mechanisms for Rapid Adaptive Control of Motion Processing in Macaque Visual Cortex.

    PubMed

    McLelland, Douglas; Baker, Pamela M; Ahmed, Bashir; Kohn, Adam; Bair, Wyeth

    2015-07-15

    A key feature of neural networks is their ability to rapidly adjust their function, including signal gain and temporal dynamics, in response to changes in sensory inputs. These adjustments are thought to be important for optimizing the sensitivity of the system, yet their mechanisms remain poorly understood. We studied adaptive changes in temporal integration in direction-selective cells in macaque primary visual cortex, where specific hypotheses have been proposed to account for rapid adaptation. By independently stimulating direction-specific channels, we found that the control of temporal integration of motion at one direction was independent of motion signals driven at the orthogonal direction. We also found that individual neurons can simultaneously support two different profiles of temporal integration for motion in orthogonal directions. These findings rule out a broad range of adaptive mechanisms as being key to the control of temporal integration, including untuned normalization and nonlinearities of spike generation and somatic adaptation in the recorded direction-selective cells. Such mechanisms are too broadly tuned, or occur too far downstream, to explain the channel-specific and multiplexed temporal integration that we observe in single neurons. Instead, we are compelled to conclude that parallel processing pathways are involved, and we demonstrate one such circuit using a computer model. This solution allows processing in different direction/orientation channels to be separately optimized and is sensible given that, under typical motion conditions (e.g., translation or looming), speed on the retina is a function of the orientation of image components. Many neurons in visual cortex are understood in terms of their spatial and temporal receptive fields. It is now known that the spatiotemporal integration underlying visual responses is not fixed but depends on the visual input. 
For example, neurons that respond selectively to motion direction integrate signals over a shorter time window when visual motion is fast and a longer window when motion is slow. We investigated the mechanisms underlying this useful adaptation by recording from neurons as they responded to stimuli moving in two different directions at different speeds. Computer simulations of our results enabled us to rule out several candidate theories in favor of a model that integrates across multiple parallel channels that operate at different time scales.

  2. Dissection of Drosophila Visual Circuits Implicative in Figure Motion

    NASA Astrophysics Data System (ADS)

    Kelley, Ross G.

    The Drosophila visual system offers a model for studying how motion signals are computed from raw visual input and transformed into behavioral output. My studies focus on how specific cells in the Drosophila nervous system implement this input-output transformation. The individual cell types are known from classical studies using Golgi impregnations, but the assembly of motion-processing circuits and the behavioral outputs remain poorly understood. Using an electronic flight simulator for flies and a white-noise analysis developed by Aptekar et al., I screen specific neurons in the optic lobes for behavioral ramifications. This approach produces wing responses to both the spatial and temporal dynamics of motion signals. The results of these experiments give Spatiotemporal Action Fields (STAFs) across the entire visual panorama. Genetically inactivating a distinct group of cells in the third optic ganglion, the lobula plate, namely the Horizontal System (HS) cell group, produced a robust phenotype in the STAF analysis. Using the Gal4-UAS transgene expression system, we selectively inactivated the HS cells by expressing inward-rectifying potassium channels (Kir2.1) in their membranes to hyperpolarize these cells, preventing their role in synaptic signaling. The results show that mutants lose steering responses to several distinct categories of figure motion and show reduced behavioral responses to figure motion set against a contrasting moving background, highlighting the role of HS cells in figure-tracking behavior. Finally, a synapse-inactivating protein, tetanus toxin (TNT), expressed in the HS cell group produces a different behavioral phenotype than overexpressing the inward rectifier. TNT, a bacterial neurotoxin, cleaves SNARE proteins, resulting in loss of synaptic output of the cell, but the dendrites remain intact and signal normally, preserving dendro-dendritic interactions known to sculpt the visual receptive fields of these cells.
The two distinct phenotypes produced by the two genetically targeted silencers differentiate the functional roles of dendritic integration versus axonal output in this important cell group.

  3. Runway Texture and Grid Pattern Effects on Rate-of-Descent Perception

    NASA Technical Reports Server (NTRS)

    Schroeder, J. A.; Dearing, M. G.; Sweet, B. T.; Kaiser, M. K.; Rutkowski, Mike (Technical Monitor)

    2001-01-01

    To date, perceptual errors occur in determining descent rate from a computer-generated image in flight simulation. Pilots tend to touch down twice as hard in simulation as in flight, and more training time is needed in simulation before reaching steady-state performance. Barnes suggested that recognition of range may be the culprit, citing that problems such as collimated objects, binocular vision, and poor resolution lead to poor estimation of the velocity vector. Brown's study essentially ruled out the lack of binocular vision as the problem. Dorfel added specificity by showing that pilots underestimated range in simulated scenes by 50% when 800 ft from the runway threshold. Palmer and Petitt showed that pilots are able to distinguish between a 1.7 ft/sec and a 2.9 ft/sec sink rate when passively observing sink rates in a night scene. Platform motion also plays a role, as previous research has shown that the addition of substantial platform motion improves pilot estimates of vertical velocity and results in simulated touchdown rates more closely resembling flight. This experiment examined how specific variations in visual scene properties affect a pilot's perception of sink rate. It extended another experiment that focused on the visual and motion cues necessary for helicopter autorotations. In that experiment, pilots performed steep approaches to a runway. The visual content of the runway and its surroundings varied in two ways: texture and rectangular grid spacing. Four textures, including a no-texture case, were evaluated, along with three grid spacings, including a no-grid case. The results showed that pilots better controlled their vertical descent rates when good texture cues were present; no significant differences were found for the grid manipulation. Using those visual scenes, a simple psychophysics experiment was performed.
The purpose was to determine whether the variations in the visual scenes allowed pilots to better perceive vertical velocity. Pilots passively viewed a particular visual scene in which the vehicle was descending at two different rates and had to select which of the two rates they thought was faster. The difference between the two rates was changed using a staircase method, depending on whether or not the pilot was correct, until a minimum threshold between the two descent rates was reached. This process was repeated for all of the visual scenes to decide whether the visual scenes allowed pilots to perceive vertical velocity better. All of the data have yet to be analyzed; however, neither the grid nor the texture effects revealed any statistically significant trends. On further examination of the staircase method employed, the lack of an evident trend may be due to the exit criterion used during the study. As such, the experiment will be repeated with an improved exit criterion in February. Results of this study will be presented in the submitted paper.
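
    The adaptive staircase described above shrinks the difference between the two descent rates after correct answers and grows it after errors, homing in on a discrimination threshold. A minimal two-down/one-up variant is sketched below; this is one common transformed staircase, not necessarily the rule or exit criterion the study used, so every detail here is illustrative.

```python
def run_staircase(respond, start_delta, step, n_reversals=8, floor=0.0):
    """Two-down/one-up transformed staircase: the stimulus difference
    `delta` shrinks after two consecutive correct answers and grows
    after an error, converging near the 70.7%-correct threshold.
    `respond(delta)` returns True when the observer correctly picks
    the faster descent rate. Threshold = mean of reversal points."""
    delta = start_delta
    correct_streak = 0
    direction = -1               # -1: making the task harder
    reversals = []
    while len(reversals) < n_reversals:
        if respond(delta):
            correct_streak += 1
            if correct_streak == 2:          # two correct -> harder
                correct_streak = 0
                if direction == +1:          # trend flipped: reversal
                    reversals.append(delta)
                direction = -1
                delta = max(floor, delta - step)
        else:                                # error -> easier
            correct_streak = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta += step
    return sum(reversals) / len(reversals)
```

    With a deterministic observer who is correct whenever the difference exceeds some true threshold, the estimate oscillates around that threshold and the reversal average lands between the two bracketing step values.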

  4. Methodology for functional MRI of simulated driving.

    PubMed

    Kan, Karen; Schweizer, Tom A; Tam, Fred; Graham, Simon J

    2013-01-01

    The developed world faces major socioeconomic and medical challenges associated with motor vehicle accidents caused by risky driving. Functional magnetic resonance imaging (fMRI) of individuals using virtual reality driving simulators may provide an important research tool to assess driving safety, based on brain activity and behavior. An fMRI-compatible driving simulator was developed and evaluated in the context of straight driving, turning, and stopping in 16 young healthy adults. Robust maps of brain activity were obtained, including activation of the primary motor cortex, cerebellum, visual cortex, and parietal lobe, with limited head motion (<1.5 mm deviation from mean head position in the superior/inferior direction in all subjects) and only minor correlations between head motion, steering, or braking behavior. These results are consistent with previous literature and suggest that, with care, fMRI of simulated driving is a feasible undertaking.

  5. Audiovisual associations alter the perception of low-level visual motion

    PubMed Central

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

    Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random-dot motions that isolate low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system, and that early-level visual motion processing plays some role. PMID:25873869

  6. Virtual Reality simulator for dental anesthesia training in the inferior alveolar nerve block.

    PubMed

    Corrêa, Cléber Gimenez; Machado, Maria Aparecida de Andrade Moreira; Ranzini, Edith; Tori, Romero; Nunes, Fátima de Lourdes Santos

    2017-01-01

    This study shows the development and validation of a dental anesthesia-training simulator, specifically for the inferior alveolar nerve block (IANB). The system provides the tactile sensation of inserting a real needle in a human patient, using Virtual Reality (VR) techniques and a haptic device that provides force feedback during the needle insertion task of the anesthesia procedure. To simulate a realistic anesthesia procedure, a Carpule syringe was coupled to the haptic device. The Volere method was used to elicit requirements from users in the Dentistry area; repeated-measures two-way ANOVA (Analysis of Variance), Tukey post-hoc tests, and averages were used for the results' analysis. A questionnaire-based subjective evaluation method was applied to collect information about the simulator, and 26 people participated in the experiments (12 beginners, 12 at intermediate level, and 2 experts). The questionnaire covered user profile, preferences (number of viewpoints, texture of the objects, and haptic device handle), as well as visual aspects (appearance, scale, and position of objects) and haptic aspects (motion space, tactile sensation, and motion reproduction). The visual aspect was considered appropriate, while the haptic feedback must be improved, which users can do by calibrating the virtual tissues' resistance. The evaluation of visual aspects was influenced by the participants' experience, according to the ANOVA test (F=15.6, p=0.0002 < 0.01). The users preferred the simulator with two viewpoints, objects with image-based textures, and the device with a syringe coupled to it. The simulation was considered thoroughly satisfactory for anesthesia training of the needle insertion task, which includes the correct insertion point and depth, as well as the perception of tissue resistance during insertion.

  7. Neural mechanisms underlying sound-induced visual motion perception: An fMRI study.

    PubMed

    Hidaka, Souta; Higuchi, Satomi; Teramoto, Wataru; Sugita, Yoichi

    2017-07-01

    Studies of crossmodal interactions in motion perception have reported activation in several brain areas, including those related to motion processing and/or sensory association, in response to multimodal (e.g., visual and auditory) stimuli that were both in motion. Recent studies have demonstrated that sounds can trigger illusory visual apparent motion in static visual stimuli (sound-induced visual motion: SIVM): a visual stimulus blinking at a fixed location is perceived to be moving laterally when an alternating left-right sound is also present. Here, we investigated brain activity related to the perception of SIVM using a 7T functional magnetic resonance imaging technique. Specifically, we focused on the patterns of neural activity in SIVM and visually induced visual apparent motion (VIVM). We observed shared activations in the middle occipital area (V5/hMT), which is thought to be involved in visual motion processing, for SIVM and VIVM. Moreover, as compared to VIVM, SIVM resulted in greater activation in the superior temporal area and dominant functional connectivity between the V5/hMT area and areas related to auditory and crossmodal motion processing. These findings indicate that similar but partially different neural mechanisms could be involved in auditory-induced and visually-induced motion perception, and that neural signals in auditory, visual, and crossmodal motion processing areas closely and directly interact in the perception of SIVM. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. The Influence of Tactual Seat-motion Cues on Training and Performance in a Roll-axis Compensatory Tracking Task Setting

    DTIC Science & Technology

    2008-05-01

    AFRL-RH-WP-SR-2009-0002: The Influence of Tactual Seat-motion Cues on Training and Performance in a Roll-axis Compensatory Tracking Task Setting (Program Element 62202F). From the abstract: a simulated vehicle having aircraft-like dynamics; a centrally located compensatory display, subtending about nine degrees, provided visual roll error

  9. Turning behaviour depends on frictional damping in the fruit fly Drosophila.

    PubMed

    Hesselberg, Thomas; Lehmann, Fritz-Olaf

    2007-12-01

    Turning behaviour in the fruit fly Drosophila depends on several factors, including not only feedback from sensory organs and muscular control of wing motion, but also the mass moments of inertia and the frictional damping coefficient of the rotating body. In the present study we evaluate the significance of body friction for yaw turning, and thus the limits of visually mediated flight control in Drosophila, by scoring tethered flies flying in a flight simulator on their ability to visually compensate for a bias on a moving object and a visual background panorama at different simulated frictional dampings. We estimated the fly's natural damping coefficient from a numerical aerodynamic model based on friction on both the body and the flapping wings during saccadic turning. The model predicts a coefficient of 54 × 10^-12 N m s, which is more than 100 times larger than the value estimated by a previous study from the body alone. Our estimate suggests that friction plays a larger role for yaw turning in Drosophila than moments of inertia. The simulator experiments showed that visual performance of the fruit fly collapses near the physical conditions estimated for freely flying animals, which is consistent with the suggested role of the halteres for flight stabilization. However, kinematic analyses indicate that the measured loss of flight control might be due predominantly to the limited fine control of the fly's steering muscles below a threshold of 1-2 degrees stroke amplitude, rather than resulting from the limits of visual motion detection by the fly's compound eyes. We discuss the impact of these results and suggest that the elevated frictional coefficient permits freely flying fruit flies to passively terminate rotational body movements without producing counter-torque during the second half of the saccadic turning manoeuvre.

  10. Simulation Study of Impact of Aeroelastic Characteristics on Flying Qualities of a High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Raney, David L.; Jackson, E. Bruce; Buttrill, Carey S.

    2002-01-01

    A piloted simulation study conducted in the NASA Langley Visual Motion Simulator addressed the impact of dynamic aeroservoelastic effects on the flying qualities of a High Speed Civil Transport. The intent was to determine the effectiveness of measures to reduce the impact of aircraft flexibility on piloting tasks. Potential solutions examined were increasing the frequency of elastic modes through structural stiffening, increasing the damping of elastic modes through active control, eliminating control-effector excitation of the lowest-frequency elastic modes, and eliminating visual cues associated with elastic modes. Six test pilots performed and evaluated simulated maneuver tasks, encountering incidents wherein cockpit vibrations due to elastic modes fed back into the control stick through involuntary vibrations of the pilot's upper body and arm. Structural stiffening and compensation of the visual display were of little benefit in alleviating this impact, while increased damping and elimination of control-effector excitation of the elastic modes both offered great improvements when applied in sufficient degree.

  11. Smelling directions: Olfaction modulates ambiguous visual motion perception

    PubMed Central

    Kuang, Shenbing; Zhang, Tao

    2014-01-01

    Sensations of smell are often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance in the concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known whether olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162

  12. A high-speed photographic system for flow visualization in a steam turbine

    NASA Technical Reports Server (NTRS)

    Barna, G. J.

    1973-01-01

    A photographic system was designed to visualize the moisture flow in a steam turbine. Good performance of the system was verified using dry turbine mockups in which an aerosol spray simulated, in a rough way, the moisture flow in the turbine. Borescopes and fiber-optic light tubes were selected as the general instrumentation approach. High speed motion-picture photographs of the liquid flow over the stator blade surfaces were taken using stroboscopic lighting. Good visualization of the liquid flow was obtained. Still photographs of drops in flight were made using short duration flash sources. Drops with diameters as small as 30 micrometers (0.0012 in.) could be resolved. In addition, motion pictures of a spray of water simulating the spray off the rotor blades and shrouds were taken at normal framing rates. Specially constructed light tubes containing small tungsten-halogen lamps were used. Sixteen millimeter photography was used in all cases. Two potential problems resulting from the two-phase turbine flow (attenuation and scattering of light by the fog present and liquid accumulation on the borescope mirrors) were taken into account in the photographic system design but not evaluated experimentally.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, X; Sisniega, A; Zbijewski, W

    Purpose: Visualization and quantification of coronary artery calcification and atherosclerotic plaque benefit from elimination of coronary artery motion (CAM) artifacts. This work applies a rigid linear motion model to a Volume of Interest (VoI) for motion estimation and compensation of image degradation in Coronary Computed Tomography Angiography (CCTA). Methods: In both simulation and testbench experiments, translational CAM was generated by displacement of the imaging object (i.e., a simulated coronary artery and an explanted human heart) by ~8 mm, approximating the motion of a main coronary branch. Rotation was assumed to be negligible. A motion-degraded region containing a calcification was selected as the VoI. Local residual motion was assumed to be rigid and linear over the acquisition window, simulating motion observed during diastasis. The (negative) magnitude of the image gradient of the reconstructed VoI was chosen as the motion estimation objective and was minimized with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Results: Reconstruction incorporating the estimated CAM yielded significant recovery of fine calcification structures as well as reduced motion artifacts within the selected local region. The compensated reconstruction was further evaluated using two image similarity metrics, the structural similarity index (SSIM) and root mean square error (RMSE). At the calcification site, the compensated data achieved a 3% increase in SSIM and a 91.2% decrease in RMSE in comparison with the uncompensated reconstruction. Conclusion: Results demonstrate the feasibility of our image-based motion estimation method exploiting a local rigid linear model for CAM compensation. The method shows promising preliminary results for this application in CCTA. Further work will involve motion estimation on complex, motion-corrupted patient data acquired from clinical CT scanners.
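The estimation strategy, scoring the sharpness of the compensated reconstruction over candidate rigid linear motions, can be sketched in miniature. The toy below averages translated 2D frames after compensating a candidate constant velocity and scores the result by squared image-gradient energy; an exhaustive grid search stands in for CMA-ES, and all image sizes, shapes, and velocities are illustrative assumptions, not the paper's CT reconstruction.

```python
import numpy as np

def reconstruct(frames, vx, vy):
    """Average frames after shifting frame t back by t*(vx, vy) pixels."""
    comp = [np.roll(np.roll(f, -t * vy, axis=0), -t * vx, axis=1)
            for t, f in enumerate(frames)]
    return np.mean(comp, axis=0)

def sharpness(img):
    """Squared image-gradient energy; residual motion blurs edges and
    lowers it, so the correct velocity maximizes this score."""
    gy, gx = np.gradient(img)
    return float(np.sum(gx**2 + gy**2))

# Toy "calcification": a bright block drifting at a constant velocity.
truth = np.zeros((32, 32))
truth[12:18, 14:20] = 1.0
true_v = (2, 1)
frames = [np.roll(np.roll(truth, t * true_v[1], axis=0),
                  t * true_v[0], axis=1) for t in range(5)]

# Exhaustive search over candidate velocities stands in for CMA-ES here.
candidates = [(vx, vy) for vx in range(-3, 4) for vy in range(-3, 4)]
best = max(candidates, key=lambda v: sharpness(reconstruct(frames, *v)))
```

With the correct velocity every compensated frame coincides, edges stay crisp, and the gradient energy is maximal; any residual velocity error smears the block across the average and lowers the score.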

  14. A visually guided collision warning system with a neuromorphic architecture.

    PubMed

    Okuno, Hirotsugu; Yagi, Tetsuya

    2008-12-01

    We have designed a visually guided collision warning system with a neuromorphic architecture, employing an algorithm inspired by the visual nervous system of locusts. The system was implemented with mixed analog-digital integrated circuits consisting of an analog resistive network and field-programmable gate array (FPGA) circuits. The resistive network processes the interaction between laterally spreading excitatory and inhibitory signals instantaneously, which is essential for real-time computation of collision avoidance with low power consumption and compact hardware. The system responded selectively to approaching objects in simulated movie images at close range. The system was, however, confronted with serious noise problems due to vibratory ego-motion when it was installed in a miniature mobile car. To overcome this problem, we further developed the algorithm, which is also implementable in FPGA circuits, so that the system responds robustly during ego-motion.
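The lateral excitatory/inhibitory interaction that the analog resistive network computes can be approximated digitally by iterative nearest-neighbour diffusion. The sketch below is a difference-of-Gaussians-like 1D model with illustrative coupling constants, not the chip's actual circuit; it shows how subtracting a widely spread inhibitory layer from a narrowly spread excitatory layer emphasizes object edges.

```python
import numpy as np

def diffuse(signal, coupling, steps):
    """Iterative nearest-neighbour coupling: a discrete stand-in for the
    lateral spread of current through an analog resistive network."""
    s = signal.astype(float).copy()
    for _ in range(steps):
        s = s + coupling * (np.roll(s, 1) + np.roll(s, -1) - 2 * s)
    return s

def network_response(stimulus):
    """Excitatory layer minus a more widely spread inhibitory layer:
    a difference-of-Gaussians-like output that emphasizes edges."""
    excite = diffuse(stimulus, 0.2, 2)    # narrow lateral spread
    inhibit = diffuse(stimulus, 0.2, 20)  # wide lateral spread
    return excite - inhibit

# An approaching dark object is approximated by a wide step in 1D.
stim = np.zeros(64)
stim[24:40] = 1.0
resp = network_response(stim)
```

The response is strongest just inside the object's boundaries and near zero in its uniform interior, which is the edge-enhancing behavior the looming-sensitive locust circuitry relies on.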

  15. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    PubMed

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach that allows tracking of deformable structures in 3D ultrasound sequences. Our method consists of obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation despite different ultrasound shortcomings, including speckle noise, large shadows, and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.
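The combination of dense motion estimation with a mechanical model can be sketched in one dimension: per-block displacements from exhaustive sum-of-absolute-differences (SAD) matching are regularized by spring-like smoothing toward neighbouring estimates, so an outlier caused by a local artifact is pulled back toward the coherent tissue motion. All sizes, the corruption pattern, and the smoothing scheme below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def block_displacements(ref, obs, block=16, search=5):
    """Per-block displacement by exhaustive SAD matching: a miniature
    stand-in for dense motion estimation."""
    est = []
    for start in range(0, ref.size, block):
        sl = slice(start, start + block)
        sads = {d: np.abs(obs[sl] - np.roll(ref, d)[sl]).sum()
                for d in range(-search, search + 1)}
        est.append(min(sads, key=sads.get))
    return np.array(est, dtype=float)

def mechanical_smoothing(est, stiffness=0.3, iters=10):
    """Spring-like regularization: pull each block's estimate toward the
    average of its neighbours, a crude stand-in for a mechanical model."""
    e = est.copy()
    for _ in range(iters):
        e[1:-1] = e[1:-1] + stiffness * ((e[:-2] + e[2:]) / 2 - e[1:-1])
    return e

rng = np.random.default_rng(3)
ref = np.convolve(rng.standard_normal(128), np.ones(5) / 5, mode='same')
obs = np.roll(ref, 3)                     # true tissue shift: 3 samples
obs[64:80] = np.roll(ref, -5)[64:80]      # one corrupted block (false motion)

raw = block_displacements(ref, obs)
smoothed = mechanical_smoothing(raw)
```

The raw estimates recover the true shift everywhere except the corrupted block; the mechanical smoothing step pulls that outlier toward its neighbours, which is the role the mechanical simulation plays against speckle, shadows, and gain variation.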

  16. Neuroticism modulates brain visuo-vestibular and anxiety systems during a virtual rollercoaster task.

    PubMed

    Riccelli, Roberta; Indovina, Iole; Staab, Jeffrey P; Nigro, Salvatore; Augimeri, Antonio; Lacquaniti, Francesco; Passamonti, Luca

    2017-02-01

    Different lines of research suggest that anxiety-related personality traits may influence the visual and vestibular control of balance, although the brain mechanisms underlying this effect remain unclear. To our knowledge, this is the first functional magnetic resonance imaging (fMRI) study to investigate how individual differences in neuroticism and introversion, two key personality traits linked to anxiety, modulate brain regional responses and functional connectivity patterns during an fMRI task simulating self-motion. Twenty-four healthy individuals with variable levels of neuroticism and introversion underwent fMRI while performing a virtual reality rollercoaster task that included two main types of trials: (1) trials simulating downward or upward self-motion (vertical motion), and (2) trials simulating self-motion in horizontal planes (horizontal motion). Regional brain activity and functional connectivity patterns when comparing vertical versus horizontal motion trials were correlated with personality traits of the Five Factor Model (i.e., neuroticism, extraversion-introversion, openness, agreeableness, and conscientiousness). When comparing vertical to horizontal motion trials, we found a positive correlation between neuroticism scores and regional activity in the left parieto-insular vestibular cortex (PIVC). For the same contrast, increased functional connectivity between the left PIVC and right amygdala was also detected as a function of higher neuroticism scores. Together, these findings provide new evidence that individual differences in personality traits linked to anxiety are significantly associated with changes in the activity and functional connectivity patterns within visuo-vestibular and anxiety-related systems during simulated vertical self-motion. Hum Brain Mapp 38:715-726, 2017. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  17. Gemini Simulator and Neil Armstrong

    NASA Image and Video Library

    1963-11-06

    Astronaut Neil Armstrong (left) was one of 14 astronauts, 8 NASA test pilots, and 2 McDonnell test pilots who took part in simulator studies. Armstrong was the first astronaut to participate (November 6, 1963). A.W. Vogeley described the simulator in his paper "Discussion of Existing and Planned Simulators For Space Research," "Many of the astronauts have flown this simulator in support of the Gemini studies and they, without exception, appreciated the realism of the visual scene. The simulator has also been used in the development of pilot techniques to handle certain jet malfunctions in order that aborts could be avoided. In these situations large attitude changes are sometimes necessary and the false motion cues that were generated due to earth gravity were somewhat objectionable; however, the pilots were readily able to overlook these false motion cues in favor of the visual realism." Roy F. Brissenden noted in his paper "Initial Operations with Langley's Rendezvous Docking Facility," "The basic Gemini control studies developed the necessary techniques and demonstrated the ability of human pilots to perform final space docking with the specified Gemini-Agena systems using only visual references. ... Results... showed that trained astronauts can effect the docking with direct acceleration control and even with jet malfunctions as long as good visual conditions exist.... Probably more important than data results was the early confidence that the astronauts themselves gained in their ability to perform the maneuver in the ultimate flight mission." Francis B. Smith noted in his paper "Simulators for Manned Space Research," "Some major areas of interest in these flights were fuel requirements, docking accuracies, the development of visual aids to assist alignment of the vehicles, and investigation of alternate control techniques with partial failure modes.
However, the familiarization and confidence developed by the astronaut through flying and safely docking the simulator during these tests was one of the major contributions. For example, it was found that fuel used in docking from 200 feet typically dropped from about 20 pounds to 7 pounds after an astronaut had made a few training flights." -- Published in Barton C. Hacker and James M. Grimwood, On the Shoulders of Titans: A History of Project Gemini, NASA SP-4203; A.W. Vogeley, "Discussion of Existing and Planned Simulators For Space Research," Paper presented at the Conference on the Role of Simulation in Space Technology, August 17-21, 1964; Roy F. Brissenden, "Initial Operations with Langley's Rendezvous Docking Facility," Langley Working Paper, LWP-21, 1964; Francis B. Smith, "Simulators for Manned Space Research," Paper presented at the 1966 IEEE International convention, March 21-25, 1966.

  18. Use of an adjustable hand plate in studying the perceived horizontal plane during simulated flight.

    PubMed

    Tribukait, Arne; Eiken, Ola; Lemming, Dag; Levin, Britta

    2013-07-01

    Quantitative data on spatial orientation would be valuable not only in assessing the fidelity of flight simulators, but also in evaluating spatial orientation training. In this study a manual indicator was used for recording the subjective horizontal plane during simulated flight. In a six-degrees-of-freedom hexapod hydraulic motion-platform simulator, simulating an F-16 aircraft, seven fixed-wing student pilots were passively exposed to two flight sequences. The first consisted of a number of coordinated turns with visual contact with the landscape below; the visually presented roll tilt was up to a maximum of 67°. The second was a takeoff with a cabin pitch-up of 10°, whereupon external visual references were lost. The subjects continuously indicated, with the left hand on an adjustable plate, what they perceived as horizontal in roll and pitch. There were two test occasions separated by a 3-day course on spatial disorientation. Responses to changes in simulated roll were, in general, instantaneous. The indicated roll tilt was approximately 30% of the visually presented roll. There was considerable interindividual variability; however, for the roll response there was a correlation between the two occasions. The amplitude of the response to the pitch-up of the cabin was approximately 75%; the response decayed much more slowly than the stimulus. With a manual indicator for recording the subjective horizontal plane, individual characteristics in the response to visual tilt stimuli may be detected, suggesting a potential for evaluating simulation algorithms or training programs.

  19. Human postural responses to motion of real and virtual visual environments under different support base conditions.

    PubMed

    Mergner, T; Schweigart, G; Maurer, C; Blümle, A

    2005-12-01

    The role of visual orientation cues for human control of upright stance is still not well understood. We, therefore, investigated stance control during motion of a visual scene as stimulus, varying the stimulus parameters and the contribution from other senses (vestibular and leg proprioceptive cues present or absent). Eight normal subjects and three patients with chronic bilateral loss of vestibular function participated. They stood on a motion platform inside a cabin with an optokinetic pattern on its interior walls. The cabin was sinusoidally rotated in the anterior-posterior (a-p) direction with the horizontal rotation axis through the ankle joints (f = 0.05-0.4 Hz; A(max) = 0.25°-4°; v(max) = 0.08-10°/s). The subjects' centre of mass (COM) angular position was calculated from opto-electronically measured body sway parameters. The platform was either kept stationary or moved by coupling its position 1:1 to a-p hip position ('body sway referenced', BSR, platform condition), by which proprioceptive feedback of ankle joint angle became inactivated. The visual stimulus evoked in-phase COM excursions (visual responses) in all subjects. (1) In normal subjects on a stationary platform, the visual responses showed saturation with both increasing velocity and displacement of the visual stimulus. The saturation showed up abruptly when visually evoked COM velocity and displacement reached approximately 0.1°/s and 0.1°, respectively. (2) In normal subjects on a BSR platform (proprioceptive feedback disabled), the visual responses showed similar saturation characteristics, but at clearly higher COM velocity and displacement values (approximately 1°/s and 1°, respectively). (3) In patients on a stationary platform (no vestibular cues), the visual responses were basically similar to those of the normal subjects, apart from somewhat higher gain values and less-pronounced saturation effects.
(4) In patients on a BSR platform (no vestibular and proprioceptive cues, presumably only somatosensory graviceptive and visual cues), the visual responses showed an abnormal increase in gain with increasing stimulus frequency in addition to a displacement saturation. On the normal subjects we performed additional experiments in which we varied the gain of the visual response by using a 'virtual reality' visual stimulus or by applying small lateral platform tilts. This did not affect the saturation characteristics of the visual response to a considerable degree. We compared the present results to previous psychophysical findings on motion perception, noting similarities of the saturation characteristics in (1) with leg proprioceptive detection thresholds of approximately 0.1°/s and 0.1° and those in (2) with vestibular detection thresholds of 1°/s and 1°, respectively. From the psychophysical data one might hypothesise that a proprioceptive postural mechanism limits the visually evoked body excursions if these excursions exceed 0.1°/s and 0.1° in condition (1) and that a vestibular mechanism is doing so at 1°/s and 1° in (2). To better understand this, we performed computer simulations using a posture control model with multiple sensory feedbacks. We had recently designed the model to describe postural responses to body pull and platform tilt stimuli. Here, we added a visual input and adjusted its gain to fit the simulated data to the experimental data. The saturation characteristics of the visual responses of the normals were well mimicked by the simulations. They were caused by central thresholds of proprioceptive, vestibular and somatosensory signals in the model, which, however, differed from the psychophysical thresholds.
Yet, we demonstrate in a theoretical approach that for condition (1) the model can be made monomodal proprioceptive with the psychophysical 0.1°/s and 0.1° thresholds, and for (2) monomodal vestibular with the psychophysical 1°/s and 1° thresholds, and still shows the corresponding saturation characteristics (whereas our original model covers both conditions without adjustments). The model simulations also predicted the almost normal visual responses of patients on a stationary platform and their clearly abnormal responses on a BSR platform.
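The role of a sensory detection threshold in producing the response saturation described above can be illustrated with a one-channel toy model: a visual channel pulls the body toward the moving scene, while a thresholded corrective channel (proprioceptive or vestibular) opposes only the deviations it can detect. The gains, threshold, and dynamics below are illustrative assumptions, not the authors' multi-feedback posture control model.

```python
def dead_zone(x, theta):
    """Sensory channel that only reports deviations above its threshold."""
    if x > theta:
        return x - theta
    if x < -theta:
        return x + theta
    return 0.0

def steady_sway(stimulus, g_visual=0.1, g_corr=10.0, theta=0.5,
                dt=0.1, steps=3000):
    """Integrate a one-dimensional sway model to steady state: the visual
    channel drives the body toward the stimulus, while a thresholded
    corrective channel opposes deviations it can detect."""
    x = 0.0
    for _ in range(steps):
        x += dt * (g_visual * (stimulus - x) - g_corr * dead_zone(x, theta))
    return x

small = steady_sway(1.0)            # stimulus just above threshold
large = steady_sway(4.0)            # four-fold larger stimulus
unchecked = steady_sway(4.0, g_corr=0.0)  # corrective channel disabled
```

Once the evoked sway exceeds the threshold, the strong corrective channel clamps it close to that threshold, so quadrupling the stimulus barely changes the response; removing the corrective channel (the analogue of the BSR platform or vestibular loss) lets the response grow with the stimulus.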

  20. Moving-base visual simulation study of decoupled controls during approach and landing of a STOL transport aircraft

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.; Deal, P. L.

    1975-01-01

    The simulation employed all six rigid-body degrees of freedom and incorporated aerodynamic characteristics based on wind-tunnel data. The flight instrumentation included a localizer and a flight director which was used to capture and to maintain a two-segment glide slope. A closed-circuit television display of a STOLport provided visual cues during simulations of the approach and landing. The decoupled longitudinal controls used constant prefilter and feedback gains to provide steady-state decoupling of flight-path angle, pitch angle, and forward velocity. The pilots were enthusiastic about the decoupled longitudinal controls and believed that the simulator motion was an aid in evaluating the decoupled controls, although a minimum turbulence level with root-mean-square gust intensity of 0.3 m/sec (1 ft/sec) was required to mask undesirable characteristics of the moving-base simulator.
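The abstract's "constant prefilter and feedback gains to provide steady-state decoupling" is a standard construction for linear systems: with control law u = F r - K x, choosing F as the inverse of the closed-loop DC gain makes each pilot command drive only its own output at steady state. The matrices below are illustrative, not the STOL transport's actual dynamics.

```python
import numpy as np

# Illustrative 2-state, 2-input, 2-output linear model:  x' = A x + B u,
# y = C x  (not the aircraft's actual aerodynamic data).
A = np.array([[-1.0, 0.5],
              [0.2, -2.0]])
B = np.array([[1.0, 0.3],
              [0.1, 1.0]])
C = np.eye(2)
K = np.array([[0.5, 0.0],
              [0.0, 0.5]])          # constant feedback gains

# Closed loop with u = F r - K x:  x' = (A - B K) x + B F r.
Acl = A - B @ K

# Steady state (x' = 0) gives  y_ss = -C Acl^{-1} B F r.  Choosing
# F = -(C Acl^{-1} B)^{-1} makes the DC gain the identity, so each
# command r_i moves only its own output channel in steady state.
F = -np.linalg.inv(C @ np.linalg.inv(Acl) @ B)

dc_gain = -C @ np.linalg.inv(Acl) @ B @ F
```

Because F and K are constant, the decoupling holds exactly only at steady state; transient cross-coupling remains, which is why the pilots' evaluation in a moving-base simulator (with turbulence) was still informative.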

  1. The Microcomputer and Instruction in Geometry.

    ERIC Educational Resources Information Center

    Kantowski, Mary Grace

    1981-01-01

    The microcomputer has great potential for making high school geometry more stimulating and more easily understood by the students. The microcomputer can facilitate instruction in both the logico-deductive and spatial-visual aspects of geometry through graphics representations, simulation of motion, and its capability of interacting with the…

  2. People can understand descriptions of motion without activating visual motion brain regions

    PubMed Central

    Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina

    2013-01-01

    What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592

  3. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  4. Differential contributions to the interception of occluded ballistic trajectories by the temporoparietal junction, area hMT/V5+, and the intraparietal cortex.

    PubMed

    Delle Monache, Sergio; Lacquaniti, Francesco; Bosco, Gianfranco

    2017-09-01

The ability to catch objects when transiently occluded from view suggests their motion can be extrapolated. Intraparietal cortex (IPS) plays a major role in this process along with other brain structures, depending on the task. For example, interception of objects under Earth's gravity effects may depend on time-to-contact predictions derived from integration of visual signals processed by hMT/V5+ with a priori knowledge of gravity residing in the temporoparietal junction (TPJ). To investigate this issue further, we disrupted TPJ, hMT/V5+, and IPS activities with transcranial magnetic stimulation (TMS) while subjects intercepted computer-simulated projectile trajectories perturbed randomly with either hypo- or hypergravity effects. In experiment 1, trajectories were occluded either 750 or 1,250 ms before landing. Three subject groups underwent triple-pulse TMS (tpTMS, 3 pulses at 10 Hz) on one target area (TPJ | hMT/V5+ | IPS) and on the vertex (control site), timed at either trajectory perturbation or occlusion. In experiment 2, trajectories were entirely visible and participants received tpTMS on TPJ and hMT/V5+ with the same timing as in experiment 1. tpTMS of TPJ, hMT/V5+, and IPS affected interceptive timing differently. TPJ stimulation preferentially affected responses to 1-g motion, hMT/V5+ stimulation affected all response types, and IPS stimulation induced opposite effects on 0-g and 2-g responses while being ineffective on 1-g responses. Only IPS stimulation was effective when applied after target disappearance, implying this area might elaborate memory representations of occluded target motion. The results are compatible with the idea that IPS, TPJ, and hMT/V5+ contribute to distinct aspects of visual motion extrapolation, perhaps through parallel processing. NEW & NOTEWORTHY Visual extrapolation represents a potential neural solution to afford motor interactions with the environment in the face of missing information. 
We investigated relative contributions by temporoparietal junction (TPJ), hMT/V5+, and intraparietal cortex (IPS), cortical areas potentially involved in these processes. Parallel organization of visual extrapolation processes emerged with respect to the target's motion causal nature: TPJ was primarily involved for visual motion congruent with gravity effects, IPS for arbitrary visual motion, whereas hMT/V5+ contributed at earlier processing stages. Copyright © 2017 the American Physiological Society.

  5. Simulator-induced spatial disorientation: effects of age, sleep deprivation, and type of conflict.

    PubMed

    Previc, Fred H; Ercoline, William R; Evans, Richard H; Dillon, Nathan; Lopez, Nadia; Daluz, Christina M; Workman, Andrew

    2007-05-01

Spatial disorientation (SD) mishaps occur more often at night and with greater time on task, and sleep deprivation is known to degrade cognitive and overall flight performance. However, the ability to perceive and to be influenced by physiologically appropriate simulated SD conflicts has not previously been studied in an automated simulator flight profile. A set of 10 flight profiles was flown by 10 U.S. Air Force (USAF) pilots over a period of 28 h in a flight simulator specially designed for spatial disorientation research and training. Of the 10 flights, 4 each had a total of 7 SD conflicts inserted into them, 5 simulating motion illusions and 2 involving visual illusions. The percentage of conflict reports was measured along with the effects of four conflicts on flight performance. The results showed that, with one exception, all motion conflicts were reported over 60% of the time, whereas the two visual illusions were reported on average only 25% of the time, although both significantly affected flight performance. Pilots older than 35 yr of age were more likely to report conflicts than were those under 30 yr of age (63% vs. 38%), whereas fatigue had little effect overall on either recognized or unrecognized SD. The overall effects of these conflicts on perception and performance were generally not altered by sleep deprivation, despite clear indications of fatigue in our pilots.

  6. Piloted Simulation Assessment of a High-Speed Civil Transport Configuration. [conducted with the Langley six-degree-of-freedom Visual Motion Simulator

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Raney, David L.; Glaab, Louis J.; Derry, Stephen D.

    2002-01-01

    An assessment of a proposed configuration of a high-speed civil transport was conducted by using NASA and industry research pilots. The assessment was conducted to evaluate operational aspects of the configuration from a pilot's perspective, with the primary goal being to identify potential deficiencies in the configuration. The configuration was evaluated within and at the limits of the design operating envelope to determine the suitability of the configuration to maneuver in a typical mission as well as in emergency or envelope-limit conditions. The Cooper-Harper rating scale was used to evaluate the flying qualities of the configuration. A summary flying qualities metric was also calculated. The assessment was performed in the Langley six-degree-of-freedom Visual Motion Simulator. The effect of a restricted cockpit field-of-view due to obstruction by the vehicle nose was not included in this study. Tasks include landings, takeoffs, climbs, descents, overspeeds, coordinated turns, and recoveries from envelope limit excursions. Emergencies included engine failures, loss of stability augmentation, engine inlet unstarts, and emergency descents. Minimum control speeds and takeoff decision, rotation, and safety speeds were also determined.

  7. Characterizing head motion in three planes during combined visual and base of support disturbances in healthy and visually sensitive subjects.

    PubMed

    Keshner, E A; Dhaher, Y

    2008-07-01

    Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29-31 years) and 3 visually sensitive (27-57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated and a three-dimensional model of joint motion was developed to examine gross head motion in three planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater (p<0.003) head motion in yaw than when on a translating platform. However, when the platform was translated in the dark or with a visual scene rotating in roll, head motion orthogonal to the plane of platform motion significantly increased (p<0.02). Visually sensitive subjects having no history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark (p<0.05) and with a stationary scene (p<0.01). We concluded that motion of the visual field could modify compensatory response kinematics of a freely moving head in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms.

  8. Mental imagery of gravitational motion.

    PubMed

    Gravano, Silvio; Zago, Myrka; Lacquaniti, Francesco

    2017-10-01

    There is considerable evidence that gravitational acceleration is taken into account in the interaction with falling targets through an internal model of Earth gravity. Here we asked whether this internal model is accessed also when target motion is imagined rather than real. In the main experiments, naïve participants grasped an imaginary ball, threw it against the ceiling, and caught it on rebound. In different blocks of trials, they had to imagine that the ball moved under terrestrial gravity (1g condition) or under microgravity (0g) as during a space flight. We measured the speed and timing of the throwing and catching actions, and plotted ball flight duration versus throwing speed. Best-fitting duration-speed curves estimate the laws of ball motion implicit in the participant's performance. Surprisingly, we found duration-speed curves compatible with 0g for both the imaginary 0g condition and the imaginary 1g condition, despite the familiarity with Earth gravity effects and the added realism of performing the throwing and catching actions. In a control experiment, naïve participants were asked to throw the imaginary ball vertically upwards at different heights, without hitting the ceiling, and to catch it on its way down. All participants overestimated ball flight durations relative to the durations predicted by the effects of Earth gravity. Overall, the results indicate that mental imagery of motion does not have access to the internal model of Earth gravity, but resorts to a simulation of visual motion. Because visual processing of accelerating/decelerating motion is poor, visual imagery of motion at constant speed or slowly varying speed appears to be the preferred mode to perform the tasks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Methodology development for evaluation of selective-fidelity rotorcraft simulation

    NASA Technical Reports Server (NTRS)

    Lewis, William D.; Schrage, D. P.; Prasad, J. V. R.; Wolfe, Daniel

    1992-01-01

This paper addresses the initial step toward the goal of establishing performance and handling qualities acceptance criteria for real-time rotorcraft simulators through a planned research effort to quantify the system capabilities of 'selective fidelity' simulators. Within this framework the simulator is classified based on the required task. The simulator is evaluated by separating the various subsystems (visual, motion, etc.) and applying corresponding fidelity constants based on the specific task. This methodology not only provides an assessment technique, but also a technique for determining the required levels of subsystem fidelity for a specific task.

  10. Computational model for perception of objects and motions.

    PubMed

    Yang, WenLu; Zhang, LiQing; Ma, LiBo

    2008-06-01

Perception of objects and motions in the visual scene is one of the basic problems in the visual system. There exist 'What' and 'Where' pathways in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former perceives object attributes such as form, color, and texture, and the latter perceives 'where', for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures of visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives the stimuli from natural environments. The second layer represents the internal neural information. The connections between the first layer and the second layer, called the receptive fields of neurons, are learned self-adaptively based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as the measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and bandpass. The resultant receptive fields of neurons in the second layer resemble those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model perceives objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
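    The independence measure named in this abstract, Kullback-Leibler divergence between neural responses, is the divergence between a joint response distribution and the product of its marginals (i.e., their mutual information). A minimal sketch with two invented binary "neural responses", not the paper's actual learning algorithm:

    ```python
    import numpy as np

    def kl_independence(joint):
        """KL divergence between a joint response distribution and the
        product of its marginals (the mutual information); it is zero
        if and only if the two responses are statistically independent."""
        joint = joint / joint.sum()
        px = joint.sum(axis=1, keepdims=True)   # marginal of response 1
        py = joint.sum(axis=0, keepdims=True)   # marginal of response 2
        prod = px @ py                          # joint under independence
        mask = joint > 0
        return float(np.sum(joint[mask] * np.log(joint[mask] / prod[mask])))

    independent = np.outer([0.3, 0.7], [0.4, 0.6])   # joint factorizes
    correlated = np.array([[0.45, 0.05],
                           [0.05, 0.45]])            # responses co-vary
    # kl_independence(independent) is ~0; kl_independence(correlated) is not
    ```

    In the paper this quantity is driven toward zero across second-layer units by gradient descent on a cost function; here it is only evaluated.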

  11. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

    NASA Technical Reports Server (NTRS)

    Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

    2003-01-01

A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal-control-based algorithm, and included the effects of transport delays and their compensation. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports the analysis of experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore only three pilots were used. Nevertheless some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.

  12. Perceived orientation of a runway model in nonpilots during simulated night approaches to landing.

    DOT National Transportation Integrated Search

    1977-07-01

    Illusions due to reduced visual cues at night have long been cited as contributing to the dangerous tendency of pilots to fly too low during night landing approaches. The cue of motion parallax (a difference in rate of apparent movement of objects in...

  13. Visible Motion Blur

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor); Ahumada, Albert J. (Inventor)

    2014-01-01

A method of measuring motion blur is disclosed comprising obtaining a moving edge temporal profile r(sub 1)(k) of an image of a high-contrast moving edge, calculating the masked local contrast m(sub 1)(k) for r(sub 1)(k) and the masked local contrast m(sub 2)(k) for an ideal step edge waveform r(sub 2)(k) with the same amplitude as r(sub 1)(k), and calculating the measure of motion blur Psi as a difference function. The masked local contrasts are calculated using a set of convolution kernels scaled to simulate the performance of the human visual system, and Psi is measured in units of just-noticeable differences.
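    The comparison the method describes can be sketched numerically. This is a toy stand-in, not the patented procedure: plain local contrast (deviation from a moving average) substitutes for the masked-contrast kernels and JND scaling, and the edge profiles and window size are invented for illustration.

    ```python
    import numpy as np

    def local_contrast(x, w=9):
        # plain local contrast: deviation from a moving-average background
        return x - np.convolve(x, np.ones(w) / w, mode="same")

    def blur_measure(profile, w=9):
        """Psi-like blur measure: distance between the local contrast of a
        measured edge profile and that of an ideal step edge of equal
        amplitude (the patent uses masked contrasts in JND units instead)."""
        n = len(profile)
        step = np.where(np.arange(n) >= n // 2, profile.max(), profile.min())
        diff = local_contrast(profile, w) - local_contrast(step, w)
        return float(np.abs(diff).sum())

    k = np.linspace(-5, 5, 201)
    sharp = 1 / (1 + np.exp(-k / 0.2))   # mildly blurred moving-edge profile
    soft = 1 / (1 + np.exp(-k / 1.0))    # heavily blurred profile
    # the heavily blurred profile scores farther from the ideal step
    ```

    The design point survives the simplification: blur is scored not on the raw profile but on how its local contrast departs from that of an ideal step of the same amplitude.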

  14. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
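    The stimulus-response correlation used to recover the pursuit filter can be illustrated with classic reverse correlation on synthetic data. Everything below (the white-noise stimulus, the exponential kernel, the lag count) is an assumed toy, not the paper's 3D space-time filter:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, lags = 20000, 40
    stim = rng.normal(size=n)                # white-noise motion signal
    kernel = np.exp(-np.arange(lags) / 8.0)  # hypothetical temporal filter
    kernel /= kernel.sum()
    resp = np.convolve(stim, kernel)[:n]     # causal linear "pursuit" response

    # For a white-noise input, the stimulus-response cross-correlation at
    # each lag recovers the filter weight at that lag (up to noise).
    est = np.array([np.dot(resp[k:], stim[:n - k]) / (n - k)
                    for k in range(lags)])
    # est closely tracks kernel
    ```

    The same logic extends to space by correlating the response against the stimulus at each retinal location and lag, which is how a full space-time filter is assembled from eye-movement data.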

  15. Optimal Configuration of Human Motion Tracking Systems: A Systems Engineering Approach

    NASA Technical Reports Server (NTRS)

    Henderson, Steve

    2005-01-01

    Human motion tracking systems represent a crucial technology in the area of modeling and simulation. These systems, which allow engineers to capture human motion for study or replication in virtual environments, have broad applications in several research disciplines including human engineering, robotics, and psychology. These systems are based on several sensing paradigms, including electro-magnetic, infrared, and visual recognition. Each of these paradigms requires specialized environments and hardware configurations to optimize performance of the human motion tracking system. Ideally, these systems are used in a laboratory or other facility that was designed to accommodate the particular sensing technology. For example, electromagnetic systems are highly vulnerable to interference from metallic objects, and should be used in a specialized lab free of metal components.

  16. Simulating patient-specific heart shape and motion using SPECT perfusion images with the MCAT phantom

    NASA Astrophysics Data System (ADS)

    Faber, Tracy L.; Garcia, Ernest V.; Lalush, David S.; Segars, W. Paul; Tsui, Benjamin M.

    2001-05-01

The spline-based Mathematical Cardiac Torso (MCAT) phantom is a realistic software simulation designed to simulate single photon emission computed tomographic (SPECT) data. It incorporates a heart model of known size and shape; thus, it is invaluable for measuring accuracy of acquisition, reconstruction, and post-processing routines. New functionality has been added by replacing the standard heart model with left ventricular (LV) epicardial and endocardial surface points detected from actual patient SPECT perfusion studies. LV surfaces detected from standard post-processing quantitation programs are converted through interpolation in space and time into new B-spline models. Perfusion abnormalities are added to the model based on results of standard perfusion quantification. The new LV is translated and rotated to fit within existing atria and right ventricular models, which are scaled based on the size of the LV. Simulations were created for five different patients with myocardial infarctions who had undergone SPECT perfusion imaging. Shape, size, and motion of the resulting activity map were compared visually to the original SPECT images. In all cases, size, shape, and motion of simulated LVs matched well with the original images. Thus, realistic simulations with known physiologic and functional parameters can be created for evaluating efficacy of processing algorithms.
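    The interpolation-in-time step can be illustrated with a toy stand-in: periodic linear interpolation of one LV surface sample across the gated cardiac cycle, in place of the phantom's actual B-spline fit. The radii and frame count below are invented for illustration only.

    ```python
    import numpy as np

    # Hypothetical: radius of one LV surface point sampled at 8 gated frames.
    frames = np.linspace(0, 1, 8, endpoint=False)   # fraction of cardiac cycle
    radius = 30 - 8 * np.sin(np.pi * frames) ** 2   # mm; contracts in systole

    def sample_cycle(t, frames, values):
        """Periodic linear interpolation across the cardiac cycle (a simple
        stand-in for the B-spline models fit by the MCAT phantom)."""
        t = np.asarray(t) % 1.0
        # append the wrapped first sample so the interpolation is periodic
        xp = np.append(frames, 1.0)
        fp = np.append(values, values[0])
        return np.interp(t, xp, fp)

    fine_t = np.linspace(0, 1, 64)
    fine_r = sample_cycle(fine_t, frames, radius)   # smooth cyclic motion
    ```

    A spline would additionally give continuous derivatives across frames, which matters when the resampled surfaces drive wall-motion measurements.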

  17. Characterizing Head Motion in 3 Planes during Combined Visual and Base of Support Disturbances in Healthy and Visually Sensitive Subjects

    PubMed Central

    Keshner, E.A.; Dhaher, Y.

    2008-01-01

Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29–31 years) and 3 visually sensitive (27–57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated and a 3-dimensional model of joint motion was developed to examine gross head motion in 3 planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater (p<0.003) head motion in yaw than when on a translating platform. However, when the platform was translated in the dark or with a visual scene rotating in roll, head motion orthogonal to the plane of platform motion significantly increased (p<0.02). Visually sensitive subjects having no history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark (p<0.05) and with a stationary scene (p<0.01). We concluded that motion of the visual field can modify compensatory response kinematics of a freely moving head in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms. PMID:18162402

  18. Rolling into spatial disorientation: simulator demonstration of the post-roll (Gillingham) illusion.

    PubMed

    Nooij, Suzanne A E; Groen, Eric L

    2011-05-01

Spatial disorientation (SD) is still a contributing factor in many aviation accidents, stressing the need for adequate SD training scenarios. In this article we focused on the post-roll effect (the sensation of rolling back after a roll maneuver, such as the entry to a coordinated turn) and investigated the effect of roll stimuli on pilots' ability to stabilize their roll attitude. This resulted in a ground-based demonstration scenario for pilots. The experiments took place in the advanced 6-DOF Desdemona motion simulator, with the subject in a supine position. Roll motions were either fully automated with the subjects blindfolded (BLIND), automated with the subject viewing the cockpit interior (COCKPIT), or self-controlled (LEAD). After the roll stimulus subjects had to cancel all perceived simulator motion without any visual feedback. Both the roll velocity and duration were varied. In 68% of all trials subjects corrected for the perceived motion of rolling back by initiating a roll motion in the same direction as the preceding roll. The effect was dependent on both rate and duration, in a manner consistent with semicircular canal dynamics. The effect was smallest in the BLIND scenario, but differences between simulation scenarios were non-significant. The results show that the effects of the post-roll illusion on aircraft control can be demonstrated adequately in a flight simulator using an attitude control task. The effect is present even after short roll movements, occurring frequently in flight. Therefore this demonstration is relevant for spatial disorientation training programs for pilots.

  19. Effects of motion speed in action representations

    PubMed Central

    van Dam, Wessel O.; Speed, Laura J.; Lai, Vicky T.; Vigliocco, Gabriella; Desai, Rutvik H.

    2017-01-01

    Grounded cognition accounts of semantic representation posit that brain regions traditionally linked to perception and action play a role in grounding the semantic content of words and sentences. Sensory-motor systems are thought to support partially abstract simulations through which conceptual content is grounded. However, which details of sensory-motor experience are included in, or excluded from these simulations, is not well understood. We investigated whether sensory-motor brain regions are differentially involved depending on the speed of actions described in a sentence. We addressed this issue by examining the neural signature of relatively fast (The old lady scurried across the road) and slow (The old lady strolled across the road) action sentences. The results showed that sentences that implied fast motion modulated activity within the right posterior superior temporal sulcus and the angular and middle occipital gyri, areas associated with biological motion and action perception. Sentences that implied slow motion resulted in greater signal within the right primary motor cortex and anterior inferior parietal lobule, areas associated with action execution and planning. These results suggest that the speed of described motion influences representational content and modulates the nature of conceptual grounding. Fast motion events are represented more visually whereas motor regions play a greater role in representing conceptual content associated with slow motion. PMID:28160739

  20. Annotated Bibliography of USAARL Technical and Letter Reports. Volume 2. October 1988 - April 1991

    DTIC Science & Technology

    1991-05-01

G. Lilienthal, Robert S. Kennedy, Jennifer E. Fowlkes, and Dennis R. Baltzley. As technology has been developed to provide improved visual and motion...Gower, Jr., and Jennifer Fowlkes. The U.S. Army Aeromedical Research Laboratory conducted field studies of operational flight simulators to assess the...

  1. Modeling Fault Diagnosis Performance on a Marine Powerplant Simulator.

    DTIC Science & Technology

    1985-08-01

two definitions are very similar. They emphasize that fidelity is a two-dimensional concept. They also pointed out the measurement problems. Tasks...simulator duplicates the sensory stimulation, e.g., dynamic motion cues, visual cues, etc. Psychological fidelity is simply the degree to which the trainee...functions is only acceptable if the performance is paced by the system, i.e., cues from the system serve to initiate elementary, skilled sub-routines

  2. Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.

    PubMed

    Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M

    2010-01-01

Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be stronger for relevant motion (predators, prey, conspecifics) than for irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model, consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under the full range of natural conditions.
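    A single correlation-type EMD of the kind this model grids together can be sketched in a few lines. This is a minimal Reichardt-style toy, not the authors' model: the 5 Hz sinusoidal inputs are invented, the 0.3-degree spacing is mimicked by a fixed phase offset, and only the 0.1 s low-pass time constant is taken from the abstract.

    ```python
    import numpy as np

    def reichardt_emd(left, right, dt, tau):
        """Correlation-type elementary motion detector: each input is
        low-pass filtered (time constant tau) and multiplied with the
        undelayed neighboring input; the opponent subtraction yields a
        signed output whose sign encodes motion direction."""
        alpha = dt / (tau + dt)              # first-order low-pass coefficient
        def lowpass(x):
            y = np.zeros_like(x)
            for i in range(1, len(x)):
                y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
            return y
        return lowpass(left) * right - lowpass(right) * left

    dt, tau = 0.001, 0.1                     # 1 ms steps; 0.1 s time constant
    t = np.arange(0.0, 1.0, dt)
    phase = 0.3                              # stands in for the EMD spacing
    left = np.sin(2 * np.pi * 5 * t)         # luminance signal at point A
    right = np.sin(2 * np.pi * 5 * t - phase)  # same signal, lagging at B
    resp = reichardt_emd(left, right, dt, tau)
    # positive mean response: the detector reports left-to-right motion
    ```

    The full model tiles such detectors over the video frame and sums their outputs, which is why local contrast and viewing distance leak into the response.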

  3. Accuracy and Tuning of Flow Parsing for Visual Perception of Object Motion During Self-Motion

    PubMed Central

    Niehorster, Diederick C.

    2017-01-01

    How do we perceive object motion during self-motion using visual information alone? Previous studies have reported that the visual system can use optic flow to identify and globally subtract the retinal motion component resulting from self-motion to recover scene-relative object motion, a process called flow parsing. In this article, we developed a retinal motion nulling method to directly measure and quantify the magnitude of flow parsing (i.e., flow parsing gain) in various scenarios to examine the accuracy and tuning of flow parsing for the visual perception of object motion during self-motion. We found that flow parsing gains were below unity for all displays in all experiments; and that increasing self-motion and object motion speed did not alter flow parsing gain. We conclude that visual information alone is not sufficient for the accurate perception of scene-relative motion during self-motion. Although flow parsing performs global subtraction, its accuracy also depends on local motion information in the retinal vicinity of the moving object. Furthermore, the flow parsing gain was constant across common self-motion or object motion speeds. These results can be used to inform and validate computational models of flow parsing. PMID:28567272
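    The flow parsing gain measured by retinal motion nulling can be sketched numerically. All quantities here (a 0.7 gain, the flow speeds, the noise level) are hypothetical; the point is only how a sub-unity gain is read off the observer's nulling settings:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_gain = 0.7                      # hypothetical sub-unity parsing gain
    flow = np.linspace(1.0, 8.0, 20)     # optic-flow speed at the object (deg/s)

    # Nulling: the retinal speed at which the object appears stationary in
    # the scene equals gain * flow, plus setting noise from the observer.
    null_settings = true_gain * flow + rng.normal(0.0, 0.1, flow.size)

    # A least-squares slope through the origin recovers the parsing gain.
    est_gain = float(np.sum(flow * null_settings) / np.sum(flow ** 2))
    # est_gain below 1.0 means self-motion is only partially subtracted
    ```

    A gain of 1.0 would mean the visual system subtracts the self-motion component completely; the paper's finding is that measured gains fall below that for all displays tested.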

  4. Simulation of nap-of-the-Earth flight in helicopters

    NASA Technical Reports Server (NTRS)

    Condon, Gregory W.

    1991-01-01

NASA-Ames, along with the U.S. Army, has conducted extensive simulation studies of rotorcraft in the nap-of-the-Earth (NOE) environment and has developed facility capabilities specifically designed for this flight regime. The experience gained to date in applying these facilities to the NOE flight regime is reported, along with the results of specific experimental studies conducted to understand the influence of both motion and visual scene on the fidelity of NOE simulation. Included are comparisons of results from concurrent piloted simulation and flight research studies. The results of a recent simulation experiment to study simulator sickness in this flight regime are also discussed.

  5. Postural time-to-contact as a precursor of visually induced motion sickness.

    PubMed

    Li, Ruixuan; Walter, Hannah; Curry, Christopher; Rath, Ruth; Peterson, Nicolette; Stoffregen, Thomas A

    2018-06-01

    The postural instability theory of motion sickness predicts that subjective symptoms of motion sickness will be preceded by unstable control of posture. In previous studies, this prediction has been confirmed with measures of the spatial magnitude and the temporal dynamics of postural activity. In the present study, we examine whether precursors of visually induced motion sickness might exist in postural time-to-contact, a measure of postural activity that is related to the risk of falling. Standing participants were exposed to oscillating visual motion stimuli in a standard laboratory protocol. Both before and during exposure to visual motion stimuli, we monitored the kinematics of the body's center of pressure. We predicted that postural activity would differ between participants who reported motion sickness and those who did not, and that these differences would exist before participants experienced subjective symptoms of motion sickness. During exposure to visual motion stimuli, the multifractality of sway differed between the Well and Sick groups. Postural time-to-contact differed between the Well and Sick groups during exposure to visual motion stimuli, but also before exposure to any motion stimuli. The results provide a qualitatively new type of support for the postural instability theory of motion sickness.
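    Postural time-to-contact, in its simplest velocity-only form, is the time for the center of pressure to reach the stability boundary at its current velocity. Published computations typically also account for COP acceleration and a two-dimensional boundary; this 1-D sketch with invented numbers is illustrative only.

    ```python
    def time_to_contact(cop, vel, boundary):
        """Velocity-only 1-D postural time-to-contact: time until the center
        of pressure (cop, m) reaches the stability boundary (+/-boundary, m)
        at its current velocity (vel, m/s); infinite if the COP is still."""
        if vel > 0:
            return (boundary - cop) / vel
        if vel < 0:
            return (-boundary - cop) / vel
        return float("inf")

    # COP 2 cm forward of center, drifting forward at 5 cm/s, boundary
    # 10 cm ahead: contact in (0.10 - 0.02) / 0.05 = 1.6 s.
    ttc = time_to_contact(0.02, 0.05, 0.10)
    ```

    Shorter times-to-contact indicate sway that is closer, in time, to the limits of stability, which is what makes the measure a candidate precursor of motion sickness here.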

  6. Transfer of perceptual adaptation to space sickness: What enhances an individual's ability to adapt?

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The objectives of this project were to explore systematically the determiners of transfer of perceptual adaptation as these principles might apply to the space adaptation syndrome. The perceptual experience of an astronaut exposed to the altered gravitational forces of spaceflight shares much with that of a laboratory subject exposed to optically induced visual rearrangement, with tilt and dynamic motion illusions such as vection, and with the experiences and symptoms reported by trainees exposed to the compellingly realistic visual imagery of flight simulators and virtual reality systems. In all of these cases the observer is confronted with a variety of inter- and intrasensory conflicts that initially disrupt perception as well as behavior, and also produce symptoms of motion sickness.

  7. Rehabilitation of Visual and Perceptual Dysfunction after Severe Traumatic Brain Injury

    DTIC Science & Technology

    2014-05-01

    Aguilar C, Hall-Haro C. Decay of prism aftereffects under passive and active conditions. Cogn Brain Res. 2004;20:92-97. 13. Kornheiser A. Adaptation...17. Huxlin KR, Martin T, Kelly K, et al. Perceptual relearning of complex visual motion after V1 damage in humans. J Neurosci . 2009;29:3981-3991...questionnaires. Restor Neurol Neurosci . 2004;22:399-420. 19. Peli E, Bowers AR, Mandel AJ, Higgins K, Goldstein RB, Bobrow L. Design of driving simulator

  8. Evidence for auditory-visual processing specific to biological motion.

    PubMed

    Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F

    2012-01-01

    Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensory processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modality is specific to biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modalities yield longer reaction times than consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1, this suggests that conflicting auditory-visual motion information about an intact human walker leads to interference, thereby delaying the response.

  9. Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.

    2012-01-01

    Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…

  10. Supercomputing meets seismology in earthquake exhibit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blackwell, Matt; Rodger, Arthur; Kennedy, Tom

    When the California Academy of Sciences created the "Earthquake: Evidence of a Restless Planet" exhibit, they called on Lawrence Livermore to help combine seismic research with the latest data-driven visualization techniques. The outcome is a series of striking visualizations of earthquakes, tsunamis and tectonic plate evolution. Seismic-wave research is a core competency at Livermore. While most often associated with earthquakes, the research has many other applications of national interest, such as nuclear explosion monitoring, explosion forensics, energy exploration, and seismic acoustics. For the Academy effort, Livermore researchers simulated the San Andreas and Hayward fault events at high resolutions. Such calculations require significant computational resources. To simulate the 1906 earthquake, for instance, visualizing 125 seconds of ground motion required over 1 billion grid points, 10,000 time steps, and 7.5 hours of processor time on 2,048 cores of Livermore's Sierra machine.

  11. Supercomputing meets seismology in earthquake exhibit

    ScienceCinema

    Blackwell, Matt; Rodger, Arthur; Kennedy, Tom

    2018-02-14

    When the California Academy of Sciences created the "Earthquake: Evidence of a Restless Planet" exhibit, they called on Lawrence Livermore to help combine seismic research with the latest data-driven visualization techniques. The outcome is a series of striking visualizations of earthquakes, tsunamis and tectonic plate evolution. Seismic-wave research is a core competency at Livermore. While most often associated with earthquakes, the research has many other applications of national interest, such as nuclear explosion monitoring, explosion forensics, energy exploration, and seismic acoustics. For the Academy effort, Livermore researchers simulated the San Andreas and Hayward fault events at high resolutions. Such calculations require significant computational resources. To simulate the 1906 earthquake, for instance, visualizing 125 seconds of ground motion required over 1 billion grid points, 10,000 time steps, and 7.5 hours of processor time on 2,048 cores of Livermore's Sierra machine.
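    The compute figures quoted in this record can be sanity-checked with a few lines of arithmetic (values taken from the abstract; the grid-point count is the quoted lower bound):

```python
# Rough compute-cost check for the 1906 earthquake visualization.
cores = 2048
wall_hours = 7.5
core_hours = cores * wall_hours          # total processor time consumed
grid_points = 1_000_000_000              # "over 1 billion grid points"
time_steps = 10_000
point_updates = grid_points * time_steps # grid-point updates over the run

print(f"{core_hours:,.0f} core-hours, {point_updates:.1e} point updates")
```

    That is roughly 15 thousand core-hours and on the order of ten trillion grid-point updates for 125 seconds of simulated ground motion.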

  12. Neural Mechanisms of Cortical Motion Computation Based on a Neuromorphic Sensory System

    PubMed Central

    Abdul-Kreem, Luma Issa; Neumann, Heiko

    2015-01-01

    The visual cortex analyzes motion information along hierarchically arranged visual areas that interact through bidirectional interconnections. This work suggests a bio-inspired visual model focusing on the interactions of the cortical areas, in which a new mechanism of feedforward and feedback processing is introduced. The model uses a neuromorphic vision sensor (silicon retina) that simulates the spike-generation functionality of the biological retina. Our model takes into account two main model visual areas, namely V1 and MT, with different feature selectivities. The initial motion is estimated in model area V1 using spatiotemporal filters to locally detect the direction of motion. Here, we adapt the filtering scheme originally suggested by Adelson and Bergen to make it consistent with the spike representation of the dynamic vision sensor (DVS). The responses of area V1 are weighted and pooled by area MT cells, which are selective to different velocities, i.e., direction and speed. Such feature selectivity is here derived from compositions of activities in the spatiotemporal domain, integrating over larger space-time regions (receptive fields). In order to account for the bidirectional coupling of cortical areas, we match properties of the feature selectivity in both areas for feedback processing. For such linkage we integrate the responses over different speeds along a particular preferred direction. Normalization of activities is carried out over the spatial as well as the feature domains to balance the activities of individual neurons in model areas V1 and MT. Our model was tested using different stimuli that moved in different directions. The results reveal that the error margin between the estimated motion and synthetic ground truth is decreased in area MT compared with the initial estimation of area V1. In addition, the modulated V1 cell activations show an enhancement of the initial motion estimation that is steered by feedback signals from MT cells. PMID:26554589
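    The spatiotemporal-energy idea underlying the model's V1 stage can be sketched in a few lines, in the style of Adelson and Bergen: quadrature pairs of space-time filters tuned to opposite drift directions, squared and subtracted to give opponent motion energy. Filter parameters here are illustrative, not taken from the paper.

```python
import numpy as np

def gabor(x, t, fx, ft, sx=0.1, st=0.1, phase=0.0):
    """Space-time Gabor: a drifting grating under a Gaussian envelope."""
    env = np.exp(-(x**2) / (2 * sx**2) - (t**2) / (2 * st**2))
    return env * np.cos(2 * np.pi * (fx * x + ft * t) + phase)

def motion_energy(stimulus, x, t, fx=4.0, ft=4.0):
    """Opponent motion energy: quadrature pairs tuned to rightward and
    leftward drift; energy = sum of squared quadrature responses; the
    output is their difference (positive => rightward motion)."""
    X, T = np.meshgrid(x, t, indexing="ij")
    out = 0.0
    for sign, weight in ((-1, +1.0), (+1, -1.0)):     # rightward, leftward
        for phase in (0.0, np.pi / 2):                # quadrature pair
            f = gabor(X, T, fx, sign * ft, phase=phase)
            out += weight * np.sum(f * stimulus) ** 2
    return out

x = np.linspace(-0.5, 0.5, 64)
t = np.linspace(-0.5, 0.5, 64)
X, T = np.meshgrid(x, t, indexing="ij")
speed = 1.0                                           # rightward drift
moving = np.cos(2 * np.pi * 4.0 * (X - speed * T))
print(motion_energy(moving, x, t) > 0)                # rightward => True
```

    The paper's contribution is adapting this continuous filtering scheme to the asynchronous spike output of the DVS; the sketch above only shows the underlying energy computation on a dense stimulus.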

  13. The Impact of Structural Vibration on Flying Qualities of a Supersonic Transport

    NASA Technical Reports Server (NTRS)

    Raney, David L.; Jackson, E. Bruce; Buttrill, Carey S.; Adams, William M.

    2001-01-01

    A piloted simulation experiment has been conducted in the NASA Langley Visual/Motion Simulator facility to address the impact of dynamic aeroelastic effects on flying qualities of a supersonic transport. The intent of this experiment was to determine the effectiveness of several measures that may be taken to reduce the impact of aircraft flexibility on piloting tasks. Potential solutions that were examined included structural stiffening, active vibration suppression, and elimination of visual cues associated with the elastic modes. A series of parametric configurations was evaluated by six test pilots for several types of maneuver tasks. During the investigation, several incidents were encountered in which cockpit vibrations due to elastic modes fed back into the control stick through involuntary motions of the pilot's upper body and arm. The phenomenon, referred to as biodynamic coupling, is evidenced by a resonant peak in the power spectrum of the pilot's stick inputs at a structural mode frequency. The results of the investigation indicate that structural stiffening and compensation of the visual display were of little benefit in alleviating the impact of elastic dynamics on the piloting tasks, while increased damping and elimination of control-effector excitation of the lowest frequency modes offered great improvements when applied in sufficient degree.

  14. Visual motion integration for perception and pursuit

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Beutter, B. R.; Lorenceau, J.

    2000-01-01

    To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.

  15. Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search

    PubMed Central

    Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.

    2012-01-01

    Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766

  16. Peripheral vision of youths with low vision: motion perception, crowding, and visual search.

    PubMed

    Tadin, Duje; Nyquist, Jeffrey B; Lusk, Kelly E; Corn, Anne L; Lappin, Joseph S

    2012-08-24

    Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10-17) and low vision (n = 24, ages 9-18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function.

  17. Cybersickness provoked by head-mounted display affects cutaneous vascular tone, heart rate and reaction time.

    PubMed

    Nalivaiko, Eugene; Davis, Simon L; Blackmore, Karen L; Vakulin, Andrew; Nesbitt, Keith V

    2015-11-01

    Evidence from studies of provocative motion indicates that motion sickness is tightly linked to disturbances of thermoregulation. The major aim of the current study was to determine whether provocative visual stimuli (immersion into a virtual reality simulating rides on a rollercoaster) affect skin temperature, which reflects thermoregulatory cutaneous responses, and to test whether such stimuli alter cognitive functions. In 26 healthy young volunteers wearing a head-mounted display (Oculus Rift), simulated rides consistently provoked vection and nausea, with a significant difference between the two versions of simulation software (Parrot Coaster and Helix). Basal finger temperature had a bimodal distribution, with a low-temperature group (n=8) having values of 23-29 °C, and a high-temperature group (n=18) having values of 32-36 °C. Effects of cybersickness on finger temperature depended on the basal level of this variable: in subjects from the former group it rose by 3-4 °C, while in most subjects from the latter group it either did not change or transiently fell by 1.5-2 °C. There was no correlation between the magnitude of changes in finger temperature and nausea score at the end of the simulated ride. Provocative visual stimulation caused prolongation of simple reaction time by 20-50 ms; this increase closely correlated with the subjective rating of nausea. Lastly, in subjects who experienced pronounced nausea, heart rate was elevated. We conclude that cybersickness is associated with changes in cutaneous thermoregulatory vascular tone; this further supports the idea of a tight link between motion sickness and thermoregulation. Cybersickness-induced prolongation of reaction time raises obvious concerns regarding the safety of this technology.

  18. Motion Cueing Algorithm Development: Human-Centered Linear and Nonlinear Approaches

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.

    2005-01-01

    While the performance of flight simulator motion system hardware has advanced substantially, the development of the motion cueing algorithm, the software that transforms simulated aircraft dynamics into realizable motion commands, has not kept pace. Prior research identified viable features from two algorithms: the nonlinear "adaptive algorithm", and the "optimal algorithm" that incorporates human vestibular models. A novel approach to motion cueing, the "nonlinear algorithm", is introduced that combines features from both approaches. This algorithm is formulated by optimal control, and incorporates a new integrated perception model that includes both visual and vestibular sensation and the interaction between the stimuli. Using a time-varying control law, the matrix Riccati equation is updated in real time by a neurocomputing approach. Preliminary pilot testing resulted in the optimal algorithm incorporating a new otolith model, producing improved motion cues. The nonlinear algorithm's vertical mode produced a motion cue with a time-varying washout, sustaining small cues for longer durations and washing out large cues more quickly compared to the optimal algorithm. The inclusion of the integrated perception model improved the responses to longitudinal and lateral cues. False cues observed with the NASA adaptive algorithm were absent. The neurocomputing approach was crucial in that the number of presentations of an input vector could be reduced to meet the real-time requirement without degrading the quality of the motion cues.
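    For contrast with the optimal and adaptive algorithms discussed above, the baseline idea common to all motion cueing, washing out sustained accelerations so the platform stays within its travel limits, can be sketched with a first-order high-pass filter. This is a simplified classical washout, not the paper's algorithm, and the time constant is an illustrative value.

```python
import numpy as np

def washout_highpass(accel, dt, tau=2.0):
    """First-order high-pass 'washout' filter: passes acceleration onsets
    to the motion platform but washes out sustained acceleration so the
    platform drifts back toward neutral (discrete form of s/(s + 1/tau))."""
    alpha = tau / (tau + dt)
    out = np.zeros_like(accel)
    for k in range(1, len(accel)):
        out[k] = alpha * (out[k - 1] + accel[k] - accel[k - 1])
    return out

dt = 0.01
t = np.arange(0, 10, dt)
accel = np.where(t >= 1.0, 2.0, 0.0)           # step to sustained 2 m/s^2
cue = washout_highpass(accel, dt)
print(cue.max() > 1.9, abs(cue[-1]) < 0.1)     # onset passed, then washed out
```

    The algorithms in the abstract replace this fixed linear filter with vestibular-model-based optimal control and time-varying gains, precisely because a fixed washout produces false cues during the return-to-neutral phase.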

  19. Individual differences in visual motion perception and neurotransmitter concentrations in the human brain.

    PubMed

    Takeuchi, Tatsuto; Yoshimoto, Sanae; Shimada, Yasuhiro; Kochiyama, Takanori; Kondo, Hirohito M

    2017-02-19

    Recent studies have shown that interindividual variability can be a rich source of information regarding the mechanism of human visual perception. In this study, we examined the mechanisms underlying interindividual variability in the perception of visual motion, one of the fundamental components of visual scene analysis, by measuring neurotransmitter concentrations using magnetic resonance spectroscopy. First, by psychophysically examining two types of motion phenomena, motion assimilation and motion contrast, we found that, following the presentation of the same stimulus, some participants perceived motion assimilation, while others perceived motion contrast. Furthermore, we found that the concentration of the excitatory neurotransmitter glutamate-glutamine (Glx) in the dorsolateral prefrontal cortex (Brodmann area 46) was positively correlated with a participant's tendency toward motion assimilation over motion contrast; however, this effect was not observed in the visual areas. The concentration of the inhibitory neurotransmitter γ-aminobutyric acid had only a weak effect compared with that of Glx. We conclude that excitatory processes in this suprasensory area are important for an individual's tendency to perceive one or the other of these two antagonistic visual motion phenomena. This article is part of the themed issue 'Auditory and visual scene analysis'.

  20. Causal evidence for retina dependent and independent visual motion computations in mouse cortex

    PubMed Central

    Hillier, Daniel; Fiscella, Michele; Drinnenberg, Antonia; Trenholm, Stuart; Rompani, Santiago B.; Raics, Zoltan; Katona, Gergely; Juettner, Josephine; Hierlemann, Andreas; Rozsa, Balazs; Roska, Botond

    2017-01-01

    How neuronal computations in the sensory periphery contribute to computations in the cortex is not well understood. We examined this question in the context of visual-motion processing in the retina and primary visual cortex (V1) of mice. We disrupted retinal direction selectivity – either exclusively along the horizontal axis using FRMD7 mutants or along all directions by ablating starburst amacrine cells – and monitored neuronal activity in layer 2/3 of V1 during stimulation with visual motion. In control mice, we found an overrepresentation of cortical cells preferring posterior visual motion, the dominant motion direction an animal experiences when it moves forward. In mice with disrupted retinal direction selectivity, the overrepresentation of posterior-motion-preferring cortical cells disappeared, and their response at higher stimulus speeds was reduced. This work reveals the existence of two functionally distinct, sensory-periphery-dependent and -independent computations of visual motion in the cortex. PMID:28530661

  1. Perception of linear horizontal self-motion induced by peripheral vision (linearvection): Basic characteristics and visual-vestibular interactions

    NASA Technical Reports Server (NTRS)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self-motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential although not independent role of vision in self-motion perception.

  2. Motion-based prediction explains the role of tracking in motion extrapolation.

    PubMed

    Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U

    2013-11-01

    During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we have recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior where the dot is either tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, where motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated. Then, during tracking, trajectory estimation is robust to blanks even in the presence of relatively high levels of noise. Moreover, we found that tracking is necessary for motion extrapolation; this calls for further experimental work exploring the role of noise in motion extrapolation.
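    The coasting behavior described here can be illustrated with a constant-velocity Kalman tracker. This is a generic stand-in for the paper's probabilistic motion-based prediction model, with made-up noise parameters: during a blank only the predict step runs, so the estimate extrapolates along the last inferred velocity rather than collapsing.

```python
import numpy as np

def track(observations, dt=0.1, q=1e-3, r=1e-2):
    """Constant-velocity Kalman tracker: during a blank (observation is
    None) only the predict step runs, so the trajectory estimate coasts
    along the last inferred velocity."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.zeros(2), 10.0 * np.eye(2)
    estimates = []
    for z in observations:
        x, P = F @ x, F @ P @ F.T + Q       # predict
        if z is not None:                   # update only while visible
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + (K @ (np.array([z]) - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

dt = 0.1
true_pos = [0.5 * k * dt for k in range(40)]    # dot moving at 0.5 units/s
obs = [p if not 20 <= k < 30 else None          # 1-s blank mid-trajectory
       for k, p in enumerate(true_pos)]
est = track(obs, dt)
print(abs(est[29, 0] - true_pos[29]) < 0.1)     # coasts through the blank
```

    Note the paper's point that extrapolation only works once tracking is established: here, too, the filter must accumulate enough pre-blank observations for its velocity estimate to converge before the blank begins.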

  3. Low cognitive load and reduced arousal impede practice effects on executive functioning, metacognitive confidence and decision making.

    PubMed

    Jackson, Simon A; Kleitman, Sabina; Aidman, Eugene

    2014-01-01

    The present study investigated the effects of low cognitive workload and the absence of arousal induced via external physical stimulation (motion) on practice-related improvements in executive (inhibitory) control, short-term memory, metacognitive monitoring and decision making. A total of 70 office workers performed low and moderately engaging passenger tasks in two successive 20-minute simulated drives and repeated a battery of decision-making and inhibitory-control tests three times: before, between and after these drives. For half the participants, visual simulation was synchronised with (moderately arousing) motion generated through the LAnd Motion Platform, with vibration levels corresponding to a well-maintained unsealed road. The other half performed the same simulated drive without motion. Participants' performance significantly improved over the three test blocks, which is indicative of typical practice effects. The magnitude of these improvements was highest when both motion and moderate cognitive load were present. The same effects declined either in the absence of motion (low arousal) or following a low cognitive workload task, thus suggesting two distinct pathways through which practice-related improvements in cognitive performance may be hampered. Practice, however, degraded certain aspects of metacognitive performance, as participants became less likely to detect incorrect decisions in the decision-making test with each subsequent test block. Implications include consideration of low cognitive load and arousal as factors responsible for performance decline and targets for the development of interventions/strategies in low load/arousal conditions such as autonomous vehicle operations and highway driving.

  4. Nonlinear circuits for naturalistic visual motion estimation

    PubMed Central

    Fitzgerald, James E; Clark, Damon A

    2015-01-01

    Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator. DOI: http://dx.doi.org/10.7554/eLife.09123.001 PMID:26499494
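    The canonical cross-correlation scheme this abstract refers to is the Hassenstein-Reichardt correlator. A minimal sketch (illustrative delays and stimuli, not the paper's model): each arm multiplies a delayed copy of one input with its undelayed neighbor, and the opponent difference signs the direction of motion.

```python
import numpy as np

def reichardt(signal_a, signal_b, dt_steps=1):
    """Minimal Hassenstein-Reichardt correlator: delay one input and
    multiply it with its neighbor, in both mirror-symmetric arms; the
    opponent difference is positive for motion from A toward B."""
    a, b = np.asarray(signal_a, float), np.asarray(signal_b, float)
    a_del = np.roll(a, dt_steps)          # delayed copy of A
    b_del = np.roll(b, dt_steps)          # delayed copy of B
    return np.mean(a_del * b - b_del * a)

t = np.linspace(0, 4 * np.pi, 400, endpoint=False)
lag = 0.5                                 # B sees the stimulus after A
a, b = np.sin(t), np.sin(t - lag)
print(reichardt(a, b) > 0, reichardt(b, a) < 0)
```

    Because the output is a product of pairwise signals, this detector is only sensitive to second-order correlations; the paper's point is that natural scenes carry additional higher-order correlations that circuit motifs such as ON/OFF segregation can exploit.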

  5. Psychophysical scaling of circular vection (CV) produced by optokinetic (OKN) motion: individual differences and effects of practice.

    PubMed

    Kennedy, R S; Hettinger, L J; Harm, D L; Ordy, J M; Dunlap, W P

    1996-01-01

    Vection (V) refers to the compelling visual illusion of self-motion experienced by stationary individuals when viewing moving visual surrounds. The phenomenon is of theoretical interest because of its relevance for understanding the neural basis of ordinary self-motion perception, and of practical importance because it is the experience that makes simulation, virtual reality displays, and entertainment devices more vicarious. This experiment was performed to address whether an optokinetically induced vection illusion exhibits monotonic and stable psychometric properties and whether individuals differ reliably in these (V) perceptions. Subjects were exposed to varying velocities of the circular vection (CV) display in an optokinetic (OKN) drum 2 meters in diameter in 5 one-hour daily sessions extending over a 1-week period. For grouped data, psychophysical scalings showed that velocity estimates followed a Stevens-type power function with an exponent near unity (slope = 0.95) that was largely stable over sessions. Latencies were slightly longer for the slowest and fastest induction stimuli, and average latency lengthened with practice, implying time-course adaptation effects. Test-retest reliabilities for individual slope and intercept measures were moderately strong (r = 0.45) and showed no evidence of superdiagonal form. This implies stability of the individual circular vection (CV) sensitivities. Because the individual CV scores were stable, reliabilities were improved by averaging 4 sessions in order to provide a stronger retest reliability (r = 0.80). Individual latency responses were highly reliable (r = 0.80). Mean CV latency and motion sickness symptoms were greater in males than in females. These individual differences in CV could be predictive of other outcomes, such as susceptibility to disorientation or motion sickness, and for CNS localization of visual-vestibular interactions in the experience of self-motion.
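    The power-function scaling reported above can be reproduced on synthetic data: taking logs of Stevens' law, psi = k * phi**n, gives a straight line whose slope is the exponent, so a log-log least-squares fit recovers n. The stimulus and response values below are illustrative, not the paper's data.

```python
import numpy as np

def stevens_exponent(stimulus, magnitude):
    """Estimate the exponent n of Stevens' power law, psi = k * phi**n,
    by least-squares regression in log-log coordinates: log psi =
    log k + n * log phi, so the fitted slope is n."""
    slope, intercept = np.polyfit(np.log(stimulus), np.log(magnitude), 1)
    return slope, np.exp(intercept)

# Synthetic velocity estimates generated with a known exponent of 0.95
# (the group slope reported in the abstract) and an arbitrary scale k.
phi = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # drum velocity, deg/s
psi = 1.3 * phi ** 0.95                          # reported magnitude
n, k = stevens_exponent(phi, psi)
print(round(n, 2), round(k, 2))                  # recovers 0.95 and 1.3
```

    An exponent near 1, as found in the study, means perceived velocity grows almost linearly with actual drum velocity.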

  6. A comprehensive computational model of sound transmission through the porcine lung

    PubMed Central

    Dai, Zoujun; Peng, Ying; Henry, Brian M.; Mansy, Hansen A.; Sandler, Richard H.; Royston, Thomas J.

    2014-01-01

    A comprehensive computational simulation model of sound transmission through the porcine lung is introduced and experimentally evaluated. This “subject-specific” model utilizes parenchymal and major airway geometry derived from x-ray CT images. The lung parenchyma is modeled as a poroviscoelastic material using Biot theory. A finite element (FE) mesh of the lung that includes airway detail is created and used in COMSOL FE software to simulate the vibroacoustic response of the lung to sound input at the trachea. The FE simulation model is validated by comparing simulation results to experimental measurements using scanning laser Doppler vibrometry on the surface of an excised, preserved lung. The FE model can also be used to calculate and visualize vibroacoustic pressure and motion inside the lung and its airways caused by the acoustic input. The effect of diffuse lung fibrosis and of a local tumor on the lung acoustic response is simulated and visualized using the FE model. In the future, this type of visualization can be compared and matched with experimentally obtained elastographic images to better quantify regional lung material properties to noninvasively diagnose and stage disease and response to treatment. PMID:25190415

  7. A comprehensive computational model of sound transmission through the porcine lung.

    PubMed

    Dai, Zoujun; Peng, Ying; Henry, Brian M; Mansy, Hansen A; Sandler, Richard H; Royston, Thomas J

    2014-09-01

    A comprehensive computational simulation model of sound transmission through the porcine lung is introduced and experimentally evaluated. This "subject-specific" model utilizes parenchymal and major airway geometry derived from x-ray CT images. The lung parenchyma is modeled as a poroviscoelastic material using Biot theory. A finite element (FE) mesh of the lung that includes airway detail is created and used in COMSOL FE software to simulate the vibroacoustic response of the lung to sound input at the trachea. The FE simulation model is validated by comparing simulation results to experimental measurements using scanning laser Doppler vibrometry on the surface of an excised, preserved lung. The FE model can also be used to calculate and visualize vibroacoustic pressure and motion inside the lung and its airways caused by the acoustic input. The effect of diffuse lung fibrosis and of a local tumor on the lung acoustic response is simulated and visualized using the FE model. In the future, this type of visualization can be compared and matched with experimentally obtained elastographic images to better quantify regional lung material properties to noninvasively diagnose and stage disease and response to treatment.

  8. Assessment of simulation fidelity using measurements of piloting technique in flight. II

    NASA Technical Reports Server (NTRS)

    Ferguson, S. W.; Clement, W. F.; Hoh, R. H.; Cleveland, W. B.

    1985-01-01

    Two piloting tasks flown on the Vertical Motion Simulator (presently being used to assess the fidelity of UH-60A simulation) are evaluated: (1) the dash/quickstop nap-of-the-earth (NOE) piloting task, and (2) the bob-up task. Data from these two flight test experiments are presented which provide information on the effect of reduced visual field of view, variation in scene content and texture, and the effect of pure time delay in the closed-loop pilot response. In comparison with task performance results obtained in flight tests, the results from the simulation indicate that the pilot's NOE task performance in the simulator is significantly degraded.

  9. Helicopter simulation validation using flight data

    NASA Technical Reports Server (NTRS)

    Key, D. L.; Hansen, R. S.; Cleveland, W. B.; Abbott, W. Y.

    1982-01-01

    A joint NASA/Army effort to perform a systematic ground-based piloted simulation validation assessment is described. The best available mathematical model for the subject helicopter (UH-60A Black Hawk) was programmed for real-time operation. Flight data were obtained to validate the math model, and to develop models for the pilot control strategy while performing mission-type tasks. The validated math model is to be combined with motion and visual systems to perform ground based simulation. Comparisons of the control strategy obtained in flight with that obtained on the simulator are to be used as the basis for assessing the fidelity of the results obtained in the simulator.

  10. An inventory of aeronautical ground research facilities. Volume 4: Engineering flight simulation facilities

    NASA Technical Reports Server (NTRS)

    Pirrello, C. J.; Hardin, R. D.; Capelluro, L. P.; Harrison, W. D.

    1971-01-01

    The general purpose capabilities of government and industry in the area of real time engineering flight simulation are discussed. The information covers computer equipment, visual systems, crew stations, and motion systems, along with brief statements of facility capabilities. Facility construction and typical operational costs are included where available. The facilities provide for economical and safe solutions to vehicle design, performance, control, and flying qualities problems of manned and unmanned flight systems.

  11. Three-Dimensional Displays In The Future Flight Station

    NASA Astrophysics Data System (ADS)

    Bridges, Alan L.

    1984-10-01

    This review paper summarizes the development and applications of computer techniques for the representation of three-dimensional data in the future flight station. It covers the development of the Lockheed-NASA Advanced Concepts Flight Station (ACFS) research simulators. These simulators contain: a Pilot's Desk Flight Station (PDFS) with five 13-inch diagonal, color, cathode ray tubes on the main instrument panel; a computer-generated day and night visual system; a six-degree-of-freedom motion base; and a computer complex. This paper reviews current research, development, and evaluation of easily modifiable display systems and software requirements for three-dimensional displays that may be developed for the PDFS. This includes the analysis and development of a 3-D representation of the entire flight profile. This 3-D flight path, or "Highway-in-the-Sky", will utilize motion and perspective cues to tightly couple the human responses of the pilot to the aircraft control systems. The use of custom logic, e.g., graphics engines, may provide the processing power and architecture required for 3-D computer-generated imagery (CGI) or visual scene simulation (VSS). Diffraction or holographic head-up displays (HUDs) will also be integrated into the ACFS simulator to permit research on the requirements and use of these "out-the-window" projection systems. Future research may include the retrieval of high-resolution, perspective view terrain maps which could then be overlaid with current weather information or other selectable cultural features.

  12. Investigation of visually induced motion sickness in dynamic 3D contents based on subjective judgment, heart rate variability, and depth gaze behavior.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2014-01-01

    Visually induced motion sickness (VIMS) is an important safety issue in stereoscopic 3D technology. Accompanying subjective judgment of VIMS with objective measurement is useful to identify not only the biomedical effects of dynamic 3D contents, but also the provoking scenes that induce VIMS, the duration of VIMS, and user behavior during VIMS. Heart rate variability and depth gaze behavior are appropriate physiological indicators for such objective observation. However, there is no information about the relationship between subjective judgment of VIMS, heart rate variability, and depth gaze behavior. In this paper, we present a novel investigation of VIMS based on the simulator sickness questionnaire (SSQ), electrocardiography (ECG), and 3D gaze tracking. Statistical analysis of the SSQ data shows that nausea and disorientation symptoms increase as the amount of dynamic motion increases (nausea: p < 0.005; disorientation: p < 0.05). To reduce VIMS, the SSQ and ECG data suggest that users should perform voluntary gaze fixation at one point when experiencing vertical motion (up or down) and horizontal motion (turning left and right) in dynamic 3D contents. Observation of the 3D gaze tracking data reveals that users who experienced VIMS tended to have less stable depth gaze than those who did not.

  13. Perception of Visual Speed While Moving

    ERIC Educational Resources Information Center

    Durgin, Frank H.; Gigone, Krista; Scott, Rebecca

    2005-01-01

    During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone…

  14. Motion Estimation Using the Single-row Superposition-type Planar Compound-like Eye

    PubMed Central

    Cheng, Chi-Cheng; Lin, Gwo-Long

    2007-01-01

    How can the compound eye of insects capture prey so accurately and quickly? This interesting issue is explored from the perspective of computer vision rather than biology. The focus is on performance evaluation of noise immunity for motion recovery using the single-row superposition-type planar compound-like eye (SPCE). The SPCE has a special symmetrical framework with a tremendous number of ommatidia, inspired by the compound eye of insects. The noise simulates possible ambiguity of image patterns caused by either environmental uncertainty or the low resolution of CCD devices. Results of extensive simulations indicate that this special visual configuration provides excellent motion estimation performance regardless of the magnitude of the noise. Even when the noise interference is serious, the SPCE is able to dramatically reduce errors in motion recovery of the ego-translation without any filtering. In other words, the symmetrical, regular, and multiple vision sensing devices of the compound-like eye have a statistical averaging advantage that suppresses possible noise. This discovery lays a basic foundation, in terms of engineering approaches, for the secret of the compound eye of insects.

  15. Motion Cueing Algorithm Development: New Motion Cueing Program Implementation and Tuning

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    A computer program has been developed for the purpose of driving the NASA Langley Research Center Visual Motion Simulator (VMS). This program includes two new motion cueing algorithms, the optimal algorithm and the nonlinear algorithm. A general description of the program is given along with a description and flowcharts for each cueing algorithm, and descriptions and flowcharts for the subroutines used with the algorithms. Common block variable listings and a program listing are also provided. The new cueing algorithms include a nonlinear gain algorithm that scales each aircraft degree-of-freedom input with a third-order polynomial. A description of the nonlinear gain algorithm is given along with past tuning experience and procedures for tuning the gain coefficient sets for each degree of freedom to produce the desired piloted performance. This algorithm tuning will be needed when the nonlinear motion cueing algorithm is implemented on a new motion system in the Cockpit Motion Facility (CMF) at the NASA Langley Research Center.
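
    The per-degree-of-freedom third-order polynomial scaling described above can be sketched as follows. The coefficient values are illustrative placeholders, not the tuned VMS/CMF coefficient sets from the report:

```python
def nonlinear_gain(x, c1, c2, c3):
    """Scale one aircraft degree-of-freedom input with a third-order
    polynomial, applied symmetrically about zero so the sign of the
    motion cue is preserved. With coefficients chosen appropriately,
    small inputs pass nearly unattenuated while large inputs are
    compressed toward the simulator's motion envelope.
    """
    mag = abs(x)
    scaled = c1 * mag + c2 * mag ** 2 + c3 * mag ** 3
    return scaled if x >= 0 else -scaled

# One coefficient set per degree of freedom (illustrative values only):
coeffs = {
    "roll":  (1.0, -0.05, 0.002),
    "pitch": (1.0, -0.04, 0.001),
}
y = nonlinear_gain(0.5, *coeffs["roll"])
```

Tuning, as described in the abstract, amounts to adjusting each coefficient set until the scaled cues produce the desired piloted performance for that degree of freedom.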

  16. Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness

    PubMed Central

    Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.

    2014-01-01

    Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increase vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752

  17. Motion-related resource allocation in dynamic wireless visual sensor network environments.

    PubMed

    Katsenou, Angeliki V; Kondi, Lisimachos P; Parsopoulos, Konstantinos E

    2014-01-01

    This paper investigates quality-driven cross-layer optimization for resource allocation in direct sequence code division multiple access wireless visual sensor networks. We consider a single-hop network topology, where each sensor transmits directly to a centralized control unit (CCU) that manages the available network resources. Our aim is to enable the CCU to jointly allocate the transmission power and source-channel coding rates for each node, under four different quality-driven criteria that take into consideration the varying motion characteristics of each recorded video. For this purpose, we studied two approaches with different tradeoffs of quality and complexity. The first allocates the resources individually for each sensor, whereas the second clusters the sensors according to the recorded level of motion. In order to address the dynamic nature of the recorded scenery and reallocate the resources whenever changes in the amount of motion in the scenery dictate, we propose a mechanism based on the particle swarm optimization algorithm, combined with two restarting schemes that either exploit the previously determined resource allocation or conduct a rough estimation of it. Experimental simulations demonstrate the efficiency of the proposed approaches.
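
    A minimal particle swarm optimizer with the warm-restart idea described above (seeding the swarm with a previously determined allocation) might look like the following sketch. The toy cost function and all parameter values are illustrative, not the paper's cross-layer objective:

```python
import random

def pso_minimize(cost, dim, bounds, n_particles=20, iters=100, seed=0,
                 init=None):
    """Minimal particle swarm optimizer (global-best topology).

    `init`, when given, seeds one particle with a previous solution,
    mimicking a restarting scheme that reuses the last resource
    allocation after the scene's motion level changes.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    if init is not None:
        pos[0] = list(init)          # warm restart: reuse old allocation
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy cost: squared distance from a target "power/rate" vector.
target = [0.3, 0.7]
best, best_cost = pso_minimize(
    lambda p: sum((x - t) ** 2 for x, t in zip(p, target)),
    dim=2, bounds=(0.0, 1.0))
```

In the paper's setting, the decision vector would hold per-node (or per-cluster) transmission powers and source-channel coding rates, and `cost` would be one of the four quality-driven criteria.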

  18. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  19. Neural Representation of Motion-In-Depth in Area MT

    PubMed Central

    Sanada, Takahisa M.

    2014-01-01

    Neural processing of 2D visual motion has been studied extensively, but relatively little is known about how visual cortical neurons represent visual motion trajectories that include a component toward or away from the observer (motion in depth). Psychophysical studies have demonstrated that humans perceive motion in depth based on both changes in binocular disparity over time (CD cue) and interocular velocity differences (IOVD cue). However, evidence for neurons that represent motion in depth has been limited, especially in primates, and it is unknown whether such neurons make use of CD or IOVD cues. We show that approximately one-half of neurons in macaque area MT are selective for the direction of motion in depth, and that this selectivity is driven primarily by IOVD cues, with a small contribution from the CD cue. Our results establish that area MT, a central hub of the primate visual motion processing system, contains a 3D representation of visual motion. PMID:25411481

  20. Biomechanical testing simulation of a cadaver spine specimen: development and evaluation study.

    PubMed

    Ahn, Hyung Soo; DiAngelo, Denis J

    2007-05-15

    This article describes a computer model of a cadaver cervical spine specimen and virtual biomechanical testing. The objective was to develop a graphics-oriented, multibody model of a cadaver cervical spine and to build a virtual laboratory simulator for biomechanical testing using physics-based dynamic simulation techniques. Physics-based computer simulations apply the laws of physics to solid bodies with defined material properties. This technique can be used to create a virtual simulator for the biomechanical testing of a human cadaver spine. An accurate virtual model and simulation would complement tissue-based in vitro studies by providing a consistent test bed with minimal variability and by reducing cost. The geometry of the cervical vertebrae was created from computed tomography images. Joints linking adjacent vertebrae were modeled as a triple-joint complex, comprising intervertebral disc joints in the anterior region, 2 facet joints in the posterior region, and the surrounding ligament structure. A virtual laboratory simulation of an in vitro testing protocol was performed to evaluate the model responses during flexion, extension, and lateral bending. For kinematic evaluation, the rotation of each motion segment unit, coupling behaviors, and 3-dimensional helical axes of motion were analyzed. The simulation results correlated well with the findings of in vitro tests and with published data. For kinetic evaluation, the forces in the intervertebral discs and facet joints of each segment were determined and visually animated. This methodology produced a realistic visualization of the in vitro experiment and allowed for analyses of the kinematics and kinetics of the cadaver cervical spine. With graphical illustrations and animation features, this modeling technique provides vivid and intuitive information.

  1. Filling-in visual motion with sounds.

    PubMed

    Väljamäe, A; Soto-Faraco, S

    2008-10-01

    Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.

  2. Contextual effects on motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  3. Space flight visual simulation.

    PubMed

    Xu, L

    1985-01-01

    In this paper, based on the scenes of stars seen by astronauts in orbital flight, we have studied the mathematical model which must be constructed for a CGI system to realize space flight visual simulation. Considering such factors as the revolution and rotation of the Earth; the exact date, time, and site of orbital injection of the spacecraft; and its orbital flight and attitude motion, we first defined all the instantaneous lines of sight and visual fields of astronauts in space. Then, through a series of coordinate transforms, the pictures of the star scenes changing over time and space were generated one by one mathematically. In the procedure, we designed a method of three successive "mathematical cuttings." Finally, we obtained each instantaneous picture of the star scenes observed by astronauts through the window of the cockpit. The dynamic shadowing of the star scenes by the Earth could also be displayed in the varying pictures.
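
    The series of coordinate transforms described above can be illustrated with rotation matrices that map an inertial star direction into the spacecraft body frame, followed by a field-of-view test. The rotation sequence, angles, and field-of-view half-angle below are placeholder assumptions, not the paper's exact formulation:

```python
import numpy as np

def rot_z(a):
    """Rotation matrix about the z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    """Rotation matrix about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Chain of frame rotations: inertial -> orbit plane -> spacecraft body.
# The angles stand in for orbit orientation and attitude at one instant.
star_inertial = np.array([0.0, 0.0, 1.0])    # unit line of sight to a star
R = rot_x(np.radians(30.0)) @ rot_z(np.radians(45.0))
star_body = R @ star_inertial

# Keep the star only if it falls inside the window's field of view
# (here a 120-degree cone about the body z axis):
in_view = star_body[2] > np.cos(np.radians(60.0))
```

Repeating this for a star catalog at each simulated instant, and culling stars occluded by the Earth's disk, yields the sequence of star-scene pictures the abstract describes.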

  4. Sensation of presence and cybersickness in applications of virtual reality for advanced rehabilitation.

    PubMed

    Kiryu, Tohru; So, Richard H Y

    2007-09-25

    Around three years ago, in the special issue on augmented and virtual reality in rehabilitation, the topic of simulator sickness was briefly discussed in relation to vestibular rehabilitation. Simulator sickness in virtual reality applications has also been referred to as visually induced motion sickness or cybersickness. Recently, studies on cybersickness have been reported in entertainment, training, gaming, and medical environments in several journals. Virtual stimuli can enhance the sensation of presence, but they sometimes also evoke unpleasant sensations. In order to safely apply augmented and virtual reality in long-term rehabilitation treatment, the sensation of presence and cybersickness should be appropriately controlled. This issue presents the results of five studies conducted to evaluate visually induced effects and to speculate on the influences of virtual rehabilitation. In particular, the influence of visual and vestibular stimuli on cardiovascular responses is reported.

  5. Sensation of presence and cybersickness in applications of virtual reality for advanced rehabilitation

    PubMed Central

    Kiryu, Tohru; So, Richard HY

    2007-01-01

    Around three years ago, in the special issue on augmented and virtual reality in rehabilitation, the topic of simulator sickness was briefly discussed in relation to vestibular rehabilitation. Simulator sickness in virtual reality applications has also been referred to as visually induced motion sickness or cybersickness. Recently, studies on cybersickness have been reported in entertainment, training, gaming, and medical environments in several journals. Virtual stimuli can enhance the sensation of presence, but they sometimes also evoke unpleasant sensations. In order to safely apply augmented and virtual reality in long-term rehabilitation treatment, the sensation of presence and cybersickness should be appropriately controlled. This issue presents the results of five studies conducted to evaluate visually induced effects and to speculate on the influences of virtual rehabilitation. In particular, the influence of visual and vestibular stimuli on cardiovascular responses is reported. PMID:17894857

  6. Design and analysis of multihypothesis motion-compensated prediction (MHMCP) codec for error-resilient visual communications

    NASA Astrophysics Data System (ADS)

    Kung, Wei-Ying; Kim, Chang-Su; Kuo, C.-C. Jay

    2004-10-01

    A multi-hypothesis motion compensated prediction (MHMCP) scheme, which predicts a block from a weighted superposition of more than one reference blocks in the frame buffer, is proposed and analyzed for error resilient visual communication in this research. By combining these reference blocks effectively, MHMCP can enhance the error resilient capability of compressed video as well as achieve a coding gain. In particular, we investigate the error propagation effect in the MHMCP coder and analyze the rate-distortion performance in terms of the hypothesis number and hypothesis coefficients. It is shown that MHMCP suppresses the short-term effect of error propagation more effectively than the intra refreshing scheme. Simulation results are given to confirm the analysis. Finally, several design principles for the MHMCP coder are derived based on the analytical and experimental results.
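
    The core MHMCP prediction step, a weighted superposition of several reference blocks from the frame buffer, can be sketched as follows; the block values and weights are illustrative, not drawn from the paper's analysis:

```python
import numpy as np

def mhmcp_predict(reference_blocks, weights):
    """Predict a block as a weighted superposition of reference blocks
    drawn from previously decoded frames. The hypothesis number is
    len(reference_blocks); weights are normalized to sum to one, so a
    transmission error confined to one reference frame is attenuated
    in the prediction rather than propagated at full strength.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    blocks = np.stack([np.asarray(b, dtype=float)
                       for b in reference_blocks])
    return np.tensordot(w, blocks, axes=1)

# Two 2x2 "reference blocks" from the frame buffer, equal weights:
b1 = [[10.0, 20.0], [30.0, 40.0]]
b2 = [[20.0, 40.0], [60.0, 80.0]]
pred = mhmcp_predict([b1, b2], [0.5, 0.5])  # elementwise average of b1, b2
```

Choosing the hypothesis number and the weight profile is exactly the rate-distortion versus error-resilience trade-off the abstract analyzes.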

  7. Dark focus of accommodation as dependent and independent variables in visual display technology

    NASA Technical Reports Server (NTRS)

    Jones, Sherrie; Kennedy, Robert; Harm, Deborah

    1992-01-01

    When inadequate stimuli are available for accommodation, as in the dark or under low-contrast conditions, the lens seeks its resting position. Individual differences in resting position are reliable, under autonomic control, and can change with visual task demands. We hypothesized that motion sickness in a flight simulator might result in dark focus changes. Method: Subjects received training flights in three different Navy flight simulators. Two were helicopter simulators with CRT presentation using infinity optics; one involved dome presentation from a computer-graphic visual projection system. Results: In all three experiments there were significant differences between dark focus activity before and after simulator exposure when comparisons were made between sick and not-sick pilot subjects. In two of these experiments, the average shift in dark focus for the sick subjects was toward increased myopia when each subject was compared to his own baseline. In the third experiment, the group showed a small average outward shift, and the subjects who were sick showed significantly less outward movement than those who were symptom free. Conclusions: Although the relationship is not a simple one, dark focus changes in simulator sickness imply parasympathetic activity. Because changes can occur in relation to endogenous and exogenous events, such measurement may have useful applications as a dependent measure in studies of visually coupled systems, virtual reality systems, and space adaptation syndrome.

  8. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Inferring the direction of implied motion depends on visual awareness

    PubMed Central

    Faivre, Nathan; Koch, Christof

    2014-01-01

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951

  10. Inferring the direction of implied motion depends on visual awareness.

    PubMed

    Faivre, Nathan; Koch, Christof

    2014-04-04

    Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction.

  11. Priming with real motion biases visual cortical response to bistable apparent motion

    PubMed Central

    Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming

    2012-01-01

    Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797

  12. Attraction of posture and motion-trajectory elements of conspecific biological motion in medaka fish.

    PubMed

    Shibai, Atsushi; Arimoto, Tsunehiro; Yoshinaga, Tsukasa; Tsuchizawa, Yuta; Khureltulga, Dashdavaa; Brown, Zuben P; Kakizuka, Taishi; Hosoda, Kazufumi

    2018-06-05

    Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and entire-field motion trajectory (i.e., posture or motion-trajectory elements, respectively), and it has not been revealed which element mediates the attractiveness. Here, we show that either posture or motion-trajectory elements alone can attract medaka. We decomposed biological motion of the medaka into the two elements and synthesized visual stimuli that contain both, either, or none of the two elements. We found that medaka were attracted by visual stimuli that contain at least one of the two elements. Together with other known static visual cues in medaka, these results add to the accumulating evidence that multiple sources of information support conspecific recognition. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.

  13. Applications of pilot scanning behavior to integrated display research

    NASA Technical Reports Server (NTRS)

    Waller, M. C.

    1977-01-01

    The oculometer is an electrooptical device designed to measure pilot scanning behavior during instrument approaches and landing operations. An overview of some results from a simulation study is presented to illustrate how information from the oculometer installed in a visual motion simulator, combined with measures of performance and control input data, can provide insight into the behavior and tactics of individual pilots during instrument approaches. Differences in measured behavior of the pilot subjects are pointed out; these differences become apparent in the way the pilots distribute their visual attention, in the amount of control activity, and in selected performance measures. Some of these measured differences have diagnostic implications, suggesting the use of the oculometer along with performance measures as a pilot training tool.

  14. Gain Modulation as a Mechanism for Coding Depth from Motion Parallax in Macaque Area MT

    PubMed Central

    Kim, HyungGoo R.; Angelaki, Dora E.

    2017-01-01

    Observer translation produces differential image motion between objects that are located at different distances from the observer's point of fixation [motion parallax (MP)]. However, MP can be ambiguous with respect to depth sign (near vs far), and this ambiguity can be resolved by combining retinal image motion with signals regarding eye movement relative to the scene. We have previously demonstrated that both extra-retinal and visual signals related to smooth eye movements can modulate the responses of neurons in area MT of macaque monkeys, and that these modulations generate neural selectivity for depth sign. However, the neural mechanisms that govern this selectivity have remained unclear. In this study, we analyze responses of MT neurons as a function of both retinal velocity and direction of eye movement, and we show that smooth eye movements modulate MT responses in a systematic, temporally precise, and directionally specific manner to generate depth-sign selectivity. We demonstrate that depth-sign selectivity is primarily generated by multiplicative modulations of the response gain of MT neurons. Through simulations, we further demonstrate that depth can be estimated reasonably well by a linear decoding of a population of MT neurons with response gains that depend on eye velocity. Together, our findings provide the first mechanistic description of how visual cortical neurons signal depth from MP. SIGNIFICANCE STATEMENT Motion parallax is a monocular cue to depth that commonly arises during observer translation. To compute from motion parallax whether an object appears nearer or farther than the point of fixation requires combining retinal image motion with signals related to eye rotation, but the neurobiological mechanisms have remained unclear. This study provides the first mechanistic account of how this interaction takes place in the responses of cortical neurons. Specifically, we show that smooth eye movements modulate the gain of responses of neurons in area MT in a directionally specific manner to generate selectivity for depth sign from motion parallax. We also show, through simulations, that depth could be estimated from a population of such gain-modulated neurons. PMID:28739582
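
    The gain mechanism described above can be sketched numerically. In the toy population below, every parameter (tuning width, gain slopes, the geometry linking depth sign, eye velocity, and retinal velocity) is an invented illustration, not the paper's fitted model; the sketch only shows how a multiplicative, eye-velocity-dependent gain makes depth sign linearly decodable from responses that would otherwise be ambiguous.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MT population: Gaussian tuning for retinal velocity plus a
# multiplicative gain that depends on eye velocity (the mechanism the
# abstract describes). All parameters are illustrative, not fitted values.
n_neurons = 50
pref_vel = np.linspace(-10, 10, n_neurons)        # preferred retinal velocity, deg/s
gain_slope = rng.uniform(-0.05, 0.05, n_neurons)  # eye-velocity dependence of gain

def mt_response(retinal_vel, eye_vel):
    """Tuning curve times an eye-velocity-dependent multiplicative gain."""
    tuning = np.exp(-0.5 * ((retinal_vel - pref_vel) / 3.0) ** 2)
    gain = 1.0 + gain_slope * eye_vel
    return gain * tuning

def simulate(depth_sign, eye_vel):
    # Toy geometry: retinal velocity is proportional to eye velocity, with
    # the sign set by whether the object is nearer or farther than fixation.
    retinal_vel = depth_sign * 0.4 * eye_vel
    return mt_response(retinal_vel, eye_vel)

# Label toy trials and fit a linear decoder by least squares.
X = np.array([simulate(d, ev) for d in (-1.0, 1.0) for ev in (-8, -4, 4, 8)])
y = np.array([d for d in (-1.0, 1.0) for _ in range(4)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
accuracy = np.mean(np.sign(X @ w) == y)
print("decoding accuracy:", accuracy)
```

    With the gain slopes set to zero, trials with opposite depth sign and opposite eye velocity produce identical population responses, so no linear decoder can separate them; the multiplicative gain is what breaks the ambiguity.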

  15. Visual/motion cue mismatch in a coordinated roll maneuver

    NASA Technical Reports Server (NTRS)

    Shirachi, D. K.; Shirley, R. S.

    1981-01-01

    The effects of bandwidth differences between visual and motion cueing systems on pilot performance for a coordinated roll task were investigated. Visual and motion cue configurations which were acceptable and the effects of reduced motion cue scaling on pilot performance were studied to determine the scale-reduction threshold at which pilot performance differed significantly from full-scale performance. It is concluded that: (1) the presence or absence of high-frequency error information in the visual and/or motion display systems significantly affects pilot performance; and (2) attenuating motion scaling while holding other display dynamic characteristics constant also affects pilot performance.

  16. Peripheral Processing Facilitates Optic Flow-Based Depth Perception

    PubMed Central

    Li, Jinglin; Lindemann, Jens P.; Egelhaaf, Martin

    2016-01-01

    Flying insects, such as flies or bees, rely on consistent information regarding the depth structure of the environment when performing their flight maneuvers in cluttered natural environments. These behaviors include avoiding collisions, approaching targets or spatial navigation. Insects are thought to obtain depth information visually from the retinal image displacements (“optic flow”) during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it is still an open question how spatial information can be extracted reliably from the highly contrast- and pattern-dependent EMD responses, especially if the vast range of light intensities encountered in natural environments is taken into account. This question will be addressed here by systematically modeling the peripheral visual system of flies, including various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input as is accomplished under light-adapted conditions in the insect peripheral visual system. The response characteristics of large monopolar cells (LMCs) resemble that of a band-pass filter, which reduces the contrast dependency of EMDs strongly, effectively enhancing the representation of the nearness of objects and, especially, of their contours. We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions. PMID:27818631
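
    The correlation-type EMD at the core of this model can be illustrated with a minimal sketch. The filter time constants, the drifting-sinusoid stimulus, and the first-order high-pass stand-in for LMC band-pass filtering below are all illustrative assumptions, not the authors' model.

```python
import numpy as np

def highpass(signal, tau=5.0):
    """First-order high-pass: a crude stand-in for the LMC band-pass stage
    that removes the overall light level from the EMD input."""
    out = np.zeros_like(signal, dtype=float)
    lp = signal[0]
    alpha = 1.0 / tau
    for t in range(len(signal)):
        lp += alpha * (signal[t] - lp)   # running low-pass estimate
        out[t] = signal[t] - lp          # high-pass = input minus low-pass
    return out

def lowpass(signal, tau=10.0):
    """First-order low-pass, used as the delay element in each EMD arm."""
    out = np.zeros_like(signal, dtype=float)
    state = 0.0
    alpha = 1.0 / tau
    for t in range(len(signal)):
        state += alpha * (signal[t] - state)
        out[t] = state
    return out

def emd_output(left, right):
    """Correlation-type EMD: each arm multiplies a delayed input with the
    undelayed neighbor; subtracting the mirror arm yields a
    direction-selective signal."""
    return lowpass(left) * right - lowpass(right) * left

# Stimulus: a sinusoid drifting from the left photoreceptor to the right.
t = np.arange(400)
phase_shift = 0.5                      # rad, spatial offset between inputs
left = np.sin(0.2 * t)
right = np.sin(0.2 * t - phase_shift)  # the right input sees the pattern later

resp_fwd = emd_output(highpass(left), highpass(right)).mean()
resp_rev = emd_output(highpass(right), highpass(left)).mean()
print(resp_fwd, resp_rev)
```

    Swapping the two inputs flips the sign of the mean detector output, which is the direction selectivity the abstract's EMD array relies on.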

  17. Aging effect in pattern, motion and cognitive visual evoked potentials.

    PubMed

    Kuba, Miroslav; Kremláček, Jan; Langrová, Jana; Kubová, Zuzana; Szanyi, Jana; Vít, František

    2012-06-01

    An electrophysiological study on the effect of aging on the visual pathway and various levels of visual information processing (primary cortex, associative visual motion processing cortex and cognitive cortical areas) was performed. We examined visual evoked potentials (VEPs) to pattern-reversal, motion-onset (translation and radial motion) and visual stimuli with a cognitive task (cognitive VEPs - P300 wave) at a luminance of 17 cd/m^2. The most significant age-related change in a group of 150 healthy volunteers (15-85 years of age) was the increase in the P300 wave latency (2 ms per year of age). Delays of the motion-onset VEPs (0.47 ms/year in translation and 0.46 ms/year in radial motion) and the pattern-reversal VEPs (0.26 ms/year) and the reductions of their amplitudes with increasing subject age (primarily in P300) were also found to be significant. The amplitude of the motion-onset VEPs to radial motion remained the most constant parameter with increasing age. Age-related changes were stronger in males. Our results indicate that cognitive VEPs, despite the larger variability of their parameters, could be a useful criterion for an objective evaluation of the aging processes within the CNS. Possible differences in aging between the motion-processing system and the form-processing system within the visual pathway might be indicated by the more pronounced delay in the motion-onset VEPs and by their preserved size for radial motion (a biologically significant variant of motion) compared to the changes in pattern-reversal VEPs. Copyright © 2012 Elsevier Ltd. All rights reserved.
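
    As a worked example of the reported slopes, the expected latency increase over the study's age range can be computed directly; the linear extrapolation from the youngest subjects (15 years) is simply a reading of the abstract's per-year figures, not an additional result.

```python
# Slopes reported in the abstract: P300 latency grows ~2 ms per year of age,
# motion-onset VEPs ~0.47 ms/year (translation), pattern-reversal ~0.26 ms/year.
def latency_shift(slope_ms_per_year, age, ref_age=15):
    """Expected latency increase relative to the youngest subjects."""
    return slope_ms_per_year * (age - ref_age)

for name, slope in (("P300", 2.0), ("motion-onset", 0.47), ("pattern-reversal", 0.26)):
    print(name, latency_shift(slope, 75), "ms at age 75")
```

    Over the same 60-year span, the cognitive P300 delay is roughly four times the motion-onset delay and nearly eight times the pattern-reversal delay, which is why the authors single it out as the most sensitive aging marker.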

  18. Quantitative measurement of interocular suppression in anisometropic amblyopia: a case-control study.

    PubMed

    Li, Jinrong; Hess, Robert F; Chan, Lily Y L; Deng, Daming; Yang, Xiao; Chen, Xiang; Yu, Minbin; Thompson, Benjamin

    2013-08-01

    The aims of this study were to assess (1) the relationship between interocular suppression and visual function in patients with anisometropic amblyopia, (2) whether suppression can be simulated in matched controls using monocular defocus or neutral density filters, (3) the effects of spectacle or rigid gas-permeable contact lens correction on suppression in patients with anisometropic amblyopia, and (4) the relationship between interocular suppression and outcomes of occlusion therapy. Case-control study (aims 1-3) and cohort study (aim 4). Forty-five participants with anisometropic amblyopia and 45 matched controls (mean age, 8.8 years for both groups). Interocular suppression was assessed using Bagolini striated lenses, neutral density filters, and an objective psychophysical technique that measures the amount of contrast imbalance between the 2 eyes that is required to overcome suppression (dichoptic motion coherence thresholds). Visual acuity was assessed using a logarithm minimum angle of resolution tumbling E chart and stereopsis using the Randot preschool test. Interocular suppression assessed using dichoptic motion coherence thresholds. Patients exhibited significantly stronger suppression than controls, and stronger suppression was correlated significantly with poorer visual acuity in amblyopic eyes. Reducing monocular acuity in controls to match that of cases using neutral density filters (luminance reduction) resulted in levels of interocular suppression comparable with that in patients. This was not the case for monocular defocus (optical blur). Rigid gas-permeable contact lens correction resulted in less suppression than spectacle correction, and stronger suppression was associated with poorer outcomes after occlusion therapy. Interocular suppression plays a key role in the visual deficits associated with anisometropic amblyopia and can be simulated in controls by inducing a luminance difference between the eyes. Accurate quantification of suppression using the dichoptic motion coherence threshold technique may provide useful information for the management and treatment of anisometropic amblyopia. Proprietary or commercial disclosure may be found after the references. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
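
    The measurement described above comes down to finding the interocular contrast imbalance at which the two eyes' signals balance. A hypothetical sketch of that logic, using a generic 2-down-1-up staircase and a simulated observer, is shown below; none of the procedure details, stimulus parameters, or numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_correct(fellow_contrast, balance_point=0.4):
    """Stand-in observer: responds correctly when the fellow eye's contrast
    is at or below a (hypothetical) balance point, with 5% lapses."""
    p = 0.95 if fellow_contrast <= balance_point else 0.05
    return rng.random() < p

def staircase(n_trials=200, step=0.05):
    """Generic 2-down-1-up staircase on the fellow-eye contrast: raise it
    (harder) after two correct responses, lower it (easier) after an error,
    and estimate the balance point from the last six reversals."""
    contrast, streak, last_dir, reversals = 0.0, 0, 0, []
    for _ in range(n_trials):
        if simulated_correct(contrast):
            streak += 1
            if streak < 2:
                continue
            streak, direction = 0, +1    # two correct: make the task harder
        else:
            streak, direction = 0, -1    # one error: make the task easier
        if last_dir and direction != last_dir:
            reversals.append(contrast)   # record contrast at each reversal
        contrast = min(1.0, max(0.0, contrast + direction * step))
        last_dir = direction
    return sum(reversals[-6:]) / 6

balance_estimate = staircase()
print("estimated balance point:", balance_estimate)
```

    The staircase settles into oscillation around the simulated observer's balance point, so the reversal average recovers it to within a step size or two.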

  19. Effects of Simulated Surface Effect Ship Motions on Crew Habitability. Phase II. Volume 3. Visual-Motor Tasks and Subjective Evaluations

    DTIC Science & Technology

    1977-05-01

    simulated motions; and details on the daily work/rest schedule, as well as the overall run schedule (Ref. 20). * Volume 4, "Crew Cognitive Functions...the outset: 1) the very small sampling of well-motivated crewmen made it difficult to generalize the results to a wider population; and 2) the...as backups. Selection of primary crewmen was based on satisfactory task learning and motivation demonstrated during the training period, any minor

  20. An Investigation of Visual, Aural, Motion and Control Movement Cues.

    ERIC Educational Resources Information Center

    Matheny, W. G.; And Others

    A study was conducted to determine the ways in which multi-sensory cues can be simulated and effectively used in the training of pilots. Two analytical bases, one called the stimulus environment approach and the other an information array approach, are developed along with a cue taxonomy. Cues are postulated on the basis of information gained from…

  1. Visual motion detection and habitat preference in Anolis lizards.

    PubMed

    Steinberg, David S; Leal, Manuel

    2016-11-01

    The perception of visual stimuli has been a major area of inquiry in sensory ecology, and much of this work has focused on coloration. However, for visually oriented organisms, the process of visual motion detection is often equally crucial to survival and reproduction. Despite the importance of motion detection to many organisms' daily activities, the degree of interspecific variation in the perception of visual motion remains largely unexplored. Furthermore, the factors driving this potential variation (e.g., ecology or evolutionary history) along with the effects of such variation on behavior are unknown. We used a behavioral assay under laboratory conditions to quantify the visual motion detection systems of three species of Puerto Rican Anolis lizard that prefer distinct structural habitat types. We then compared our results to data previously collected for anoles from Cuba, Puerto Rico, and Central America. Our findings indicate that general visual motion detection parameters are similar across species, regardless of habitat preference or evolutionary history. We argue that these conserved sensory properties may drive the evolution of visual communication behavior in this clade.

  2. Parallel updating and weighting of multiple spatial maps for visual stability during whole body motion

    PubMed Central

    Medendorp, W. P.

    2015-01-01

    It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
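
    The reliability weighting at the heart of the optimal integration model is standard inverse-variance cue combination. A minimal sketch follows, with invented variances standing in for the eye- and body-centered updating noise.

```python
def combine(est_eye, var_eye, est_body, var_body):
    """Inverse-variance (reliability) weighting of two location estimates."""
    w_eye = (1.0 / var_eye) / (1.0 / var_eye + 1.0 / var_body)
    est = w_eye * est_eye + (1.0 - w_eye) * est_body
    var = 1.0 / (1.0 / var_eye + 1.0 / var_body)
    return est, var

# Suppose the sideways translation degraded the eye-centered estimate more
# than the body-centered one (the numbers are invented for illustration).
est, var = combine(est_eye=2.0, var_eye=4.0, est_body=1.0, var_body=1.0)
print(est, var)
```

    The combined variance is smaller than either single-frame variance, which is the model's argument for keeping both representations updated in parallel rather than relying on one frame alone.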

  3. Short-Term Solutions to Prevent Simulator-Induced Motion Sickness: Report of a Conference.

    DTIC Science & Technology

    1986-03-01

    information. We had a medical student that claimed he was practiced at self-hypnosis. I said, okay, I want you to show me you can relax, and whenever I give...displacements or frequencies that are not good for it, in an attempt to replicate reality as a trainer. Then you give the visual as veridical as you can...with the monkey, you’ll have to have a visual-vestibular conflict out of phase in order to produce sickness. You virtually have to. DR. KENNEDY: And so

  4. Change of temporal-order judgment of sounds during long-lasting exposure to large-field visual motion.

    PubMed

    Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki

    2008-01-01

    The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms in the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. Perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (ie heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were caused by concurrent actual self-motion with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, large-field induced (ie optic flow) self-motion can affect the temporal order of successive external events across various modalities.

  5. Illusory visual motion stimulus elicits postural sway in migraine patients

    PubMed Central

    Imaizumi, Shu; Honma, Motoyasu; Hibino, Haruo; Koyama, Shinichi

    2015-01-01

    Although the perception of visual motion modulates postural control, it is unknown whether illusory visual motion elicits postural sway. The present study examined the effect of illusory motion on postural sway in patients with migraine, who tend to be sensitive to it. We measured postural sway for both migraine patients and controls while they viewed static visual stimuli with and without illusory motion. The participants’ postural sway was measured when they closed their eyes either immediately after (Experiment 1), or 30 s after (Experiment 2), viewing the stimuli. The patients swayed more than the controls when they closed their eyes immediately after viewing the illusory motion (Experiment 1), and they swayed less than the controls when they closed their eyes 30 s after viewing it (Experiment 2). These results suggest that static visual stimuli with illusory motion can induce postural sway that may last for at least 30 s in patients with migraine. PMID:25972832

  6. Software Aids Visualization of Computed Unsteady Flow

    NASA Technical Reports Server (NTRS)

    Kao, David; Kenwright, David

    2003-01-01

    Unsteady Flow Analysis Toolkit (UFAT) is a computer program that synthesizes motions of time-dependent flows represented by very large sets of data generated in computational fluid dynamics simulations. Prior to the development of UFAT, it was necessary to rely on static, single-snapshot depictions of time-dependent flows generated by flow-visualization software designed for steady flows. Whereas it typically takes weeks to analyze the results of a large-scale unsteady-flow simulation by use of steady-flow visualization software, the analysis time is reduced to hours when UFAT is used. UFAT can be used to generate graphical objects of flow visualization results using multi-block curvilinear grids in the format of a previously developed NASA data-visualization program, PLOT3D. These graphical objects can be rendered using FAST, another popular flow visualization software developed at NASA. Flow-visualization techniques that can be exploited by use of UFAT include time-dependent tracking of particles, detection of vortex cores, extractions of stream ribbons and surfaces, and tetrahedral decomposition for optimal particle tracking. Unique computational features of UFAT include capabilities for automatic (batch) processing, restart, memory mapping, and parallel processing. These capabilities significantly reduce analysis time and storage requirements, relative to those of prior flow-visualization software. UFAT can be executed on a variety of supercomputers.
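
    The core operation behind time-dependent particle tracking, integrating particle positions through a velocity field that itself changes in time, can be sketched with a simple integrator. The analytic unsteady vortex below is an assumption standing in for a PLOT3D multi-block grid, and the midpoint scheme is generic, not UFAT's actual integrator.

```python
import numpy as np

def velocity(pos, t):
    """Toy unsteady flow: a planar rotation whose rate oscillates in time."""
    x, y = pos
    omega = 1.0 + 0.5 * np.sin(t)
    return np.array([-omega * y, omega * x])

def advect(pos, t0, t1, dt=1e-3):
    """Midpoint (RK2) integration of dx/dt = v(x, t), sampling the field
    at intermediate times as any unsteady tracer must."""
    pos = np.array(pos, dtype=float)
    t = t0
    while t < t1 - 1e-12:
        h = min(dt, t1 - t)
        k1 = velocity(pos, t)
        k2 = velocity(pos + 0.5 * h * k1, t + 0.5 * h)
        pos = pos + h * k2
        t += h
    return pos

p = advect([1.0, 0.0], 0.0, 2 * np.pi)
print(p, np.linalg.norm(p))
```

    Because the oscillating rotation rate integrates to exactly 2*pi over the interval, the particle should return essentially to its starting point, which provides a convenient accuracy check for the integrator.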

  7. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  8. New Predictive Filters for Compensating the Transport Delay on a Flight Simulator

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.

    2004-01-01

    The problems of transport delay in a flight simulator, such as its sources and effects, are reviewed. Their effects on a pilot-in-the-loop control system are then investigated with simulations. Three prominent current delay compensators, the lead/lag filter, the McFarland filter, and the Sobiski/Cardullo filter, were analyzed and compared. This paper introduces two novel delay compensation techniques: an adaptive predictor using the Kalman estimator and a state-space predictive filter using a reference aerodynamic model. Applications of these two new compensators to recorded data from the NASA Langley Research Center Visual Motion Simulator show that they achieve better compensation than the current ones.
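
    The general idea of transport-delay compensation, predicting the state ahead by the known delay, can be sketched with a constant-velocity extrapolator. This is a toy stand-in for the paper's Kalman-estimator-based adaptive predictor, not a reimplementation of it; the delay, sample rate, and test signal are invented.

```python
import numpy as np

def compensate(samples, dt, delay):
    """Extrapolate each sample forward by `delay` using a backward-difference
    rate estimate (a constant-velocity assumption)."""
    out = np.empty_like(samples)
    out[0] = samples[0]                  # no history yet for the first sample
    for k in range(1, len(samples)):
        rate = (samples[k] - samples[k - 1]) / dt
        out[k] = samples[k] + rate * delay
    return out

# A delayed ramp should be restored almost exactly by the predictor.
dt, delay = 0.01, 0.05
t = np.arange(0, 1, dt)
true_signal = 2.0 * t
delayed = 2.0 * (t - delay)              # what the simulator would display
restored = compensate(delayed, dt, delay)
err = np.max(np.abs(restored[1:] - true_signal[1:]))
print(err)
```

    On real pilot-input data the constant-velocity assumption amplifies measurement noise, which is one motivation for filtering the state estimate (as a Kalman-based predictor does) before extrapolating.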

  9. Human dynamic orientation model applied to motion simulation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Borah, J. D.

    1976-01-01

    The Ormsby model of dynamic orientation, in the form of a discrete time computer program was used to predict non-visually induced sensations during an idealized coordinated aircraft turn. To predict simulation fidelity, the Ormsby model was used to assign penalties for incorrect attitude and angular rate perceptions. It was determined that a three rotational degree of freedom simulation should remain faithful to attitude perception even at the expense of incorrect angular rate sensations. Implementing this strategy, a simulation profile for the idealized turn was designed for a Link GAT-1 trainer. A simple optokinetic display was added to improve the fidelity of roll rate sensations.

  10. Emulation of rocket trajectory based on a six degree of freedom model

    NASA Astrophysics Data System (ADS)

    Zhang, Wenpeng; Li, Fan; Wu, Zhong; Li, Rong

    2008-10-01

    In this paper, a 6-DOF motion mathematical model for a rocket is discussed. It consists of a body dynamics and kinematics block, an aerodynamics block, and an atmosphere block. The complete rocket trajectory model is developed in Simulink, which makes dynamic system simulation straightforward and visual, and the modular design makes the model easy to adapt and reuse. Finally, the model is validated against relevant data using Monte Carlo methods. Simulation results show that the flight trajectory of the rocket can be simulated well by means of this model, and that it supplies a necessary simulation tool for the development of the control system.
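
    As a much-simplified illustration of trajectory simulation with Monte Carlo dispersion, the planar point-mass (3-DOF) sketch below perturbs only the launch angle; the 6-DOF dynamics, aerodynamics, and atmosphere blocks of the paper's model are omitted, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def fly(angle_deg, v0=100.0, g=9.81, dt=0.01):
    """Forward-Euler integration of drag-free point-mass flight; returns
    the downrange distance at impact."""
    angle = np.radians(angle_deg)
    vel = np.array([v0 * np.cos(angle), v0 * np.sin(angle)])
    pos = np.array([0.0, 0.0])
    gravity = np.array([0.0, -g])
    while True:
        pos = pos + vel * dt        # Euler position update
        vel = vel + gravity * dt    # Euler velocity update
        if pos[1] <= 0.0:
            return pos[0]

# Monte Carlo dispersion: 200 runs with a 1-degree launch-angle scatter.
ranges = [fly(45.0 + rng.normal(0.0, 1.0)) for _ in range(200)]
print(np.mean(ranges), np.std(ranges))
```

    The Monte Carlo mean should sit near the analytic drag-free range v0^2/g (about 1019 m at 45 degrees), and the spread quantifies the trajectory's sensitivity to the dispersed parameter, which is the validation idea the abstract refers to.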

  11. MPPhys—A many-particle simulation package for computational physics education

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2014-03-01

    In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which can be illustrated by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual explorations. Catalogue identifier: AERR_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 111327. No. of bytes in distributed program, including test data, etc.: 608411. Distribution format: tar.gz. Programming language: C++, OpenGL, GLSL, OpenCL. Computer: Linux and Windows platforms with OpenGL support. Operating system: Linux and Windows. RAM: Source code 4.5 MB; complete package 242 MB. Classification: 14, 16.9. External routines: OpenGL, OpenCL. Nature of problem: Integrate N-body simulations, mass-spring models. Solution method: Numerical integration of N-body simulations, 3D rendering via OpenGL. Running time: Problem dependent.
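
    The numerical "missing link" the abstract points to, explicit Euler integration of an equation of motion, fits in a few lines, shown here for the mass-spring model (constants and step size are arbitrary illustration values, not anything from MPPhys itself).

```python
def euler_spring(x0, v0, k=1.0, m=1.0, dt=0.001, steps=1000):
    """Explicit Euler integration of the mass-spring equation of motion,
    a = -(k/m) x, starting from position x0 and velocity v0."""
    x, v = x0, v0
    for _ in range(steps):
        a = -(k / m) * x      # Hooke's law acceleration at the old position
        x += v * dt           # Euler position update (old velocity)
        v += a * dt           # Euler velocity update (old acceleration)
    return x, v

x, v = euler_spring(1.0, 0.0)
print(x, v)
```

    After one simulated second with this step size, the state stays close to the exact solution (x = cos 1, v = -sin 1), but explicit Euler slowly injects energy into the oscillator on longer runs, which is exactly the kind of behavior an interactive package like this lets students observe.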

  12. A methodology for the assessment of manned flight simulator fidelity

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.; Malsbury, Terry N.

    1989-01-01

    A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.

  13. Langley test highlights, 1982

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A 20 ft vertical spin tunnel, a 30 by 60 ft tunnel, a 7 by 10 ft high speed tunnel, a 4 by 7 meter tunnel, an 8 ft transonic pressure tunnel, a transonic dynamics tunnel, a 16 ft transonic tunnel, a national transonic facility, a 0.3 meter transonic cryogenic tunnel, a unitary plan wind tunnel, a hypersonic facilities complex, an 8 ft high temperature tunnel, an aircraft noise reduction lab, an avionics integration research lab, a DC9 full workload simulator, a transport simulator, a general aviation simulator, an advanced concepts simulator, a mission oriented terminal area simulation (MOTAS), a differential maneuvering simulator, a visual/motion simulator, a vehicle antenna test facility, an impact dynamics research facility, and a flight research facility are all reviewed.

  14. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.

    PubMed

    Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel

    2015-08-15

    When touching and viewing a moving surface our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs comprised of gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21Hz), the power of GBA (50-80Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role for the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Choice-reaction time to visual motion with varied levels of simultaneous rotary motion

    NASA Technical Reports Server (NTRS)

    Clark, B.; Stewart, J. D.

    1974-01-01

    Twelve airline pilots were studied to determine the effects of whole-body rotation on choice-reaction time to the horizontal motion of a line on a cathode-ray tube. On each trial, one of five levels of visual acceleration and five corresponding proportions of rotary acceleration were presented simultaneously. Reaction time to the visual motion decreased with increasing levels of visual motion and increased with increasing proportions of rotary acceleration. The results conflict with general theories of facilitation during double stimulation but are consistent with a neural-clock model of sensory interaction in choice-reaction time.

  16. Nonlinear Motion Cueing Algorithm: Filtering at Pilot Station and Development of the Nonlinear Optimal Filters for Pitch and Roll

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill B.; Cardullo, Frank M.

    2012-01-01

    Telban and Cardullo developed and successfully implemented the non-linear optimal motion cueing algorithm at the Visual Motion Simulator (VMS) at the NASA Langley Research Center in 2005. The latest version of the non-linear algorithm performed filtering of motion cues in all degrees of freedom except for pitch and roll. This manuscript describes the development and implementation of the non-linear optimal motion cueing algorithm for the pitch and roll degrees of freedom. Presented results indicate improved cues in the specified channels as compared to the original design. To further advance motion cueing in general, this manuscript describes modifications to the existing algorithm that allow for filtering at the location of the pilot's head as opposed to the centroid of the motion platform. The rationale for this modification is that the location of the pilot's vestibular system must be taken into account, rather than only the offset of the centroid of the cockpit relative to the center of rotation. Results provided in this report suggest improved performance of the motion cueing algorithm.
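
    The kinematic argument for filtering at the pilot's head rather than the platform centroid is that a rigid-body offset r converts rotational motion into additional linear acceleration: a_head = a_centroid + alpha x r + omega x (omega x r). A minimal sketch of that transform (the lever arm below is an illustrative value, not the VMS geometry):

```python
import numpy as np

def accel_at_point(a_centroid, omega, alpha, r):
    """Linear acceleration at a point offset r from the centroid of a
    rigid platform: a = a_c + alpha x r + omega x (omega x r)."""
    return (a_centroid
            + np.cross(alpha, r)
            + np.cross(omega, np.cross(omega, r)))

# Illustrative values only (not the actual VMS geometry): the pilot's
# head sits 1.2 m forward and 0.9 m above the platform centroid.
r = np.array([1.2, 0.0, 0.9])
a_c = np.array([0.0, 0.0, 0.0])      # centroid not accelerating
alpha = np.array([0.0, 0.5, 0.0])    # pitch angular acceleration, rad/s^2
omega = np.array([0.0, 0.0, 0.0])    # no angular rate yet

a_head = accel_at_point(a_c, omega, alpha, r)
# A pure pitch acceleration already produces a linear cue at the head.
```

    Filtering the cues computed at the head location, rather than at the centroid, therefore changes what the vestibular model inside the algorithm "feels".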

  17. Low Cognitive Load and Reduced Arousal Impede Practice Effects on Executive Functioning, Metacognitive Confidence and Decision Making

    PubMed Central

    Jackson, Simon A.; Kleitman, Sabina; Aidman, Eugene

    2014-01-01

    The present study investigated the effects of low cognitive workload and the absence of arousal induced via external physical stimulation (motion) on practice-related improvements in executive (inhibitory) control, short-term memory, metacognitive monitoring and decision making. A total of 70 office workers performed low and moderately engaging passenger tasks in two successive 20-minute simulated drives and repeated a battery of decision making and inhibitory control tests three times – before, between and after these drives. For half the participants, visual simulation was synchronised with (moderately arousing) motion generated through the LAnd Motion Platform, with vibration levels corresponding to a well-maintained unsealed road. The other half performed the same simulated drive without motion. Participants’ performance significantly improved over the three test blocks, which is indicative of typical practice effects. The magnitude of these improvements was the highest when both motion and moderate cognitive load were present. The same effects declined either in the absence of motion (low arousal) or following a low cognitive workload task, thus suggesting two distinct pathways through which practice-related improvements in cognitive performance may be hampered. Practice, however, degraded certain aspects of metacognitive performance, as participants became less likely to detect incorrect decisions in the decision-making test with each subsequent test block. Implications include consideration of low cognitive load and arousal as factors responsible for performance decline and targets for the development of interventions/strategies in low load/arousal conditions such as autonomous vehicle operations and highway driving. PMID:25549327

  18. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. For further resources related to this article, please visit the WIREs website. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
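
    The "computational model of cortical motion sensors" referenced above is classically built from delay-and-correlate (Reichardt-type) units: the signal at one spatial location is delayed and multiplied with the signal at a neighboring location, and two mirror-image sub-units are subtracted to give a direction-signed output. A minimal sketch on a synthetic drifting sinusoid (all parameters illustrative):

```python
import numpy as np

def reichardt(left, right, delay):
    """Opponent delay-and-correlate motion detector.
    Positive output: motion from `left` toward `right`."""
    a = np.mean(left[:-delay] * right[delay:])   # delayed left vs. right
    b = np.mean(right[:-delay] * left[delay:])   # mirror-image sub-unit
    return a - b

fs = 1000.0                       # sample rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)
f_t = 5.0                         # temporal frequency, Hz
k = 2 * np.pi                     # spatial frequency, rad per unit distance
dx = 0.1                          # spacing of the two inputs

def grating(x, direction):
    # Drifting sinusoid: sin(k*x - w*t) moves toward +x for direction=+1.
    return np.sin(k * x - direction * 2 * np.pi * f_t * t)

delay = 10                        # 10 ms internal delay
r_right = reichardt(grating(0.0, +1), grating(dx, +1), delay)
r_left = reichardt(grating(0.0, -1), grating(dx, -1), delay)
# The output sign follows the stimulus direction.
```

    Real sensor models add spatiotemporal filtering front ends, but the opponent correlation stage above is the core direction-selective computation.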

  19. Enhanced vision flight deck technology for commercial aircraft low-visibility surface operations

    NASA Astrophysics Data System (ADS)

    Arthur, Jarvis J.; Norman, R. M.; Kramer, Lynda J.; Prinzel, Lawerence J.; Ellis, Kyle K.; Harrison, Stephanie J.; Comstock, J. R.

    2013-05-01

    NASA Langley Research Center and the FAA collaborated in an effort to evaluate the effect of Enhanced Vision (EV) technology display in a commercial flight deck during low visibility surface operations. Surface operations were simulated at the Memphis, TN (FAA identifier: KMEM) airfield during nighttime with 500 Runway Visual Range (RVR) in a high-fidelity, full-motion simulator. Ten commercial airline flight crews evaluated the efficacy of various EV display locations and parallax and minification effects. The research paper discusses qualitative and quantitative results of the simulation experiment, including the effect of EV display placement on visual attention, as measured by the use of non-obtrusive oculometry and pilot mental workload. The results demonstrated the potential of EV technology to enhance situation awareness which is dependent on the ease of access and location of the displays. Implications and future directions are discussed.

  20. Enhanced Vision Flight Deck Technology for Commercial Aircraft Low-Visibility Surface Operations

    NASA Technical Reports Server (NTRS)

    Arthur, Jarvis J., III; Norman, R. Michael; Kramer, Lynda J.; Prinzel, Lawrence J., III; Ellis, Kyle K. E.; Harrison, Stephanie J.; Comstock, J. Ray

    2013-01-01

    NASA Langley Research Center and the FAA collaborated in an effort to evaluate the effect of Enhanced Vision (EV) technology display in a commercial flight deck during low visibility surface operations. Surface operations were simulated at the Memphis, TN (FAA identifier: KMEM) airfield during nighttime with 500 Runway Visual Range (RVR) in a high-fidelity, full-motion simulator. Ten commercial airline flight crews evaluated the efficacy of various EV display locations and parallax and minification effects. The research paper discusses qualitative and quantitative results of the simulation experiment, including the effect of EV display placement on visual attention, as measured by the use of non-obtrusive oculometry and pilot mental workload. The results demonstrated the potential of EV technology to enhance situation awareness, which is dependent on the ease of access and location of the displays. Implications and future directions are discussed.

  1. The interaction of Bayesian priors and sensory data and its neural circuit implementation in visually-guided movement

    PubMed Central

    Yang, Jin; Lee, Joonyeol; Lisberger, Stephen G.

    2012-01-01

    Sensory-motor behavior results from a complex interaction of noisy sensory data with priors based on recent experience. By varying the stimulus form and contrast for the initiation of smooth pursuit eye movements in monkeys, we show that visual motion inputs compete with two independent priors: one prior biases eye speed toward zero; the other prior attracts eye direction according to the past several days’ history of target directions. The priors bias the speed and direction of the initiation of pursuit for the weak sensory data provided by the motion of a low-contrast sine wave grating. However, the priors have relatively little effect on pursuit speed and direction when the visual stimulus arises from the coherent motion of a high-contrast patch of dots. For any given stimulus form, the mean and variance of eye speed co-vary in the initiation of pursuit, as expected for signal-dependent noise. This relationship suggests that pursuit implements a trade-off between movement accuracy and variation, reducing both when the sensory signals are noisy. The tradeoff is implemented as a competition of sensory data and priors that follows the rules of Bayesian estimation. Computer simulations show that the priors can be understood as direction specific control of the strength of visual-motor transmission, and can be implemented in a neural-network model that makes testable predictions about the population response in the smooth eye movement region of the frontal eye fields. PMID:23223286
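
    The trade-off described here, where weak sensory data yield to the priors and strong data override them, reduces, for Gaussian prior and likelihood, to precision-weighted averaging. A worked sketch of that Bayesian combination (all numbers illustrative, not fit to the monkey data):

```python
def posterior_mean(prior_mean, prior_var, obs, obs_var):
    """MAP estimate for a Gaussian prior combined with a Gaussian
    likelihood: each term is weighted by its precision (1/variance)."""
    w_prior = 1.0 / prior_var
    w_obs = 1.0 / obs_var
    return (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)

target_speed = 20.0   # deg/s, the true stimulus speed

# Low-contrast grating: noisy measurement, the zero-speed prior dominates.
low_contrast = posterior_mean(0.0, 25.0, target_speed, 100.0)   # -> 4.0

# High-contrast dot patch: reliable measurement, the prior barely matters.
high_contrast = posterior_mean(0.0, 25.0, target_speed, 1.0)    # -> ~19.2
```

    The same arithmetic applies to the direction prior: the noisier the sensory estimate, the further the initiation of pursuit is pulled toward the recent history of target directions.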

  2. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background: The increasing popularity of commercial movies showing three dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings: A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered on a convenience sample of 497 healthy adult volunteers before and after viewing the 2D and 3D movies. Viewers reporting some sickness (SSQ total score>15) were 54.8% of the total sample after the 3D movie compared to 14.1% of the total sample after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times the baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie, history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie and show time. Conclusions: Seeing 3D movies can increase ratings of nausea, oculomotor and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies which include examination of clinical signs on viewers are needed to provide conclusive evidence of the effects of 3D viewing on spectators. PMID:23418530
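
    The sickness cutoff used here (SSQ total score > 15) refers to the standard Simulator Sickness Questionnaire scoring. Assuming the conventional weights of Kennedy et al. (1993), the three symptom subscales combine as sketched below; the item ratings are made up for illustration:

```python
# Standard SSQ scoring (Kennedy et al., 1993, assumed here): 16 items
# rated 0-3 are summed into nausea (N), oculomotor (O) and
# disorientation (D) raw scores; some items load on more than one
# subscale.
N_WEIGHT, O_WEIGHT, D_WEIGHT, TOTAL_WEIGHT = 9.54, 7.58, 13.92, 3.74

def ssq_scores(n_raw, o_raw, d_raw):
    """Return (nausea, oculomotor, disorientation, total) SSQ scores."""
    total = (n_raw + o_raw + d_raw) * TOTAL_WEIGHT
    return n_raw * N_WEIGHT, o_raw * O_WEIGHT, d_raw * D_WEIGHT, total

# Hypothetical viewer: mild nausea and disorientation after a 3D movie.
n, o, d, total = ssq_scores(n_raw=2, o_raw=1, d_raw=2)
sick = total > 15     # the cutoff used in this study
```

    The nausea, oculomotor and disorientation subscales are exactly the three symptom clusters named in the conclusions above.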

  3. Are there side effects to watching 3D movies? A prospective crossover observational study on visually induced motion sickness.

    PubMed

    Solimini, Angelo G

    2013-01-01

    The increasing popularity of commercial movies showing three dimensional (3D) images has raised concern about possible adverse side effects on viewers. A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered on a convenience sample of 497 healthy adult volunteers before and after viewing the 2D and 3D movies. Viewers reporting some sickness (SSQ total score>15) were 54.8% of the total sample after the 3D movie compared to 14.1% of the total sample after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times the baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie, history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie and show time. Seeing 3D movies can increase ratings of nausea, oculomotor and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies which include examination of clinical signs on viewers are needed to provide conclusive evidence of the effects of 3D viewing on spectators.

  4. Illusory motion reversal is caused by rivalry, not by perceptual snapshots of the visual field.

    PubMed

    Kline, Keith; Holcombe, Alex O; Eagleman, David M

    2004-10-01

    In stroboscopic conditions--such as motion pictures--rotating objects may appear to rotate in the reverse direction due to under-sampling (aliasing). A seemingly similar phenomenon occurs in constant sunlight, which has been taken as evidence that the visual system processes discrete "snapshots" of the outside world. But if snapshots are indeed taken of the visual field, then when a rotating drum appears to transiently reverse direction, its mirror image should always appear to reverse direction simultaneously. Contrary to this hypothesis, we found that when observers watched a rotating drum and its mirror image, almost all illusory motion reversals occurred for only one image at a time. This result indicates that the motion reversal illusion cannot be explained by snapshots of the visual field. The same result is found when the two images are presented within one visual hemifield, further ruling out the possibility that discrete sampling of the visual field occurs separately in each hemisphere. The frequency distribution of illusory reversal durations approximates a gamma distribution, suggesting perceptual rivalry as a better explanation for illusory motion reversal. After adaptation of motion detectors coding for the correct direction, the activity of motion-sensitive neurons coding for motion in the reverse direction may intermittently become dominant and drive the perception of motion.
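
    The stroboscopic (aliasing) account that the authors contrast with their rivalry explanation is easy to make concrete: a periodic rotation sampled at frame rate f_s is perceived at the aliased rate folded into the range (-f_s/2, +f_s/2]. A sketch with illustrative numbers:

```python
def aliased_rate(rotation_hz, frame_hz):
    """Apparent rotation rate of a periodic stimulus sampled at
    frame_hz, folded into the range (-frame_hz/2, +frame_hz/2]."""
    r = rotation_hz % frame_hz
    if r > frame_hz / 2:
        r -= frame_hz
    return r

# A wheel turning at 22 rev/s filmed at 24 frames/s appears to turn
# backwards at 2 rev/s -- true aliasing, unlike the sunlight illusion
# studied here.
apparent = aliased_rate(22.0, 24.0)   # -> -2.0
```

    Under continuous illumination there is no frame rate to alias against, which is why the one-image-at-a-time reversals reported above point to rivalry instead.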

  5. Implied motion because of instability in Hokusai Manga activates the human motion-sensitive extrastriate visual cortex: an fMRI study of the impact of visual art.

    PubMed

    Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko

    2010-03-10

    The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images that imply motion through instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyoe artist. We found that 'Hokusai Manga' drawings implying motion, by depicting human bodies engaged in challenging tonic postures, significantly activated the motion-sensitive visual cortex, including MT+ in the human extrastriate cortex, while an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is a critical region for the perception of implied motion from instability.

  6. Usage of stereoscopic visualization in the learning contents of rotational motion.

    PubMed

    Matsuura, Shu

    2013-01-01

    Rotational motion plays an essential role in physics even at an introductory level. In addition, the stereoscopic display of three-dimensional graphics is advantageous for the presentation of rotational motions, particularly for depth recognition. However, the immersive visualization of rotational motion has been known to cause dizziness and even nausea in some viewers. Therefore, the purpose of this study is to examine the onset of nausea and visual fatigue when learning rotational motion through the use of a stereoscopic display. The findings show that an instruction method with intermittent exposure to the stereoscopic display and a simplification of its visual components reduced the onset of nausea and visual fatigue for the viewers, while maintaining the overall effect of instantaneous spatial recognition.

  7. Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object

    PubMed Central

    Dokka, Kalpana; DeAngelis, Gregory C.

    2015-01-01

    Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT: Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
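
    The "optimal cue integration strategy" against which the multisensory thresholds were compared has a standard closed form: each cue is weighted by its reliability, and the predicted standard deviation of the combined estimate (which sets the discrimination threshold) falls below that of either single cue. A sketch with illustrative numbers, not the monkeys' measured thresholds:

```python
import math

def optimal_combination(est_vis, sigma_vis, est_ves, sigma_ves):
    """Reliability-weighted fusion of two cues, plus the predicted
    standard deviation of the combined estimate."""
    w_vis = sigma_ves**2 / (sigma_vis**2 + sigma_ves**2)
    w_ves = 1.0 - w_vis
    est = w_vis * est_vis + w_ves * est_ves
    sigma = math.sqrt((sigma_vis**2 * sigma_ves**2)
                      / (sigma_vis**2 + sigma_ves**2))
    return est, sigma

# A moving object biases the visual heading estimate by +4 deg but
# leaves the vestibular estimate unbiased (true heading: 0 deg).
est, sigma = optimal_combination(est_vis=4.0, sigma_vis=3.0,
                                 est_ves=0.0, sigma_ves=3.0)
# With equal reliabilities, fusion halves the visual bias (est -> 2.0)
# and lowers the combined sigma below either single cue.
```

    This is why the combined condition in the study shows both a smaller object-induced bias and no substantial loss in precision.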

  8. Numerical simulation of human orientation perception during lunar landing

    NASA Astrophysics Data System (ADS)

    Clark, Torin K.; Young, Laurence R.; Stimpson, Alexander J.; Duda, Kevin R.; Oman, Charles M.

    2011-09-01

    In lunar landing it is necessary to select a suitable landing point and then control a stable descent to the surface. In manned landings, astronauts will play a critical role in monitoring systems and adjusting the descent trajectory through either supervisory control and landing point designations, or by direct manual control. For the astronauts to ensure vehicle performance and safety, they will have to accurately perceive vehicle orientation. A numerical model for human spatial orientation perception was simulated using input motions from lunar landing trajectories to predict the potential for misperceptions. Three representative trajectories were studied: an automated trajectory, a landing point designation trajectory, and a challenging manual control trajectory. These trajectories were studied under three cases with different cues activated in the model to study the importance of vestibular cues, visual cues, and the effect of the descent engine thruster creating dust blowback. The model predicts that spatial misperceptions are likely to occur as a result of the lunar landing motions, particularly with limited or incomplete visual cues. The powered descent acceleration profile creates a somatogravic illusion causing the astronauts to falsely perceive themselves and the vehicle as upright, even when the vehicle has a large pitch or roll angle. When visual pathways were activated within the model these illusions were mostly suppressed. Dust blowback, obscuring the visual scene out the window, was also found to create disorientation. These orientation illusions are likely to interfere with the astronauts' ability to effectively control the vehicle, potentially degrading performance and safety. Therefore suitable countermeasures, including disorientation training and advanced displays, are recommended.
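
    The somatogravic illusion described here follows from the direction of the gravito-inertial force: without visual cues, a sustained linear acceleration a tilts the perceived vertical by roughly atan(a/g). A first-order sketch for a vehicle near the lunar surface (the thrust value is illustrative, not a real descent profile):

```python
import math

G_MOON = 1.62   # lunar surface gravity, m/s^2

def somatogravic_tilt_deg(horizontal_accel, gravity=G_MOON):
    """Pitch misperception (deg) when the gravito-inertial force,
    rather than true gravity, is taken as 'down'."""
    return math.degrees(math.atan2(horizontal_accel, gravity))

# A sustained horizontal braking component equal to lunar gravity
# tilts the perceived vertical by 45 deg even if the vehicle is
# actually upright.
tilt = somatogravic_tilt_deg(1.62)   # -> 45.0
```

    Because lunar gravity is weak, even modest horizontal thrust components produce large perceived tilts, which is consistent with the model's prediction that the powered descent profile makes astronauts falsely perceive the vehicle as upright.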

  9. Multimodal Excitatory Interfaces with Automatic Content Classification

    NASA Astrophysics Data System (ADS)

    Williamson, John; Murray-Smith, Roderick

    We describe a non-visual interface for displaying data on mobile devices, based around active exploration: devices are shaken, revealing the contents rattling around inside. This combines sample-based contact sonification with event playback vibrotactile feedback for a rich and compelling display which produces an illusion much like balls rattling inside a box. Motion is sensed from accelerometers, directly linking the motions of the user to the feedback they receive in a tightly closed loop. The resulting interface requires no visual attention and can be operated blindly with a single hand: it is reactive rather than disruptive. This interaction style is applied to the display of an SMS inbox. We use language models to extract salient features from text messages automatically. The output of this classification process controls the timbre and physical dynamics of the simulated objects. The interface gives a rapid semantic overview of the contents of an inbox, without compromising privacy or interrupting the user.

  10. Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.

    PubMed

    Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas

    2017-06-01

    Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.

  11. Filling gaps in visual motion for target capture

    PubMed Central

    Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco

    2015-01-01

    A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637

  12. Filling gaps in visual motion for target capture.

    PubMed

    Bosco, Gianfranco; Monache, Sergio Delle; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco

    2015-01-01

    A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation.

  13. A Nonlinear, Human-Centered Approach to Motion Cueing with a Neurocomputing Solver

    NASA Technical Reports Server (NTRS)

    Telban, Robert J.; Cardullo, Frank M.; Houck, Jacob A.

    2002-01-01

    This paper discusses the continuation of research into the development of new motion cueing algorithms first reported in 1999. In this earlier work, two viable approaches to motion cueing were identified: the coordinated adaptive washout algorithm or 'adaptive algorithm', and the 'optimal algorithm'. In this study, a novel approach to motion cueing is discussed that would combine features of both algorithms. The new algorithm is formulated as a linear optimal control problem, incorporating improved vestibular models and an integrated visual-vestibular motion perception model previously reported. A control law is generated from the motion platform states, resulting in a set of nonlinear cueing filters. The time-varying control law requires the matrix Riccati equation to be solved in real time. Therefore, in order to meet the real-time requirement, a neurocomputing approach is used to solve this computationally challenging problem. Single degree-of-freedom responses for the nonlinear algorithm were generated and compared to the adaptive and optimal algorithms. Results for the heave mode show the nonlinear algorithm producing a motion cue with a time-varying washout, sustaining small cues for a longer duration and washing out larger cues more quickly. The addition of the optokinetic influence from the integrated perception model was shown to improve the response to a surge input, producing a specific force response with no steady-state washout. Improved cues are also observed for responses to a sway input. Yaw mode responses reveal that the nonlinear algorithm improves the motion cues by reducing the magnitude of negative cues. The effectiveness of the nonlinear algorithm as compared to the adaptive and linear optimal algorithms will be evaluated on a motion platform, the NASA Langley Research Center Visual Motion Simulator (VMS), and ultimately the Cockpit Motion Facility (CMF), with a series of pilot-controlled maneuvers. A proposed experimental procedure is discussed. The results of this evaluation will be used to assess motion cueing performance.
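
    The real-time bottleneck named here, solving the matrix Riccati equation, can be illustrated in the scalar case, where (under one common sign convention) the Riccati ODE can simply be integrated until it settles at the algebraic solution. This is a scalar toy problem, not the VMS cueing filters; the paper's neurocomputing solver replaces exactly this kind of iteration:

```python
def scalar_care(a, b, q, r, dt=0.001, steps=20000):
    """Integrate the scalar Riccati ODE
        p_dot = 2*a*p - (b*p)**2 / r + q
    to its stable fixed point, i.e. the positive solution of the
    algebraic Riccati equation 2*a*p - (b*p)**2/r + q = 0."""
    p = 0.0
    for _ in range(steps):
        p += dt * (2 * a * p - (b * p) ** 2 / r + q)
    return p

# Toy stable system x_dot = -x + u with cost integral of (x^2 + u^2):
p = scalar_care(a=-1.0, b=1.0, q=1.0, r=1.0)
# Closed-form solution: p = sqrt(2) - 1; optimal feedback gain k = b*p/r.
```

    In the full algorithm the states, weights and Riccati variable are matrices, which is what makes a per-frame solution computationally challenging.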

  14. Susceptibility of cat and squirrel monkey to motion sickness induced by visual stimulation: Correlation with susceptibility to vestibular stimulation

    NASA Technical Reports Server (NTRS)

    Daunton, N. G.; Fox, R. A.; Crampton, G. H.

    1984-01-01

    Experiments are documented in which the susceptibility of both cats and squirrel monkeys to motion sickness induced by visual stimulation was measured. In addition, it is shown that in both species the individual subjects most highly susceptible to sickness induced by passive motion are also those most likely to become motion sick from visual (optokinetic) stimulation alone.

  15. 14 CFR 141.41 - Flight simulators, flight training devices, and training aids.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... freedom of motion system; (4) Use a visual system that provides at least a 45-degree horizontal field of view and a 30-degree vertical field of view simultaneously for each pilot; and (5) Have been evaluated... aircraft, or set of aircraft, in an open flight deck area or in an enclosed cockpit, including the hardware...

  16. The direct, not V1-mediated, functional influence between the thalamus and middle temporal complex in the human brain is modulated by the speed of visual motion.

    PubMed

    Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K

    2015-01-22

    The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1. We aimed at elucidating whether this direct route between LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or whether it is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Perceptually tuned low-bit-rate video codec for ATM networks

    NASA Astrophysics Data System (ADS)

    Chou, Chun-Hsien

    1996-02-01

    In order to maintain high visual quality in transmitting low bit-rate video signals over asynchronous transfer mode (ATM) networks, a layered coding scheme that incorporates the human visual system (HVS), motion compensation (MC), and conditional replenishment (CR) is presented in this paper. An empirical perceptual model is proposed to estimate the spatio-temporal just-noticeable distortion (STJND) profile for each frame, by which perceptually important (PI) prediction-error signals can be located. Because of the limited channel capacity of the base layer, only coded data of motion vectors, the PI signals within a small strip of the prediction-error image and, if there are remaining bits, the PI signals outside the strip are transmitted by the cells of the base-layer channel. The rest of the coded data are transmitted by the second-layer cells which may be lost due to channel error or network congestion. Simulation results show that visual quality of the reconstructed CIF sequence is acceptable when the capacity of the base-layer channel is allocated with 2 × 64 kbps and the cells of the second layer are all lost.

  18. Experiments in teleoperator and autonomous control of space robotic vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1991-01-01

    A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.

  19. Independent Deficits of Visual Word and Motion Processing in Aging and Early Alzheimer's Disease

    PubMed Central

    Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy; Duffy, Charles J.

    2013-01-01

    We tested whether visual processing impairments in aging and Alzheimer's disease (AD) reflect uniform posterior cortical decline, or independent disorders of visual processing for reading and navigation. Young and older normal controls were compared to early AD patients using psychophysical measures of visual word and motion processing. We find elevated perceptual thresholds for letters and word discrimination from young normal controls, to older normal controls, to early AD patients. Across subject groups, visual motion processing showed a similar pattern of increasing thresholds, with the greatest impact on radial pattern motion perception. Combined analyses show that letter, word, and motion processing impairments are independent of each other. Aging and AD may be accompanied by independent impairments of visual processing for reading and navigation. This suggests separate underlying disorders and highlights the need for comprehensive evaluations to detect early deficits. PMID:22647256

  20. Use of the alpha shape to quantify finite helical axis dispersion during simulated spine movements.

    PubMed

    McLachlin, Stewart D; Bailey, Christopher S; Dunning, Cynthia E

    2016-01-04

    In biomechanical studies examining joint kinematics, the most common measurement is range of motion (ROM), yet other techniques, such as the finite helical axis (FHA), may elucidate the changes in 3D motion pathology more effectively. One of the deficiencies with the FHA technique is in quantifying the axes generated throughout a motion sequence. This study attempted to solve this issue via a computational geometric technique known as the alpha shape, which bounds a set of point data within a closed boundary similar to a convex hull. The purpose of this study was to use the alpha shape as an additional tool to visualize and quantify FHA dispersion between intact and injured cadaveric spine movements and compare these changes to gold-standard ROM measurements. Flexion-extension, axial rotation, and lateral bending were simulated with five C5-C6 motion segments using a spinal loading simulator and an Optotrak motion tracking system. Specimens were first tested intact, followed by a simulated injury model. ROM and the FHAs were calculated post hoc, with alpha shapes and convex hulls generated from the anatomic planar intercept points of the FHAs. While both ROM and the boundary shape areas increased with injury (p<0.05), no consistent geometric trends in the alpha shape growth were identified. The alpha shape area was sensitive to the alpha value chosen, and alpha values below 2.5 created more than one closed boundary. Ultimately, the alpha shape presents as a useful technique to quantify sequences of joint kinematics described by scatter plots such as FHA intercept data. Copyright © 2015. Published by Elsevier Ltd.
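    A minimal 2D alpha-shape area computation of the kind used to bound FHA intercept scatter can be sketched from a Delaunay triangulation, keeping only triangles whose circumradius is below a chosen alpha radius (one common convention; the study's exact alpha parameterization may differ):

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_area(points, alpha_radius):
    """Area of the 2D alpha shape: sum the areas of Delaunay triangles
    whose circumradius is below alpha_radius."""
    tri = Delaunay(points)
    total = 0.0
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        a = np.linalg.norm(pb - pc)
        b = np.linalg.norm(pc - pa)
        c = np.linalg.norm(pa - pb)
        s = 0.5 * (a + b + c)
        area2 = s * (s - a) * (s - b) * (s - c)   # Heron's formula
        if area2 <= 0:
            continue                              # degenerate triangle
        area = np.sqrt(area2)
        circumradius = a * b * c / (4.0 * area)
        if circumradius < alpha_radius:
            total += area
    return total
```

As the alpha radius grows, the alpha-shape area approaches the convex-hull area; small radii can split the boundary into several components, consistent with the sensitivity to the alpha value noted above.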

  1. Nonlinear Site Response Validation Studies Using KIK-net Strong Motion Data

    NASA Astrophysics Data System (ADS)

    Asimaki, D.; Shi, J.

    2014-12-01

    Earthquake simulations are nowadays producing realistic ground motion time-series in the range of engineering design applications. Of particular significance to engineers are simulations of near-field motions and large magnitude events, for which observations are scarce. With the engineering community slowly adopting the use of simulated ground motions, site response models need to be re-evaluated in terms of their capabilities and limitations to 'translate' the simulated time-series from rock surface output to structural analyses input. In this talk, we evaluate three one-dimensional site response models: linear viscoelastic, equivalent linear and nonlinear. We evaluate the performance of the models by comparing predictions to observations at 30 downhole stations of the Japanese network KIK-Net that have recorded several strong events, including the 2011 Tohoku earthquake. Velocity profiles are used as the only input to all models, while additional parameters such as quality factor, density and nonlinear dynamic soil properties are estimated from empirical correlations. We quantify the differences of ground surface predictions and observations in terms of both seismological and engineering intensity measures, including bias ratios of peak ground response and visual comparisons of elastic spectra, and inelastic to elastic deformation ratio for multiple ductility ratios. We observe that PGV/Vs30, as a measure of strain, is a better predictor of site nonlinearity than PGA, and that incremental nonlinear analyses are necessary to produce reliable estimates of high-frequency ground motion components at soft sites. We finally discuss the implications of our findings on the parameterization of nonlinear amplification factors in GMPEs, and on the extensive use of equivalent linear analyses in probabilistic seismic hazard procedures.
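    The PGV/Vs30 strain proxy mentioned above is simple enough to state directly. The sketch below uses hypothetical numbers purely for illustration; any screening threshold for nonlinear behavior is site- and study-specific:

```python
def strain_proxy(pgv_cm_s, vs30_m_s):
    """Shear-strain index gamma ~ PGV / Vs30: peak ground velocity
    (converted from cm/s to m/s) over the 30-m time-averaged
    shear-wave velocity."""
    return (pgv_cm_s / 100.0) / vs30_m_s

# Hypothetical records: under the same PGV, a soft site (low Vs30)
# strains far more than a stiff rock site.
soft = strain_proxy(pgv_cm_s=40.0, vs30_m_s=180.0)
stiff = strain_proxy(pgv_cm_s=40.0, vs30_m_s=760.0)
```

The larger proxy at the soft site is the kind of signal that would flag it for incremental nonlinear analysis rather than an equivalent linear one.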

  2. Motion Direction Biases and Decoding in Human Visual Cortex

    PubMed Central

    Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297

  3. Effects of attention and laterality on motion and orientation discrimination in deaf signers.

    PubMed

    Bosworth, Rain G; Petrich, Jennifer A F; Dobkins, Karen R

    2013-06-01

    Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left hemisphere advantage for motion processing, while hearing nonsigners do not. To examine whether this finding extends to other aspects of visual processing, we compared deaf signers and hearing nonsigners on motion, form, and brightness discrimination tasks. Secondly, to examine whether hemispheric lateralities are affected by attention, we employed a dual-task paradigm to measure form and motion thresholds under "full" vs. "poor" attention conditions. Deaf signers, but not hearing nonsigners, exhibited a right visual field advantage for motion processing. This effect was also seen for form processing and not for the brightness task. Moreover, no group differences were observed in attentional effects, and the motion and form visual field asymmetries were not modulated by attention, suggesting they occur at early levels of sensory processing. In sum, the results show that processing of motion and form, believed to be mediated by dorsal and ventral visual pathways, respectively, are left-hemisphere dominant in deaf signers. Published by Elsevier Inc.

  4. Perceptual response and information pick-up strategies within a family of sports.

    PubMed

    Ida, Hirofumi; Fukuhara, Kazunobu; Ishii, Motonobu; Inoue, Tetsuri

    2013-02-01

    The purpose of this study was to determine whether and how the perceptual response of athletes differed depending on their sporting expertise. This was achieved by comparing the responses of tennis and soft tennis players. Twelve experienced tennis players and 12 experienced soft tennis players viewed computer graphic serve motions simulated by a motion perturbation technique, and then scaled their anticipatory judgments regarding the direction, speed, and spin of the ball on a visual analogue scale. Experiment 1 evaluated the player's judgments in response to test motions rendered with a complete polygon model. The results revealed significantly different anticipatory judgments between the player groups when an elbow rotation perturbation was applied to the test serve motion. Experiment 2 used spatially occluded models in order to investigate the effectiveness of local information in making anticipatory judgments. The results suggested that the isolation of visual information had less effect on the judgment of the tennis players than on that of the soft tennis players. In conclusion, the domain of sporting expertise, including those of closely related sports, can not only differentiate the anticipatory judgment of a ball's future flight path, but also affect the utilization strategy for the local kinematic information. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    PubMed

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

    In this paper, a spatio-spectral-temporal filter considering an inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domain, considering the spatio-spectral-temporal correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed by considering both the intra-channel correlation and inter-channel correlation to overcome the spatial resolution degradation occurring with the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove the noise in the temporal domain. Then, a motion adaptive detection value controls the ratio of the spatial filter and the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. Experimental results confirmed that the proposed frameworks outperformed the other techniques in terms of the objective criteria and subjective visual perception in CFA sequences.
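    The patch-based-difference idea behind the NLM stage can be illustrated on a 1D signal. This is a generic, unoptimized nonlocal-means sketch, not the proposed inter-channel CFA filter; the patch size, search radius, and smoothing parameter h are arbitrary choices:

```python
import numpy as np

def nlm_denoise_1d(signal, patch=3, search=10, h=0.1):
    """Minimal nonlocal means: replace each sample with a weighted
    average of nearby samples, weighting by patch similarity."""
    n = len(signal)
    pad = np.pad(signal, patch, mode='reflect')
    out = np.empty(n)
    for i in range(n):
        p_i = pad[i:i + 2 * patch + 1]            # patch centered at i
        lo, hi = max(0, i - search), min(n, i + search + 1)
        weights = np.empty(hi - lo)
        for k, j in enumerate(range(lo, hi)):
            p_j = pad[j:j + 2 * patch + 1]
            d2 = np.mean((p_i - p_j) ** 2)        # patch-based difference
            weights[k] = np.exp(-d2 / h ** 2)
        out[i] = np.sum(weights * signal[lo:hi]) / np.sum(weights)
    return out
```

The CFA case adds the complications the abstract describes: the under-sampled mosaic grid and cross-channel patch comparisons.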

  6. The influence of visual motion on interceptive actions and perception.

    PubMed

    Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H

    2012-05-01

    Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

    Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  8. Stronger Neural Modulation by Visual Motion Intensity in Autism Spectrum Disorders

    PubMed Central

    Peiker, Ina; Schneider, Till R.; Milne, Elizabeth; Schöttle, Daniel; Vogeley, Kai; Münchau, Alexander; Schunke, Odette; Siegel, Markus; Engel, Andreas K.; David, Nicole

    2015-01-01

    Theories of autism spectrum disorders (ASD) have focused on altered perceptual integration of sensory features as a possible core deficit. Yet, there is little understanding of the neuronal processing of elementary sensory features in ASD. For typically developed individuals, we previously established a direct link between frequency-specific neural activity and the intensity of a specific sensory feature: Gamma-band activity in the visual cortex increased approximately linearly with the strength of visual motion. Using magnetoencephalography (MEG), we investigated whether in individuals with ASD neural activity reflects the coherence, and thus intensity, of visual motion in a similar fashion. Thirteen adult participants with ASD and 14 control participants performed a motion direction discrimination task with increasing levels of motion coherence. A polynomial regression analysis revealed that gamma-band power increased significantly more strongly with motion coherence in ASD compared to controls, suggesting excessive visual activation with increasing stimulus intensity originating from motion-responsive visual areas V3, V6 and hMT/V5. Enhanced neural responses with increasing stimulus intensity suggest an enhanced response gain in ASD. Response gain is controlled by excitatory-inhibitory interactions, which also drive high-frequency oscillations in the gamma-band. Thus, our data suggest that a disturbed excitatory-inhibitory balance underlies enhanced neural responses to coherent motion in ASD. PMID:26147342

  9. Motion sickness increases functional connectivity between visual motion and nausea-associated brain regions.

    PubMed

    Toschi, Nicola; Kim, Jieun; Sclocco, Roberta; Duggento, Andrea; Barbieri, Riccardo; Kuo, Braden; Napadow, Vitaly

    2017-01-01

    The brain networks supporting nausea are not yet understood. We previously found that while visual stimulation activated primary (V1) and extrastriate visual cortices (MT+/V5, coding for visual motion), increasing nausea was associated with increasing sustained activation in several brain areas, with significant co-activation for anterior insula (aIns) and mid-cingulate (MCC) cortices. Here, we hypothesized that motion sickness also alters functional connectivity between visual motion and previously identified nausea-processing brain regions. Subjects prone to motion sickness and controls completed a motion sickness provocation task during fMRI/ECG acquisition. We studied changes in connectivity between visual processing areas activated by the stimulus (MT+/V5, V1), right aIns and MCC when comparing rest (BASELINE) to peak nausea state (NAUSEA). Compared to BASELINE, NAUSEA reduced connectivity between right and left V1 and increased connectivity between right MT+/V5 and aIns and between left MT+/V5 and MCC. Additionally, the change in MT+/V5 to insula connectivity was significantly associated with a change in sympathovagal balance, assessed by heart rate variability analysis. No state-related connectivity changes were noted for the control group. Increased connectivity between a visual motion processing region and nausea/salience brain regions may reflect increased transfer of visual/vestibular mismatch information to brain regions supporting nausea perception and autonomic processing. We conclude that vection-induced nausea increases connectivity between nausea-processing regions and those activated by the nauseogenic stimulus. This enhanced low-frequency coupling may support continual, slowly evolving nausea perception and shifts toward sympathetic dominance. Disengaging this coupling may be a target for biobehavioral interventions aimed at reducing motion sickness severity. Copyright © 2016 Elsevier B.V. All rights reserved.
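    The sympathovagal-balance measure referenced above is commonly computed as the LF/HF power ratio of the RR-interval tachogram. The sketch below is a generic version of that standard HRV calculation (0.04-0.15 Hz and 0.15-0.40 Hz bands, 4 Hz resampling), not the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_intervals, fs=4.0):
    """LF/HF ratio from RR intervals (in seconds): resample the
    tachogram on an even grid, estimate its spectrum with Welch's
    method, and compare band powers."""
    t = np.cumsum(rr_intervals)                     # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    tach = np.interp(grid, t, rr_intervals)         # evenly sampled tachogram
    f, pxx = welch(tach, fs=fs, nperseg=min(256, len(tach)))
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum()        # low-frequency power
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum()        # high-frequency power
    return lf / hf
```

A ratio rising above baseline is the usual reading of a shift toward sympathetic dominance.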

  10. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    NASA Astrophysics Data System (ADS)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  11. The relationship of global form and motion detection to reading fluency.

    PubMed

    Englund, Julia A; Palomares, Melanie

    2012-08-15

    Visual motion processing in typical and atypical readers has suggested aspects of reading and motion processing share a common cortical network rooted in dorsal visual areas. Few studies have examined the relationship between reading performance and visual form processing, which is mediated by ventral cortical areas. We investigated whether reading fluency correlates with coherent motion detection thresholds in typically developing children using random dot kinematograms. As a comparison, we also evaluated the correlation between reading fluency and static form detection thresholds. Results show that both dorsal and ventral visual functions correlated with components of reading fluency, but that they have different developmental characteristics. Motion coherence thresholds correlated with reading rate and accuracy, which both improved with chronological age. Interestingly, when controlling for non-verbal abilities and age, reading accuracy significantly correlated with thresholds for coherent form detection but not coherent motion detection in typically developing children. Dorsal visual functions that mediate motion coherence seem to be related to the maturation of broad cognitive functions including non-verbal abilities and reading fluency. However, ventral visual functions that mediate form coherence seem to be specifically related to accurate reading in typically developing children. Copyright © 2012 Elsevier Ltd. All rights reserved.
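    A random dot kinematogram of the kind used in these coherence tasks is straightforward to simulate: each frame, a signal fraction of dots steps in the coherent direction and the remainder step randomly. The sketch below is a generic frame update (unit-square field with wrap-around; all parameters are illustrative, not any specific study's stimulus code):

```python
import numpy as np

def rdk_step(dots, coherence, direction_deg, speed, rng):
    """Advance one frame of a random dot kinematogram: a `coherence`
    fraction of dots moves in the signal direction, the rest move in
    random directions; positions wrap within the unit square."""
    n = len(dots)
    n_signal = int(round(coherence * n))
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
    angles[:n_signal] = np.deg2rad(direction_deg)    # coherent subset
    step = speed * np.column_stack([np.cos(angles), np.sin(angles)])
    return (dots + step) % 1.0
```

At coherence 1.0 every dot carries the signal direction; at 0.0 the display is pure noise, and the detection threshold is the smallest fraction an observer can reliably discriminate.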

  12. Studies of Pilot Control During Launching and Reentry of Space Vehicles, Utilizing the Human Centrifuge

    NASA Technical Reports Server (NTRS)

    Clark, Carl C.; Woodling, C. H.

    1959-01-01

    With the ever-increasing complexity of airplanes and the nearness to reality of manned space vehicles, the use of pilot-controlled flight simulators has become imperative. The state of the art in flight simulation has progressed well with the demand. Pilot-controlled flight simulators are finding increasing uses in aeromedical research, airplane and airplane systems design, and preflight training. At present, many flight simulators are in existence with various degrees of sophistication and sundry purposes. These vary from fixed base simulators, where the pilot applies control inputs according to visual cues presented to him on an instrument display, to moving base simulators, where various combinations of angular and linear motions are added in an attempt to improve the flight simulation.

  13. On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    DTIC Science & Technology

    2015-03-01

    SWIR Short Wave Infrared; VisualSFM Visual Structure from Motion; WPAFB Wright Patterson Air Force Base ... Visual Structure from Motion (VisualSFM) is an application that performs incremental SfM using images of a scene fed into it [20] ... too drastically in between frames. When this happens, VisualSFM will begin creating a new model with images that do not fit the old one. These new...

  14. A Role for Mouse Primary Visual Cortex in Motion Perception.

    PubMed

    Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo

    2018-06-04

    Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Effects of translational and rotational motions and display polarity on visual performance.

    PubMed

    Feng, Wen-Yang; Tseng, Feng-Yi; Chao, Chin-Jung; Lin, Chiuhsiang Joe

    2008-10-01

    This study investigated effects of both translational and rotational motion and display polarity on a visual identification task. Three different motion types--heave, roll, and pitch--were compared with the static (no motion) condition. The visual task was presented on two display polarities, black-on-white and white-on-black. The experiment was a 4 (motion conditions) x 2 (display polarities) within-subjects design with eight subjects (six men and two women; M age = 25.6 yr., SD = 3.2). The dependent variables used to assess the performance on the visual task were accuracy and reaction time. Motion environments, especially the roll condition, significantly degraded both accuracy and reaction time. The display polarity was significant only in the static condition.

  16. Visual memory for moving scenes.

    PubMed

    DeLucia, Patricia R; Maldia, Maria M

    2006-02-01

    In the present study, memory for picture boundaries was measured with scenes that simulated self-motion along the depth axis. The results indicated that boundary extension (a distortion in memory for picture boundaries) occurred with moving scenes in the same manner as that reported previously for static scenes. Furthermore, motion affected memory for the boundaries but this effect of motion was not consistent with representational momentum of the self (memory being further forward in a motion trajectory than actually shown). We also found that memory for the final position of the depicted self in a moving scene was influenced by properties of the optical expansion pattern. The results are consistent with a conceptual framework in which the mechanisms that underlie boundary extension and representational momentum (a) process different information and (b) both contribute to the integration of successive views of a scene while the scene is changing.

  17. Computer simulation of on-orbit manned maneuvering unit operations

    NASA Technical Reports Server (NTRS)

    Stuart, G. M.; Garcia, K. D.

    1986-01-01

    Simulation of spacecraft on-orbit operations is discussed in reference to Martin Marietta's Space Operations Simulation laboratory's use of computer software models to drive a six-degree-of-freedom moving base carriage and two target gimbal systems. In particular, key simulation issues and related computer software models associated with providing real-time, man-in-the-loop simulations of the Manned Maneuvering Unit (MMU) are addressed with special attention given to how effectively these models and motion systems simulate the MMU's actual on-orbit operations. The weightless effects of the space environment require the development of entirely new devices for locomotion. Since access to space is very limited, it is necessary to design, build, and test these new devices within the physical constraints of Earth using simulators. The simulation method that is discussed here is the technique of using computer software models to drive a Moving Base Carriage (MBC) that is capable of providing simultaneous six-degree-of-freedom motions. This method, utilized at Martin Marietta's Space Operations Simulation (SOS) laboratory, provides the ability to simulate the operation of manned spacecraft, provides the pilot with proper three-dimensional visual cues, and allows training of on-orbit operations. The purpose here is to discuss significant MMU simulation issues, the related models that were developed in response to these issues, and how effectively these models simulate the MMU's actual on-orbit operations.

  18. Pilot Comments From the Boeing High Speed Research Aircraft, Cycle 3, Simulation Study of the Effects of Aeroservoelasticity (LaRC.3)

    NASA Technical Reports Server (NTRS)

    Bailey, Melvin L. (Editor)

    2000-01-01

    This is a compilation of pilot comments from the Boeing High Speed Research Aircraft, Cycle 3, simulation study (LaRC.3) of the effects of aeroservoelasticity, conducted from October to December 1997 at NASA Langley Research Center. This simulation study was conducted using the Visual Motion Simulator. The comments are from direct tape transcriptions and have been edited for spelling only. These comments were made on tape following the completion of each flight card, immediately after the pilot was satisfied with his practice and data recording runs. Six pilots were used in the evaluation and they are identified as pilots A through F.

  19. Visual motion combined with base of support width reveals variable field dependency in healthy young adults.

    PubMed

    Streepey, Jefferson W; Kenyon, Robert V; Keshner, Emily A

    2007-01-01

    We previously reported responses to induced postural instability in young healthy individuals viewing visual motion with a narrow (25 degrees in both directions) and wide (90 degrees and 55 degrees in the horizontal and vertical directions) field of view (FOV) as they stood on different sized blocks. Visual motion was achieved using an immersive virtual environment that moved realistically with head motion (natural motion) and translated sinusoidally at 0.1 Hz in the fore-aft direction (augmented motion). We observed that a subset of the subjects (steppers) could not maintain continuous stance on the smallest block when the virtual environment was in motion. We completed a posteriori analyses on the postural responses of the steppers and non-steppers that may inform us about the mechanisms underlying these differences in stability. We found that when viewing augmented motion with a wide FOV, there was a greater effect on the head and whole body center of mass and ankle angle root mean square (RMS) values of the steppers than of the non-steppers. FFT analyses revealed greater power at the frequency of the visual stimulus in the steppers compared to the non-steppers. Whole body COM time lags relative to the augmented visual scene revealed that the time-delay between the scene and the COM was significantly increased in the steppers. The increased responsiveness to visual information suggests a greater visual field-dependency of the steppers and suggests that the thresholds for shifting from a reliance on visual information to somatosensory information can differ even within a healthy population.
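
    The RMS and FFT measures described above are straightforward to compute. The sketch below, with illustrative sampling parameters not taken from the study, shows how sway RMS and spectral power at the 0.1 Hz stimulus frequency might be derived from a recorded sway signal.

```python
import numpy as np

def sway_metrics(signal, fs, stim_hz=0.1):
    """RMS of a sway signal and spectral power at the stimulus frequency."""
    x = signal - np.mean(signal)                 # remove DC offset
    rms = np.sqrt(np.mean(x ** 2))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    k = np.argmin(np.abs(freqs - stim_hz))       # bin nearest the stimulus frequency
    return rms, power[k]

# Example: 100 s of sway sampled at 10 Hz, driven at 0.1 Hz plus noise
fs = 10.0
t = np.arange(0, 100, 1 / fs)
noise = 0.05 * np.random.default_rng(1).standard_normal(len(t))
sway = 0.5 * np.sin(2 * np.pi * 0.1 * t) + noise
rms, p = sway_metrics(sway, fs)
```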

  20. Researcher's guide to the NASA Ames Flight Simulator for Advanced Aircraft (FSAA)

    NASA Technical Reports Server (NTRS)

    Sinacori, J. B.; Stapleford, R. L.; Jewell, W. F.; Lehman, J. M.

    1977-01-01

    Performance, limitations, supporting software, and current checkout and operating procedures are presented for the flight simulator in terms useful to the researcher who intends to use it. Suggestions to help the researcher prepare the experimental plan are also given. The FSAA's central computer, cockpit, and visual and motion systems are addressed individually, but their interaction is considered as well. Data required, available options, user responsibilities, and occupancy procedures are given in a form that facilitates the initial communication required with the NASA operations group.

  1. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    PubMed

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than speed response. © The Author(s) 2016.

  2. 3D geospatial visualizations: Animation and motion effects on spatial objects

    NASA Astrophysics Data System (ADS)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an amazing navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (JavaScript) makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). Going one step further, WebGL frameworks (e.g. Cesium.js, three.js) allow animation and motion effects to be applied to 3D models. However, major GIS-based functionalities in combination with all of the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) or motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this end, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.
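
    Moving a 3D model along a user-defined georeferenced path, as described above, reduces at its core to interpolating positions between timed waypoints. A minimal Python sketch follows; the helper name and coordinates are hypothetical, and a real implementation in Cesium.js or three.js would also handle model orientation and terrain clamping.

```python
import numpy as np

def position_at(t, times, waypoints):
    """Linearly interpolate a 3D position (lon, lat, alt) along a timed path.

    `times` is an increasing array of timestamps (s); `waypoints` is an
    (n, 3) array of matching coordinates.
    """
    times = np.asarray(times, dtype=float)
    wp = np.asarray(waypoints, dtype=float)
    t = np.clip(t, times[0], times[-1])          # hold position outside the path
    i = np.searchsorted(times, t, side="right") - 1
    i = min(i, len(times) - 2)
    f = (t - times[i]) / (times[i + 1] - times[i])   # fraction along the segment
    return (1 - f) * wp[i] + f * wp[i + 1]

# Example: a model moving between three waypoints over 10 s
times = [0.0, 5.0, 10.0]
wps = [(23.70, 37.98, 0.0), (23.72, 37.99, 50.0), (23.75, 38.00, 0.0)]
pos = position_at(2.5, times, wps)   # halfway along the first segment
```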

  3. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static depth cues must provide the signals about self-rotation necessary for accurate visual heading estimation at rotation rates above approximately 1 deg/s. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.
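
    The bias (mean) and precision (semi-interquartile) measures can be read off a psychometric curve by interpolation. The sketch below is a simplified illustration with made-up response proportions, not the authors' analysis code; a full analysis would first fit a cumulative function to the raw responses.

```python
import numpy as np

def bias_and_precision(headings, p_right):
    """Bias (50% point) and precision (semi-interquartile range) of a
    psychometric curve, estimated by linear interpolation."""
    h = np.asarray(headings, float)
    p = np.asarray(p_right, float)               # must be increasing for np.interp
    q25, q50, q75 = (np.interp(q, p, h) for q in (0.25, 0.50, 0.75))
    return q50, (q75 - q25) / 2.0

# Example: illustrative proportions of "rightward" responses by heading
headings = [-8, -4, -2, 0, 2, 4, 8]              # simulated heading (deg)
p_right = [0.02, 0.10, 0.25, 0.50, 0.75, 0.90, 0.98]
bias, precision = bias_and_precision(headings, p_right)
```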

  4. Using Dynamic Interface Modeling and Simulation to Develop a Launch and Recovery Flight Simulation for a UH-60A Blackhawk

    NASA Technical Reports Server (NTRS)

    Sweeney, Christopher; Bunnell, John; Chung, William; Giovannetti, Dean; Mikula, Julie; Nicholson, Bob; Roscoe, Mike

    2001-01-01

    Joint Shipboard Helicopter Integration Process (JSHIP) is a Joint Test and Evaluation (JT&E) program sponsored by the Office of the Secretary of Defense (OSD). Under the JSHIP program is a simulation effort referred to as the Dynamic Interface Modeling and Simulation System (DIMSS). The purpose of DIMSS is to develop and test the processes and mechanisms that facilitate ship-helicopter interface testing via man-in-the-loop ground-based flight simulators. Specifically, the DIMSS charter is to develop an accredited process for using a flight simulator to determine the wind-over-the-deck (WOD) launch and recovery flight envelope for the UH-60A ship/helicopter combination. DIMSS is a collaborative effort between the NASA Ames Research Center and OSD. OSD determines the T&E and warfighter training requirements, provides the programmatics and dynamic interface T&E experience, and conducts ship/aircraft interface tests for validating the simulation. NASA provides the research and development element, simulation facility, and simulation technical experience. This paper will highlight the benefits of the NASA/JSHIP collaboration and detail achievements of the project in terms of modeling and simulation. The Vertical Motion Simulator (VMS) at NASA Ames Research Center offers the capability to simulate a wide range of simulation cueing configurations, including visual, aural, and body-force cueing devices. The system's flexibility enables switching configurations to allow back-to-back evaluation and comparison of different levels of cueing fidelity in determining minimum training requirements. The investigation required development and integration of several major simulation systems at the VMS. A new UH-60A Blackhawk interchangeable cab that provides an out-the-window (OTW) field-of-view (FOV) of 220 degrees in azimuth and 70 degrees in elevation was built. 
Modeling efforts involved integrating Computational Fluid Dynamics (CFD) generated data of an LHA ship airwake and integrating a real-time ship motion model developed from a batch model from the Naval Surface Warfare Center. Engineering development and integration of a three degrees-of-freedom (DOF) dynamic seat, which simulates high-frequency rotor-dynamics-dependent motion cues for use in conjunction with the large motion system, was accomplished. An LHA visual model at several different levels of resolution and an aural cueing system with three selectable fidelity levels were also developed. VMS also integrated a PC-based E&S simFUSION system to investigate cost-effective IG alternatives. The DIMSS project consists of three phases that follow an approved Validation, Verification and Accreditation (VV&A) process. The first phase will support the accreditation of the individual subsystems and models. The second will follow the verification and validation of the integrated subsystems and models, and will address fidelity requirements of the integrated models and subsystems. The third and final phase will allow the verification and validation of the full system integration. This VV&A process will address the utility of the simulated WOD launch and recovery envelope. Simulations supporting the first two stages have been completed and the data are currently being reviewed and analyzed.

  5. Motion-base simulator study of control of an externally blown flap STOL transport aircraft after failure of an outboard engine during landing approach

    NASA Technical Reports Server (NTRS)

    Middleton, D. B.; Hurt, G. J., Jr.; Bergeron, H. P.; Patton, J. M., Jr.; Deal, P. L.; Champine, R. A.

    1975-01-01

    A moving-base simulator investigation of the problems of recovery and landing of a STOL aircraft after failure of an outboard engine during final approach was made. The approaches were made at 75 knots along a 6 deg glide slope. The engine was failed at low altitude and the option to go around was not allowed. The aircraft was simulated with each of three control systems, and it had four high-bypass-ratio fan-jet engines exhausting against large triple-slotted wing flaps to produce additional lift. A virtual-image out-the-window television display of a simulated STOL airport was operating during part of the investigation. Also, a simple heads-up flight director display superimposed on the airport landing scene was used by the pilots to make some of the recoveries following an engine failure. The results of the study indicated that the variation in visual cues and/or motion cues had little effect on the outcome of a recovery, but they did have some effect on the pilot's response and control patterns.

  6. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.

  7. Integrated evaluation of visually induced motion sickness in terms of autonomic nervous regulation.

    PubMed

    Kiryu, Tohru; Tada, Gen; Toyama, Hiroshi; Iijima, Atsuhiko

    2008-01-01

    To evaluate visually-induced motion sickness, we integrated subjective and objective responses in terms of autonomic nervous regulation. Twenty-seven subjects viewed a 2-min-long first-person-view video section five times (total 10 min) continuously. Measured biosignals, the RR interval, respiration, and blood pressure, were used to estimate the indices related to autonomic nervous activity (ANA). Then we determined the trigger points and some sensation sections based on the time-varying behavior of ANA-related indices. We found that there was a suitable combination of biosignals to present the symptoms of visually-induced motion sickness. Based on the suitable combination, integrating trigger points and subjective scores allowed us to represent the time-distribution of subjective responses during visual exposure, and helps us to understand what types of camera motions will cause visually-induced motion sickness.

  8. Defective motion processing in children with cerebral visual impairment due to periventricular white matter damage.

    PubMed

    Weinstein, Joel M; Gilmore, Rick O; Shaikh, Sumera M; Kunselman, Allen R; Trescher, William V; Tashima, Lauren M; Boltz, Marianne E; McAuliffe, Matthew B; Cheung, Albert; Fesi, Jeremy D

    2012-07-01

    We sought to characterize visual motion processing in children with cerebral visual impairment (CVI) due to periventricular white matter damage caused by either hydrocephalus (eight individuals) or periventricular leukomalacia (PVL) associated with prematurity (11 individuals). Using steady-state visually evoked potentials (ssVEP), we measured cortical activity related to motion processing for two distinct types of visual stimuli: 'local' motion patterns thought to activate mainly primary visual cortex (V1), and 'global' or coherent patterns thought to activate higher cortical visual association areas (V3, V5, etc.). We studied three groups of children: (1) 19 children with CVI (mean age 9y 6mo [SD 3y 8mo]; 9 male; 10 female); (2) 40 neurologically and visually normal comparison children (mean age 9y 6mo [SD 3y 1mo]; 18 male; 22 female); and (3) because strabismus and amblyopia are common in children with CVI, a group of 41 children without neurological problems who had visual deficits due to amblyopia and/or strabismus (mean age 7y 8mo [SD 2y 8mo]; 28 male; 13 female). We found that the processing of global as opposed to local motion was preferentially impaired in individuals with CVI, especially for slower target velocities (p=0.028). Motion processing is impaired in children with CVI. ssVEP may provide useful and objective information about the development of higher visual function in children at risk for CVI. © The Authors. Journal compilation © Mac Keith Press 2011.

  9. Line-oriented flight training

    NASA Technical Reports Server (NTRS)

    Beach, B. E.

    1980-01-01

    Some of the concepts related to a line-oriented flight training program are discussed. The need to shift from training in manipulative skills to something closer to management skills is emphasized. The program is evaluated in terms of its realistic approaches, which include the simulator's optimized motion and visual capabilities. The value of standard operating procedures as they affect the line pilot in everyday operations is also illustrated.

  10. Simulations of Carnival Rides and Rube Goldberg Machines for the Visualization of Concepts of Statics and Dynamics

    ERIC Educational Resources Information Center

    Howard, William; Williams, Richard; Yao, Jason

    2010-01-01

    Solid modeling is widely used as a teaching tool in summer activities with high school students. The addition of motion analysis allows concepts from statics and dynamics to be introduced to students in both qualitative and quantitative ways. Two sets of solid modeling projects--carnival rides and Rube Goldberg machines--are shown to allow the…

  11. Real-world applications of artificial neural networks to cardiac monitoring using radar and recent theoretical developments

    NASA Astrophysics Data System (ADS)

    Padgett, Mary Lou; Johnson, John L.; Vemuri, V. Rao

    1997-04-01

    This paper focuses on use of a new image filtering technique, Pulse-Coupled Neural Network (PCNN) factoring, to enhance both the analysis and visual interpretation of noisy sinusoidal time signals, such as those produced by LLNL's Microwave Impulse Radar motion sensor. Separation of a slower, carrier wave from faster, finer detailed signals and from scattered noise is illustrated. The resulting images clearly illustrate the changes over time of simulated heart motion patterns. Such images can potentially assist a field medic in interpretation of the extent of combat injuries. These images can also be transmitted or stored and retrieved for later analysis.

  12. Visual Acuity Using Head-fixed Displays During Passive Self and Surround Motion

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Black, F. Owen; Stallings, Valerie; Peters, Brian

    2007-01-01

    The ability to read head-fixed displays on various motion platforms requires the suppression of vestibulo-ocular reflexes. This study examined dynamic visual acuity while viewing a head-fixed display during different self and surround rotation conditions. Twelve healthy subjects were asked to report the orientation of Landolt C optotypes presented on a micro-display fixed to a rotating chair at 50 cm distance. Acuity thresholds were determined by the lowest size at which the subjects correctly identified 3 of 5 optotype orientations at peak velocity. Visual acuity was compared across four different conditions, each tested at 0.05 and 0.4 Hz (peak amplitude of 57 deg/s). The four conditions included: subject rotated in semi-darkness (i.e., limited to background illumination of the display), subject stationary while visual scene rotated, subject rotated around a stationary visual background, and both subject and visual scene rotated together. Visual acuity performance was greatest when the subject rotated around a stationary visual background; i.e., when both vestibular and visual inputs provided concordant information about the motion. Visual acuity performance was most reduced when the subject and visual scene rotated together; i.e., when the visual scene provided discordant information about the motion. Ranges of 4-5 logMAR step sizes across the conditions indicated the acuity task was sufficient to discriminate visual performance levels. The background visual scene can influence the ability to read head-fixed displays during passive motion disturbances. Dynamic visual acuity using head-fixed displays can provide an operationally relevant screening tool for visual performance during exposure to novel acceleration environments.
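
    The threshold rule described above (the lowest optotype size at which at least 3 of 5 presentations were identified correctly) can be expressed directly. The logMAR sizes in the example are illustrative, not the study's actual step sizes.

```python
def acuity_threshold(results):
    """Return the smallest optotype size (logMAR) at which at least
    3 of 5 presentations were identified correctly, or None if none passed.

    `results` maps logMAR size -> number correct out of 5.
    """
    passed = [size for size, n_correct in results.items() if n_correct >= 3]
    return min(passed) if passed else None

# Example: performance degrades as optotypes shrink
runs = {0.5: 5, 0.4: 5, 0.3: 4, 0.2: 3, 0.1: 1, 0.0: 0}
thresh = acuity_threshold(runs)   # smallest size still passed
```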

  13. Estimation of bio-signal based on human motion for integrated visualization of daily-life.

    PubMed

    Umetani, Tomohiro; Matsukawa, Tsuyoshi; Yokoyama, Kiyoko

    2007-01-01

    This paper describes a method for the estimation of bio-signals based on human motion in daily life for an integrated visualization system. The recent advancement of computers and measurement technology has facilitated the integrated visualization of bio-signals and human motion data. It is desirable to obtain a method to understand the activities of muscles based on human motion data and evaluate the change in physiological parameters according to human motion for visualization applications. We suppose that human motion is generated by the activities of muscles reflected from the brain to bio-signals such as electromyograms. This paper introduces a method for the estimation of bio-signals based on neural networks. This method can estimate the other physiological parameters based on the same procedure. The experimental results show the feasibility of the proposed method.

  14. Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.

    PubMed

    Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong

    2016-08-01

    The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.

  15. Recovery of biological motion perception and network plasticity after cerebellar tumor removal.

    PubMed

    Sokolov, Arseny A; Erb, Michael; Grodd, Wolfgang; Tatagiba, Marcos S; Frackowiak, Richard S J; Pavlova, Marina A

    2014-10-01

    Visual perception of body motion is vital for everyday activities such as social interaction, motor learning or car driving. Tumors to the left lateral cerebellum impair visual perception of body motion. However, compensatory potential after cerebellar damage and underlying neural mechanisms remain unknown. In the present study, visual sensitivity to point-light body motion was psychophysically assessed in patient SL with dysplastic gangliocytoma (Lhermitte-Duclos disease) to the left cerebellum before and after neurosurgery, and in a group of healthy matched controls. Brain activity during processing of body motion was assessed by functional magnetic resonance imaging (MRI). Alterations in underlying cerebro-cerebellar circuitry were studied by psychophysiological interaction (PPI) analysis. Visual sensitivity to body motion in patient SL before neurosurgery was substantially lower than in controls, with significant improvement after neurosurgery. Functional MRI in patient SL revealed a similar pattern of cerebellar activation during biological motion processing as in healthy participants, but located more medially, in the left cerebellar lobules III and IX. As in normalcy, PPI analysis showed cerebellar communication with a region in the superior temporal sulcus, but located more anteriorly. The findings demonstrate a potential for recovery of visual body motion processing after cerebellar damage, likely mediated by topographic shifts within the corresponding cerebro-cerebellar circuitry induced by cerebellar reorganization. The outcome is of importance for further understanding of cerebellar plasticity and neural circuits underpinning visual social cognition.

  16. Implied motion language can influence visual spatial memory.

    PubMed

    Vinson, David W; Engelen, Jan; Zwaan, Rolf A; Matlock, Teenie; Dale, Rick

    2017-07-01

    How do language and vision interact? Specifically, what impact can language have on visual processing, especially related to spatial memory? What are typically considered errors in visual processing, such as remembering the location of an object to be farther along its motion trajectory than it actually is, can be explained as perceptual achievements that are driven by our ability to anticipate future events. In two experiments, we tested whether the prior presentation of motion language influences visual spatial memory in ways that afford greater perceptual prediction. Experiment 1 showed that motion language influenced judgments for the spatial memory of an object beyond the known effects of implied motion present in the image itself. Experiment 2 replicated this finding. Our findings support a theory of perception as prediction.

  17. Perceived state of self during motion can differentially modulate numerical magnitude allocation.

    PubMed

    Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M

    2016-09-01

    Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements which themselves can influence numerical processing and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness impacting upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self'-motion during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. Contrastingly, vestibular motion perception was found not to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. 
We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visual but not vestibular mediated motion. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Visual Features Involving Motion Seen from Airport Control Towers

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Liston, Dorion

    2010-01-01

    Visual motion cues are used by tower controllers to support both visual and anticipated separation. Some of these cues are tabulated as part of the overall set of visual features used in towers to separate aircraft. An initial analysis of one motion cue, landing deceleration, is provided as a basis for evaluating how controllers detect and use it for spacing aircraft on or near the surface. Understanding cues like this one will help determine whether they can be safely used in a remote/virtual tower, in which their presentation may be visually degraded.

  19. Slow and fast visual motion channels have independent binocular-rivalry stages.

    PubMed Central

    van de Grind, W. A.; van Hof, P.; van der Smagt, M. J.; Verstraten, F. A.

    2001-01-01

    We have previously reported a transparent motion after-effect indicating that the human visual system comprises separate slow and fast motion channels. Here, we report that the presentation of a fast motion in one eye and a slow motion in the other eye does not result in binocular rivalry but in a clear percept of transparent motion. We call this new visual phenomenon 'dichoptic motion transparency' (DMT). So far only the DMT phenomenon and the two motion after-effects (the 'classical' motion after-effect, seen after motion adaptation on a static test pattern, and the dynamic motion after-effect, seen on a dynamic-noise test pattern) appear to isolate the channels completely. The speed ranges of the slow and fast channels overlap strongly and are observer dependent. A model is presented that links after-effect durations of an observer to the probability of rivalry or DMT as a function of dichoptic velocity combinations. Model results support the assumption of two highly independent channels showing only within-channel rivalry, and no rivalry or after-effect interactions between the channels. The finding of two independent motion vision channels, each with a separate rivalry stage and a private line to conscious perception, might be helpful in visualizing or analysing pathways to consciousness. PMID:11270442

  20. Visualization of Heart Sounds and Motion Using Multichannel Sensor

    NASA Astrophysics Data System (ADS)

    Nogata, Fumio; Yokota, Yasunari; Kawamura, Yoko

    2010-06-01

    As there are various difficulties associated with auscultation techniques, we have devised a technique for visualizing heart motion in order to assist in the understanding of heartbeat for both doctors and patients. Auscultatory sounds were first visualized using FFT and Wavelet analysis to visualize heart sounds. Next, to show global and simultaneous heart motions, a new technique for visualization was established. The visualization system consists of a 64-channel unit (63 acceleration sensors and one ECG sensor) and a signal/image analysis unit. The acceleration sensors were arranged in a square array (8×8) with a 20-mm pitch interval, which was adhered to the chest surface. The heart motion of one cycle was visualized at a sampling frequency of 3 kHz and quantization of 12 bits. The visualized results showed a typical waveform motion of the strong pressure shock due to closing tricuspid valve and mitral valve of the cardiac apex (first sound), and the closing aortic and pulmonic valve (second sound) in sequence. To overcome difficulties in auscultation, the system can be applied to the detection of heart disease and to the digital database management of the auscultation examination in medical areas.
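    The FFT-based visualization step described above can be illustrated with a minimal short-time FFT sketch (an illustrative NumPy example, not the authors' 64-channel system; the synthetic signal, window length and hop size are assumptions):

```python
import numpy as np

def stft_magnitude(signal, win_len=256, hop=128):
    """Short-time FFT magnitude: the basic tool for visualizing
    time-varying spectral content such as heart sounds."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.array(frames)  # shape: (n_frames, win_len // 2 + 1)

# Synthetic test signal sampled at 3 kHz (the paper's sampling rate):
# two short tone bursts loosely mimicking first and second heart sounds.
fs = 3000
t = np.arange(fs) / fs  # one second of signal
sig = np.zeros_like(t)
sig[300:600] = np.sin(2 * np.pi * 50 * t[300:600])      # "S1"-like burst
sig[1500:1700] = np.sin(2 * np.pi * 120 * t[1500:1700])  # "S2"-like burst
spec = stft_magnitude(sig)
```

Plotting `spec.T` as an image (time on the x-axis, frequency on the y-axis) gives the familiar spectrogram view in which the two bursts appear as separate energy patches.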

  1. Recognition of tennis serve performed by a digital player: comparison among polygon, shadow, and stick-figure models.

    PubMed

    Ida, Hirofumi; Fukuhara, Kazunobu; Ishii, Motonobu

    2012-01-01

    The objective of this study was to assess the cognitive effect of human character models on the observer's ability to extract relevant information from computer graphics animation of tennis serve motions. Three digital human models (polygon, shadow, and stick-figure) were used to display the computationally simulated serve motions, which were perturbed at the racket-arm by modulating the speed (slower or faster) of one of the joint rotations (wrist, elbow, or shoulder). Twenty-one experienced tennis players and 21 novices made discrimination responses about the modulated joint and also specified the perceived swing speeds on a visual analogue scale. The results showed that the discrimination accuracies of the experienced players were both above and below chance level depending on the modulated joint, whereas those of the novices mostly remained at chance or guessing levels. As far as the experienced players were concerned, the polygon model decreased the discrimination accuracy as compared with the stick-figure model. This suggests that complicated pictorial information may have a distracting effect on the recognition of the observed action. On the other hand, the perceived swing speed of the perturbed motion relative to the control was lower for the stick-figure model than for the polygon model regardless of skill level. This result suggests that simplified visual information can bias the perception of motion speed toward slower speeds. It was also shown that increasing the joint rotation speed increased the perceived swing speed, although the resulting racket velocity had little correlation with this speed sensation. Collectively, an observer's recognition of the motion pattern and perception of the motion speed can be affected by the pictorial information of the human model as well as by the perturbation processing applied to the observed motion.

  2. Visual gravitational motion and the vestibular system in humans

    PubMed Central

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-01-01

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity. PMID:24421761

  3. Visual gravitational motion and the vestibular system in humans.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-12-26

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  4. Normal form from biological motion despite impaired ventral stream function.

    PubMed

    Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P

    2011-04-01

    We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. A novel visual-inertial monocular SLAM

    NASA Astrophysics Data System (ADS)

    Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo

    2018-02-01

    With advances in sensors and in the computer vision research community, cameras, which are accurate, compact, well-understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features is a technique by which a system obtains motion information from image acquisition equipment and reconstructs the structure of an unknown environment. We provide an analysis of bio-inspired flight in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to achieve high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach provides a more accurate quantitative simulation of insect navigation and can reach centimeter-level positioning accuracy.

  6. Quantitative relation between server motion and receiver anticipation in tennis: implications of responses to computer-simulated motions.

    PubMed

    Ida, Hirofumi; Fukuhara, Kazunobu; Sawada, Misako; Ishii, Motonobu

    2011-01-01

    The purpose of this study was to determine the quantitative relationships between the server's motion and the receiver's anticipation using a computer graphic animation of tennis serves. The test motions were determined by capturing the motion of a model player and estimating the computational perturbations caused by modulating the rotation of the player's elbow and forearm joints. Eight experienced and eight novice players rated their anticipation of the speed, direction, and spin of the ball on a visual analogue scale. The experienced players significantly altered some of their anticipatory judgment depending on the percentage of both the forearm and elbow modulations, while the novice players indicated no significant changes. Multiple regression analyses, including that of the racket's kinematic parameters immediately before racket-ball impact as independent variables, showed that the experienced players demonstrated a higher coefficient of determination than the novice players in their anticipatory judgment of the ball direction. The results have implications on the understanding of the functional relation between a player's motion and the opponent's anticipatory judgment during real play.

  7. Possible applications of the LEAP motion controller for more interactive simulated experiments in augmented or virtual reality

    NASA Astrophysics Data System (ADS)

    Wozniak, Peter; Vauderwange, Oliver; Mandal, Avikarsha; Javahiraly, Nicolas; Curticapean, Dan

    2016-09-01

    Practical exercises are a crucial part of many curricula. Even simple exercises can improve the understanding of the underlying subject. Most experimental setups require special hardware. To carry out, e.g., a lens experiment, the students need access to an optical bench, various lenses, light sources, apertures and a screen. In our previous publication we demonstrated the use of augmented reality visualization techniques in order to let the students prepare with a simulated experimental setup. Within the context of our intended blended learning concept we want to utilize augmented or virtual reality techniques for stationary laboratory exercises. Unlike applications running on mobile devices, stationary setups can be extended more easily with additional interfaces and thus allow for more complex interactions and simulations in virtual reality (VR) and augmented reality (AR). The most significant difference is the possibility to allow interactions beyond touching a screen. The LEAP Motion controller is a small, inexpensive device that allows for the tracking of the user's hands and fingers in three dimensions. It is conceivable to allow the user to interact with the simulation's virtual elements through the user's very hand position, movement and gesture. In this paper we evaluate possible applications of the LEAP Motion controller for simulated experiments in augmented and virtual reality. We pay particular attention to the device's strengths and weaknesses and point out useful and less useful application scenarios.

  8. Vestibular Activation Differentially Modulates Human Early Visual Cortex and V5/MT Excitability and Response Entropy

    PubMed Central

    Guzman-Lopez, Jessica; Arshad, Qadeer; Schultz, Simon R; Walsh, Vincent; Yousif, Nada

    2013-01-01

    Head movement imposes the additional burdens on the visual system of maintaining visual acuity and determining the origin of retinal image motion (i.e., self-motion vs. object-motion). Although maintaining visual acuity during self-motion is effected by minimizing retinal slip via the brainstem vestibular-ocular reflex, higher order visuovestibular mechanisms also contribute. Disambiguating self-motion versus object-motion also invokes higher order mechanisms, and a cortical visuovestibular reciprocal antagonism is propounded. Hence, one prediction is of a vestibular modulation of visual cortical excitability and indirect measures have variously suggested none, focal or global effects of activation or suppression in human visual cortex. Using transcranial magnetic stimulation-induced phosphenes to probe cortical excitability, we observed decreased V5/MT excitability versus increased early visual cortex (EVC) excitability, during vestibular activation. In order to exclude nonspecific effects (e.g., arousal) on cortical excitability, response specificity was assessed using information theory, specifically response entropy. Vestibular activation significantly modulated phosphene response entropy for V5/MT but not EVC, implying a specific vestibular effect on V5/MT responses. This is the first demonstration that vestibular activation modulates human visual cortex excitability. Furthermore, using information theory, not previously used in phosphene response analysis, we could distinguish between a specific vestibular modulation of V5/MT excitability from a nonspecific effect at EVC. PMID:22291031

  9. Visualization of the collective vortex-like motions in liquid argon and water: Molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Anikeenko, A. V.; Malenkov, G. G.; Naberukhin, Yu. I.

    2018-03-01

    We propose a new measure of collectivity of molecular motion in the liquid: the average vector of displacement of the particles, ⟨ΔR⟩, which initially have been localized within a sphere of radius Rsph and then have executed the diffusive motion during a time interval Δt. The more correlated the motion of the particles is, the longer will be the vector ⟨ΔR⟩. We visualize the picture of collective motions in molecular dynamics (MD) models of liquids by constructing the ⟨ΔR⟩ vectors and pinning them to the sites of the uniform grid which divides each of the edges of the model box into equal parts. MD models of liquid argon and water have been studied by this method. Qualitatively, the patterns of ⟨ΔR⟩ vectors are similar for these two liquids but differ in minor details. The most important result of our research is the revealing of the aggregates of ⟨ΔR⟩ vectors which have the form of extended flows which sometimes look like the parts of vortices. These vortex-like clusters of ⟨ΔR⟩ vectors have the mesoscopic size (of the order of 10 nm) and persist for tens of picoseconds. Dependence of the ⟨ΔR⟩ vector field on parameters Rsph, Δt, and on the model size has been investigated. This field in the models of liquids differs essentially from that in a random-walk model.
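    The ⟨ΔR⟩ measure is straightforward to sketch: average the displacement vectors of all particles that initially lie within a sphere around each grid site (a minimal NumPy illustration on made-up toy data, not the authors' MD code):

```python
import numpy as np

def mean_displacement_field(pos0, pos1, grid_points, r_sph):
    """Average displacement vector <dR> of all particles that initially
    lie within a sphere of radius r_sph around each grid point.
    Long vectors mark regions of collective (correlated) motion;
    for uncorrelated diffusion the vectors average toward zero."""
    disp = pos1 - pos0
    field = np.zeros((len(grid_points), pos0.shape[1]))
    for i, g in enumerate(grid_points):
        mask = np.linalg.norm(pos0 - g, axis=1) <= r_sph
        if mask.any():
            field[i] = disp[mask].mean(axis=0)
    return field

# Perfectly collective "flow": every particle shifts by the same vector,
# so <dR> evaluated at the box centre recovers that vector exactly.
rng = np.random.default_rng(0)
pos0 = rng.random((100, 3))             # particles in a unit box
pos1 = pos0 + np.array([1.0, 0.0, 0.0])  # common displacement
grid = np.array([[0.5, 0.5, 0.5]])
field = mean_displacement_field(pos0, pos1, grid, r_sph=0.5)
```

In an actual MD analysis `grid` would be the uniform grid dividing the model box, and `pos1` the particle positions after the chosen Δt.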

  10. MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.

    PubMed

    Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn

    2013-12-01

    We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in the presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users, researchers working on human motion synthesis and analysis, and conducted a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.

  11. Reproducible Simulation of Respiratory Motion in Porcine Lung Explants.

    PubMed

    Biederer, J; Plathow, C; Schoebinger, M; Tetzlaff, R; Puderbach, M; Bolte, H; Zaporozhan, J; Meinzer, H-P; Heller, M; Kauczor, H-U

    2006-11-01

    To develop a model for exactly reproducible respiration motion simulations of animal lung explants inside an MR-compatible chest phantom. The materials included a piston pump and a flexible silicone reconstruction of a porcine diaphragm and were used in combination with an established MR-compatible chest phantom for porcine heart-lung preparations. The rhythmic inflation and deflation of the diaphragm at the bottom of the artificial thorax with water (1 - 1.5 L) induced lung tissue displacement resembling diaphragmatic breathing. This system was tested on five porcine heart-lung preparations using 1.5T MRI with transverse and coronal 3D-GRE (TR/TE = 3.63/1.58, 256 x 256 matrix, 350 mm FOV, 4 mm slices) and half Fourier T2-FSE (TR/TE = 545/29, 256 x 192, 350 mm, 6 mm) as well as multiple row detector CT (16 x 1 mm collimation, pitch 1.5, FOV 400 mm, 120 mAs) acquired at five fixed inspiration levels. Dynamic CT scans and coronal MRI with dynamic 2D-GRE and 2D-SS-GRE sequences (image frequencies of 10/sec and 3/sec, respectively) were acquired during continuous "breathing" (7/minute). The position of the piston pump was visually correlated with the respiratory motion visible through the transparent wall of the phantom and with dynamic displays of CT and MR images. An elastic body splines analysis of the respiratory motion was performed using CT data. Visual evaluation of MRI and CT showed three-dimensional movement of the lung tissue throughout the respiration cycle. Local tissue displacement inside the lung explants was documented with motion maps calculated from CT. The maximum displacement at the top of the diaphragm (mean 26.26 [SD 1.9] mm on CT and 27.16 [SD 1.5] mm on MRI, respectively [p = 0.25; Wilcoxon test]) was in the range of tidal breathing in human patients. The chest phantom with a diaphragmatic pump is a promising platform for multi-modality imaging studies of the effects of respiratory lung motion.

  12. Localized direction selective responses in the dendrites of visual interneurons of the fly

    PubMed Central

    2010-01-01

    Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has so far received little attention. In order to visualize the motion tuning of these inputs, we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983

  13. Sensory conflict in motion sickness: An observer theory approach

    NASA Technical Reports Server (NTRS)

    Oman, Charles M.

    1989-01-01

    Motion sickness is the general term describing a group of common nausea syndromes originally attributed to motion-induced cerebral ischemia, stimulation of abdominal organ afferents, or overstimulation of the vestibular organs of the inner ear. Sea-, car-, and airsickness are the most commonly experienced examples. However, the discovery of other variants such as Cinerama-, flight simulator-, spectacle-, and space sickness, in which the physical motion of the head and body is normal or absent, has led to a succession of sensory conflict theories which offer a more comprehensive etiologic perspective. Implicit in the conflict theory is the hypothesis that neural and/or humoral signals originate in regions of the brain subserving spatial orientation, and that these signals somehow traverse to other centers mediating sickness symptoms. Unfortunately, the present understanding of the neurophysiological basis of motion sickness is far from complete. No sensory conflict neuron or process has yet been physiologically identified. To what extent can the existing theory be reconciled with current knowledge of the physiology and pharmacology of nausea and vomiting? The stimuli which cause sickness are reviewed, a contemporary Observer Theory view of the Sensory Conflict hypothesis is synthesized, and a revised model for the dynamic coupling between the putative conflict signals and nausea magnitude estimates is presented. The use of quantitative models for sensory conflict offers a possible new approach to improving the design of visual and motion systems for flight simulators and other virtual environment display systems.

  14. Visual motion perception predicts driving hazard perception ability.

    PubMed

    Lacherez, Philippe; Au, Sandra; Wood, Joanne M

    2014-02-01

    To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured response times to hazards embedded in video recordings of real-world driving and which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
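    "Controlling for" the other visual-function measures, as in the analysis above, is commonly done by removing their linear contribution before testing the remaining association; a minimal sketch on hypothetical data, not the study's actual statistics:

```python
import numpy as np

def residualize(y, covariates):
    """Return y with the linear contribution of the covariates removed,
    one standard way to 'control for' nuisance measures (e.g. acuity,
    contrast sensitivity) before testing a remaining association."""
    X = np.column_stack([np.ones(len(y)), covariates])  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # least-squares fit
    return y - X @ beta                                  # residuals

# If y is an exact linear function of the covariate, nothing remains
# after residualization (hypothetical numbers for illustration only).
cov = np.arange(10.0)
y = 3.0 * cov + 2.0
resid = residualize(y, cov[:, None])
```

Correlating the residualized hazard-response times with the motion measure then tests whether the association survives beyond what the covariates explain.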

  15. Global motion perception is related to motor function in 4.5-year-old children born at risk of abnormal development

    PubMed Central

    Chakraborty, Arijit; Anstice, Nicola S.; Jacobs, Robert J.; Paudel, Nabin; LaGasse, Linda L.; Lester, Barry M.; McKinlay, Christopher J. D.; Harding, Jane E.; Wouldes, Trecia A.; Thompson, Benjamin

    2017-01-01

    Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of gross motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. PMID:28435122

  16. TH-CD-207B-03: How to Quantify Temporal Resolution in X-Ray MDCT Imaging?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budde, A; GE Healthcare Technologies, Madison, WI; Li, Y

    Purpose: In modern CT scanners, a quantitative metric to assess temporal response, namely, to quantify the temporal resolution (TR), remains elusive. Rough surrogate metrics, such as half of the gantry rotation time for single source CT, a quarter of the gantry rotation time for dual source CT, or measurements of motion artifact’s size, shape, or intensity have previously been used. In this work, a rigorous framework which quantifies TR and a practical measurement method are developed. Methods: A motion phantom was simulated which consisted of a single rod that is in motion except during a static period at the temporal center of the scan, termed the TR window. If the image of the motion scan has negligible motion artifacts compared to an image from a totally static scan, then the system has a TR no worse than the TR window used. By repeating this comparison with varying TR windows, the TR of the system can be accurately determined. Motion artifacts were also visually assessed and the TR was measured across varying rod motion speeds, directions, and locations. Noiseless fan beam acquisitions were simulated and images were reconstructed with a short-scan image reconstruction algorithm. Results: The size, shape, and intensity of motion artifacts varied when the rod speed, direction, or location changed. TR measured using the proposed method, however, was consistent across rod speeds, directions, and locations. Conclusion: Since motion artifacts vary depending upon the motion speed, direction, and location, they are not suitable for measuring TR. In this work, a CT system with a specified TR is defined as having the ability to produce a static image with negligible motion artifacts, no matter what motion occurs outside of a static window of width TR. This framework allows for practical measurement of temporal resolution in clinical CT imaging systems. Funding support: GE Healthcare; Conflict of Interest: Employee, GE Healthcare.
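    The window-varying comparison described in the Methods can be caricatured in one dimension (every name and number below is an illustrative assumption, not the paper's simulation): the "scanner" time-averages an object's position over a fixed temporal aperture, and the measured TR is the smallest static window for which the motion scan matches a fully static scan.

```python
import numpy as np

APERTURE = 0.25  # assumed effective temporal aperture of the "reconstruction" (s)

def acquire(static_window):
    """Toy 'scan': the image value is the time-average of the object's
    position over the aperture; the object sits at 0 inside the static
    window and drifts as t**2 outside it."""
    t = np.linspace(-APERTURE / 2, APERTURE / 2, 1001)
    pos = np.where(np.abs(t) <= static_window / 2, 0.0, t**2)
    return pos.mean()

def measure_tr(candidate_windows, tol=1e-9):
    """Smallest static-window width whose motion scan is
    indistinguishable from a fully static scan."""
    static_image = acquire(np.inf)  # object never moves
    for w in sorted(candidate_windows):
        if abs(acquire(w) - static_image) <= tol:
            return w
    return None

# TR comes out equal to the aperture, independent of the motion itself.
print(measure_tr([0.05, 0.1, 0.25, 0.5]))  # 0.25
```

The real method replaces `acquire` with a full fan-beam simulation and image comparison, but the search logic over TR windows is the same.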

  17. Receptive fields for smooth pursuit eye movements and motion perception.

    PubMed

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.
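    Note that the two tuning parameters above are quoted in different conventions: the direction bandwidth as a full width at half height, the spatial extent as a standard deviation. Assuming Gaussian tuning (an assumption here, implied but not stated by the figures), the conversion between the conventions is FWHM = 2*sqrt(2*ln 2)*sigma, roughly 2.355*sigma:

```python
import math

# Conversion factor between FWHM and standard deviation of a Gaussian.
_K = 2.0 * math.sqrt(2.0 * math.log(2.0))  # about 2.355

def fwhm_from_sigma(sigma):
    """Full width at half maximum of a Gaussian with std dev sigma."""
    return _K * sigma

def sigma_from_fwhm(fwhm):
    """Standard deviation of a Gaussian with the given FWHM."""
    return fwhm / _K

# The quoted 26-degree direction bandwidth (FWHM) corresponds to a
# Gaussian tuning curve with a standard deviation of about 11 degrees.
sigma_dir = sigma_from_fwhm(26.0)
```

This puts both tuning widths on a common footing when comparing them with neuronal tuning data.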

  18. Molecular dynamics of individual alpha-helices of bacteriorhodopsin in dimyristol phosphatidylocholine. I. Structure and dynamics.

    PubMed

    Woolf, T B

    1997-11-01

    Understanding the role of the lipid bilayer in membrane protein structure and dynamics is needed for tertiary structure determination methods. However, the molecular details are not well understood. Molecular dynamics computer calculations can provide insight into these molecular details of protein:lipid interactions. This paper reports on 10 simulations of individual alpha-helices in explicit lipid bilayers. The 10 helices were selected from the bacteriorhodopsin structure as representative alpha-helical membrane folding components. The bilayer is constructed of dimyristoyl phosphatidylcholine molecules. The only major difference between simulations is the primary sequence of the alpha-helix. The results show dramatic differences in motional behavior between alpha-helices. For example, helix A has much smaller root-mean-squared deviations than does helix D. This can be understood in terms of the presence of aromatic residues at the interface for helix A that are not present in helix D. Additional motions are possible for the helices that contain proline side chains relative to other amino acids. The results thus provide insight into the types of motion and the average structures possible for helices within the bilayer setting and demonstrate the strength of molecular simulations in providing molecular details that are not directly visualized in experiments.

  19. Audio–visual interactions for motion perception in depth modulate activity in visual area V3A

    PubMed Central

    Ogawa, Akitoshi; Macaluso, Emiliano

    2013-01-01

    Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414

  20. Tracking without perceiving: a dissociation between eye movements and motion perception.

    PubMed

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.
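The "vector average" (pattern-motion) prediction that the eye movements followed can be illustrated with a few lines of arithmetic; the function and the strength weights below are illustrative, not the authors' analysis code.

```python
import math

def vector_average(d1_deg, d2_deg, s1=1.0, s2=1.0):
    """Vector-average ('pattern') direction of two component motions,
    weighted by their strengths s1 and s2 (adaptation lowers a strength)."""
    x = s1 * math.cos(math.radians(d1_deg)) + s2 * math.cos(math.radians(d2_deg))
    y = s1 * math.sin(math.radians(d1_deg)) + s2 * math.sin(math.radians(d2_deg))
    return math.degrees(math.atan2(y, x)) % 360.0

# Two orthogonal gratings of equal strength: the eyes track ~45 deg
# (pattern motion), while perception follows one component (0 or 90 deg).
print(vector_average(0, 90))          # ~45
print(vector_average(0, 90, s2=0.5))  # biased toward the stronger grating, ~27
```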

  1. Tracking Without Perceiving: A Dissociation Between Eye Movements and Motion Perception

    PubMed Central

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-01-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353

  2. fMRI response during visual motion stimulation in patients with late whiplash syndrome.

    PubMed

    Freitag, P; Greenlee, M W; Wachter, K; Ettlin, T M; Radue, E W

    2001-01-01

    After whiplash trauma, up to one fourth of patients develop chronic symptoms including head and neck pain and cognitive disturbances. Resting perfusion single-photon-emission computed tomography (SPECT) found decreased temporoparietooccipital tracer uptake among these long-term symptomatic patients with late whiplash syndrome. As MT/MST (V5/V5a) are located in that area, this study addressed the question of whether these patients show impairments in visual motion perception. We examined five symptomatic patients with late whiplash syndrome, five asymptomatic patients after whiplash trauma, and a control group of seven volunteers without a history of trauma. Tests for visual motion perception and functional magnetic resonance imaging (fMRI) measurements during visual motion stimulation were performed. Symptomatic patients showed a significant reduction in their ability to perceive coherent visual motion compared with controls, whereas the asymptomatic patients did not show this effect. fMRI activation was similar during random dot motion in all three groups, but was significantly decreased during coherent dot motion in the symptomatic patients compared with the other two groups. Reduced psychophysical motion performance and reduced fMRI responses in symptomatic patients with late whiplash syndrome both point to a functional impairment in cortical areas sensitive to coherent motion. Larger studies are needed to confirm these clinical and functional imaging results to provide a possible additional diagnostic criterion for the evaluation of patients with late whiplash syndrome.

  3. Seeing Circles and Drawing Ellipses: When Sound Biases Reproduction of Visual Motion

    PubMed Central

    Aramaki, Mitsuko; Bringoux, Lionel; Ystad, Sølvi; Kronland-Martinet, Richard

    2016-01-01

    The perception and production of biological movements is characterized by the 1/3 power law, a relation linking the curvature and the velocity of an intended action. In particular, motions are perceived and reproduced as distorted when their kinematics deviate from this biological law. Whereas most studies dealing with this perceptual-motor relation focused on visual or kinaesthetic modalities in a unimodal context, in this paper we show that auditory dynamics strikingly biases visuomotor processes. Biologically consistent or inconsistent circular visual motions were used in combination with circular or elliptical auditory motions. Auditory motions were synthesized friction sounds mimicking those produced by the friction of a pen on paper during drawing. Sounds were presented diotically and the auditory motion velocity was evoked through the friction sound timbre variations without any spatial cues. Remarkably, when subjects were asked to reproduce circular visual motion while listening to sounds that evoked elliptical kinematics without seeing their hand, they drew elliptical shapes. Moreover, distortion induced by inconsistent elliptical kinematics in both visual and auditory modalities added up linearly. These results bring to light the substantial role of auditory dynamics in the visuo-motor coupling in a multisensory context. PMID:27119411
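The 1/3 power law mentioned above can be written as v = K·κ^(−1/3), with v the tangential velocity, κ the path curvature, and K a velocity gain factor; a minimal numeric sketch (the values are illustrative):

```python
def biological_velocity(kappa, K=1.0):
    """Tangential velocity under the 1/3 power law, v = K * kappa**(-1/3)
    (equivalently, the 'two-thirds power law' A = K * C**(2/3) relating
    angular velocity A to curvature C)."""
    return K * kappa ** (-1.0 / 3.0)

# A circle has constant curvature, hence constant velocity; on an ellipse
# curvature varies, so velocity modulates along the path.
print(biological_velocity(1.0))  # 1.0
print(biological_velocity(8.0))  # ~0.5: 8x the curvature, half the speed
```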

  4. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector) are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and drawbacks of each method as well as its preferred domain of application.
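A common quantitative comparison between a model saliency map and a human fixation map is a Pearson correlation over the flattened maps; a minimal sketch (the map values below are made up, and the abstract does not specify which metric the authors used):

```python
import math

def saliency_correlation(model, human):
    """Pearson correlation between a flattened model saliency map and a
    human fixation map derived from eye movement recordings."""
    n = len(model)
    mm, hm = sum(model) / n, sum(human) / n
    cov = sum((m - mm) * (h - hm) for m, h in zip(model, human))
    var_m = sum((m - mm) ** 2 for m in model)
    var_h = sum((h - hm) ** 2 for h in human)
    return cov / math.sqrt(var_m * var_h)

model_map = [0.1, 0.5, 0.9, 0.2]   # made-up saliency values
human_map = [0.2, 0.6, 1.0, 0.3]   # same pattern, offset by 0.1
print(saliency_correlation(model_map, human_map))  # ~1.0
```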

  5. Visualization of Kepler's Laws of Planetary Motion

    ERIC Educational Resources Information Center

    Lu, Meishu; Su, Jun; Wang, Weiguo; Lu, Jianlong

    2017-01-01

    For this article, we use a 3D printer to print a surface similar to universal gravitation for demonstrating and investigating Kepler's laws of planetary motion describing the motion of a small ball on the surface. This novel experimental method allows Kepler's laws of planetary motion to be visualized and will contribute to improving the…

  6. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3-D display technology. Causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, and fast motion of objects. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and comfort zone are analyzed. According to the human visual system (HVS), viewers only need to converge their eyes on specific objects when the camera and background are static; relative motion must be considered for other camera conditions, which determine different factor coefficients and weights. Compared with traditional visual fatigue prediction models, a novel visual fatigue prediction model is presented. The degree of visual fatigue is predicted using multiple linear regression combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and a total visual fatigue score is produced by the proposed algorithm. Compared with conventional algorithms that ignore the camera's status, our approach exhibits reliable performance in terms of correlation with subjective test results.
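The regression step can be sketched for a single factor; the paper combines several factors via multiple linear regression against subjective scores, and all numbers below are made up for illustration.

```python
def fit_line(x, y):
    """Least-squares fit y ~ a*x + b for one factor (e.g. motion scale);
    extending to several factors gives the multiple linear regression
    used against subjective fatigue ratings."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    a = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) / \
        sum((xi - xm) ** 2 for xi in x)
    return a, ym - a * xm

motion_scale = [0.5, 1.0, 1.5, 2.0]    # made-up factor values
fatigue_score = [1.2, 1.9, 2.6, 3.3]   # made-up subjective ratings
a, b = fit_line(motion_scale, fatigue_score)
print(a, b)  # slope ~1.4, intercept ~0.5 for this toy data
```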

  7. Visuomotor adaptation to a visual rotation is gravity dependent.

    PubMed

    Toma, Simone; Sciutti, Alessandra; Papaxanthis, Charalambos; Pozzo, Thierry

    2015-03-15

    Humans perform vertical and horizontal arm motions with different temporal patterns. The specific velocity profiles are chosen by the central nervous system by integrating the gravitational force field to minimize energy expenditure. However, what happens when a visuomotor rotation is applied, so that a motion performed in the horizontal plane is perceived as vertical? We investigated the dynamics of adaptation of the spatial and temporal properties of a pointing motion during prolonged exposure to a 90° visuomotor rotation, where a horizontal movement was associated with a vertical visual feedback. We found that participants immediately adapted the spatial parameters of motion to the conflicting visual scene in order to keep their arm trajectory straight. In contrast, the initial symmetric velocity profiles specific for a horizontal motion were progressively modified during the conflict exposure, becoming more asymmetric and similar to those appropriate for a vertical motion. Importantly, this visual effect that increased with repetitions was not followed by a consistent aftereffect when the conflicting visual feedback was absent (catch and washout trials). In a control experiment we demonstrated that an intrinsic representation of the temporal structure of perceived vertical motions could provide the error signal allowing for this progressive adaptation of motion timing. These findings suggest that gravity strongly constrains motor learning and the reweighting process between visual and proprioceptive sensory inputs, leading to the selection of a motor plan that is suboptimal in terms of energy expenditure. Copyright © 2015 the American Physiological Society.

  8. Determination of Motion and Visual System Requirements for Flight Training Simulators

    DTIC Science & Technology

    1981-08-01

    maneuvers, and making use of the learned response of the aircraft by employing increasingly more precognitive control actions. A convenient means of...istics, and external disturbances. Beyond this we also want to consider skill development in terms of compensatory, pursuit, and precognitive behavior...essentially complete knowledge of the man-machine characteristics, i.e., be a complete internal model. Although this might be plausible at the precognitive

  9. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  10. Correction of Hysteretic Respiratory Motion in SPECT Myocardial Perfusion Imaging: Simulation and Patient Studies

    PubMed Central

    Dasari, Paul K. R.; Könik, Arda; Pretorius, P. Hendrik; Johnson, Karen L.; Segars, William P.; Shazeeb, Mohammed S.; King, Michael A.

    2017-01-01

    Purpose Amplitude based respiratory gating is known to capture the extent of respiratory motion (RM) accurately but results in residual motion in the presence of respiratory hysteresis. In our previous study, we proposed and developed a novel approach to account for respiratory hysteresis by applying the Bouc-Wen (BW) model of hysteresis to external surrogate signals of anterior / posterior motion of the abdomen and chest with respiration. In this work using simulated and clinical SPECT myocardial perfusion imaging (MPI) studies, we investigate the effects of respiratory hysteresis and evaluate the benefit of correcting it using the proposed BW model in comparison with the abdomen signal typically employed clinically. Methods The MRI navigator data acquired in free breathing human volunteers were used in the specially modified 4-D NCAT phantoms to allow simulating three types of respiratory patterns: monotonic, mild-hysteresis, and strong-hysteresis with normal myocardial uptake, and perfusion defects in the anterior, lateral, inferior, and septal locations of the mid-ventricular wall. Clinical scans were performed using a 99mTc-Sestamibi MPI protocol while recording respiratory signals from thoracic and abdomen regions using a Visual Tracking System (VTS). The performance of the correction using the respiratory signals was assessed through polar map analysis in phantom and ten clinical studies selected on the basis of having substantial RM. Results In phantom studies, simulations illustrating normal myocardial uptake showed significant differences (p<0.001) in the uniformity of the polar maps between the RM uncorrected and corrected. No significant differences were seen in the polar map uniformity across the RM corrections. Studies simulating perfusion defects showed significantly decreased errors (p<0.001) in defect severity and extent for the RM corrected compared to the uncorrected. 
Only for the strong-hysteretic pattern was there a significant difference (p<0.001) among the RM corrections. The errors in defect severity and extent for the RM correction using abdomen signal were significantly higher compared to that of the BW (severity=-4.0%, p<0.001; extent=-65.4%, p<0.01) and chest (severity=-4.1%, p<0.001; extent=-52.5%, p<0.01) signals. In clinical studies, the quantitative analysis of the polar maps demonstrated qualitative and quantitative but not statistically significant differences (p=0.73) between the correction methods that used the BW signal and the abdominal signal. Conclusions This study shows that hysteresis in respiration affects the extent of residual motion left in the RM binned data, which can impact wall uniformity and the visualization of defects. Thus there appears to be the potential for improved accuracy in reconstruction in the presence of hysteretic RM with the BW model method providing a possible step in the direction of improvement. PMID:28032913
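The Bouc-Wen model referenced above describes hysteresis with an internal state z driven by the surrogate signal, dz/dt = A·ẋ − β·|ẋ|·|z|^(n−1)·z − γ·ẋ·|z|^n. A minimal sketch with a sinusoidal surrogate breathing trace; the parameter values and sample rate are illustrative, not the paper's fitted ones.

```python
import math

def bouc_wen(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1):
    """Euler-integrate the Bouc-Wen hysteretic state z driven by the
    input samples x(t); parameters are illustrative."""
    z, zs = 0.0, []
    for i in range(1, len(x)):
        dx = (x[i] - x[i - 1]) / dt
        dz = (A * dx
              - beta * abs(dx) * abs(z) ** (n - 1) * z
              - gamma * dx * abs(z) ** n)
        z += dz * dt
        zs.append(z)
    return zs

dt = 0.01
breath = [math.sin(2 * math.pi * 0.25 * i * dt) for i in range(1000)]  # 0.25 Hz
z = bouc_wen(breath, dt)
# At equal surrogate displacement, inhalation and exhalation yield
# different internal states -- the respiratory hysteresis the BW model captures.
print(abs(z[49] - z[149]))  # nonzero: same displacement, different state
```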

  11. Undergraduate Labs for Biological Physics: Brownian Motion and Optical Trapping

    NASA Astrophysics Data System (ADS)

    Chu, Kelvin; Laughney, A.; Williams, J.

    2006-12-01

    We describe a set of case-study driven labs for an upper-division biological physics course. These labs are motivated by case-studies and consist of inquiry-driven investigations of Brownian motion and optical-trapping experiments. Each lab incorporates two innovative educational techniques to drive the process and application aspects of scientific learning. Case studies are used to encourage students to think independently and apply the scientific method to a novel lab situation. Student input from this case study is then used to decide how to best do the measurement, guide the project and ultimately evaluate the success of the program. Where appropriate, visualization and simulation using VPython is used. Direct visualization of Brownian motion allows students to directly calculate Avogadro's number or the Boltzmann constant. Following case-study driven discussion, students use video microscopy to measure the motion of latex spheres in different viscosity fluids to arrive at a good approximation of NA or kB. Optical trapping (laser tweezer) experiments allow students to investigate the consequences of 100-pN forces on small particles. The case study consists of a discussion of the Boltzmann distribution and equipartition theorem followed by a consideration of the shape of the potential. Students can then use video capture to measure the distribution of bead positions to determine the shape and depth of the trap. This work was supported by NSF DUE-0536773.
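The kB measurement the lab describes rests on the Stokes-Einstein relation D = kB·T/(6πηr) and the 2-D mean-squared displacement MSD(τ) = 4Dτ from video tracking. A round-trip sketch with illustrative numbers (1 µm latex sphere in water; the function name is ours):

```python
import math

def boltzmann_from_msd(msd_slope_2d, T, eta, radius):
    """Estimate kB from the slope of a 2-D mean-squared-displacement
    curve (MSD = 4*D*tau) via Stokes-Einstein: D = kB*T/(6*pi*eta*r)."""
    D = msd_slope_2d / 4.0
    return D * 6.0 * math.pi * eta * radius / T

# Illustrative numbers: 1-um-diameter latex sphere in water at 293 K.
kB_true = 1.380649e-23           # J/K
T, eta, r = 293.0, 1.0e-3, 0.5e-6  # K, Pa*s, m
D = kB_true * T / (6.0 * math.pi * eta * r)   # ~4.3e-13 m^2/s
print(boltzmann_from_msd(4.0 * D, T, eta, r))  # recovers kB
```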

  12. Dual processing of visual rotation for bipedal stance control.

    PubMed

    Day, Brian L; Muller, Timothy; Offord, Joanna; Di Giulio, Irene

    2016-10-01

    When standing, the gain of the body-movement response to a sinusoidally moving visual scene has been shown to get smaller with faster stimuli, possibly through changes in the apportioning of visual flow to self-motion or environment motion. We investigated whether visual-flow speed similarly influences the postural response to a discrete, unidirectional rotation of the visual scene in the frontal plane. Contrary to expectation, the evoked postural response consisted of two sequential components with opposite relationships to visual motion speed. With faster visual rotation the early component became smaller, not through a change in gain but by changes in its temporal structure, while the later component grew larger. We propose that the early component arises from the balance control system minimising apparent self-motion, while the later component stems from the postural system realigning the body with gravity. The source of visual motion is inherently ambiguous such that movement of objects in the environment can evoke self-motion illusions and postural adjustments. Theoretically, the brain can mitigate this problem by combining visual signals with other types of information. A Bayesian model that achieves this was previously proposed and predicts a decreasing gain of postural response with increasing visual motion speed. Here we test this prediction for discrete, unidirectional, full-field visual rotations in the frontal plane of standing subjects. The speed (0.75–48 deg s⁻¹) and direction of visual rotation was pseudo-randomly varied and mediolateral responses were measured from displacements of the trunk and horizontal ground reaction forces. The behaviour evoked by this visual rotation was more complex than has hitherto been reported, consisting broadly of two consecutive components with respective latencies of ∼190 ms and >0.7 s. Both components were sensitive to visual rotation speed, but with diametrically opposite relationships. 
Thus, the early component decreased with faster visual rotation, while the later component increased. Furthermore, the decrease in size of the early component was not achieved by a simple attenuation of gain, but by a change in its temporal structure. We conclude that the two components represent expressions of different motor functions, both pertinent to the control of bipedal stance. We propose that the early response stems from the balance control system attempting to minimise unintended body motion, while the later response arises from the postural control system attempting to align the body with gravity. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  13. Tilt and Translation Motion Perception during Pitch Tilt with Visual Surround Translation

    NASA Technical Reports Server (NTRS)

    O'Sullivan, Brita M.; Harm, Deborah L.; Reschke, Millard F.; Wood, Scott J.

    2006-01-01

    The central nervous system must resolve the ambiguity of inertial motion sensory cues in order to derive an accurate representation of spatial orientation. Previous studies suggest that multisensory integration is critical for discriminating linear accelerations arising from tilt and translation head motion. Visual input is especially important at low frequencies where canal input is declining. The NASA Tilt Translation Device (TTD) was designed to recreate postflight orientation disturbances by exposing subjects to matching tilt self-motion with conflicting visual surround translation. Previous studies have demonstrated that brief exposures to pitch tilt with fore-aft visual surround translation produced changes in compensatory vertical eye movement responses, postural equilibrium, and motion sickness symptoms. Adaptation appeared greatest with visual scene motion leading (versus lagging) the tilt motion, and the adaptation time constant appeared to be approximately 30 min. The purpose of this study was to compare motion perception when the visual surround translation was in-phase versus out-of-phase with pitch tilt. The in-phase stimulus presented visual surround motion one would experience if the linear acceleration was due to fore-aft self-translation within a stationary surround, while the out-of-phase stimulus had the visual scene motion leading the tilt by 90 deg as previously used. The tilt stimuli in these conditions were asymmetrical, ranging from an upright orientation to 10 deg pitch back. Another objective of the study was to compare motion perception with the in-phase stimulus when the tilts were asymmetrical relative to upright (0 to 10 deg back) versus symmetrical (10 deg forward to 10 deg back). Twelve subjects (6M, 6F, 22-55 yrs) were tested during 3 sessions separated by at least one week. 
    During each of the three sessions (out-of-phase asymmetrical, in-phase asymmetrical, in-phase symmetrical), subjects were exposed to visual surround translation synchronized with pitch tilt at 0.1 Hz for a total of 30 min. Tilt and translation motion perception was obtained from verbal reports and a joystick mounted on a linear stage. Horizontal vergence and vertical eye movements were obtained with a binocular video system. Responses were also obtained during darkness before and following 15 min and 30 min of visual surround translation. Each of the three stimulus conditions involving visual surround translation elicited a significantly reduced sense of perceived tilt and strong linear vection (perceived translation) compared to pre-exposure tilt stimuli in darkness. This increase in perceived translation with reduction in tilt perception was also present in darkness following 15 and 30 min exposures, provided the tilt stimuli were not interrupted. Although not significant, there was a trend for the in-phase asymmetrical stimulus to elicit a stronger sense of both translation and tilt than the out-of-phase asymmetrical stimulus. Surprisingly, the in-phase asymmetrical stimulus also tended to elicit a stronger sense of peak-to-peak translation than the in-phase symmetrical stimulus, even though the range of linear acceleration during the symmetrical stimulus was twice that of the asymmetrical stimulus. These results are consistent with the hypothesis that the central nervous system resolves the ambiguity of inertial motion sensory cues by integrating inputs from visual, vestibular, and somatosensory systems.

  14. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, Rohini; Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA; Chung, Theodore D.

    2006-07-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating.
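The residual-motion metric described in the Methods (standard deviation of the respiratory signal within the gating window) can be sketched for displacement-based gating; the surrogate trace and window bounds below are made up.

```python
import math
import statistics

def residual_motion(signal, low, high):
    """Residual motion under displacement-based gating: the standard
    deviation of the samples whose amplitude lies inside the gating
    window [low, high]."""
    gated = [s for s in signal if low <= s <= high]
    return statistics.pstdev(gated)

# Surrogate respiratory trace (arbitrary displacement units).
resp = [math.cos(2 * math.pi * 0.25 * i * 0.02) for i in range(500)]
narrow = residual_motion(resp, 0.9, 1.0)  # tight window near one extreme
wide = residual_motion(resp, 0.0, 1.0)    # wide window (higher duty cycle)
print(narrow < wide)  # a tighter displacement window leaves less residual motion
```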

  15. Predator pursuit strategies: how do falcons and hawks chase prey?

    NASA Astrophysics Data System (ADS)

    Kane, Suzanne Amador; Zamani, Marjon; Fulton, Andrew; Rosenthal, Lee

    2014-03-01

    This study reports on experiments on falcons, goshawks and red-tailed hawks wearing miniature video cameras mounted on their backs or heads while pursuing flying or ground-based prey. Videos of hunts recorded by the raptors were analyzed to determine apparent prey positions on their visual fields during pursuits. These video data then were interpreted using computer simulations of pursuit steering laws observed in insects and mammals. A comparison of the empirical and modeling data indicates that falcons use cues due to the apparent motion of prey on the falcon's visual field to track and capture flying prey via a form of motion camouflage. The falcons also were found to maintain their prey's image at visual angles consistent with using their shallow fovea. Results for goshawks and red-tailed hawks were analyzed for a comparative study of how pursuits of ground-based prey by accipiters and buteos differ from those used by falcons chasing flying prey. These results should prove relevant for understanding the coevolution of pursuit and evasion, as well as the development of computer models of predation on flocks, and the integration of sensory and locomotion systems in biomimetic robots.
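One of the candidate steering laws such simulations compare is classical ("pure") pursuit, where the predator always steers at the prey's current position; motion camouflage and constant-bearing strategies are alternatives. A minimal sketch with made-up speeds and geometry, not the study's simulation code:

```python
import math

def pure_pursuit(pred, prey, prey_vel, pred_speed, dt, steps):
    """Classical pursuit: the predator heads at the prey's current
    position each time step; returns (captured, steps_taken)."""
    for step in range(steps):
        dx, dy = prey[0] - pred[0], prey[1] - pred[1]
        dist = math.hypot(dx, dy)
        if dist < pred_speed * dt:
            return True, step          # within one step of the prey
        pred[0] += pred_speed * dt * dx / dist
        pred[1] += pred_speed * dt * dy / dist
        prey[0] += prey_vel[0] * dt
        prey[1] += prey_vel[1] * dt
    return False, steps

# Faster predator chasing prey that flees at constant velocity.
caught, n = pure_pursuit([0.0, 0.0], [50.0, 0.0], [0.0, 8.0], 12.0, 0.05, 4000)
print(caught)  # True: a faster pure pursuer eventually closes the gap
```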

  16. Vection and visually induced motion sickness: how are they related?

    PubMed Central

    Keshavarz, Behrang; Riecke, Bernhard E.; Hettinger, Lawrence J.; Campos, Jennifer L.

    2015-01-01

    The occurrence of visually induced motion sickness has been frequently linked to the sensation of illusory self-motion (vection); however, the precise nature of this relationship is still not fully understood. To date, it is still a matter of debate as to whether vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be VIMS without any sensation of self-motion? In this paper, we will describe the possible nature of this relationship, review the literature that addresses this relationship (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in the future. PMID:25941509

  17. Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms

    NASA Technical Reports Server (NTRS)

    Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.

    2005-01-01

    The relative effectiveness in simulating aircraft maneuvers with both current and newly developed motion cueing algorithms was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm with added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control inputs, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and less variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on the straight-in approach were less prevalent with the nonlinear algorithm than with the optimal algorithm. The nonlinear algorithm's augmented turbulence cues increased workload on the offset approach, but the pilots deemed them more realistic than the NASA adaptive algorithm's cues. The takeoff with engine failure showed the least roll activity with the nonlinear algorithm and the least rudder pedal activity with the optimal algorithm.
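
    Power spectral density analysis of pilot control inputs, one of the two workload measures above, can be sketched as a simple periodogram on a synthetic control signal (the sample rate and the 2 Hz oscillation are hypothetical):

```python
import numpy as np

fs = 100.0                    # sample rate in Hz (hypothetical)
t = np.arange(0, 10, 1 / fs)  # 10 s of simulated stick input
rng = np.random.default_rng(0)
# Synthetic control signal: a 2 Hz oscillation plus measurement noise.
signal = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)

# Periodogram estimate of the power spectral density.
psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * t.size)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak_freq = freqs[np.argmax(psd)]  # dominant control frequency (2.0 Hz here)
```

A concentration of power at higher frequencies in such a spectrum is commonly read as increased compensatory control activity, which is how control-input PSDs serve as a workload proxy.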

  18. A tactile display for international space station (ISS) extravehicular activity (EVA).

    PubMed

    Rochlis, J L; Newman, D J

    2000-06-01

    A tactile display to increase an astronaut's situational awareness during an extravehicular activity (EVA) has been developed and ground tested. The Tactor Locator System (TLS) is a non-intrusive, intuitive display capable of conveying position and velocity information via a vibrotactile stimulus applied to the subject's neck and torso. In the Earth's 1 G environment, perception of position and velocity is determined by the body's individual sensory systems. Under normal sensory conditions, redundant information from these sensory systems provides humans with an accurate sense of their position and motion. However, altered environments, including exposure to weightlessness, can lead to conflicting visual and vestibular cues, resulting in decreased situational awareness. The TLS was designed to provide somatosensory cues to complement the visual system during EVA operations. An EVA task was simulated on a computer graphics workstation with a display of the International Space Station (ISS) and a target astronaut at an unknown location. Subjects were required to move about the ISS and acquire the target astronaut using either an auditory cue at the outset, or the TLS. Subjects used a 6 degree of freedom input device to command translational and rotational motion. The TLS was configured to act as a position aid, providing target direction information to the subject through a localized stimulus. Results show that the TLS decreases reaction time (p = 0.001) and movement time (p = 0.001) for simulated subject (astronaut) motion around the ISS. The TLS is a useful aid in increasing an astronaut's situational awareness, and warrants further testing to explore other uses, tasks and configurations.

  19. Numerical analysis of two and three dimensional buoyancy driven water-exit of a circular cylinder

    NASA Astrophysics Data System (ADS)

    Moshari, Shahab; Nikseresht, Amir Hossein; Mehryar, Reza

    2014-06-01

    With the development of underwater moving-body technology, the need to understand the interaction between the free surface and underwater moving bodies has increased. Two-phase flow is therefore a subject of interest to many researchers around the world. In this paper, the non-linear free surface deformations that occur during the buoyancy-driven water-exit of a circular cylinder are solved using a finite-volume discretization based code with a Volume of Fluid (VOF) scheme for the two-phase flow. A dynamic mesh model is used to simulate the dynamic motion of the cylinder. In addition, the effect of cylinder mass in the presence of an external force is studied. Moreover, the oblique exit and entry of a circular cylinder with two exit angles is simulated. Finally, the water-exit of a circular cylinder in six degrees of freedom is simulated in 3D using parallel processing. The simulation errors of the present work (using the VOF method) for the maximum velocity and height of a circular cylinder are less than the corresponding errors of the level set method reported by previous researchers. The oblique exit shows interesting results: the formation of waves caused by the exit of the cylinder, wave motion in the horizontal direction, and the air trapped between the waves are all observable. The 3D simulation reveals the water motion on the top surface of the cylinder and the free surface breaking on the front and back faces of the cylinder at the exit phase, which cannot be seen in 2D simulation. Comparing the results, the 3D simulation shows better agreement with experimental data, especially in the maximum height position of the cylinder.

  20. Respiratory motion estimation in x-ray angiography for improved guidance during coronary interventions

    NASA Astrophysics Data System (ADS)

    Baka, N.; Lelieveldt, B. P. F.; Schultz, C.; Niessen, W.; van Walsum, T.

    2015-05-01

    During percutaneous coronary interventions (PCI) catheters and arteries are visualized by x-ray angiography (XA) sequences, using brief contrast injections to show the coronary arteries. If we could continue visualizing the coronary arteries after the contrast agent passed (thus in non-contrast XA frames), we could potentially lower contrast use, which is advantageous due to the toxicity of the contrast agent. This paper explores the possibility of such visualization in mono-plane XA acquisitions with a special focus on respiratory based coronary artery motion estimation. We use the patient specific coronary artery centerlines from pre-interventional 3D CTA images to project on the XA sequence for artery visualization. To achieve this, a framework for registering the 3D centerlines with the mono-plane 2D + time XA sequences is presented. During the registration the patient specific cardiac and respiratory motion is learned. We investigate several respiratory motion estimation strategies with respect to accuracy, plausibility and ease of use for motion prediction in XA frames with and without contrast. The investigated strategies include diaphragm motion based prediction, and respiratory motion extraction from the guiding catheter tip motion. We furthermore compare translational and rigid respiratory based heart motion. We validated the accuracy of the 2D/3D registration and the respiratory and cardiac motion estimations on XA sequences of 12 interventions. The diaphragm based motion model and the catheter tip derived motion achieved 1.58 mm and 1.83 mm median 2D accuracy, respectively. On a subset of four interventions we evaluated the artery visualization accuracy for non-contrast cases. Both diaphragm, and catheter tip based prediction performed similarly, with about half of the cases providing satisfactory accuracy (median error < 2 mm).

  1. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
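
    The two computations contrasted in this abstract can be written down directly; treating "motion average" as an equally weighted mean is a simplification, and the velocity values are hypothetical:

```python
def perceived_velocity(target_v, context_v):
    """Motion contrast: context motion is subtracted from target motion."""
    return target_v - context_v

def pursuit_velocity(target_v, context_v):
    """Motion assimilation: pursuit follows the average of target and context."""
    return (target_v + context_v) / 2.0

# Target briefly speeds up while the context drifts the same way (deg/s).
target_v, context_v = 12.0, 2.0
perceived = perceived_velocity(target_v, context_v)  # 10.0 deg/s
pursued = pursuit_velocity(target_v, context_v)      # 7.0 deg/s
```

The same pair of inputs thus yields different outputs for perception and pursuit, which is the dissociation the study reports.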

  2. Representation of visual gravitational motion in the human vestibular cortex.

    PubMed

    Indovina, Iole; Maffei, Vincenzo; Bosco, Gianfranco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco

    2005-04-15

    How do we perceive the visual motion of objects that are accelerated by gravity? We propose that, because vision is poorly sensitive to accelerations, an internal model that calculates the effects of gravity is derived from graviceptive information, is stored in the vestibular cortex, and is activated by visual motion that appears to be coherent with natural gravity. The acceleration of visual targets was manipulated while brain activity was measured using functional magnetic resonance imaging. In agreement with the internal model hypothesis, we found that the vestibular network was selectively engaged when acceleration was consistent with natural gravity. These findings demonstrate that predictive mechanisms of physical laws of motion are represented in the human brain.

  3. Multiplexing in the primate motion pathway.

    PubMed

    Huk, Alexander C

    2012-06-01

    This article begins by reviewing recent work on 3D motion processing in the primate visual system. Some of these results suggest that 3D motion signals may be processed in the same circuitry already known to compute 2D motion signals. Such "multiplexing" has implications for the study of visual cortical circuits and neural signals. A more explicit appreciation of multiplexing--and the computations required for demultiplexing--may enrich the study of the visual system by emphasizing the importance of a structured and balanced "encoding/decoding" framework. In addition to providing a fresh perspective on how successive stages of visual processing might be approached, multiplexing also raises caveats about the value of "neural correlates" for understanding neural computation.

  4. Experimental investigation of the visual field dependency in the erect and supine positions

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.; Saucer, R. T.

    1972-01-01

    The increasing utilization of simulators in many fields, in addition to aeronautics and space, requires the efficient use of these devices. It seemed that personnel highly influenced by the visual scene would make desirable subjects, particularly for those simulators without sufficient motion cues. In order to evaluate this concept, some measure of the degree of influence of the visual field on the subject is necessary. As part of this undertaking, 37 male and female subjects, including eight test pilots, were tested for their visual field dependency or independency. A version of Witkin's rod and frame apparatus was used for the tests. The results showed that nearly all the test subjects exhibited some degree of field dependency, the degree varying from very high field dependency to nearly zero field dependency in a normal distribution. The results for the test pilots were scattered throughout a range similar to the results for the bulk of male subjects. The few female subjects exhibited a higher field dependency than the male subjects. The male subjects exhibited a greater field dependency in the supine position than in the erect position, whereas the field dependency of the female subjects changed only slightly.

  5. Novel graphical environment for virtual and real-world operations of tracked mobile manipulators

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.

    1993-08-01

    A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  6. Decentralized real-time simulation of forest machines

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Adam, Frank; Hoffmann, Katharina; Rossmann, Juergen; Kraemer, Michael; Schluse, Michael

    2000-10-01

    To develop realistic forest machine simulators is a demanding task. A useful simulator has to provide a close-to-reality simulation of the forest environment as well as the simulation of the physics of the vehicle. Customers demand a highly realistic three-dimensional forestry landscape and the realistic simulation of the complex motion of the vehicle even in rough terrain, in order to be able to use the simulator for operator training under close-to-reality conditions. The realistic simulation of the vehicle, especially with the driver's seat mounted on a motion platform, greatly improves the effect of immersion into the virtual reality of a simulated forest and the achievable level of education of the driver. Thus, the connection of the real control devices of forest machines to the simulation system has to be supported, i.e. the real control devices like the joysticks or the board computer system to control the crane, the aggregate, etc. In addition, the fusion of the board computer system and the simulation system is realized by means of sensors, i.e. digital and analog signals. The decentralized system structure allows several virtual reality systems to evaluate and visualize the information of the control devices and the sensors. So, while the driver is practicing, the instructor can immerse into the same virtual forest to monitor the session from his own viewpoint. In this paper, we describe the realized structure as well as the necessary software and hardware components and application experiences.

  7. Visual guidance of forward flight in hummingbirds reveals control based on image features instead of pattern velocity.

    PubMed

    Dakin, Roslyn; Fellows, Tyee K; Altshuler, Douglas L

    2016-08-02

    Information about self-motion and obstacles in the environment is encoded by optic flow, the movement of images on the eye. Decades of research have revealed that flying insects control speed, altitude, and trajectory by a simple strategy of maintaining or balancing the translational velocity of images on the eyes, known as pattern velocity. It has been proposed that birds may use a similar algorithm but this hypothesis has not been tested directly. We examined the influence of pattern velocity on avian flight by manipulating the motion of patterns on the walls of a tunnel traversed by Anna's hummingbirds. Contrary to prediction, we found that lateral course control is not based on regulating nasal-to-temporal pattern velocity. Instead, birds closely monitored feature height in the vertical axis, and steered away from taller features even in the absence of nasal-to-temporal pattern velocity cues. For vertical course control, we observed that birds adjusted their flight altitude in response to upward motion of the horizontal plane, which simulates vertical descent. Collectively, our results suggest that birds avoid collisions using visual cues in the vertical axis. Specifically, we propose that birds monitor the vertical extent of features in the lateral visual field to assess distances to the side, and vertical pattern velocity to avoid collisions with the ground. These distinct strategies may derive from greater need to avoid collisions in birds, compared with small insects.

  8. Classification and simulation of stereoscopic artifacts in mobile 3DTV content

    NASA Astrophysics Data System (ADS)

    Boev, Atanas; Hollosi, Danilo; Gotchev, Atanas; Egiazarian, Karen

    2009-02-01

    We identify, categorize and simulate artifacts which might occur during delivery of stereoscopic video to mobile devices. We consider the stages of 3D video delivery dataflow: content creation, conversion to the desired format (multiview or source-plus-depth), coding/decoding, transmission, and visualization on a 3D display. Human 3D vision works by assessing various depth cues: accommodation, binocular depth cues, pictorial cues, and motion parallax. As a consequence, any artifact which modifies these cues impairs the quality of a 3D scene. The perceptibility of each artifact can be estimated through subjective tests. The material for such tests needs to contain various artifacts with different amounts of impairment. We present a system for simulation of these artifacts. The artifacts are organized in groups with similar origins, and each group is simulated by a block in a simulation channel. The channel introduces the following groups of artifacts: sensor limitations, geometric distortions caused by camera optics, spatial and temporal misalignments between video channels, spatial and temporal artifacts caused by coding, transmission losses, and visualization artifacts. For the case of source-plus-depth representation, artifacts caused by format conversion are added as well.

  9. Mental Rotation Meets the Motion Aftereffect: The Role of hV5/MT+ in Visual Mental Imagery

    ERIC Educational Resources Information Center

    Seurinck, Ruth; de Lange, Floris P.; Achten, Erik; Vingerhoets, Guy

    2011-01-01

    A growing number of studies show that visual mental imagery recruits the same brain areas as visual perception. Although the necessity of hV5/MT+ for motion perception has been revealed by means of TMS, its relevance for motion imagery remains unclear. We induced a direction-selective adaptation in hV5/MT+ by means of an MAE while subjects…

  10. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    NASA Astrophysics Data System (ADS)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    A prediction mechanism is necessary in human visual-motor control to compensate for the delay of the sensory-motor system. In a previous study, "proactive control" was discussed as one example of the predictive function of human beings, in which the motion of the hand preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit was segmented into target-visible and target-invisible regions. The main results of this research were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli was shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

  11. Integrative cortical dysfunction and pervasive motion perception deficit in fragile X syndrome.

    PubMed

    Kogan, C S; Bertone, A; Cornish, K; Boutet, I; Der Kaloustian, V M; Andermann, E; Faubert, J; Chaudhuri, A

    2004-11-09

    Fragile X syndrome (FXS) is associated with neurologic deficits recently attributed to the magnocellular pathway of the lateral geniculate nucleus. This study tested the hypotheses that FXS individuals 1) have a pervasive visual motion perception impairment affecting neocortical circuits in the parietal lobe and 2) have deficits in the integrative neocortical mechanisms necessary for perception of complex stimuli. Psychophysical tests of visual motion and form perception defined by either first-order (luminance) or second-order (texture) attributes were used to probe early and later occipito-temporal and occipito-parietal functioning. When compared to developmental- and age-matched controls, FXS individuals displayed severe impairments in first- and second-order motion perception. This deficit was accompanied by near normal perception for first-order form stimuli but not second-order form stimuli. Impaired visual motion processing for first- and second-order stimuli suggests that both early- and later-level neurologic function of the parietal lobe are affected in FXS. Furthermore, this deficit likely stems from abnormal input from the magnocellular compartment of the lateral geniculate nucleus. Impaired visual form and motion processing for complex visual stimuli, with normal processing for simple (i.e., first-order) form stimuli, suggests that FXS individuals have normal early form processing accompanied by a generalized impairment in the neurologic mechanisms necessary for integrating all early visual input.

  12. Multiple-stage ambiguity in motion perception reveals global computation of local motion directions.

    PubMed

    Rider, Andrew T; Nishida, Shin'ya; Johnston, Alan

    2016-12-01

    The motion of a 1D image feature, such as a line, seen through a small aperture, or the small receptive field of a neural motion sensor, is underconstrained, and it is not possible to derive the true motion direction from a single local measurement. This is referred to as the aperture problem. How the visual system solves the aperture problem is a fundamental question in visual motion research. In the estimation of motion vectors through integration of ambiguous local motion measurements at different positions, conventional theories assume that the object motion is a rigid translation, with motion signals sharing a common motion vector within the spatial region over which the aperture problem is solved. However, this strategy fails for global rotation. Here we show that the human visual system can estimate global rotation directly through spatial pooling of locally ambiguous measurements, without an intervening step that computes local motion vectors. We designed a novel ambiguous global flow stimulus, which is globally as well as locally ambiguous. The global ambiguity implies that the stimulus is simultaneously consistent with both a global rigid translation and an infinite number of global rigid rotations. By the standard view, the motion should always be seen as a global translation, but it appears to shift from translation to rotation as observers shift fixation. This finding indicates that the visual system can estimate local vectors using a global rotation constraint, and suggests that local motion ambiguity may not be resolved until consistencies with multiple global motion patterns are assessed.
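
    The aperture problem has a standard algebraic form: a local 1D measurement constrains only the velocity component along the feature's normal, v·n = s, so two differently oriented measurements are needed to recover the full 2D vector. A minimal intersection-of-constraints solver (hypothetical numbers, Cramer's rule for the 2x2 system):

```python
def intersection_of_constraints(n1, s1, n2, s2):
    """Solve v . n1 = s1 and v . n2 = s2 for the 2D velocity v,
    where each (n, s) pair is one aperture's unit normal and normal speed."""
    det = n1[0] * n2[1] - n1[1] * n2[0]
    if abs(det) < 1e-12:
        raise ValueError("parallel constraints: aperture problem unresolved")
    vx = (s1 * n2[1] - s2 * n1[1]) / det
    vy = (n1[0] * s2 - n2[0] * s1) / det
    return vx, vy

# A vertical line (normal +x) moving at 2 units/s along its normal and a
# horizontal line (normal +y) moving at 1 unit/s: true motion is (2, 1).
v = intersection_of_constraints((1.0, 0.0), 2.0, (0.0, 1.0), 1.0)
```

The stimulus in the paper is constructed so that no such pair of local constraints pins down a unique global interpretation, which is what makes it globally as well as locally ambiguous.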

  13. Decoding conjunctions of direction-of-motion and binocular disparity from human visual cortex.

    PubMed

    Seymour, Kiley J; Clifford, Colin W G

    2012-05-01

    Motion and binocular disparity are two features in our environment that share a common correspondence problem. Decades of psychophysical research dedicated to understanding stereopsis suggest that these features interact early in human visual processing to disambiguate depth. Single-unit recordings in the monkey also provide evidence for the joint encoding of motion and disparity across much of the dorsal visual stream. Here, we used functional MRI and multivariate pattern analysis to examine where in the human brain conjunctions of motion and disparity are encoded. Subjects sequentially viewed two stimuli that could be distinguished only by their conjunctions of motion and disparity. Specifically, each stimulus contained the same feature information (leftward and rightward motion and crossed and uncrossed disparity) but differed exclusively in the way these features were paired. Our results revealed that a linear classifier could accurately decode which stimulus a subject was viewing based on voxel activation patterns throughout the dorsal visual areas and as early as V2. This decoding success was conditional on some voxels being individually sensitive to the unique conjunctions comprising each stimulus, thus a classifier could not rely on independent information about motion and binocular disparity to distinguish these conjunctions. This study expands on evidence that disparity and motion interact at many levels of human visual processing, particularly within the dorsal stream. It also lends support to the idea that stereopsis is subserved by early mechanisms also tuned to direction of motion.

  14. Spherical Coordinate Systems for Streamlining Suited Mobility Analysis

    NASA Technical Reports Server (NTRS)

    Benson, Elizabeth; Cowley, Matthew S.; Harvill, Lauren; Rajulu, Sudhakar

    2014-01-01

    When describing human motion, biomechanists generally report joint angles in terms of Euler angle rotation sequences. However, there are known limitations in using this method to describe complex motions, such as that of the shoulder joint during a baseball pitch. Euler angle notation uses a series of three rotations, each about an axis, where each rotation is dependent upon the preceding rotation. As such, the Euler angles need to be regarded as a set to convey accurate angle information. Unfortunately, it is often difficult to visualize and understand these complex motion representations. One of our key functions is to help design engineers understand how a human will perform with new designs, and all too often the traditional use of Euler rotations becomes as much of a hindrance as a help. It is believed that using a spherical coordinate system will allow ABF personnel to more quickly and easily transmit important mobility data to engineers, in a format that is readily understandable and directly translatable to their design efforts. Objectives: The goal of this project is to establish new analysis and visualization techniques to aid in the examination and comprehension of complex motions. Methods: This project consisted of a series of small sub-projects, meant to validate and verify the method before it was implemented in the ABF's data analysis practices. The first stage was a proof of concept, in which a mechanical test rig was built and instrumented with an inclinometer, so that its angle from horizontal was known. The test rig was tracked in 3D using an optical motion capture system, and its position and orientation were reported in both Euler and spherical reference systems. The rig was meant to simulate flexion/extension, transverse rotation and abduction/adduction of the human shoulder, but without the variability inherent in human motion.
In the second phase of the project, the ABF estimated the error inherent in a spherical coordinate system, and evaluated how this error would vary within the reference frame. This stage also involved expanding a kinematic model of the shoulder, to include the torso, knees, ankle, elbows, wrists and neck. Part of this update included adding a representation of 'roll' about an axis, for upper arm and lower leg rotations. The third stage of the project involved creating visualization methods to assist in interpreting motion in a spherical frame. This visualization method will be incorporated in a tool to evaluate a database of suited mobility data, which is currently in development.
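
    The spherical-coordinate representation described above amounts to reporting a segment's direction as two independent angles instead of an Euler triple; a sketch of the conversion (the axis convention here is an assumption, not the ABF's actual implementation):

```python
import math

def to_spherical(x, y, z):
    """Convert a Cartesian direction to (radius, azimuth, elevation):
    azimuth in the x-y plane from +x, elevation up from that plane."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    elevation = math.asin(z / r)
    return r, azimuth, elevation

# A limb segment pointing 45 degrees up in the x-z plane (hypothetical).
r, az, el = to_spherical(1.0, 0.0, 1.0)
# az = 0 and el = pi/4: each angle reads off directly, with no
# dependence on a preceding rotation in a sequence.
```

Unlike an Euler sequence, each angle here is meaningful on its own, which is the readability advantage the abstract describes.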

  15. Distinct fMRI Responses to Self-Induced versus Stimulus Motion during Free Viewing in the Macaque

    PubMed Central

    Kaneko, Takaaki; Saleem, Kadharbatcha S.; Berman, Rebecca A.; Leopold, David A.

    2016-01-01

    Visual motion responses in the brain are shaped by two distinct sources: the physical movement of objects in the environment and motion resulting from one's own actions. The latter source, termed visual reafference, stems from movements of the head and body, and in primates from the frequent saccadic eye movements that mark natural vision. To study the relative contribution of reafferent and stimulus motion during natural vision, we measured fMRI activity in the brains of two macaques as they freely viewed >50 hours of naturalistic video footage depicting dynamic social interactions. We used eye movements obtained during scanning to estimate the level of reafferent retinal motion at each moment in time. We also estimated the net stimulus motion by analyzing the video content during the same time periods. Mapping the responses to these distinct sources of retinal motion, we found a striking dissociation in the distribution of visual responses throughout the brain. Reafferent motion drove fMRI activity in the early retinotopic areas V1, V2, V3, and V4, particularly in their central visual field representations, as well as lateral aspects of the caudal inferotemporal cortex (area TEO). However, stimulus motion dominated fMRI responses in the superior temporal sulcus, including areas MT, MST, and FST as well as more rostral areas. We discuss this pronounced separation of motion processing in the context of natural vision, saccadic suppression, and the brain's utilization of corollary discharge signals. SIGNIFICANCE STATEMENT Visual motion arises not only from events in the external world, but also from the movements of the observer. For example, even if objects are stationary in the world, the act of walking through a room or shifting one's eyes causes motion on the retina. This “reafferent” motion propagates into the brain as signals that must be interpreted in the context of real object motion. 
The delineation of whole-brain responses to stimulus versus self-generated retinal motion signals is critical for understanding visual perception and is of pragmatic importance given the increasing use of naturalistic viewing paradigms. The present study uses fMRI to demonstrate that the brain exhibits a fundamentally different pattern of responses to these two sources of retinal motion. PMID:27629710

  16. Distinct fMRI Responses to Self-Induced versus Stimulus Motion during Free Viewing in the Macaque.

    PubMed

    Russ, Brian E; Kaneko, Takaaki; Saleem, Kadharbatcha S; Berman, Rebecca A; Leopold, David A

    2016-09-14

    Visual motion responses in the brain are shaped by two distinct sources: the physical movement of objects in the environment and motion resulting from one's own actions. The latter source, termed visual reafference, stems from movements of the head and body, and in primates from the frequent saccadic eye movements that mark natural vision. To study the relative contribution of reafferent and stimulus motion during natural vision, we measured fMRI activity in the brains of two macaques as they freely viewed >50 hours of naturalistic video footage depicting dynamic social interactions. We used eye movements obtained during scanning to estimate the level of reafferent retinal motion at each moment in time. We also estimated the net stimulus motion by analyzing the video content during the same time periods. Mapping the responses to these distinct sources of retinal motion, we found a striking dissociation in the distribution of visual responses throughout the brain. Reafferent motion drove fMRI activity in the early retinotopic areas V1, V2, V3, and V4, particularly in their central visual field representations, as well as lateral aspects of the caudal inferotemporal cortex (area TEO). However, stimulus motion dominated fMRI responses in the superior temporal sulcus, including areas MT, MST, and FST as well as more rostral areas. We discuss this pronounced separation of motion processing in the context of natural vision, saccadic suppression, and the brain's utilization of corollary discharge signals. Visual motion arises not only from events in the external world, but also from the movements of the observer. For example, even if objects are stationary in the world, the act of walking through a room or shifting one's eyes causes motion on the retina. This "reafferent" motion propagates into the brain as signals that must be interpreted in the context of real object motion. 
The delineation of whole-brain responses to stimulus versus self-generated retinal motion signals is critical for understanding visual perception and is of pragmatic importance given the increasing use of naturalistic viewing paradigms. The present study uses fMRI to demonstrate that the brain exhibits a fundamentally different pattern of responses to these two sources of retinal motion. Copyright © 2016 the authors.
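
The reafference estimate described above can be illustrated with a toy calculation. In the sketch below (illustrative only; the 500 Hz sampling rate and the stationary-scene assumption are ours, not the authors' pipeline), reafferent retinal motion is approximated as the magnitude of gaze velocity:

```python
import numpy as np

def retinal_speed_from_gaze(gaze_x, gaze_y, fs=500.0):
    """Estimate instantaneous reafferent retinal speed (deg/s) as the
    magnitude of gaze velocity, assuming a stationary visual scene.

    gaze_x, gaze_y : gaze position traces in degrees
    fs             : eye-tracker sampling rate in Hz (assumed value)
    """
    vx = np.gradient(np.asarray(gaze_x)) * fs   # deg/s, horizontal
    vy = np.gradient(np.asarray(gaze_y)) * fs   # deg/s, vertical
    return np.hypot(vx, vy)                     # speed magnitude

# A fixation followed by a small saccade: speed is ~0 during fixation
# and jumps during the saccade.
x = np.concatenate([np.zeros(10), np.linspace(0, 5, 10), np.full(10, 5.0)])
y = np.zeros(30)
speed = retinal_speed_from_gaze(x, y)
```

A per-frame regressor like this, resampled to the fMRI acquisition times, is the kind of quantity that could then be correlated with BOLD responses.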

  17. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739
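
The predictive-coding loop summarized above, top-down prediction compared against input, with the error refining the internal model, can be caricatured in a few lines. This is a one-weight toy, not PredNet; the sinusoidal "sensory input" and learning rate are invented for illustration:

```python
import numpy as np

# Toy predictive coding: an internal model (a single weight w) predicts the
# next sample of a signal from the current one; the prediction error is fed
# back to refine the model.
signal = np.sin(np.linspace(0, 8 * np.pi, 400))  # stand-in "sensory input"

w, lr = 0.0, 0.1          # internal model and (assumed) learning rate
errors = []
for t in range(len(signal) - 1):
    prediction = w * signal[t]           # top-down prediction
    error = signal[t + 1] - prediction   # bottom-up prediction error
    w += lr * error * signal[t]          # error refines the internal model
    errors.append(error ** 2)

early = float(np.mean(errors[:50]))
late = float(np.mean(errors[-50:]))     # errors shrink as the model improves
```

PredNet stacks many such prediction/error stages convolutionally over video frames; the point here is only the error-driven update at the heart of the theory.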

  18. Close similarity between spatiotemporal frequency tunings of human cortical responses and involuntary manual following responses to visual motion.

    PubMed

    Amano, Kaoru; Kimura, Toshitaka; Nishida, Shin'ya; Takeda, Tsunehiro; Gomi, Hiroaki

    2009-02-01

    The human brain uses visual motion inputs not only for generating a subjective sensation of motion but also for directly guiding involuntary actions. For instance, during arm reaching, a large-field visual motion is quickly and involuntarily transformed into a manual response in the direction of visual motion (manual following response, MFR). Previous attempts to correlate motion-evoked cortical activities, revealed by brain imaging techniques, with conscious motion perception have resulted only in partial success. In contrast, here we show a surprising degree of similarity between the MFR and the population neural activity measured by magnetoencephalography (MEG). We measured the MFR and MEG induced by the same motion onset of a large-field sinusoidal drifting grating while varying the spatiotemporal frequency of the grating. The initial transient phase of these two responses had very similar spatiotemporal tunings. Specifically, both the MEG and MFR amplitudes increased as the spatial frequency was decreased to, at most, 0.05 c/deg, or as the temporal frequency was increased to, at least, 10 Hz. We also found a quantitative agreement in peak latency (approximately 100-150 ms) between MEG and MFR, with correlated latency changes as the spatiotemporal frequency was varied. In comparison with these two responses, conscious visual motion detection is known to be most sensitive (i.e., have the lowest detection threshold) at higher spatial frequencies and to have longer and more variable response latencies. Our results suggest a close relationship between the properties of involuntary motor responses and motion-evoked cortical activity as reflected by the MEG.

  19. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    PubMed

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  20. Protein normal-mode dynamics: trypsin inhibitor, crambin, ribonuclease and lysozyme.

    PubMed

    Levitt, M; Sander, C; Stern, P S

    1985-02-05

    We have developed a new method for modelling protein dynamics using normal-mode analysis in internal co-ordinates. This method, normal-mode dynamics, is particularly well suited for modelling collective motion, makes possible direct visualization of biologically interesting modes, and is complementary to the more time-consuming simulation of molecular dynamics trajectories. The essential assumption and limitation of normal-mode analysis is that the molecular potential energy varies quadratically. Our study starts with energy minimization of the X-ray co-ordinates with respect to the single-bond torsion angles. The main technical task is the calculation of second derivative matrices of kinetic and potential energy with respect to the torsion angle co-ordinates. These enter into a generalized eigenvalue problem, and the final eigenvalues and eigenvectors provide a complete description of the motion in the basic 0.1 to 10 picosecond range. Thermodynamic averages of amplitudes, fluctuations and correlations can be calculated efficiently using analytical formulae. The general method presented here is applied to four proteins, trypsin inhibitor, crambin, ribonuclease and lysozyme. When the resulting atomic motion is visualized by computer graphics, it is clear that the motion of each protein is collective with all atoms participating in each mode. The slow modes, with frequencies below 10 cm-1 (periods longer than about 3 ps), are the most interesting in that the motion in these modes is segmental. The root-mean-square atomic fluctuations, which are dominated by a few slow modes, agree well with experimental temperature factors (B values). The normal-mode dynamics of these four proteins have many features in common, although in the larger molecules, lysozyme and ribonuclease, there is low frequency domain motion about the active site.
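
The generalized eigenvalue problem at the heart of the method, H v = ω² M v for a potential-energy Hessian H and kinetic-energy matrix M in torsion-angle coordinates, can be sketched on a toy two-coordinate system. The matrices below are invented for illustration, not protein-derived:

```python
import numpy as np

def normal_modes(H, M):
    """Solve the generalized eigenproblem H v = w^2 M v via a Cholesky
    factorization of M, returning squared frequencies and mode vectors."""
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    A = Linv @ H @ Linv.T              # equivalent symmetric standard problem
    w2, y = np.linalg.eigh(A)          # ascending squared frequencies
    modes = Linv.T @ y                 # back-transform the eigenvectors
    return w2, modes

# Toy 2-coordinate system (units arbitrary).
H = np.array([[2.0, -1.0], [-1.0, 2.0]])   # "potential energy" Hessian
M = np.array([[1.0, 0.0], [0.0, 2.0]])     # "kinetic energy" matrix
w2, modes = normal_modes(H, M)             # w2 = (3 -/+ sqrt(3)) / 2
```

For a protein the matrices are large but the structure of the calculation, Cholesky reduction followed by a symmetric eigensolve, is the same; the slow (small-w2) modes are the segmental ones discussed above.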

  1. Spatial and Global Sensory Suppression Mapping Encompassing the Central 10° Field in Anisometropic Amblyopia.

    PubMed

    Li, Jingjing; Li, Jinrong; Chen, Zidong; Liu, Jing; Yuan, Junpeng; Cai, Xiaoxiao; Deng, Daming; Yu, Minbin

    2017-01-01

    We investigate the efficacy of a novel dichoptic mapping paradigm in evaluating visual function of anisometropic amblyopes. Using standard clinical measures of visual function (visual acuity, stereo acuity, Bagolini lenses, and neutral density filters) and a novel quantitative mapping technique, 26 patients with anisometropic amblyopia (mean age = 19.15 ± 4.42 years) were assessed. Two additional psychophysical interocular suppression measurements were tested with dichoptic global motion coherence and binocular phase combination tasks. Luminance reduction was achieved by placing neutral density filters in front of the normal eye. Our study revealed that suppression varies across the central 10° visual field with mean luminance modulation in amblyopes as well as in normal controls. Using simulation and an elimination of interocular suppression, we identified a novel method to effectively reflect the distribution of suppression in anisometropic amblyopia. Additionally, the new quantitative mapping technique was in good agreement with conventional clinical measures, such as interocular acuity difference (P < 0.001) and stereo acuity (P = 0.005). There was good consistency between the results of interocular suppression with the dichoptic mapping paradigm and the results of the other two psychophysical methods (suppression mapping versus binocular phase combination, P < 0.001; suppression mapping versus global motion coherence, P = 0.005). The dichoptic suppression mapping technique is an effective method to represent impaired visual function in patients with anisometropic amblyopia. It offers potential for "micro-" antisuppression mapping tests and therapies for amblyopia.

  2. Impaired Velocity Processing Reveals an Agnosia for Motion in Depth.

    PubMed

    Barendregt, Martijn; Dumoulin, Serge O; Rokers, Bas

    2016-11-01

    Many individuals with normal visual acuity are unable to discriminate the direction of 3-D motion in a portion of their visual field, a deficit previously referred to as a stereomotion scotoma. The origin of this visual deficit has remained unclear. We hypothesized that the impairment is due to a failure in the processing of one of the two binocular cues to motion in depth: changes in binocular disparity over time or interocular velocity differences. We isolated the contributions of these two cues and found that sensitivity to interocular velocity differences, but not changes in binocular disparity, varied systematically with observers' ability to judge motion direction. We therefore conclude that the inability to interpret motion in depth is due to a failure in the neural mechanisms that combine velocity signals from the two eyes. Given these results, we argue that the deficit should be considered a prevalent but previously unrecognized agnosia specific to the perception of visual motion. © The Author(s) 2016.

  3. Global motion perception is related to motor function in 4.5-year-old children born at risk of abnormal development.

    PubMed

    Chakraborty, Arijit; Anstice, Nicola S; Jacobs, Robert J; Paudel, Nabin; LaGasse, Linda L; Lester, Barry M; McKinlay, Christopher J D; Harding, Jane E; Wouldes, Trecia A; Thompson, Benjamin

    2017-06-01

    Global motion perception is often used as an index of dorsal visual stream function in neurodevelopmental studies. However, the relationship between global motion perception and visuomotor control, a primary function of the dorsal stream, is unclear. We measured global motion perception (motion coherence threshold; MCT) and performance on standardized measures of motor function in 606 4.5-year-old children born at risk of abnormal neurodevelopment. Visual acuity, stereoacuity and verbal IQ were also assessed. After adjustment for verbal IQ or both visual acuity and stereoacuity, MCT was modestly, but significantly, associated with all components of motor function with the exception of fine motor scores. In a separate analysis, stereoacuity, but not visual acuity, was significantly associated with both gross and fine motor scores. These results indicate that the development of motion perception and stereoacuity are associated with motor function in pre-school children. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Effects of Vibrotactile Feedback on Human Learning of Arm Motions

    PubMed Central

    Bark, Karlin; Hyman, Emily; Tan, Frank; Cha, Elizabeth; Jax, Steven A.; Buxbaum, Laurel J.; Kuchenbecker, Katherine J.

    2015-01-01

    Tactile cues generated from lightweight, wearable actuators can help users learn new motions by providing immediate feedback on when and how to correct their movements. We present a vibrotactile motion guidance system that measures arm motions and provides vibration feedback when the user deviates from a desired trajectory. A study was conducted to test the effects of vibrotactile guidance on a subject’s ability to learn arm motions. Twenty-six subjects learned motions of varying difficulty with both visual (V), and visual and vibrotactile (VVT) feedback over the course of four days of training. After four days of rest, subjects returned to perform the motions from memory with no feedback. We found that augmenting visual feedback with vibrotactile feedback helped subjects reduce the root mean square (rms) angle error of their limb significantly while they were learning the motions, particularly for 1DOF motions. Analysis of the retention data showed no significant difference in rms angle errors between feedback conditions. PMID:25486644
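
The rms angle error used as the learning measure above is straightforward to compute from a recorded trajectory. A minimal sketch (sampling scheme and units are our assumptions):

```python
import numpy as np

def rms_angle_error(measured, desired):
    """Root-mean-square error (degrees) between a measured joint-angle
    trace and the desired trajectory, sampled at the same instants."""
    measured = np.asarray(measured, dtype=float)
    desired = np.asarray(desired, dtype=float)
    return float(np.sqrt(np.mean((measured - desired) ** 2)))

# A constant 3-degree offset from the target motion gives an rms error
# of exactly 3 degrees.
t = np.linspace(0.0, 1.0, 100)
desired = 30.0 * np.sin(2 * np.pi * t)   # toy 1DOF arm motion
err = rms_angle_error(desired + 3.0, desired)
```

In a guidance system like the one described, this error per joint would also be the quantity thresholded to decide when to trigger vibrotactile feedback.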

  5. Motion-Compensated Compression of Dynamic Voxelized Point Clouds.

    PubMed

    De Queiroz, Ricardo L; Chou, Philip A

    2017-05-24

    Dynamic point clouds are a potential new frontier in visual communication systems. A few articles have addressed the compression of point clouds, but very few references exist on exploring temporal redundancies. This paper presents a novel motion-compensated approach to encoding dynamic voxelized point clouds at low bit rates. A simple coder breaks the voxelized point cloud at each frame into blocks of voxels. Each block is either encoded in intra-frame mode or is replaced by a motion-compensated version of a block in the previous frame. The decision is optimized in a rate-distortion sense. In this way, both the geometry and the color are encoded with distortion, allowing for reduced bit-rates. In-loop filtering is employed to minimize compression artifacts caused by distortion in the geometry information. Simulations reveal that this simple motion compensated coder can efficiently extend the compression range of dynamic voxelized point clouds to rates below what intra-frame coding alone can accommodate, trading rate for geometry accuracy.
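
The per-block decision described above ("optimized in a rate-distortion sense") is a standard Lagrangian comparison: pick whichever mode minimizes J = D + λR. A schematic sketch, with distortion and rate numbers invented for illustration:

```python
# Rate-distortion optimized mode decision for one block of voxels.
# D values are distortions, R values are bit costs; all numbers invented.

def choose_mode(d_intra, r_intra, d_mc, r_mc, lam):
    """Return the mode with the smaller Lagrangian cost J = D + lambda * R."""
    j_intra = d_intra + lam * r_intra
    j_mc = d_mc + lam * r_mc
    return "intra" if j_intra <= j_mc else "motion_compensated"

# Motion compensation costs far fewer bits at slightly higher distortion,
# so it wins when rate is expensive (large lambda) and loses when distortion
# matters more (small lambda).
low_rate = choose_mode(d_intra=10.0, r_intra=200.0,
                       d_mc=25.0, r_mc=20.0, lam=1.0)
high_rate = choose_mode(d_intra=10.0, r_intra=200.0,
                        d_mc=25.0, r_mc=20.0, lam=0.01)
```

Sweeping λ traces out the coder's operational rate-distortion curve, which is how the abstract's "trading rate for geometry accuracy" would appear in practice.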

  6. Decoding Reveals Plasticity in V3A as a Result of Motion Perceptual Learning

    PubMed Central

    Shibata, Kazuhisa; Chang, Li-Hung; Kim, Dongho; Náñez, José E.; Kamitani, Yukiyasu; Watanabe, Takeo; Sasaki, Yuka

    2012-01-01

    Visual perceptual learning (VPL) is defined as visual performance improvement after visual experiences. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes with the highest improvement centered on the trained feature and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here using human subjects we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were extremely highly correlated to decoded tuning function changes only in V3A, which is known to be highly responsive to global motion with human subjects. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area. PMID:22952849

  7. Using EMG to anticipate head motion for virtual-environment applications

    NASA Technical Reports Server (NTRS)

    Barniv, Yair; Aguilar, Mario; Hasanbelliu, Erion

    2005-01-01

    In virtual environment (VE) applications, where virtual objects are presented in a see-through head-mounted display, virtual images must be continuously stabilized in space in response to the user's head motion. Time delays in head-motion compensation cause virtual objects to "swim" around instead of being stable in space, which results in misalignment errors when overlaying virtual and real objects. Visual update delays are a critical technical obstacle for implementing head-mounted displays in applications such as battlefield simulation/training, telerobotics, and telemedicine. Head motion is currently measurable by a head-mounted 6-degrees-of-freedom inertial measurement unit. However, even given this information, overall VE-system latencies cannot be reduced under about 25 ms. We present a novel approach to eliminating latencies, which is premised on the fact that myoelectric signals from a muscle precede its exertion of force, and thereby limb or head acceleration. We thus suggest utilizing neck-muscles' myoelectric signals to anticipate head motion. We trained a neural network to map such signals onto equivalent time-advanced inertial outputs. The resulting network can achieve time advances of up to 70 ms.

  8. Using EMG to anticipate head motion for virtual-environment applications.

    PubMed

    Barniv, Yair; Aguilar, Mario; Hasanbelliu, Erion

    2005-06-01

    In virtual environment (VE) applications, where virtual objects are presented in a see-through head-mounted display, virtual images must be continuously stabilized in space in response to the user's head motion. Time delays in head-motion compensation cause virtual objects to "swim" around instead of being stable in space, which results in misalignment errors when overlaying virtual and real objects. Visual update delays are a critical technical obstacle for implementing head-mounted displays in applications such as battlefield simulation/training, telerobotics, and telemedicine. Head motion is currently measurable by a head-mounted 6-degrees-of-freedom inertial measurement unit. However, even given this information, overall VE-system latencies cannot be reduced under about 25 ms. We present a novel approach to eliminating latencies, which is premised on the fact that myoelectric signals from a muscle precede its exertion of force, and thereby limb or head acceleration. We thus suggest utilizing neck-muscles' myoelectric signals to anticipate head motion. We trained a neural network to map such signals onto equivalent time-advanced inertial outputs. The resulting network can achieve time advances of up to 70 ms.
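
The core of the approach above, learning a map from EMG to a time-advanced inertial output, can be sketched as a plain least-squares regression on time-shifted targets. The paper used a neural network; the synthetic signals, 100 Hz sampling rate, and 70 ms advance below are our illustrative assumptions:

```python
import numpy as np

fs = 100                 # assumed sampling rate (Hz)
advance = 7              # 70 ms time advance, in samples
rng = np.random.default_rng(1)

t = np.arange(1000) / fs
head = np.sin(2 * np.pi * 0.5 * t)   # toy "inertial" head-motion signal
# Model the EMG as leading the inertial signal by `advance` samples,
# plus measurement noise.
emg = np.roll(head, -advance) + 0.05 * rng.standard_normal(t.size)

# Fit emg[t] -> head[t + advance] by least squares (standing in for the
# paper's neural network).
X = np.column_stack([emg[:-advance], np.ones(t.size - advance)])
y = head[advance:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

A real system would use windows of multi-channel neck-EMG features rather than a single sample, but the time-shifted-target construction is the same.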

  9. Local structure in anisotropic systems determined by molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Komolkin, Andrei V.; Maliniak, Arnold

    In the present communication we describe the investigation of local structure using a new visualization technique. The approach is based on two-dimensional pair correlation functions derived from a molecular dynamics computer simulation. We have used this method to analyse a trajectory produced in a simulation of a nematic liquid crystal of 4-n-pentyl-4'-cyanobiphenyl (5CB) (Komolkin et al., 1994, J. chem. Phys., 101, 4103). The molecule is assumed to have cylindrical symmetry, and the liquid crystalline phase is treated as uniaxial. The pair correlation functions or cylindrical distribution functions (CDFs) are calculated in the molecular (m) and laboratory (l) frames, g2^m(z12, d12) and g2^l(Z12, D12). Anisotropic molecular organization in the liquid crystal is reflected in the laboratory-frame CDFs. The molecular excluded volume is determined and the effect of the fast motion in the alkyl chain is observed. The intramolecular distributions are included in the CDFs and indicate the size of the motional amplitude in the chain. The absence of long-range order was confirmed, a feature typical of a nematic liquid crystal.
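
At bottom, a two-dimensional pair correlation function of this kind is a 2-D histogram of pair separations resolved into a component along a symmetry axis and a perpendicular distance. A minimal sketch (random coordinates, no liquid-crystal physics or normalization):

```python
import numpy as np

def cylindrical_distribution(coords, axis, z_bins, d_bins):
    """2-D histogram of pair separations: component z along `axis` and
    perpendicular distance d -- the raw ingredient of a CDF g(z, d)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    diffs = coords[:, None, :] - coords[None, :, :]        # all pair vectors
    iu = np.triu_indices(len(coords), k=1)                 # unique pairs
    pairs = diffs[iu]
    z = pairs @ axis                                       # along the axis
    d = np.linalg.norm(pairs - np.outer(z, axis), axis=1)  # perpendicular
    hist, _, _ = np.histogram2d(z, d, bins=[z_bins, d_bins])
    return hist

rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(50, 3))                  # toy "molecules"
hist = cylindrical_distribution(coords, axis=[0, 0, 1],
                                z_bins=np.linspace(-10, 10, 21),
                                d_bins=np.linspace(0, 15, 16))
```

In the paper's setting the axis would be the nematic director (laboratory frame) or the molecular long axis (molecular frame), and the histogram would be normalized by the ideal-gas pair density to give g(z, d).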

  10. Dynamics of visual feedback in a laboratory simulation of a penalty kick.

    PubMed

    Morya, Edgard; Ranvaud, Ronald; Pinheiro, Walter Machado

    2003-02-01

    Sport scientists have devoted relatively little attention to soccer penalty kicks, despite their decisive role in important competitions such as the World Cup. Two possible kicker strategies have been described: ignoring the goalkeeper action (open loop) or trying to react to the goalkeeper action (closed loop). We used a paradigm simulating a penalty kick in the laboratory to investigate the dynamics of the closed-loop strategy in these controlled conditions. The probability of correctly responding to the simulated goalkeeper motion as a function of time available followed a logistic curve. Kickers on average reached perfect performance only if the goalkeeper committed him or herself to one side about 400 ms before ball contact and showed chance performance if the goalkeeper motion occurred less than 150 ms before ball contact. Interestingly, coincidence judgement--another aspect of the laboratory responses--appeared to be affected for a much longer time (> 500 ms) than was needed to correctly determine laterality. The present study is meant as groundwork for experiments in more ecological conditions applicable to kickers and goalkeepers.
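
The logistic relationship reported above (near-perfect performance with ~400 ms available, chance below ~150 ms) can be written down directly. The midpoint and slope below are chosen to roughly match those figures, not fitted to the authors' data:

```python
import math

def p_correct(t_ms, t50=275.0, slope=35.0):
    """Logistic probability of responding to the correct side as a function
    of time available (ms); t50 and slope are illustrative values."""
    chance = 0.5   # two-alternative task: guessing gives 50%
    return chance + (1.0 - chance) / (1.0 + math.exp(-(t_ms - t50) / slope))

# Near-chance with 150 ms available, near-perfect with 400 ms.
p_150 = p_correct(150.0)
p_400 = p_correct(400.0)
```

Fitting t50 and slope per kicker would quantify exactly how late a goalkeeper can commit before the closed-loop strategy breaks down.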

  11. Development of a Rover Simulation to Assess Operational Proficiency Following Long Duration Spaceflights

    NASA Technical Reports Server (NTRS)

    DeDios, Y. E.; Dean, S. L.; [two author names garbled in the source record]; Moore, S. T.; Wood, S. J.

    2011-01-01

    Following long-duration space transits, adaptive changes in sensorimotor and cognitive function may impair the crew's ability to safely control pressurized rovers designed to explore the new environment. We describe a rover simulation developed to quantify post-flight decrements in operational proficiency following International Space Station expeditions. The rover simulation consists of a serial presentation of discrete tasks to be completed as quickly and accurately as possible. Each task consists of 1) perspective taking using a map that defines a docking target, 2) navigation toward the target around a Martian outpost, and 3) docking a side hatch of the rover to a visually guided target. The simulator utilizes a Stewart-type motion base (CKAS, Australia), a single-seat cabin with triple scene projection covering approximately 150° horizontal by 40° vertical, and a joystick controller. The software was implemented using Unity3D with the next-gen PhysX engine to tightly synchronize simulation and motion platform commands. Separate C# applications allow investigators to customize session sequences with different lighting and gravitational conditions, and then execute the tasks to be performed as well as record performance data. Preliminary tests resulted in a low incidence of motion sickness (<15% unable to complete the first session), with only negligible after effects and symptoms after familiarization sessions. Functionally relevant testing early post-flight will develop evidence regarding the limitations to early surface operations and what countermeasures are needed. This approach can be easily adapted to other vehicle designs to provide a platform to safely assess how sensorimotor and cognitive function impact manual control performance.

  12. Visual event-related potentials to biological motion stimuli in autism spectrum disorders

    PubMed Central

    Bletsch, Anke; Krick, Christoph; Siniatchkin, Michael; Jarczok, Tomasz A.; Freitag, Christine M.; Bender, Stephan

    2014-01-01

    Atypical visual processing of biological motion contributes to social impairments in autism spectrum disorders (ASD). However, the exact temporal sequence of deficits of cortical biological motion processing in ASD has not been studied to date. We used 64-channel electroencephalography to study event-related potentials associated with human motion perception in 17 children and adolescents with ASD and 21 typical controls. A spatio-temporal source analysis was performed to assess the brain structures involved in these processes. We expected altered activity already during early stimulus processing and reduced activity during subsequent biological motion specific processes in ASD. In response to both random and biological motion, the P100 amplitude was decreased, suggesting unspecific deficits in visual processing, and the occipito-temporal N200 showed atypical lateralization in ASD, suggesting altered hemispheric specialization. A slow positive deflection after 400 ms, reflecting top-down processes, and human motion-specific dipole activation differed slightly between groups, with reduced and more diffuse activation in the ASD group. The latter could be an indicator of a disrupted neuronal network for biological motion processing in ASD. Furthermore, early visual processing (P100) seems to be correlated to biological motion-specific activation. This emphasizes the relevance of early sensory processing for higher order processing deficits in ASD. PMID:23887808

  13. Non-Verbal IQ Is Correlated with Visual Field Advantages for Short Duration Coherent Motion Detection in Deaf Signers with Varied ASL Exposure and Etiologies of Deafness

    ERIC Educational Resources Information Center

    Samar, Vincent J.; Parasnis, Ila

    2007-01-01

    Studies have reported a right visual field (RVF) advantage for coherent motion detection by deaf and hearing signers but not non-signers. Yet two studies [Bosworth R. G., & Dobkins, K. R. (2002). Visual field asymmetries for motion processing in deaf and hearing signers. "Brain and Cognition," 49, 170-181; Samar, V. J., & Parasnis, I. (2005).…

  14. Orientation of selective effects of body tilt on visually induced perception of self-motion.

    PubMed

    Nakamura, S; Shimojo, S

    1998-10-01

    We examined the effect of body posture upon visually induced perception of self-motion (vection) at various angles of observer tilt. The experiment indicated that a tilted body posture enhanced the perceived strength of vertical vection, while there was no effect of body tilt on horizontal vection. This result suggests that there is an interaction between the effects of visual and vestibular information on the perception of self-motion.

  15. Re-examining overlap between tactile and visual motion responses within hMT+ and STS

    PubMed Central

    Jiang, Fang; Beauchamp, Michael S.; Fine, Ione

    2015-01-01

    Here we examine overlap between tactile and visual motion BOLD responses within the human MT+ complex. Although several studies have reported tactile responses overlapping with hMT+, many used group average analyses, leaving it unclear whether these responses were restricted to sub-regions of hMT+. Moreover, previous studies either employed a tactile task or passive stimulation, leaving it unclear whether or not tactile responses in hMT+ are simply the consequence of visual imagery. Here we carried out a replication of one of the classic papers finding tactile responses in hMT+ (Hagen et al. 2002). We mapped MT and MST in individual subjects using visual field localizers. We then examined responses to tactile motion on the arm, either presented passively or in the presence of a visual task performed at fixation designed to minimize visualization of the concurrent tactile stimulation. To our surprise, without a visual task, we found only weak tactile motion responses in MT (6% of voxels showing tactile responses) and MST (2% of voxels). With an unrelated visual task designed to withdraw attention from the tactile modality, responses in MST reduced to almost nothing (<1% of voxels). Consistent with previous results, we did observe tactile responses in STS regions superior and anterior to hMT+. Despite the lack of individual overlap, group averaged responses produced strong spurious overlap between tactile and visual motion responses within hMT+ that resembled those observed in previous studies. The weak nature of tactile responses in hMT+ (and their abolition by withdrawal of attention) suggests that hMT+ may not serve as a supramodal motion processing module. PMID:26123373

  16. Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning

    PubMed Central

    Larcombe, Stephanie J.; Kennard, Chris

    2017-01-01

    Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145–156, 2018. © 2017 Wiley Periodicals, Inc. PMID:28963815

  17. Slushy weightings for the optimal pilot model. [considering visual tracking task

    NASA Technical Reports Server (NTRS)

    Dillow, J. D.; Picha, D. G.; Anderson, R. O.

    1975-01-01

    A pilot model is described which accounts for the effect of motion cues in a well defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
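
The quadratic cost that carries the "slushy" weightings penalizes tracking error and control effort through weighting matrices Q and R. A toy evaluation of such a cost over a simulated run (the numbers and trajectories below are invented; the paper's weightings are tuned to the pilot's perception of the task):

```python
import numpy as np

def quadratic_cost(x, u, Q, R, dt):
    """Discrete approximation of J = integral of (x'Qx + u'Ru) dt for a
    state history x (T x n) and control history u (T x m)."""
    state_term = np.einsum('ti,ij,tj->t', x, Q, x)      # x'Qx at each step
    control_term = np.einsum('ti,ij,tj->t', u, R, u)    # u'Ru at each step
    return float(np.sum(state_term + control_term) * dt)

# Toy tracking run: one error state decaying to zero, one control input.
dt = 0.01
t = np.arange(0, 5, dt)
x = np.exp(-t)[:, None]            # tracking error
u = (-0.5 * np.exp(-t))[:, None]   # control effort
Q = np.array([[1.0]])              # error weighting ("slushy" weight)
R = np.array([[0.1]])              # control-effort weighting
J = quadratic_cost(x, u, Q, R, dt)
```

Raising the entries of Q relative to R expresses a pilot who cares more about tracking error than control effort; adjusting these weights is the mechanism the abstract describes for modelling what the pilot perceives as important.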

  18. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    ERIC Educational Resources Information Center

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  19. Anisotropies in the perceived spatial displacement of motion-defined contours: opposite biases in the upper-left and lower-right visual quadrants.

    PubMed

    Fan, Zhao; Harris, John

    2010-10-12

    In a recent study (Fan, Z., & Harris, J. (2008). Perceived spatial displacement of motion-defined contours in peripheral vision. Vision Research, 48(28), 2793-2804), we demonstrated that virtual contours defined by two regions of dots moving in opposite directions were displaced perceptually in the direction of motion of the dots in the more eccentric region when the contours were viewed in the right visual field. Here, we show that the magnitude and/or direction of these displacements varies in different quadrants of the visual field. When contours were presented in the lower visual field, the direction of perceived contour displacement was consistent with that when both contours were presented in the right visual field. However, this illusory motion-induced spatial displacement disappeared when both contours were presented in the upper visual field. Also, perceived contour displacement in the direction of the more eccentric dots was larger in the right than in the left visual field, perhaps because of a hemispheric asymmetry in attentional allocation. Quadrant-based analyses suggest that the pattern of results arises from opposite directions of perceived contour displacement in the upper-left and lower-right visual quadrants, which depend on the relative strengths of two effects: a greater sensitivity to centripetal motion, and an asymmetry in the allocation of spatial attention. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Visualizing complex hydrodynamic features

    NASA Astrophysics Data System (ADS)

    Kempf, Jill L.; Marshall, Robert E.; Yen, Chieh-Cheng

    1990-08-01

    The Lake Erie Forecasting System is a cooperative project by university, private and governmental institutions to provide continuous forecasting of three-dimensional structure within the lake. The forecasts will include water velocity and temperature distributions throughout the body of water, as well as water level and wind-wave distributions at the lake's surface. Many hydrodynamic features can be extracted from this data, including coastal jets, large-scale thermocline motion and zones of upwelling and downwelling. A visualization system is being developed that will aid in understanding these features and their interactions. Because of the wide variety of features, they cannot all be adequately represented by a single rendering technique. Particle tracing, surface rendering, and volumetric techniques are all necessary. This visualization effort is aimed towards creating a system that will provide meaningful forecasts for those using the lake for recreational and commercial purposes. For example, the fishing industry needs to know about large-scale thermocline motion in order to find the best fishing areas, and power plants need to know water intake temperatures. The visualization system must convey this information in a manner that is easily understood by these users. Scientists must also be able to use this system to verify their hydrodynamic simulation. The focus of the system, therefore, is to provide the information to serve these diverse interests, without overwhelming any single user with unnecessary data.

  1. Development of real time abdominal compression force monitoring and visual biofeedback system

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Ho; Kim, Siyong; Kim, Dong-Su; Kang, Seong-Hee; Cho, Min-Seok; Kim, Kyeong-Hyeon; Shin, Dong-Seok; Suh, Tae-Suk

    2018-03-01

    In this study, we developed and evaluated a system that could monitor abdominal compression force (ACF) in real time and provide a surrogate signal, even under abdominal compression. The system could also provide visual biofeedback (VBF). The real-time ACF monitoring system developed consists of an abdominal compression device, an ACF monitoring unit and a control system including an in-house ACF management program. We anticipated that ACF variation information caused by respiratory abdominal motion could be used as a respiratory surrogate signal. Four volunteers participated in this test to obtain correlation coefficients between ACF variation and tidal volumes. A simulation study with another group of six volunteers was performed to evaluate the feasibility of the proposed system. In the simulation, we investigated the reproducibility of the compression setup and proposed a further enhanced shallow breathing (ESB) technique using VBF by intentionally reducing the amplitude of the breathing range under abdominal compression. The correlation coefficient between the ACF variation caused by the respiratory abdominal motion and the tidal volume signal was evaluated for each volunteer, and R2 values ranged from 0.79 to 0.84. The ACF variation was similar to a respiratory pattern, and slight variations of ACF ranges were observed among sessions. An average ACF control rate (i.e., compliance) of about 73-77% over five trials was observed in all volunteer subjects except one (64%) when there was no VBF. The targeted ACF range was intentionally reduced to achieve ESB for VBF simulation. With VBF, in spite of the reduced target range, the overall ACF control rate improved by about 20% in all volunteers except one (4%), demonstrating the effectiveness of VBF. The developed monitoring system could help reduce the inter-fraction ACF setup error and the intra-fraction ACF variation. With the capability of providing a real-time surrogate signal and VBF under compression, it could improve the quality of respiratory tumor motion management in abdominal compression radiation therapy.
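
    A minimal sketch of the kind of correlation analysis reported above, computing the coefficient of determination between a breathing trace and a surrogate signal. The sinusoidal tidal-volume trace, sampling rate, gain, and noise level are all illustrative assumptions, not the study's data.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination (squared Pearson correlation)
    between two equally sampled signals."""
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Synthetic stand-ins (illustrative only): a sinusoidal tidal-volume
# trace and a scaled, noisy compression-force trace derived from it.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 600)                # 60 s sampled at 10 Hz
tidal = 0.5 * np.sin(2.0 * np.pi * 0.25 * t)   # ~15 breaths per minute
acf = 3.0 * tidal + rng.normal(0.0, 0.3, t.size)
r2 = r_squared(tidal, acf)
```

    A high r2 under these conditions is what justifies treating the ACF variation as a respiratory surrogate; in the study the measured values were 0.79 to 0.84.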

  2. Effects of visual motion consistent or inconsistent with gravity on postural sway.

    PubMed

    Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo

    2017-07-01

    Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravitational acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static, but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.

  3. A virtual simulator designed for collision prevention in proton therapy.

    PubMed

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho

    2015-10-01

    In proton therapy, collisions between the patient and nozzle potentially occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using treatment virtual simulation. Three-dimensional (3D) modeling of a gantry inner-floor, nozzle, and robotic-couch was performed using SolidWorks based on the manufacturer's machine data. To obtain patient body information, a 3D-scanner was utilized right before CT scanning. Using the acquired images, a 3D-image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined on the treatment-room coordinate system, resulting in a virtual simulator. The simulator simulated the motion of its components such as rotation and translation of the gantry, nozzle, and couch in real scale. A collision, if any, was examined both in static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when calculating volume locations. The event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D-shape of the phantom agreed with CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and clinical efficiency of proton therapy.
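
    The voxel-overlap test described above can be sketched as follows. The grid size, box-shaped "nozzle" and "couch" volumes, and function names are hypothetical simplifications of the software's actual SolidWorks-derived 3D models: a collision is flagged whenever any voxel is occupied by two components at once.

```python
import numpy as np

def voxelize_box(grid_shape, lo, hi):
    """Boolean occupancy grid with an axis-aligned box set to True.
    Stands in for the voxelized 3D model of one machine component."""
    occ = np.zeros(grid_shape, dtype=bool)
    occ[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True
    return occ

def collides(occ_a, occ_b):
    """Collision if any voxel is occupied by both components."""
    return bool(np.any(occ_a & occ_b))

shape = (50, 50, 50)
nozzle = voxelize_box(shape, (10, 10, 10), (20, 20, 20))
couch_far = voxelize_box(shape, (30, 30, 30), (40, 40, 40))   # clear
couch_near = voxelize_box(shape, (18, 18, 18), (28, 28, 28))  # overlaps
```

    In a dynamic mode, the same test would simply be re-run as each component's occupancy grid is transformed along its motion trajectory.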

  4. Highly immersive virtual reality laparoscopy simulation: development and future aspects.

    PubMed

    Huber, Tobias; Wunderling, Tom; Paschold, Markus; Lang, Hauke; Kneist, Werner; Hansen, Christian

    2018-02-01

    Virtual reality (VR) applications with head-mounted displays (HMDs) have had an impact on information and multimedia technologies. The current work aimed to describe the process of developing a highly immersive VR simulation for laparoscopic surgery. We combined a VR laparoscopy simulator (LapSim) and a VR-HMD to create a user-friendly VR simulation scenario. Continuous clinical feedback was an essential aspect of the development process. We created an artificial VR (AVR) scenario by integrating the simulator video output with VR game components of figures and equipment in an operating room. We also created a highly immersive VR surrounding (IVR) by integrating the simulator video output with a [Formula: see text] video of a standard laparoscopy scenario in the department's operating room. Clinical feedback led to optimization of the visualization, synchronization, and resolution of the virtual operating rooms (in both the IVR and the AVR). Preliminary testing results revealed that individuals experienced a high degree of exhilaration and presence, with rare events of motion sickness. The technical performance showed no significant difference compared to that achieved with the standard LapSim. Our results provided a proof of concept for the technical feasibility of a custom highly immersive VR-HMD setup. Future technical research is needed to improve the visualization, immersion, and capability of interacting within the virtual scenario.

  5. High-level, but not low-level, motion perception is impaired in patients with schizophrenia.

    PubMed

    Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia

    2013-01-01

    Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the high-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data from low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we applied a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task to patients with schizophrenia and healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are thought to be intact in schizophrenia. Patients with schizophrenia revealed a significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment in the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data restrict the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.

  6. Functional requirements for the man-vehicle systems research facility. [identifying and correcting human errors during flight simulation

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.

    1980-01-01

    The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.

  7. A proto-type design of a real-tissue phantom for the validation of deformation algorithms and 4D dose calculations

    NASA Astrophysics Data System (ADS)

    Szegedi, M.; Rassiah-Szegedi, P.; Fullerton, G.; Wang, B.; Salter, B.

    2010-07-01

    The purpose of this study is to design a real-tissue phantom for use in the validation of deformation algorithms. A phantom motion controller that runs sinusoidal and non-regular patient-based breathing pattern, via a piston, was applied to porcine liver tissue. It was regulated to simulate movement ranges similar to recorded implanted liver markers from patients. 4D CT was applied to analyze deformation. The suitability of various markers in the liver and the position reproducibility of markers and of reference points were studied. The similarity of marker motion pattern in the liver phantom and in real patients was evaluated. The viability of the phantom over time and its use with electro-magnetic tracking devices were also assessed. High contrast markers, such as carbon markers, implanted in the porcine liver produced less image artifacts on CT and were well visualized compared to metallic ones. The repositionability of markers was within a measurement accuracy of ±2 mm. Similar anatomical patient motions were reproducible up to elongations of 3 cm for a time period of at least 90 min. The phantom is compatible with electro-magnetic tracking devices and 4D CT. The phantom motion is reproducible and simulates realistic patient motion and deformation. The ability to carry out voxel-based tracking allows for the evaluation of deformation algorithms in a controlled environment with recorded patient traces. The phantom is compatible with all therapy devices clinically encountered in our department.

  8. Sunglasses with thick temples and frame constrict temporal visual field.

    PubMed

    Denion, Eric; Dugué, Audrey Emmanuelle; Augy, Sylvain; Coffin-Pichonnet, Sophie; Mouriaux, Frédéric

    2013-12-01

    Our aim was to compare the impact of two types of sunglasses on visual field and glare: one ("thick sunglasses") with a thick plastic frame and wide temples and one ("thin sunglasses") with a thin metal frame and thin temples. Using the Goldmann perimeter, visual field surface areas (cm²) were calculated as projections on a 30-cm virtual cupola. A V4 test object was used, from seen to unseen, in 15 healthy volunteers in the primary position of gaze ("base visual field"), then allowing eye motion ("eye motion visual field") without glasses, then with "thin sunglasses," followed by "thick sunglasses." Visual field surface area differences greater than the 14% reproducibility error of the method and having a p < 0.05 were considered significant. A glare test was done using a surgical lighting system pointed at the eye(s) at different incidence angles. No significant "base visual field" or "eye motion visual field" surface area variations were noted when comparing tests done without glasses and with the "thin sunglasses." In contrast, a 22% "eye motion visual field" surface area decrease (p < 0.001) was noted when comparing tests done without glasses and with "thick sunglasses." This decrease was most severe in the temporal quadrant (-33%; p < 0.001). All subjects reported less lateral glare with the "thick sunglasses" than with the "thin sunglasses" (p < 0.001). The better protection from lateral glare offered by "thick sunglasses" is offset by the much poorer ability to use lateral space exploration; this results in a loss of most, if not all, of the additional visual field gained through eye motion.

  9. Applications of Phase-Based Motion Processing

    NASA Technical Reports Server (NTRS)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information in determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires use of the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.
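
    The core idea of reading motion from Fourier phase changes can be reduced to a one-dimensional sketch (an assumption for illustration: the complex steerable pyramid used in the work above first decomposes the image by scale and orientation, whereas the toy signal and function below operate on a raw 1D frame): a translation of the scene appears as a phase shift at each spatial frequency.

```python
import numpy as np

def phase_shift_1d(frame0, frame1):
    """Estimate a global translation (in samples) from the phase
    change of the dominant Fourier component between two frames."""
    F0, F1 = np.fft.rfft(frame0), np.fft.rfft(frame1)
    k = np.argmax(np.abs(F0[1:])) + 1          # dominant nonzero frequency
    dphi = np.angle(F1[k]) - np.angle(F0[k])   # phase change at that frequency
    n = frame0.size
    return -dphi * n / (2.0 * np.pi * k)

n = 256
x = np.arange(n)
frame0 = np.sin(2.0 * np.pi * 3 * x / n)   # three cycles across the frame
frame1 = np.roll(frame0, 5)                # scene shifted 5 samples right
shift = phase_shift_1d(frame0, frame1)
```

    Visualizing dphi per frequency band and position, rather than magnifying it, is the display strategy the abstract describes.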

  10. Postural and Spatial Orientation Driven by Virtual Reality

    PubMed Central

    Keshner, Emily A.; Kenyon, Robert V.

    2009-01-01

    Orientation in space is a perceptual variable intimately related to postural orientation that relies on visual and vestibular signals to correctly identify our position relative to vertical. We have combined a virtual environment with motion of a posture platform to produce visual-vestibular conditions that allow us to explore how motion of the visual environment may affect perception of vertical and, consequently, affect postural stabilizing responses. In order to involve a higher level perceptual process, we needed to create a visual environment that was immersive. We did this by developing visual scenes that possess contextual information using color, texture, and 3-dimensional structures. Update latency of the visual scene was close to physiological latencies of the vestibulo-ocular reflex. Using this system we found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide field of view visual motion and they adapt their postural orientation to the parameters of the visual motion. Balance training within our environment elicited measurable rehabilitation outcomes. Thus we believe that virtual environments can serve as a clinical tool for evaluation and training of movement in situations that closely reflect conditions found in the physical world. PMID:19592796

  11. Ageing vision and falls: a review.

    PubMed

    Saftari, Liana Nafisa; Kwon, Oh-Sang

    2018-04-23

    Falls are the leading cause of accidental injury and death among older adults. One in three adults over the age of 65 years falls annually. As the size of the elderly population increases, falls become a major concern for public health, and there is a pressing need to understand the causes of falls thoroughly. While it is well documented that visual functions such as visual acuity, contrast sensitivity, and stereo acuity are correlated with fall risk, little attention has been paid to the relationship between falls and the ability of the visual system to perceive motion in the environment. The omission of visual motion perception in the literature is a critical gap because it is an essential function in maintaining balance. In the present article, we first review existing studies regarding visual risk factors for falls and the effect of ageing vision on falls. We then present a group of phenomena, such as vection and sensory reweighting, that provide information on how visual motion signals are used to maintain balance. We suggest that the current list of visual risk factors for falls should be elaborated by taking into account the relationship between visual motion perception and balance control.

  12. Looking at Op Art from a computational viewpoint.

    PubMed

    Zanker, Johannes M

    2004-01-01

    Arts history tells an exciting story about repeated attempts to represent features that are crucial for the understanding of our environment and which, at the same time, go beyond the inherently two-dimensional nature of a flat painting surface: depth and motion. In the twentieth century, Op artists such as Bridget Riley began to experiment with simple black and white patterns that do not represent motion in an artistic way but actually create vivid dynamic illusions in static pictures. The cause of motion illusions in such paintings is still a matter of debate. The role of involuntary eye movements in this phenomenon is studied here with a computational approach. The possible consequences of shifting the retinal image of synthetic wave gratings, dubbed 'riloids', were analysed by a two-dimensional array of motion detectors (2DMD model), which generates response maps representing the spatial distribution of motion signals generated by such a stimulus. For a two-frame sequence reflecting a saccadic displacement, these motion signal maps contain extended patches in which local directions change only little. These directions, however, do not usually precisely correspond to the direction of pattern displacement that can be expected from the geometry of the curved gratings, an instance of the so-called 'aperture problem'. The patchy structure of the simulated motion detector response to the displacement of riloids resembles the motion illusion, which is not perceived as a coherent shift of the whole pattern but as a wobbling and jazzing of ill-defined regions. Although other explanations are not excluded, this might support the view that the puzzle of Op Art motion illusions could have an almost trivial solution in terms of small involuntary eye movements leading to image shifts that are picked up by well-known motion detectors in the early visual system. This view can have further consequences for our understanding of how the human visual system usually compensates for eye movements, in order to let us perceive a stable world despite continuous image shifts generated by gaze instability.
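
    The correlation-type motion detectors behind a model like the 2DMD can be sketched in one dimension (a simplification made for illustration: the model itself applies the same principle over a two-dimensional detector array, and the grating and two-sample shift below are assumed toy inputs): each detector correlates the earlier frame at one point with the later frame at its neighbour, minus the mirrored term, so the sign of the output encodes direction.

```python
import numpy as np

def reichardt_1d(frame0, frame1):
    """Array of elementary correlation-type motion detectors along a
    row. Positive output signals rightward motion between frames."""
    rightward = frame0[:-1] * frame1[1:]
    leftward = frame1[:-1] * frame0[1:]
    return rightward - leftward

n = 200
x = 2.0 * np.pi * 3 * np.arange(n) / n   # three full cycles (periodic)
frame0 = np.sin(x)
frame1 = np.roll(frame0, 2)              # grating shifted 2 samples right
response = reichardt_1d(frame0, frame1)
```

    Averaged over the array, the response is positive for a rightward shift and flips sign for a leftward one; mapping such local signal directions over an image is what produces the patchy response maps described above.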

  13. New insights into the role of motion and form vision in neurodevelopmental disorders.

    PubMed

    Johnston, Richard; Pitchford, Nicola J; Roach, Neil W; Ledgeway, Timothy

    2017-12-01

    A selective deficit in processing the global (overall) motion, but not form, of spatially extensive objects in the visual scene is frequently associated with several neurodevelopmental disorders, including preterm birth. Existing theories proposed to explain the origin of this visual impairment are, however, challenged by recent research. In this review, we explore alternative hypotheses for why deficits in the processing of global motion, relative to global form, might arise. We describe recent evidence that has utilised novel tasks of global motion and global form to elucidate the underlying nature of the visual deficit reported in different neurodevelopmental disorders. We also examine the role of IQ and how the sex of an individual can influence performance on these tasks, as these are factors that are associated with performance on global motion tasks, but have not been systematically controlled for in previous studies exploring visual processing in clinical populations. Finally, we suggest that a new theoretical framework is needed for visual processing in neurodevelopmental disorders and present recommendations for future research. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. The notion of the motion: the neurocognition of motion lines in visual narratives.

    PubMed

    Cohn, Neil; Maher, Stephen

    2015-03-19

    Motion lines appear ubiquitously in graphic representation to depict the path of a moving object, most popularly in comics. Some researchers have argued that these graphic signs directly tie to the "streaks" appearing in the visual system when a viewer tracks an object (Burr, 2000), despite the fact that previous studies have been limited to offline measurements. Here, we directly examine the cognition of motion lines by comparing images in comic strips that depicted normal motion lines with those that either had no lines or anomalous, reversed lines. In Experiment 1, shorter viewing times appeared to images with normal lines than those with no lines, which were shorter than those with anomalous lines. In Experiment 2, measurements of event-related potentials (ERPs) showed that, compared to normal lines, panels with no lines elicited a posterior positivity that was distinct from the frontal positivity evoked by anomalous lines. These results suggested that motion lines aid in the comprehension of depicted events. LORETA source localization implicated greater activation of visual and language areas when understanding was made more difficult by anomalous lines. Furthermore, in both experiments, participants' experience reading comics modulated these effects, suggesting motion lines are not tied to aspects of the visual system, but rather are conventionalized parts of the "vocabulary" of the visual language of comics. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Development of Visual Motion Perception for Prospective Control: Brain and Behavioral Studies in Infants

    PubMed Central

    Agyei, Seth B.; van der Weel, F. R. (Ruud); van der Meer, Audrey L. H.

    2016-01-01

    During infancy, smart perceptual mechanisms develop allowing infants to judge time-space motion dynamics more efficiently with age and locomotor experience. This emerging capacity may be vital to enable preparedness for upcoming events and to be able to navigate in a changing environment. Little is known about brain changes that support the development of prospective control and about processes, such as preterm birth, that may compromise it. As a function of perception of visual motion, this paper will describe behavioral and brain studies with young infants investigating the development of visual perception for prospective control. By means of the three visual motion paradigms of occlusion, looming, and optic flow, our research shows the importance of including behavioral data when studying the neural correlates of prospective control. PMID:26903908

  17. Visual Neuroscience: Unique Neural System for Flight Stabilization in Hummingbirds.

    PubMed

    Ibbotson, M R

    2017-01-23

    The pretectal visual motion processing area in the hummingbird brain is unlike that in other birds: instead of emphasizing detection of horizontal movements, it codes for motion in all directions through 360°, possibly offering precise visual stability control during hovering. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Multisensory Self-Motion Compensation During Object Trajectory Judgments

    PubMed Central

    Dokka, Kalpana; MacNeilage, Paul R.; DeAngelis, Gregory C.; Angelaki, Dora E.

    2015-01-01

    Judging object trajectory during self-motion is a fundamental ability for mobile organisms interacting with their environment. This fundamental ability requires the nervous system to compensate for the visual consequences of self-motion in order to make accurate judgments, but the mechanisms of this compensation are poorly understood. We comprehensively examined both the accuracy and precision of observers' ability to judge object trajectory in the world when self-motion was defined by vestibular, visual, or combined visual–vestibular cues. Without decision feedback, subjects demonstrated no compensation for self-motion that was defined solely by vestibular cues, partial compensation (47%) for visually defined self-motion, and significantly greater compensation (58%) during combined visual–vestibular self-motion. With decision feedback, subjects learned to accurately judge object trajectory in the world, and this generalized to novel self-motion speeds. Across conditions, greater compensation for self-motion was associated with decreased precision of object trajectory judgments, indicating that self-motion compensation comes at the cost of reduced discriminability. Our findings suggest that the brain can flexibly represent object trajectory relative to either the observer or the world, but a world-centered representation comes at the cost of decreased precision due to the inclusion of noisy self-motion signals. PMID:24062317

  19. Pure visual imagery as a potential approach to achieve three classes of control for implementation of BCI in non-motor disorders

    NASA Astrophysics Data System (ADS)

    Sousa, Teresa; Amaral, Carlos; Andrade, João; Pires, Gabriel; Nunes, Urbano J.; Castelo-Branco, Miguel

    2017-08-01

    Objective. The achievement of multiple instances of control with the same type of mental strategy represents a way to improve flexibility of brain-computer interface (BCI) systems. Here we test the hypothesis that pure visual motion imagery of an external actuator can be used as a tool to achieve three classes of electroencephalographic (EEG) based control, which might be useful in attention disorders. Approach. We hypothesize that different numbers of imagined motion alternations lead to distinctive signals, as predicted by distinct motion patterns. Accordingly, a distinct number of alternating sensory/perceptual signals would lead to distinct neural responses, as previously demonstrated using functional magnetic resonance imaging (fMRI). We anticipate that differential modulations should also be observed in the EEG domain. EEG recordings were obtained from twelve participants using three imagery tasks: imagery of a static dot, imagery of a dot with two opposing motions in the vertical axis (two motion directions), and imagery of a dot with four opposing motions in vertical or horizontal axes (four directions). The data were analysed offline. Main results. An increase of alpha-band power was found in frontal and central channels as a result of visual motion imagery tasks when compared with static dot imagery, in contrast with the expected posterior alpha decreases found during simple visual stimulation. The successful classification and discrimination between the three imagery tasks confirmed that three different classes of control based on visual motion imagery can be achieved. The classification approach was based on a support vector machine (SVM) and on the alpha-band relative spectral power of a small group of six frontal and central channels. Patterns of alpha activity, as captured by single-trial SVM, closely reflected imagery properties, in particular the number of imagined motion alternations. Significance. We found a new mental task based on visual motion imagery with potential for the implementation of multiclass (3) BCIs. Our results are consistent with the notion that frontal alpha synchronization is related to high internal processing demands, changing with the number of alternation levels during imagery. Together, these findings suggest the feasibility of pure visual motion imagery tasks as a strategy to achieve multiclass control systems with potential for BCI and, in particular, neurofeedback applications in non-motor (attentional) disorders.
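    A rough, hypothetical sketch of the classification pipeline this record describes: alpha-band (8–12 Hz) relative spectral power from six channels fed to a linear SVM. The six-channel count and SVM choice come from the abstract; the sampling rate, trial structure, and synthetic signal model below are assumptions for illustration only:

    ```python
    # Illustrative three-class pipeline: relative alpha power per channel,
    # then a linear SVM. Classes here differ only in synthetic alpha amplitude.
    import numpy as np
    from sklearn.svm import SVC

    RNG = np.random.default_rng(0)
    FS = 250  # sampling rate in Hz (assumed)

    def alpha_relative_power(trial):
        """Relative alpha power per channel for one (channels, samples) trial."""
        freqs = np.fft.rfftfreq(trial.shape[1], d=1 / FS)
        psd = np.abs(np.fft.rfft(trial, axis=1)) ** 2
        alpha = psd[:, (freqs >= 8) & (freqs <= 12)].sum(axis=1)
        total = psd[:, (freqs >= 1) & (freqs <= 40)].sum(axis=1)
        return alpha / total

    def synth_trial(alpha_amp):
        """Synthetic 6-channel, 2 s trial: white noise plus a 10 Hz rhythm."""
        t = np.arange(2 * FS) / FS
        noise = RNG.normal(0.0, 1.0, (6, t.size))
        return noise + alpha_amp * np.sin(2 * np.pi * 10 * t)

    # Three imagery classes, 30 trials each, with increasing alpha amplitude.
    X, y = [], []
    for label, amp in enumerate([0.5, 1.5, 3.0]):
        for _ in range(30):
            X.append(alpha_relative_power(synth_trial(amp)))
            y.append(label)
    X, y = np.array(X), np.array(y)

    clf = SVC(kernel="linear").fit(X[::2], y[::2])  # train on alternate trials
    accuracy = clf.score(X[1::2], y[1::2])          # evaluate on the rest
    ```

    Because the classes are driven only by alpha amplitude, the six relative-power features separate them cleanly; real single-trial EEG is far noisier, which is why the paper restricts the feature set to a small frontal/central channel group.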

  20. Visualization of Kepler’s laws of planetary motion

    NASA Astrophysics Data System (ADS)

    Lu, Meishu; Su, Jun; Wang, Weiguo; Lu, Jianlong

    2017-03-01

    In this article, we use a 3D printer to produce a surface that approximates a gravitational potential well, allowing Kepler’s laws of planetary motion to be demonstrated and investigated through the motion of a small ball rolling on the surface. This novel experimental method makes Kepler’s laws visible and will help improve both the manipulative skills of middle school students and the accessibility of classroom instruction.
