Sample records for visually guided movements

  1. Correspondence of presaccadic activity in the monkey primary visual cortex with saccadic eye movements

    PubMed Central

    Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A. F.

    2004-01-01

    We continuously scan the visual world via rapid or saccadic eye movements. Such eye movements are guided by visual information, and thus the oculomotor structures that determine when and where to look need visual information to control the eye movements. To know whether visual areas contain activity that may contribute to the control of eye movements, we recorded neural responses in the visual cortex of monkeys engaged in a delayed figure-ground detection task and analyzed the activity during the period of oculomotor preparation. We show that ≈100 ms before the onset of visually guided and memory-guided saccades, neural activity in V1 becomes stronger, with the strongest presaccadic responses found at the location of the saccade target. In addition, for memory-guided saccades the strength of presaccadic activity correlates with the onset of the saccade. These findings indicate that the primary visual cortex contains saccade-related responses and participates in visually guided oculomotor behavior. PMID:14970334

  2. Rhythmic arm movements are less affected than discrete ones after a stroke.

    PubMed

    Leconte, Patricia; Orban de Xivry, Jean-Jacques; Stoquart, Gaëtan; Lejeune, Thierry; Ronsse, Renaud

    2016-06-01

    Recent reports indicate that rhythmic and discrete upper-limb movements are two different motor primitives that recruit, at least partially, distinct neural circuitries. In particular, rhythmic movements recruit a smaller cortical network than discrete movements. The goal of this paper is to compare the levels of disability in performing rhythmic and discrete movements after a stroke. More precisely, we tested the hypothesis that rhythmic movements should be less affected than discrete ones, because they recruit neural circuitries that are less likely to be damaged by the stroke. Eleven stroke patients and eleven age-matched control subjects performed discrete and rhythmic movements using an end-effector robot (REAplan). The rhythmic movement condition was performed with and without visual targets to further decrease cortical recruitment. Movement kinematics were analyzed through specific metrics, capturing the degree of smoothness and harmonicity. We reported three main observations: (1) the movement smoothness of the paretic arm was more severely degraded for discrete movements than rhythmic movements; (2) most of the patients performed rhythmic movements with a lower harmonicity than controls; and (3) visually guided rhythmic movements were more altered than non-visually guided rhythmic movements. These results suggest a hierarchy in the levels of impairment: Discrete movements are more affected than rhythmic ones, which are more affected if they are visually guided. These results are a new illustration that discrete and rhythmic movements are two fundamental primitives in upper-limb movements. Moreover, this hierarchy of impairment opens new post-stroke rehabilitation perspectives.

  3. A Review on Eye Movement Studies in Childhood and Adolescent Psychiatry

    ERIC Educational Resources Information Center

    Rommelse, Nanda N. J.; Van der Stigchel, Stefan; Sergeant, Joseph A.

    2008-01-01

    The neural substrates of eye movement measures are largely known. Therefore, measurement of eye movements in psychiatric disorders may provide insight into the underlying neuropathology of these disorders. Visually guided saccades, antisaccades, memory guided saccades, and smooth pursuit eye movements will be reviewed in various childhood…

  4. Stimulation of the substantia nigra influences the specification of memory-guided saccades

    PubMed Central

    Mahamed, Safraaz; Garrison, Tiffany J.; Shires, Joel

    2013-01-01

    In the absence of sensory information, we rely on past experience or memories to guide our actions. Because previous experimental and clinical reports implicate basal ganglia nuclei in the generation of movement in the absence of sensory stimuli, we ask here whether one output nucleus of the basal ganglia, the substantia nigra pars reticulata (nigra), influences the specification of an eye movement in the absence of sensory information to guide the movement. We manipulated the level of activity of neurons in the nigra by introducing electrical stimulation to the nigra at different time intervals while monkeys made saccades to different locations in two conditions: one in which the target location remained visible and a second in which the target location appeared only briefly, requiring information stored in memory to specify the movement. Electrical manipulation of the nigra occurring during the delay period of the task, when information about the target was maintained in memory, altered the direction and the occurrence of subsequent saccades. Stimulation during other intervals of the memory task or during the delay period of the visually guided saccade task had less effect on eye movements. On stimulated trials, and only when the visual stimulus was absent, monkeys occasionally (∼20% of the time) failed to make saccades. When monkeys made saccades in the absence of a visual stimulus, stimulation of the nigra resulted in a rotation of the endpoints ipsilaterally (∼2°) and increased the reaction time of contralaterally directed saccades. When the visual stimulus was present, stimulation of the nigra resulted in no significant rotation and decreased the reaction time of contralaterally directed saccades slightly. Based on these measurements, stimulation during the delay period of the memory-guided saccade task influenced the metrics of saccades much more than did stimulation during the same period of the visually guided saccade task. 
Because these effects occurred with manipulation of nigral activity well before the initiation of saccades, and in trials in which the visual stimulus was absent, we conclude that information from the basal ganglia influences the specification of an action as it evolves, primarily during performance of memory-guided saccades. When visual information is available to guide the specification of the saccade, as occurs during visually guided saccades, basal ganglia information is less influential. PMID:24259551

  5. Memory-guided reaching in a patient with visual hemiagnosia.

    PubMed

    Cornelsen, Sonja; Rennig, Johannes; Himmelbach, Marc

    2016-06-01

    The two-visual-systems hypothesis (TVSH) postulates that memory-guided movements rely on intact functions of the ventral stream. The ventral stream's particular importance for memory-guided actions was initially inferred from behavioral dissociations in the well-known patient DF. Despite rather accurate reaching and grasping movements to visible targets, she demonstrated grossly impaired memory-guided grasping as well as impaired memory-guided reaching. These dissociations were later complemented by apparently reversed dissociations in patients with dorsal damage and optic ataxia. However, grasping studies in DF and optic ataxia patients differed with respect to the retinotopic position of target objects, questioning the interpretation of the respective findings as a double dissociation. In contrast, the findings for reaching errors in both types of patients came from similar peripheral target presentations. However, new data on brain structural changes and visuomotor deficits in DF also questioned the validity of a double dissociation in reaching. A severe visuospatial short-term memory deficit in DF further questioned the specificity of her memory-guided reaching deficit. Therefore, we compared movement accuracy in visually-guided and memory-guided reaching in a new patient (HWS) who suffered confined unilateral damage to the ventral visual system due to stroke. Our results indeed support previous descriptions of memory-guided movements' inaccuracies in DF. Furthermore, our data suggest that recently discovered optic ataxia-like misreaching in DF is most likely caused by her parieto-occipital and not by her ventral stream damage. Finally, multiple visuospatial memory measurements in HWS suggest that inaccuracies in memory-guided reaching tasks in patients with ventral damage cannot be explained by visuospatial short-term memory or perceptual deficits, but by a specific deficit in visuomotor processing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Visually Guided Control of Movement

    NASA Technical Reports Server (NTRS)

    Johnson, Walter W. (Editor); Kaiser, Mary K. (Editor)

    1991-01-01

    The papers given at an intensive, three-week workshop on visually guided control of movement are presented. The participants were researchers from academia, industry, and government, with backgrounds in visual perception, control theory, and rotorcraft operations. The papers included invited lectures and preliminary reports of research initiated during the workshop. Three major topics are addressed: extraction of environmental structure from motion; perception and control of self motion; and spatial orientation. Each topic is considered from both theoretical and applied perspectives. Implications for control and display are suggested.

  7. Scene perception and the visual control of travel direction in navigating wood ants

    PubMed Central

    Collett, Thomas S.; Lent, David D.; Graham, Paul

    2014-01-01

    This review reflects a few of Mike Land's many and varied contributions to visual science. In it, we show for wood ants, as Mike has done for a variety of animals, including readers of this piece, what can be learnt from a detailed analysis of an animal's visually guided eye, head or body movements. In the case of wood ants, close examination of their body movements, as they follow visually guided routes, is starting to reveal how they perceive and respond to their visual world and negotiate a path within it. We describe first some of the mechanisms that underlie the visual control of their paths, emphasizing that vision is not the ant's only sense. In the second part, we discuss how remembered local shape-dependent and global shape-independent features of a visual scene may interact in guiding the ant's path. PMID:24395962

  8. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  9. Impairments in Tactile Search Following Superior Parietal Damage

    ERIC Educational Resources Information Center

    Skakoon-Sparling, Shayna P.; Vasquez, Brandon P.; Hano, Kate; Danckert, James

    2011-01-01

    The superior parietal cortex is critical for the control of visually guided actions. Research suggests that visual stimuli relevant to actions are preferentially processed when they are in peripersonal space. One recent study demonstrated that visually guided movements towards the body were more impaired in a patient with damage to superior…

  10. Evidence from Visuomotor Adaptation for Two Partially Independent Visuomotor Systems

    ERIC Educational Resources Information Center

    Thaler, Lore; Todd, James T.

    2010-01-01

    Visual information can specify spatial layout with respect to the observer (egocentric) or with respect to an external frame of reference (allocentric). People can use both of these types of visual spatial information to guide their hands. The question arises if movements based on egocentric and movements based on allocentric visual information…

  11. Tracking with the mind's eye

    NASA Technical Reports Server (NTRS)

    Krauzlis, R. J.; Stone, L. S.

    1999-01-01

    The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.

  12. Move with Me: A Parents' Guide to Movement Development for Visually Impaired Babies.

    ERIC Educational Resources Information Center

    Blind Childrens Center, Los Angeles, CA.

    This booklet presents suggestions for parents to promote their visually impaired infant's motor development. It is pointed out that babies with serious visual loss often prefer their world to be constant and familiar and may resist change (including change in position); therefore, it is important that a wide range of movement activities be…

  13. Visuomotor Map Determines How Visually Guided Reaching Movements are Corrected Within and Across Trials

    PubMed Central

    Hirashima, Masaya

    2016-01-01

    When a visually guided reaching movement is unexpectedly perturbed, it is implicitly corrected in two ways: immediately after the perturbation by feedback control (online correction) and in the next movement by adjusting feedforward motor commands (offline correction or motor adaptation). Although recent studies have revealed a close relationship between feedback and feedforward controls, the nature of this relationship is not yet fully understood. Here, we show that both implicit online and offline movement corrections utilize the same visuomotor map for feedforward movement control that transforms the spatial location of visual objects into appropriate motor commands. First, we artificially distorted the visuomotor map by applying opposite visual rotations to the cursor representing the hand position while human participants reached for two different targets. This procedure implicitly altered the visuomotor map so that changes in the movement direction to the target location were more insensitive or more sensitive. Then, we examined how such visuomotor map distortion influenced online movement correction by suddenly changing the target location. The magnitude of online movement correction was altered according to the shape of the visuomotor map. We also examined offline movement correction; the aftereffect induced by visual rotation in the previous trial was modulated according to the shape of the visuomotor map. These results highlighted the importance of the visuomotor map as a foundation for implicit motor control mechanisms and the intimate relationship between feedforward control, feedback control, and motor adaptation. PMID:27275006

  14. Visuomotor Map Determines How Visually Guided Reaching Movements are Corrected Within and Across Trials.

    PubMed

    Hayashi, Takuji; Yokoi, Atsushi; Hirashima, Masaya; Nozaki, Daichi

    2016-01-01

    When a visually guided reaching movement is unexpectedly perturbed, it is implicitly corrected in two ways: immediately after the perturbation by feedback control (online correction) and in the next movement by adjusting feedforward motor commands (offline correction or motor adaptation). Although recent studies have revealed a close relationship between feedback and feedforward controls, the nature of this relationship is not yet fully understood. Here, we show that both implicit online and offline movement corrections utilize the same visuomotor map for feedforward movement control that transforms the spatial location of visual objects into appropriate motor commands. First, we artificially distorted the visuomotor map by applying opposite visual rotations to the cursor representing the hand position while human participants reached for two different targets. This procedure implicitly altered the visuomotor map so that changes in the movement direction to the target location were more insensitive or more sensitive. Then, we examined how such visuomotor map distortion influenced online movement correction by suddenly changing the target location. The magnitude of online movement correction was altered according to the shape of the visuomotor map. We also examined offline movement correction; the aftereffect induced by visual rotation in the previous trial was modulated according to the shape of the visuomotor map. These results highlighted the importance of the visuomotor map as a foundation for implicit motor control mechanisms and the intimate relationship between feedforward control, feedback control, and motor adaptation.

  15. Impaired visually guided weight-shifting ability in children with cerebral palsy.

    PubMed

    Ballaz, Laurent; Robert, Maxime; Parent, Audrey; Prince, François; Lemay, Martin

    2014-09-01

    The ability to control voluntary weight shifting is crucial in many functional tasks. To our knowledge, weight shifting ability in response to a visual stimulus has never been evaluated in children with cerebral palsy (CP). The aim of the study was (1) to propose a new method to assess visually guided medio-lateral (M/L) weight shifting ability and (2) to compare weight-shifting ability in children with CP and typically developing (TD) children. Ten children with spastic diplegic CP (Gross Motor Function Classification System level I and II; age 7-12 years) and 10 TD age-matched children were tested. Participants played with the skiing game on the Wii Fit game console. Center of pressure (COP) displacements, trunk and lower-limb movements were recorded during the last virtual slalom. Maximal isometric lower limb strength and postural control during quiet standing were also assessed. Lower-limb muscle strength was reduced in children with CP compared to TD children and postural control during quiet standing was impaired in children with CP. As expected, the skiing game mainly resulted in M/L COP displacements. Children with CP showed lower M/L COP range and velocity as compared to TD children but larger trunk movements. Trunk and lower extremity movements were less in phase in children with CP compared to TD children. Commercially available active video games can be used to assess visually guided weight shifting ability. Children with spastic diplegic CP showed impaired visually guided weight shifting which can be explained by non-optimal coordination of postural movement and reduced muscular strength. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Animal Preparations to Assess Neurophysiological Effects of Bio-Dynamic Environments.

    DTIC Science & Technology

    1980-07-17

    deprivation in preventing the acquisition of visually-guided behaviors. The next study examined acquisition of visually-guided behaviors in six animals...Maffei, L. and Bisti, S. Binocular interaction in strabismic kittens deprived of vision. Science, 191, 579-580, 1976. Matin, L. A possible hybrid...function in cat visual cortex following prolonged deprivation. Exp. Brain Res., 25 (1976) 139-156. Hein, A. Visually controlled components of movement

  17. Dynamic modulation of ocular orientation during visually guided saccades and smooth-pursuit eye movements

    NASA Technical Reports Server (NTRS)

    Hess, Bernhard J M.; Angelaki, Dora E.

    2003-01-01

    Rotational disturbances of the head about an off-vertical yaw axis induce a complex vestibuloocular reflex pattern that reflects the brain's estimate of head angular velocity as well as its estimate of instantaneous head orientation (at a reduced scale) in space coordinates. We show that semicircular canal and otolith inputs modulate torsional and, to a certain extent, also vertical ocular orientation of visually guided saccades and smooth-pursuit eye movements in a similar manner as during off-vertical axis rotations in complete darkness. It is suggested that this graviceptive control of eye orientation facilitates rapid visual spatial orientation during motion.

  18. Predictors of Verb-Mediated Anticipatory Eye Movements in the Visual World

    ERIC Educational Resources Information Center

    Hintz, Florian; Meyer, Antje S.; Huettig, Falk

    2017-01-01

    Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we…

  19. Short-Term Plasticity of the Visuomotor Map during Grasping Movements in Humans

    ERIC Educational Resources Information Center

    Safstrom, Daniel; Edin, Benoni B.

    2005-01-01

    During visually guided grasping movements, visual information is transformed into motor commands. This transformation is known as the "visuomotor map." To investigate limitations in the short-term plasticity of the visuomotor map in normal humans, we studied the maximum grip aperture (MGA) during the reaching phase while subjects grasped objects…

  20. The consummatory origins of visually guided reaching in human infants: a dynamic integration of whole-body and upper-limb movements.

    PubMed

    Foroud, Afra; Whishaw, Ian Q

    2012-06-01

    Reaching-to-eat (skilled reaching) is a natural behaviour that involves reaching for, grasping and withdrawing a target to be placed into the mouth for eating. It is an action performed daily by adults and is among the first complex behaviours to develop in infants. During development, visually guided reaching becomes increasingly refined to the point that grasping of small objects with precision grips of the digits occurs at about one year of age. Integration of the hand, upper limbs, and whole body is required for successful reaching, but the ontogeny of this integration has not been described. The present longitudinal study used Laban Movement Analysis, a behavioural descriptive method, to investigate the developmental progression of the use and integration of axial, proximal, and distal movements performed during visually guided reaching. Four infants (from 7 to 40 weeks of age) were presented with graspable objects (toys or food items). The first prereaching stage was associated with activation of mouth, limb, and hand movements to a visually presented target. Next, reaching attempts consisted first of advancing the head with an opening mouth, and then of advancing the head, trunk, and opening mouth together. Eventually, the axial movements gave way to the refined action of one upper limb supported by axial adjustments. These findings are discussed in relation to the biological objective of reaching, the evolutionary origins of reaching, and the decomposition of reaching after neurological injury. Copyright © 2012 Elsevier B.V. All rights reserved.

  21. Retinotopic memory is more precise than spatiotopic memory.

    PubMed

    Golomb, Julie D; Kanwisher, Nancy

    2012-01-31

    Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.

  22. The Role of the Caudal Superior Parietal Lobule in Updating Hand Location in Peripheral Vision: Further Evidence from Optic Ataxia

    PubMed Central

    Granek, Joshua A.; Pisella, Laure; Blangero, Annabelle; Rossetti, Yves; Sergio, Lauren E.

    2012-01-01

    Patients with optic ataxia (OA), who are missing the caudal portion of their superior parietal lobule (SPL), have difficulty performing visually-guided reaches towards extra-foveal targets. Such gaze and hand decoupling also occurs in commonly performed non-standard visuomotor transformations such as the use of a computer mouse. In this study, we tested two unilateral OA patients under three conditions: (1) a change in the physical location of the visual stimulus relative to the plane of the limb movement; (2) a cue signaling a required limb movement 180° opposite to the cued visual target location; or (3) both of these situations combined. In these non-standard visuomotor transformations, the OA deficit is not observed as the well-documented field-dependent misreaching. Instead, OA patients make additional eye movements to update hand and goal location during motor execution in order to complete these slow movements. Overall, the OA patients struggled when having to guide centrifugal movements in peripheral vision, even when they were instructed by visual stimuli that could be foveated. We propose that an intact caudal SPL is crucial for any visuomotor control that involves updating ongoing hand location in space without foveating it, i.e. from peripheral vision, proprioceptive, or predictive information. PMID:23071599

  23. Visuomotor signals for reaching movements in the rostro-dorsal sector of the monkey thalamic reticular nucleus.

    PubMed

    Saga, Yosuke; Nakayama, Yoshihisa; Inoue, Ken-Ichi; Yamagata, Tomoko; Hashimoto, Masashi; Tremblay, Léon; Takada, Masahiko; Hoshi, Eiji

    2017-05-01

    The thalamic reticular nucleus (TRN) collects inputs from the cerebral cortex and thalamus and, in turn, sends inhibitory outputs to the thalamic relay nuclei. This unique connectivity suggests that the TRN plays a pivotal role in regulating information flow through the thalamus. Here, we analyzed the roles of TRN neurons in visually guided reaching movements. We first used retrograde transneuronal labeling with rabies virus, and showed that the rostro-dorsal sector of the TRN (TRNrd) projected disynaptically to the ventral premotor cortex (PMv). In other experiments, we recorded neurons from the TRNrd or PMv while monkeys performed a visuomotor task. We found that neurons in the TRNrd and PMv showed visual-, set-, and movement-related activity modulation. These results indicate that the TRNrd, as well as the PMv, is involved in the reception of visual signals and in the preparation and execution of reaching movements. The fraction of neurons that were non-selective for the location of visual signals or the direction of reaching movements was greater in the TRNrd than in the PMv. Furthermore, the fraction of neurons whose activity increased from the baseline was greater in the TRNrd than in the PMv. The timing of activity modulation of visual-related and movement-related neurons was similar in TRNrd and PMv neurons. Overall, our data suggest that TRNrd neurons provide motor thalamic nuclei with inhibitory inputs that are predominantly devoid of spatial selectivity, and that these signals modulate how these nuclei engage in both sensory processing and motor output during visually guided reaching behavior. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  24. Observers' cognitive states modulate how visual inputs relate to gaze control.

    PubMed

    Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G

    2016-09-01

    Previous research has shown that eye movements change depending on both the visual features of our environment and the viewer's top-down knowledge. An open question is the degree to which the viewer's visual goals modulate how the visual features of scenes guide eye movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye movements were tracked. Canonical correlation analyses showed that eye movements were reliably more related to low-level visual features at fixations during the visual search task than during the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye movements between tasks. This modulation of the relationship between visual features and eye movements by task was also demonstrated with classification analyses, where classifiers were trained to predict the viewing task based on eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye movements. When classifying across participants, edge density and saliency at fixations were as important as eye movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye movements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  25. Neuronal activity in the lateral cerebellum of the cat related to visual stimuli at rest, visually guided step modification, and saccadic eye movements

    PubMed Central

    Marple-Horvat, D E; Criado, J M; Armstrong, D M

    1998-01-01

    The discharge patterns of 166 lateral cerebellar neurones were studied in cats at rest and during visually guided stepping on a horizontal circular ladder. A hundred and twelve cells were tested against one or both of two visual stimuli: a brief full-field flash of light delivered during eating or rest, and a rung which moved up as the cat approached. Forty-five cells (40%) gave a short latency response to one or both of these stimuli. These visually responsive neurones were found in hemispheral cortex (rather than paravermal) and the lateral cerebellar nucleus (rather than nucleus interpositus).Thirty-seven cells (of 103 tested, 36%) responded to flash. The cortical visual response (mean onset latency 38 ms) was usually an increase in Purkinje cell discharge rate, of around 50 impulses s−1 and representing 1 or 2 additional spikes per trial (1.6 on average). The nuclear response to flash (mean onset latency 27 ms) was usually an increased discharge rate which was shorter lived and converted rapidly to a depression of discharge or return to control levels, so that there were on average only an additional 0.6 spikes per trial. A straightforward explanation of the difference between the cortical and nuclear response would be that the increased inhibitory Purkinje cell output cuts short the nuclear response.A higher proportion of cells responded to rung movement, sixteen of twenty-five tested (64%). Again most responded with increased discharge, which had longer latency than the flash response (first change in dentate output ca 60 ms after start of movement) and longer duration. Peak frequency changes were twice the size of those in response to flash, at 100 impulses s−1 on average and additional spikes per trial were correspondingly 3–4 times higher. 
Both cortical and nuclear responses were context dependent, being larger when the rung moved while the cat was closer rather than further away. A quarter of cells (20 of 84 tested, 24%) modulated their activity in advance of saccades, increasing their discharge rate. Four-fifths of these were non-reciprocally directionally selective. Saccade-related neurones were usually susceptible to other influences, i.e. their activity was not wholly explicable in terms of saccade parameters. Substantial numbers of visually responsive neurones also discharged in relation to stepping movements, while other visually responsive neurones discharged in advance of saccadic eye movements; more than half the cells tested were active in relation both to eye movements and to stepping movements. These combinations of properties qualify even individual cerebellar neurones to participate in the co-ordination of visually guided eye and limb movements. PMID:9490874

  6. Visually Guided Step Descent in Children with Williams Syndrome

    ERIC Educational Resources Information Center

    Cowie, Dorothy; Braddick, Oliver; Atkinson, Janette

    2012-01-01

    Individuals with Williams syndrome (WS) have impairments in visuospatial tasks and in manual visuomotor control, consistent with parietal and cerebellar abnormalities. Here we examined whether individuals with WS also have difficulties in visually controlling whole-body movements. We investigated visual control of stepping down at a change of…

  7. Coordination of eye and head components of movements evoked by stimulation of the paramedian pontine reticular formation.

    PubMed

    Gandhi, Neeraj J; Barton, Ellen J; Sparks, David L

    2008-07-01

Constant frequency microstimulation of the paramedian pontine reticular formation (PPRF) in head-restrained monkeys evokes a constant velocity eye movement. Since the PPRF receives significant projections from structures that control coordinated eye-head movements, we asked whether stimulation of the pontine reticular formation in the head-unrestrained animal generates a combined eye-head movement or only an eye movement. Microstimulation of most sites yielded a constant-velocity gaze shift executed as a coordinated eye-head movement, although eye-only movements were evoked from some sites. The eye and head contributions to the stimulation-evoked movements varied across stimulation sites and were drastically different from the lawful relationship observed for visually-guided gaze shifts. These results indicate that the microstimulation activated elements that issued movement commands to the extraocular and, for most sites, neck motoneurons. In addition, the stimulation-evoked changes in gaze were similar in the head-restrained and head-unrestrained conditions despite the assortment of eye and head contributions, suggesting that the vestibulo-ocular reflex (VOR) gain must be near unity during the coordinated eye-head movements evoked by stimulation of the PPRF. These findings contrast with the attenuation of VOR gain associated with visually-guided gaze shifts and suggest that the vestibulo-ocular pathway processes volitional and PPRF stimulation-evoked gaze shifts differently.

  8. A probabilistic model of overt visual attention for cognitive robots.

    PubMed

    Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G

    2010-10-01

Visual attention is one of the major requirements for a robot to serve as a cognitive companion for humans. Robotic visual attention is mostly concerned with overt attention, which accompanies head and eye movements of a robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and the image coordinate systems, change of content of the visual field, and partial appearance of the stimuli. All of these events reduce the probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement and, therefore, their effects are not addressed in the classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution for the overt visual attention problem. The proposed model, while taking inspiration from the primate visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
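The abstract does not spell out the particle filter, but the predict/reweight/resample cycle such an implementation would run each time the camera moves can be sketched minimally. Everything below (the 1-D focus-of-attention state, the saliency likelihood, all names) is an invented illustration, not the paper's model:

```python
import random

def particle_filter_step(particles, weights, observe, motion_noise=0.05):
    """One predict/reweight/resample cycle for a 1-D focus-of-attention estimate.

    particles: candidate focus positions (floats)
    weights:   current normalized particle weights
    observe:   likelihood function p(measurement | position)
    """
    # Predict: diffuse each particle to model camera/head movement uncertainty.
    particles = [p + random.gauss(0.0, motion_noise) for p in particles]
    # Update: reweight each particle by the observation (saliency) likelihood.
    weights = [w * observe(p) for w, p in zip(weights, particles)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a fresh particle set proportional to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights

# Toy run: a "salient stimulus" sits at position 0.7; the estimate converges.
random.seed(0)
ps = [random.uniform(0.0, 1.0) for _ in range(500)]
ws = [1.0 / 500] * 500
likelihood = lambda p: max(1e-9, 1.0 - abs(p - 0.7))
for _ in range(20):
    ps, ws = particle_filter_step(ps, ws, likelihood)
estimate = sum(ps) / len(ps)
```

The resampling step is what lets the filter cope with the abrupt changes of visual content the abstract describes: particles that land on stimuli no longer in view receive negligible weight and die out.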

  9. Optimizing wavefront-guided corrections for highly aberrated eyes in the presence of registration uncertainty

    PubMed Central

    Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.

    2013-01-01

Dynamic registration uncertainty of a wavefront-guided correction with respect to the underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize a wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and the optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line of improvement in average visual acuity over the full-magnitude correction and the correction by Guirao given the registration uncertainty. This study demonstrates that it is possible to improve average visual acuity by optimizing a wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
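SPGD itself is a simple, derivative-free scheme: perturb all coefficients simultaneously with random signs, measure the change in the quality metric, and step each coefficient along its own perturbation. A toy sketch of that loop (the quadratic `quality` stand-in and all names are illustrative, not the paper's log visual Strehl metric):

```python
import random

def spgd_maximize(metric, u, gain=1.0, delta=0.05, iters=600, seed=1):
    """Stochastic parallel gradient descent: perturb every parameter at
    once, measure the metric change, and step along the estimated gradient."""
    rng = random.Random(seed)
    u = list(u)
    for _ in range(iters):
        # Simultaneous random +/-delta perturbation of every coefficient.
        d = [delta * rng.choice((-1.0, 1.0)) for _ in u]
        j_plus = metric([ui + di for ui, di in zip(u, d)])
        j_minus = metric([ui - di for ui, di in zip(u, d)])
        dj = j_plus - j_minus
        # Each coefficient moves with the sign of its own perturbation,
        # scaled by the observed change in the metric.
        u = [ui + gain * dj * di for ui, di in zip(u, d)]
    return u

# Toy stand-in for an image-quality metric: peak at coefficients (0.3, -0.2).
quality = lambda c: -((c[0] - 0.3) ** 2 + (c[1] + 0.2) ** 2)
best = spgd_maximize(quality, [1.0, 1.0])
```

The appeal for this application is that only two metric evaluations per iteration are needed regardless of how many correction coefficients are being optimized.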

  10. The effect of sensory uncertainty due to amblyopia (lazy eye) on the planning and execution of visually-guided 3D reaching movements.

    PubMed

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C; Chandrakumar, Manokaraananthan; Wong, Agnes M F

    2012-01-01

Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50-100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R2), which correlates the spatial position of the limb during the movement with endpoint position. Patients with amblyopia had reduced precision of the motor plan in all viewing conditions, as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R2 values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory, especially along the depth axis, which could be due to their abnormal stereopsis.
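The R2 metric used here is the squared correlation, across trials, between limb position at a given fraction of movement time and the final endpoint; a high value late in the movement means the endpoint was already determined, i.e. little online amendment happened afterwards. A minimal sketch with made-up numbers (the sample data are purely illustrative):

```python
def r_squared(x, y):
    """Coefficient of determination between two equal-length samples:
    the squared Pearson correlation of x (limb position at some fraction
    of movement time, across trials) and y (endpoint position)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical trials: positions sampled at 70% of movement time that
# strongly predict the endpoint yield an R2 near 1, indicating little
# trajectory amendment in the remaining 30% of the movement.
at_70pct = [10.1, 12.4, 9.8, 11.5, 10.9]
endpoint = [14.9, 17.3, 14.6, 16.2, 15.8]
r2 = r_squared(at_70pct, endpoint)
```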

  11. The Effect of Sensory Uncertainty Due to Amblyopia (Lazy Eye) on the Planning and Execution of Visually-Guided 3D Reaching Movements

    PubMed Central

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C.; Chandrakumar, Manokaraananthan; Wong, Agnes M. F.

    2012-01-01

    Background Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Methods Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50–100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R2) which correlates the spatial position of the limb during the movement to endpoint position. Results Patients with amblyopia had reduced precision of the motor plan in all viewing conditions as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R2 values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Conclusion Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. 
In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory, especially along the depth axis, which could be due to their abnormal stereopsis. PMID:22363549

  12. Guiding the mind's eye: improving communication and vision by external control of the scanpath

    NASA Astrophysics Data System (ADS)

    Barth, Erhardt; Dorr, Michael; Böhme, Martin; Gegenfurtner, Karl; Martinetz, Thomas

    2006-02-01

    Larry Stark has emphasised that what we visually perceive is very much determined by the scanpath, i.e. the pattern of eye movements. Inspired by his view, we have studied the implications of the scanpath for visual communication and came up with the idea to not only sense and analyse eye movements, but also guide them by using a special kind of gaze-contingent information display. Our goal is to integrate gaze into visual communication systems by measuring and guiding eye movements. For guidance, we first predict a set of about 10 salient locations. We then change the probability for one of these candidates to be attended: for one candidate the probability is increased, for the others it is decreased. To increase saliency, for example, we add red dots that are displayed very briefly such that they are hardly perceived consciously. To decrease the probability, for example, we locally reduce the temporal frequency content. Again, if performed in a gaze-contingent fashion with low latencies, these manipulations remain unnoticed. Overall, the goal is to find the real-time video transformation minimising the difference between the actual and the desired scanpath without being obtrusive. Applications are in the area of vision-based communication (better control of what information is conveyed) and augmented vision and learning (guide a person's gaze by the gaze of an expert or a computer-vision system). We believe that our research is very much in the spirit of Larry Stark's views on visual perception and the close link between vision research and engineering.

  13. Eye movements, visual search and scene memory, in an immersive virtual environment.

    PubMed

    Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, in contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.

  14. Kinematics of Visually-Guided Eye Movements

    PubMed Central

    Hess, Bernhard J. M.; Thomassen, Jakob S.

    2014-01-01

One of the hallmarks of an eye movement that follows Listing’s law is the half-angle rule, which says that the angular velocity of the eye tilts by half the angle of eccentricity of the line of sight relative to primary eye position. Since all visually-guided eye movements in the regime of far viewing follow Listing’s law (with the head still and upright), the question of its origin is of considerable importance. Here, we provide theoretical and experimental evidence that Listing’s law results from a unique motor strategy that allows minimizing ocular torsion while smoothly tracking objects of interest along any path in visual space. The strategy consists of compounding conventional ocular rotations in meridian planes, that is, in horizontal, vertical and oblique directions (which are all torsion-free), with small linear displacements of the eye in the frontal plane. Such compound rotation-displacements of the eye can explain the kinematic paradox that the fixation point may rotate in one plane while the eye rotates in other planes. Its unique signature is the half-angle law in the position domain, which means that the rotation plane of the eye tilts by half the angle of gaze eccentricity. We show that this law does not readily generalize to the velocity domain of visually-guided eye movements because the angular eye velocity is the sum of two terms, one associated with rotations in meridian planes and one associated with displacements of the eye in the frontal plane. While the first term does not depend on eye position, the second term does. We show that compound rotation-displacements perfectly predict the average smooth kinematics of the eye during steady-state pursuit in both the position and velocity domains. PMID:24751602
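For readers unfamiliar with the half-angle rule, standard rotation-vector kinematics (a textbook result, not reproduced from this paper) makes the two velocity terms the abstract refers to explicit. With eye position described by the rotation vector $\mathbf{r}$, the angular velocity is

```latex
\boldsymbol{\omega} \;=\; \frac{2}{1+\lVert\mathbf{r}\rVert^{2}}
\left(\dot{\mathbf{r}} \;+\; \mathbf{r}\times\dot{\mathbf{r}}\right),
```

where $\dot{\mathbf{r}}$ stays in Listing's plane whenever $\mathbf{r}$ does, while the cross-product term $\mathbf{r}\times\dot{\mathbf{r}}$ points out of that plane. Since $\lVert\mathbf{r}\rVert=\tan(\rho/2)$ for a rotation by angle $\rho$, this second, position-dependent term is what tilts $\boldsymbol{\omega}$ by approximately half the gaze eccentricity.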

  15. Tracking without perceiving: a dissociation between eye movements and motion perception.

    PubMed

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

  16. Tracking Without Perceiving: A Dissociation Between Eye Movements and Motion Perception

    PubMed Central

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-01-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept. PMID:21189353

  17. Selective weighting of action-related feature dimensions in visual working memory.

    PubMed

    Heuer, Anna; Schubö, Anna

    2017-08-01

    Planning an action primes feature dimensions that are relevant for that particular action, increasing the impact of these dimensions on perceptual processing. Here, we investigated whether action planning also affects the short-term maintenance of visual information. In a combined memory and movement task, participants were to memorize items defined by size or color while preparing either a grasping or a pointing movement. Whereas size is a relevant feature dimension for grasping, color can be used to localize the goal object and guide a pointing movement. The results showed that memory for items defined by size was better during the preparation of a grasping movement than during the preparation of a pointing movement. Conversely, memory for color tended to be better when a pointing movement rather than a grasping movement was being planned. This pattern was not only observed when the memory task was embedded within the preparation period of the movement, but also when the movement to be performed was only indicated during the retention interval of the memory task. These findings reveal that a weighting of information in visual working memory according to action relevance can even be implemented at the representational level during maintenance, demonstrating that our actions continue to influence visual processing beyond the perceptual stage.

  18. There may be more to reaching than meets the eye: re-thinking optic ataxia.

    PubMed

    Jackson, Stephen R; Newport, Roger; Husain, Masud; Fowlie, Jane E; O'Donoghue, Michael; Bajaj, Nin

    2009-05-01

Optic ataxia (OA) is generally thought of as a disorder of visually guided reaching movements that cannot be explained by any simple deficit in visual or motor processing. In this paper we offer a new perspective on optic ataxia; we argue that the popular characterisation of this disorder is misleading and is unrepresentative of the pattern of reaching errors typically observed in OA patients. We begin our paper by reviewing recent neurophysiological, neuropsychological, and functional brain imaging studies that have led to the proposal that the medial parietal cortex in the vicinity of the parietal-occipital junction (POJ) - the key anatomical site associated with OA - represents reaching movements in eye-centred coordinates, and that this ability is impaired in optic ataxia. Our perspective stresses the importance of the POJ and superior parietal regions of the human PPC for representing reaching movements in both extrinsic (eye-centred) and intrinsic (postural) coordinates, and proposes that what is impaired in non-foveal OA patients is the ability to simultaneously represent multiple spatial locations that must be directly compared with one another. In support of this idea we review recent fMRI and behavioural studies conducted by our group that have investigated the anatomical correlates of posturally guided movements, and movements guided by postural cues, in patients presenting with optic ataxia.

  19. Comparing Motor Skills in Autism Spectrum Individuals With and Without Speech Delay

    PubMed Central

    Barbeau, Elise B.; Meilleur, Andrée‐Anne S.; Zeffiro, Thomas A.

    2015-01-01

Movement atypicalities in speed, coordination, posture, and gait have been observed across the autism spectrum (AS), and atypicalities in coordination are more commonly observed in AS individuals without delayed speech (DSM‐IV Asperger) than in those with atypical or delayed speech onset. However, few studies have provided quantitative data to support these mostly clinical observations. Here, we compared perceptual and motor performance between 30 typically developing and AS individuals (21 with speech delay and 18 without speech delay) to examine the associations between limb movement control and atypical speech development. Groups were matched for age, intelligence, and sex. The experimental design included: an inspection time task, which measures visual processing speed; the Purdue Pegboard, which measures finger dexterity, bimanual performance, and hand‐eye coordination; the Annett Peg Moving Task, which measures unimanual goal‐directed arm movement; and a simple reaction time task. We used analysis of covariance to investigate group differences in task performance and linear regression models to explore potential associations between intelligence, language skills, simple reaction time, and visually guided movement performance. AS participants without speech delay performed more slowly than typical participants on the Purdue Pegboard subtests. AS participants without speech delay showed poorer bimanual coordination than those with speech delay. Visual processing speed was slightly faster in both AS groups than in the typical group. Altogether, these results suggest that AS individuals with and without speech delay differ in visually guided and visually triggered behavior, and show that early language skills are associated with slower movement in simple and complex motor tasks. Autism Res 2015, 8: 682–693. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research PMID:25820662

  20. The Generalization of Visuomotor Learning to Untrained Movements and Movement Sequences Based on Movement Vector and Goal Location Remapping

    PubMed Central

    Wu, Howard G.

    2013-01-01

    The planning of goal-directed movements is highly adaptable; however, the basic mechanisms underlying this adaptability are not well understood. Even the features of movement that drive adaptation are hotly debated, with some studies suggesting remapping of goal locations and others suggesting remapping of the movement vectors leading to goal locations. However, several previous motor learning studies and the multiplicity of the neural coding underlying visually guided reaching movements stand in contrast to this either/or debate on the modes of motor planning and adaptation. Here we hypothesize that, during visuomotor learning, the target location and movement vector of trained movements are separately remapped, and we propose a novel computational model for how motor plans based on these remappings are combined during the control of visually guided reaching in humans. To test this hypothesis, we designed a set of experimental manipulations that effectively dissociated the effects of remapping goal location and movement vector by examining the transfer of visuomotor adaptation to untrained movements and movement sequences throughout the workspace. The results reveal that (1) motor adaptation differentially remaps goal locations and movement vectors, and (2) separate motor plans based on these features are effectively averaged during motor execution. We then show that, without any free parameters, the computational model we developed for combining movement-vector-based and goal-location-based planning predicts nearly 90% of the variance in novel movement sequences, even when multiple attributes are simultaneously adapted, demonstrating for the first time the ability to predict how motor adaptation affects movement sequence planning. PMID:23804099
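The model's central operation, averaging a movement-vector-based plan with a goal-location-based plan, can be sketched in a few lines. The `rot` and `shift` remappings below are hypothetical stand-ins for what visuomotor training would induce, and all names are invented for this illustration:

```python
import math

def predicted_plan(start, goal, vector_remap, goal_remap, w=0.5):
    """Average two motor plans for a reach from `start` to `goal`:
    one built from the remapped movement vector, one aimed at the
    remapped goal location. `w` weights the movement-vector plan
    (0.5 = equal-weight averaging)."""
    # Plan A: remap the movement vector from start to goal, then
    # execute that vector from the start position.
    vx, vy = vector_remap(goal[0] - start[0], goal[1] - start[1])
    plan_a = (start[0] + vx, start[1] + vy)
    # Plan B: aim directly at the remapped goal location.
    plan_b = goal_remap(goal)
    # The predicted endpoint is a weighted average of the two plans.
    return (w * plan_a[0] + (1 - w) * plan_b[0],
            w * plan_a[1] + (1 - w) * plan_b[1])

# Hypothetical adaptation: training rotated movement vectors by 10 degrees
# while shifting goal locations one unit rightward.
theta = math.radians(10.0)
rot = lambda x, y: (x * math.cos(theta) - y * math.sin(theta),
                    x * math.sin(theta) + y * math.cos(theta))
shift = lambda g: (g[0] + 1.0, g[1])
endpoint = predicted_plan((0.0, 0.0), (10.0, 0.0), rot, shift)
```

Because the two plans diverge most for untrained start positions, this kind of averaging is exactly what transfer tests across the workspace can dissociate.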

  1. Eye movements in interception with delayed visual feedback.

    PubMed

    Cámara, Clara; de la Malla, Cristina; López-Moliner, Joan; Brenner, Eli

    2018-07-01

The increased reliance on electronic devices such as smartphones in our everyday life exposes us to various delays between our actions and their consequences. Whereas it is known that people can adapt to such delays, the mechanisms underlying such adaptation remain unclear. To better understand these mechanisms, the current study explored the role of eye movements in interception with delayed visual feedback. In two experiments, eye movements were recorded as participants tried to intercept a moving target with their unseen finger while receiving delayed visual feedback about their own movement. In Experiment 1, the target randomly moved in one of two different directions at one of two different velocities. The delay between the participant's finger movement and the movement of the cursor that provided feedback about the finger movements was gradually increased. Despite the delay, participants followed the target with their gaze. They were quite successful at hitting the target with the cursor. Thus, they moved their finger to a position that was ahead of where they were looking. Removing the feedback showed that participants had adapted to the delay. In Experiment 2, the target always moved in the same direction and at the same velocity, while the cursor's delay varied across trials. Participants still always directed their gaze at the target. They adjusted their movement to the delay on each trial, often succeeding in intercepting the target with the cursor. Since their gaze was always directed at the target, and they could not know the delay until the cursor started moving, participants must have been using peripheral vision of the delayed cursor to guide it to the target. Thus, people deal with delays by directing their gaze at the target and using both experience from previous trials (Experiment 1) and peripheral visual information (Experiment 2) to guide their finger in a way that will make the cursor hit the target.

  2. Basal Ganglia Neuronal Activity during Scanning Eye Movements in Parkinson’s Disease

    PubMed Central

    Sieger, Tomáš; Bonnet, Cecilia; Serranová, Tereza; Wild, Jiří; Novák, Daniel; Růžička, Filip; Urgošík, Dušan; Růžička, Evžen; Gaymard, Bertrand; Jech, Robert

    2013-01-01

The oculomotor role of the basal ganglia has been supported by extensive evidence, although their role in scanning eye movements is poorly understood. Nineteen Parkinson’s disease patients, who underwent implantation of deep brain stimulation electrodes, were investigated with simultaneous intraoperative microelectrode recordings and single-channel electrooculography in a scanning eye movement task while viewing a series of colored pictures selected from the International Affective Picture System. Four patients additionally underwent a visually guided saccade task. Microelectrode recordings were analyzed selectively from the subthalamic nucleus, substantia nigra pars reticulata and from the globus pallidus by the WaveClus program, which allowed for detection and sorting of individual neurons. The relationship between neuronal firing rate and eye movements was studied by cross-correlation analysis. Out of 183 neurons that were detected, 130 were found in the subthalamic nucleus, 30 in the substantia nigra and 23 in the globus pallidus. Twenty percent of the neurons in each of these structures showed eye movement-related activity. Neurons related to scanning eye movements were mostly unrelated to the visually guided saccades. We conclude that a relatively large number of basal ganglia neurons are involved in eye motion control. Surprisingly, neurons related to scanning eye movements differed from neurons activated during saccades, suggesting functional specialization and segregation of both systems for eye movement control. PMID:24223158

  3. Eye Movements, Visual Search and Scene Memory, in an Immersive Virtual Environment

    PubMed Central

    Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, in contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency. PMID:24759905

  4. A novel computational model to probe visual search deficits during motor performance

    PubMed Central

    Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy

    2016-01-01

    Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits in integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. We show that deficits in integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596

  5. Memory-guided saccade processing in visual form agnosia (patient DF).

    PubMed

    Rossit, Stéphanie; Szymanek, Larissa; Butler, Stephen H; Harvey, Monika

    2010-01-01

    According to Milner and Goodale's model (The visual brain in action, Oxford University Press, Oxford, 2006) areas in the ventral visual stream mediate visual perception and off-line actions, whilst regions in the dorsal visual stream mediate the on-line visual control of action. Strong evidence for this model comes from a patient (DF), who suffers from visual form agnosia after bilateral damage to the ventro-lateral occipital region, sparing V1. It has been reported that she is normal in immediate reaching and grasping, yet severely impaired when asked to perform delayed actions. Here we investigated whether this dissociation would extend to saccade execution. Neurophysiological studies and TMS work in humans have shown that the posterior parietal cortex (PPC), on the right in particular (supposedly spared in DF), is involved in the control of memory-guided saccades. Surprisingly though, we found that, just as reported for reaching and grasping, DF's saccadic accuracy was much reduced in the memory compared to the stimulus-guided condition. These data support the idea of a tight coupling of eye and hand movements and further suggest that dorsal stream structures may not be sufficient to drive memory-guided saccadic performance.

  6. Real-world visual search is dominated by top-down guidance.

    PubMed

    Chen, Xin; Zelinsky, Gregory J

    2006-11-01

    How do bottom-up and top-down guidance signals combine to guide search behavior? Observers searched for a target either with or without a preview (top-down manipulation) or a color singleton (bottom-up manipulation) among the display objects. With a preview, reaction times were faster and more initial eye movements were guided to the target; the singleton failed to attract initial saccades under these conditions. Only in the absence of a preview did subjects preferentially fixate the color singleton. We conclude that the search for realistic objects is guided primarily by top-down control. Implications for saliency map models of visual search are discussed.

  7. Transient visual pathway critical for normal development of primate grasping behavior.

    PubMed

    Mundinano, Inaki-Carril; Fox, Dylan M; Kwan, William C; Vidaurre, Diego; Teo, Leon; Homman-Ludiye, Jihane; Goodale, Melvyn A; Leopold, David A; Bourne, James A

    2018-02-06

    An evolutionary hallmark of anthropoid primates, including humans, is the use of vision to guide precise manual movements. These behaviors are reliant on a specialized visual input to the posterior parietal cortex. Here, we show that normal primate reaching-and-grasping behavior depends critically on a visual pathway through the thalamic pulvinar, which is thought to relay information to the middle temporal (MT) area during early life and then swiftly withdraws. Small MRI-guided lesions to a subdivision of the inferior pulvinar subnucleus (PIm) in the infant marmoset monkey led to permanent deficits in reaching-and-grasping behavior in the adult. This functional loss coincided with the abnormal anatomical development of multiple cortical areas responsible for the guidance of actions. Our study reveals that the transient retino-pulvinar-MT pathway underpins the development of visually guided manual behaviors in primates that are crucial for interacting with complex features in the environment.

  8. Action Planning Mediates Guidance of Visual Attention from Working Memory.

    PubMed

    Feldmann-Wüstefeld, Tobias; Schubö, Anna

    2015-01-01

    Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention was also shown to be closely coupled with action planning. Preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed at revealing common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from the target was more pronounced when the additional singleton had a memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interference.

  9. Action Planning Mediates Guidance of Visual Attention from Working Memory

    PubMed Central

    Schubö, Anna

    2015-01-01

    Visual search is impaired when a salient task-irrelevant stimulus is presented together with the target. Recent research has shown that this attentional capture effect is enhanced when the salient stimulus matches working memory (WM) content, arguing in favor of attention guidance from WM. Visual attention was also shown to be closely coupled with action planning. Preparing a movement renders action-relevant perceptual dimensions more salient and thus increases search efficiency for stimuli sharing that dimension. The present study aimed at revealing common underlying mechanisms for selective attention, WM, and action planning. Participants both prepared a specific movement (grasping or pointing) and memorized a color hue. Before the movement was executed towards an object of the memorized color, a visual search task (additional singleton) was performed. Results showed that distraction from the target was more pronounced when the additional singleton had a memorized color. This WM-guided attention deployment was more pronounced when participants prepared a grasping movement. We argue that preparing a grasping movement mediates attention guidance from WM content by enhancing representations of memory content that matches the distractor shape (i.e., circles), thus encouraging attentional capture by circle distractors of the memorized color. We conclude that templates for visual search, action planning, and WM compete for resources and thus cause interference. PMID:26171241

  10. Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View

    NASA Astrophysics Data System (ADS)

    Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.

    2017-09-01

    Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
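
    The interactive select/filter step described above amounts to querying movement events by a temporal window and a spatial bounding box, with the result feeding the linked views. A minimal sketch (the event fields and function names are assumptions, not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass
class MovementEvent:
    t: float        # timestamp (s since start of day), assumed field
    x: float        # map x coordinate (e.g., projected longitude)
    y: float        # map y coordinate (e.g., projected latitude)
    trip_id: str    # identifier of the trajectory the event belongs to

def filter_events(events, t_range, x_range, y_range):
    """Keep events inside a time window and spatial bounding box,
    mimicking the brushing/filtering step of a linked multi-view display."""
    t0, t1 = t_range
    x0, x1 = x_range
    y0, y1 = y_range
    return [e for e in events
            if t0 <= e.t <= t1 and x0 <= e.x <= x1 and y0 <= e.y <= y1]
```

Each view would then recompute its statistical summaries over the filtered subset, so a selection in one display updates all the others.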

  11. Visual search for facial expressions of emotions: a comparison of dynamic and static faces.

    PubMed

    Horstmann, Gernot; Ansorge, Ulrich

    2009-02-01

    A number of past studies have used the visual search paradigm to examine whether certain aspects of emotional faces are processed preattentively and can thus be used to guide attention. All these studies presented static depictions of facial prototypes. Emotional expressions conveyed by the movement patterns of the face have never been examined for their preattentive effect. The present study presented for the first time dynamic facial expressions in a visual search paradigm. Experiment 1 revealed efficient search for a dynamic angry face among dynamic friendly faces, but inefficient search in a control condition with static faces. Experiments 2 to 4 suggested that this pattern of results is due to a stronger movement signal in the angry than in the friendly face: No (strong) advantage of dynamic over static faces is revealed when the degree of movement is controlled. These results show that dynamic information can be efficiently utilized in visual search for facial expressions. However, these results do not generally support the hypothesis that emotion-specific movement patterns are always preattentively discriminated. (c) 2009 APA, all rights reserved

  12. The use of peripheral vision to guide perturbation-evoked reach-to-grasp balance-recovery reactions

    PubMed Central

    King, Emily C.; McKay, Sandra M.; Cheng, Kenneth C.

    2016-01-01

    For a reach-to-grasp reaction to prevent a fall, it must be executed very rapidly, but with sufficient accuracy to achieve a functional grip. Recent findings suggest that the CNS may avoid potential time delays associated with saccade-guided arm movements by instead relying on peripheral vision (PV). However, studies of volitional arm movements have shown that reaching is slower and/or less accurate when guided by PV, rather than central vision (CV). The present study investigated how the CNS resolves speed-accuracy trade-offs when forced to use PV to guide perturbation-evoked reach-to-grasp balance-recovery reactions. These reactions were evoked, in 12 healthy young adults, via sudden unpredictable anteroposterior platform translation (barriers deterred stepping reactions). In PV trials, subjects were required to look straight ahead at a visual target while a small cylindrical handhold (length 25% greater than hand-width) moved intermittently and unpredictably along a transverse axis before stopping at a visual angle of 20°, 30°, or 40°. The perturbation was then delivered after a random delay. In CV trials, subjects fixated on the handhold throughout the trial. A concurrent visuo-cognitive task was performed in 50% of PV trials but had little impact on reach-to-grasp timing or accuracy. Forced reliance on PV did not significantly affect response initiation times, but did lead to longer movement times, longer time-after-peak-velocity and less direct trajectories (compared to CV trials) at the larger visual angles. Despite these effects, forced reliance on PV did not compromise the ability to achieve a functional grasp and recover equilibrium, for the moderately large perturbations and healthy young adults tested in this initial study. PMID:20957351

  13. What and where information in the caudate tail guides saccades to visual objects

    PubMed Central

    Yamamoto, Shinya; Monosov, Ilya E.; Yasuda, Masaharu; Hikosaka, Okihide

    2012-01-01

    We understand the world by making saccadic eye movements to various objects. However, it is unclear how a saccade can be aimed at a particular object, because two kinds of visual information, what the object is and where it is, are processed separately in the dorsal and ventral visual cortical pathways. Here we provide evidence suggesting that a basal ganglia circuit through the tail of the monkey caudate nucleus (CDt) guides such object-directed saccades. First, many CDt neurons responded to visual objects depending on where and what the objects were. Second, electrical stimulation in the CDt induced saccades whose directions matched the preferred directions of neurons at the stimulation site. Third, many CDt neurons increased their activity before saccades directed to the neurons’ preferred objects and directions in a free-viewing condition. Our results suggest that CDt neurons receive both ‘what’ and ‘where’ information and guide saccades to visual objects. PMID:22875934

  14. Semantic guidance of eye movements in real-world scenes

    PubMed Central

    Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-01-01

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. PMID:21426914
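
    The semantic saliency computation described above can be sketched as a cosine-similarity lookup over precomputed LSA label vectors: each scene object's saliency is its similarity to the fixated object or search target. The vectors below are toy stand-ins; deriving real ones from the LabelMe annotations, and the subsequent ROC analysis, are outside this sketch.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two label vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_saliency(label_vectors, target_label):
    """Assign each scene object a saliency equal to the cosine similarity
    between its (assumed precomputed) LSA vector and the target's vector."""
    t = label_vectors[target_label]
    return {label: cosine(vec, t) for label, vec in label_vectors.items()}

# Toy 2-D LSA space: "cup" is semantically closer to "plate" than "tree" is.
vecs = {"plate": np.array([1.0, 0.0]),
        "cup":   np.array([0.9, 0.1]),
        "tree":  np.array([0.0, 1.0])}
saliency = semantic_saliency(vecs, "plate")
```

Rasterizing these per-object values over each object's image region would yield the semantic saliency map evaluated against gaze transitions in the study.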

  15. Semantic guidance of eye movements in real-world scenes.

    PubMed

    Hwang, Alex D; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-05-25

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Gaze shifts and fixations dominate gaze behavior of walking cats

    PubMed Central

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to themselves, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656
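
    The four-way partition of gaze behavior by gaze-point speed along the surface can be sketched as a simple per-sample classifier. The threshold values here are illustrative assumptions, not the published criteria.

```python
def classify_gaze(gaze_speed, body_speed, fix_thresh=0.05, shift_thresh=2.0, tol=0.2):
    """Classify one gaze sample by how fast the gaze point moves along the
    walking surface (m/s) relative to the body's forward speed (m/s).

    Categories follow the abstract: shift (fast), fixation (no movement),
    constant gaze (moving at body speed), slow gaze (the remainder).
    Thresholds are illustrative, not those of the paper.
    """
    if abs(gaze_speed) <= fix_thresh:
        return "fixation"        # gaze point stationary on the surface
    if abs(gaze_speed) >= shift_thresh:
        return "shift"           # fast jump to a new surface location
    if abs(gaze_speed - body_speed) <= tol:
        return "constant gaze"   # gaze point travels with the body
    return "slow gaze"           # everything else
```

Applying this over a walking bout and summing category durations would reproduce the kind of time-budget percentages reported above.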

  17. A solution to the online guidance problem for targeted reaches: proportional rate control using relative disparity tau.

    PubMed

    Anderson, Joe; Bingham, Geoffrey P

    2010-09-01

    We provide a solution to a major problem in visually guided reaching. Research has shown that binocular vision plays an important role in the online visual guidance of reaching, but the visual information and strategy used to guide a reach remain unknown. We propose a new theory of visual guidance of reaching including a new information variable, tau(alpha) (relative disparity tau), and a novel control strategy that allows actors to guide their reach trajectories visually by maintaining a constant proportion between tau(alpha) and its rate of change. The dynamical model couples the information to the reaching movement to generate trajectories characteristic of human reaching. We tested the theory in two experiments in which participants reached under conditions of darkness to guide a visible point either on a sliding apparatus or on their finger to a point-light target in depth. The slider apparatus controlled for a simple mapping from visual to proprioceptive space. When reaching with their finger, participants were forced, by perturbation of visual information used for feedforward control, to use online control with only binocular disparity-based information for guidance. Statistical analyses of trajectories strongly supported the theory. Simulations of the model were compared statistically to actual reaching trajectories. The results supported the theory, showing that tau(alpha) provides a source of information for the control of visually guided reaching and that participants use this information in a proportional rate control strategy.
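
    The proportional-rate idea can be illustrated with the classic constant tau-dot scheme for closing a single gap x, where tau = x/xdot: holding tau's rate of change at a constant 0 < k < 1 closes the gap smoothly with velocity reaching zero at contact. This is a schematic sketch of the general tau strategy, not the authors' relative-disparity tau(alpha) model, and all parameter values are arbitrary.

```python
import numpy as np

def simulate_constant_tau_dot(x0=0.3, tau0=-1.0, k=0.5, dt=0.001):
    """Close a gap x (m) by regulating velocity so that tau = x/xdot
    follows tau(t) = tau0 + k*t, i.e. tau-dot is held at the constant k.

    tau0 is negative because the gap is closing (xdot < 0). Returns
    time and gap arrays from a simple Euler integration.
    """
    t, x = 0.0, x0
    ts, xs = [t], [x]
    while x > 1e-4 and t < 10.0:       # stop at contact (or a safety timeout)
        tau = tau0 + k * t             # commanded tau at this instant
        xdot = x / tau                 # velocity consistent with that tau
        x += xdot * dt
        t += dt
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)
```

With the defaults, the closed-form solution is x(t) = x0*(1 - 0.5*t)^2, so the gap shrinks monotonically and closes just before t = 2 s, which the simulation reproduces.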

  18. A review on eye movement studies in childhood and adolescent psychiatry.

    PubMed

    Rommelse, Nanda N J; Van der Stigchel, Stefan; Sergeant, Joseph A

    2008-12-01

    The neural substrates of eye movement measures are largely known. Therefore, measurement of eye movements in psychiatric disorders may provide insight into the underlying neuropathology of these disorders. Visually guided saccades, antisaccades, memory guided saccades, and smooth pursuit eye movements will be reviewed in various childhood psychiatric disorders. The four aims of this review are (1) to give a thorough overview of eye movement studies in a wide array of psychiatric disorders occurring during childhood and adolescence (attention-deficit/hyperactivity disorder, oppositional deviant disorder and conduct disorder, autism spectrum disorders, reading disorder, childhood-onset schizophrenia, Tourette's syndrome, obsessive compulsive disorder, and anxiety and depression), (2) to discuss the specificity and overlap of eye movement findings across disorders and paradigms, (3) to discuss the developmental aspects of eye movement abnormalities in childhood and adolescence psychiatric disorders, and (4) to present suggestions for future research. In order to make this review of interest to a broad audience, attention will be given to the clinical manifestation of the disorders and the theoretical background of the eye movement paradigms.

  19. Gravity modulates Listing's plane orientation during both pursuit and saccades

    NASA Technical Reports Server (NTRS)

    Hess, Bernhard J M.; Angelaki, Dora E.

    2003-01-01

    Previous studies have shown that the spatial organization of all eye orientations during visually guided saccadic eye movements (Listing's plane) varies systematically as a function of static and dynamic head orientation in space. Here we tested if a similar organization also applies to the spatial orientation of eye positions during smooth pursuit eye movements. Specifically, we characterized the three-dimensional distribution of eye positions during horizontal and vertical pursuit (0.1 Hz, +/-15 degrees and 0.5 Hz, +/-8 degrees) at different eccentricities and elevations while rhesus monkeys were sitting upright or being statically tilted in different roll and pitch positions. We found that the spatial organization of eye positions during smooth pursuit depends on static orientation in space, similarly as during visually guided saccades and fixations. In support of recent modeling studies, these results are consistent with a role of gravity on defining the parameters of Listing's law.

  20. Context-dependent adaptation of visually-guided arm movements and vestibular eye movements: role of the cerebellum

    NASA Technical Reports Server (NTRS)

    Lewis, Richard F.

    2003-01-01

    Accurate motor control requires adaptive processes that correct for gradual and rapid perturbations in the properties of the controlled object. The ability to quickly switch between different movement synergies using sensory cues, referred to as context-dependent adaptation, is a subject of considerable interest at present. The potential function of the cerebellum in context-dependent adaptation remains uncertain, but the data reviewed below suggest that it may play a fundamental role in this process.

  1. There May Be More to Reaching than Meets the Eye: Re-Thinking Optic Ataxia

    ERIC Educational Resources Information Center

    Jackson, Stephen R.; Newport, Roger; Husain, Masud; Fowlie, Jane E.; O'Donoghue, Michael; Bajaj, Nin

    2009-01-01

    Optic ataxia (OA) is generally thought of as a disorder of visually guided reaching movements that cannot be explained by any simple deficit in visual or motor processing. In this paper we offer a new perspective on optic ataxia; we argue that the popular characterisation of this disorder is misleading and is unrepresentative of the pattern of…

  2. Influence of semantic consistency and perceptual features on visual attention during scene viewing in toddlers.

    PubMed

    Helo, Andrea; van Ommen, Sandrien; Pannasch, Sebastian; Danteny-Dordoigne, Lucile; Rämä, Pia

    2017-11-01

    Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while exploring everyday scenes that contained either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. The use of head/eye-centered, hand-centered and allocentric representations for visually guided hand movements and perceptual judgments.

    PubMed

    Thaler, Lore; Todd, James T

    2009-04-01

    Two experiments are reported that were designed to measure the accuracy and reliability of both visually guided hand movements (Exp. 1) and perceptual matching judgments (Exp. 2). The specific procedure for informing subjects of the required response on each trial was manipulated so that some tasks could only be performed using an allocentric representation of the visual target; others could be performed using either an allocentric or hand-centered representation; still others could be performed based on an allocentric, hand-centered or head/eye-centered representation. Both head/eye-centered and hand-centered representations are egocentric because they specify visual coordinates with respect to the subject. The results reveal that accuracy and reliability of both motor and perceptual responses are highest when subjects direct their response towards a visible target location, which allows them to rely on a representation of the target in head/eye-centered coordinates. Systematic changes in averages and standard deviations of responses are observed when subjects cannot direct their response towards a visible target location, but have to represent target distance and direction in either hand-centered or allocentric visual coordinates instead. Subjects' motor and perceptual performance agree quantitatively well. These results strongly suggest that subjects process head/eye-centered representations differently from hand-centered or allocentric representations, but that they process visual information for motor actions and perceptual judgments together.

  4. Separate visual representations for perception and for visually guided behavior

    NASA Technical Reports Server (NTRS)

    Bridgeman, Bruce

    1989-01-01

    Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This experiment also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory, and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.

  5. Teaching of Basic Posture Skills in Visually Impaired Individuals and Its Implementation under Aggravated Conditions

    ERIC Educational Resources Information Center

    Suveren-Erdogan, Ceren; Suveren, Sibel

    2018-01-01

    The aim of this study is to include basic posture exercises among the basic exercises of visually impaired individuals as a step toward learning more difficult movements, to guide instructors in making efficient progress in a short time, and to help a greater number of disabled individuals benefit from these studies. Method: 15…

  6. Training on Movement Figure-Ground Discrimination Remediates Low-Level Visual Timing Deficits in the Dorsal Stream, Improving High-Level Cognitive Functioning, Including Attention, Reading Fluency, and Working Memory.

    PubMed

    Lawton, Teri; Shelley-Tremblay, John

    2017-01-01

    The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12 weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement discrimination training in improving high-level cognitive functions such as attention, reading acquisition and working memory. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways in the dorsal stream is a fundamental cause of dyslexia and of being at risk for reading problems in normal students, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological or language deficits, requiring a paradigm shift from phonologically-based to visually-based treatment of dyslexia. This study shows that visual movement discrimination can be used not only to diagnose dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning.

  7. Training on Movement Figure-Ground Discrimination Remediates Low-Level Visual Timing Deficits in the Dorsal Stream, Improving High-Level Cognitive Functioning, Including Attention, Reading Fluency, and Working Memory

    PubMed Central

    Lawton, Teri; Shelley-Tremblay, John

    2017-01-01

    The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12 weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement discrimination training in improving high-level cognitive functions such as attention, reading acquisition and working memory. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways in the dorsal stream is a fundamental cause of dyslexia and of being at risk for reading problems in normal students, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological or language deficits, requiring a paradigm shift from phonologically-based to visually-based treatment of dyslexia. This study shows that visual movement discrimination can be used not only to diagnose dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:28555097

  8. Saccadic eye movement during spaceflight

    NASA Technical Reports Server (NTRS)

    Uri, John J.; Linder, Barry J.; Moore, Thomas P.; Pool, Sam L.; Thornton, William E.

    1989-01-01

    Saccadic eye movements were studied in six subjects during two Space Shuttle missions. Reaction time, peak velocity and accuracy of horizontal, visually-guided saccades were examined preflight, inflight and postflight. Conventional electro-oculography was used to record eye position, with the subjects responding to pseudo-randomly illuminated targets at 0 deg and + or - 10 deg and 20 deg visual angles. In all subjects, preflight measurements were within normal limits. Reaction time was significantly increased inflight, while peak velocity was significantly decreased. A tendency toward a greater proportion of hypometric saccades inflight was also noted. Possible explanations for these changes and possible correlations with space motion sickness are discussed.

  9. Beyond scene gist: Objects guide search more than scene background.

    PubMed

    Koehler, Kathryn; Eckstein, Miguel P

    2017-06-01

    Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. The effect of different brightness conditions on visually and memory guided saccades.

    PubMed

    Felßberg, Anna-Maria; Dombrowe, Isabel

    2018-01-01

    It is commonly assumed that saccades in the dark are slower than saccades in a lit room. Early studies that investigated this issue using electrooculography (EOG) often compared memory guided saccades in darkness to visually guided saccades in an illuminated room. However, later studies showed that memory guided saccades are generally slower than visually guided saccades. Research on this topic is further complicated by the fact that the different existing eyetracking methods do not necessarily lead to consistent measurements. In the present study, we independently manipulated task (memory guided/visually guided) and screen brightness (dark, medium and light) in an otherwise completely dark room, and measured the peak velocity and the duration of the participants' saccades using a popular pupil-cornea reflection (p-cr) eyetracker (Eyelink 1000). Based on a critical reading of the literature, including a recent study using cornea-reflection (cr) eye tracking, we did not expect any velocity or duration differences between the three brightness conditions. We found that memory guided saccades were generally slower than visually guided saccades. In both tasks, eye movements on a medium and light background were equally fast and had similar durations. However, saccades on the dark background were slower and had shorter durations, even after we corrected for the effect of pupil size changes. This is most likely an artifact of current pupil-based eye tracking. We conclude that the common assumption that saccades in the dark are slower than in the light is probably not true; however, pupil-based eyetrackers tend to underestimate the peak velocity of saccades on very dark backgrounds, creating the impression that it might be. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Figure-ground activity in V1 and guidance of saccadic eye movements.

    PubMed

    Supèr, Hans

    2006-01-01

    Every day we shift our gaze about 150,000 times, mostly without noticing it. The directions of these gaze shifts are not random but are determined by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I will discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculomotor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.

  12. An assessment of auditory-guided locomotion in an obstacle circumvention task.

    PubMed

    Kolarik, Andrew J; Scarfe, Amy C; Moore, Brian C J; Pardhan, Shahina

    2016-06-01

    This study investigated how effectively audition can be used to guide navigation around an obstacle. Ten blindfolded normally sighted participants navigated around a 0.6 × 2 m obstacle while producing self-generated mouth click sounds. Objective movement performance was measured using a Vicon motion capture system. Performance with full vision without generating sound was used as a baseline for comparison. The obstacle's location was varied randomly from trial to trial: it was either straight ahead or 25 cm to the left or right relative to the participant. Although audition provided sufficient information to detect the obstacle and guide participants around it without collision in the majority of trials, buffer space (clearance between the shoulder and obstacle), overall movement times, and number of velocity corrections were significantly (p < 0.05) greater with auditory guidance than visual guidance. Collisions sometimes occurred under auditory guidance, suggesting that audition did not always provide an accurate estimate of the space between the participant and obstacle. Unlike visual guidance, participants did not always walk around the side that afforded the most space during auditory guidance. Mean buffer space was 1.8 times higher under auditory than under visual guidance. Results suggest that sound can be used to generate buffer space when vision is unavailable, allowing navigation around an obstacle without collision in the majority of trials.

  13. Oculomotor Evidence for Top-Down Control following the Initial Saccade

    PubMed Central

    Siebold, Alisha; van Zoest, Wieske; Donk, Mieke

    2011-01-01

    The goal of the current study was to investigate how salience-driven and goal-driven processes unfold during visual search over multiple eye movements. Eye movements were recorded while observers searched for a target, which was located on (Experiment 1) or defined as (Experiment 2) a specific orientation singleton. This singleton could either be the most, medium, or least salient element in the display. Results were analyzed as a function of response time separately for initial and second eye movements. Irrespective of the search task, initial saccades elicited shortly after the onset of the search display were primarily salience-driven whereas initial saccades elicited after approximately 250 ms were completely unaffected by salience. Initial saccades were increasingly guided in line with task requirements with increasing response times. Second saccades were completely unaffected by salience and were consistently goal-driven, irrespective of response time. These results suggest that stimulus-salience affects the visual system only briefly after a visual image enters the brain and has no effect thereafter. PMID:21931603

  14. SU-E-J-211: Design and Study of In-House Software Based Respiratory Motion Monitoring, Controlling and Breath-Hold Device for Gated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shanmugam, Senthilkumar

    Purpose: The purpose of this work was to fabricate an in-house software-based respiratory monitoring, controlling and breath-hold device that guides the patient to maintain a uniform breath hold on request during gated radiotherapy. Methods: The respiratory controlling device consists of a computer, in-house software, video goggles, a highly sensitive distance sensor, mounting systems, a camera, a respiratory signal device, a speaker and a visual indicator. The computer is used to display the patient's respiratory movements with digital as well as analogue respiration indicators during the respiration cycle, and to control, breath-hold and analyze the respiratory movement using the indigenously developed software. Results: Studies were conducted with anthropomorphic phantoms by simulating respiratory motion on the phantoms and recording the respective movements using the respiratory monitoring device. The results show good agreement between the simulated and measured movements. Further studies were conducted on 60 cancer patients with several types of cancers in the thoracic region. The respiratory movement cycles for each fraction of radiotherapy treatment were recorded and compared. Alarm indications are provided in the system to indicate when the patient's breathing movement exceeds the threshold level. This helps the patient maintain a uniform breath hold during the radiotherapy treatment. Our preliminary clinical test results indicate that the device is highly reliable and able to maintain uniform respiratory motion and breath hold during the entire course of gated radiotherapy treatment. Conclusion: An indigenous respiratory monitoring device that guides the patient to maintain a uniform breath hold was fabricated. The alarm feature and the visual waveform indicator in the system guide the patient to breathe normally. The signal from the device can be connected to the radiation unit in the near future to carry out gated radiotherapy treatment.
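
    The threshold-alarm logic described above can be sketched as a minimal check over sensor samples; the function name, millimeter units, and 3 mm threshold below are illustrative assumptions, not specifications of the actual device:

```python
def check_breath_hold(displacements, baseline, threshold_mm=3.0):
    """Return indices of samples where the measured chest-wall displacement
    (mm) deviates from the breath-hold baseline by more than the threshold."""
    return [i for i, d in enumerate(displacements)
            if abs(d - baseline) > threshold_mm]

# hypothetical distance-sensor samples (mm) recorded during a breath hold
samples = [10.1, 10.4, 9.8, 14.2, 10.0]
violations = check_breath_hold(samples, baseline=10.0)
# a non-empty result would trigger the audio alarm and visual indicator
```

    In a real system this check would run on each new sensor sample, with the baseline captured at the start of the breath hold.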

  15. Dissociable Frontal Controls during Visible and Memory-guided Eye-Tracking of Moving Targets

    PubMed Central

    Ding, Jinhong; Powell, David; Jiang, Yang

    2009-01-01

    When tracking visible or occluded moving targets, several frontal regions including the frontal eye fields (FEF), dorsal-lateral prefrontal cortex (DLPFC), and Anterior Cingulate Cortex (ACC) are involved in smooth pursuit eye movements (SPEM). To investigate how these areas play different roles in predicting future locations of moving targets, twelve healthy college students participated in a smooth pursuit task of visible and occluded targets. Their eye movements and brain responses measured by event-related functional MRI were simultaneously recorded. Our results show that different visual cues resulted in time discrepancies between physical and estimated pursuit time only when the moving dot was occluded. Velocity gain was higher during the visible phase than during the occlusion phase. We found bilateral FEF activity associated with eye movements whether moving targets were visible or occluded. However, the DLPFC and ACC showed increased activity when tracking and predicting locations of occluded moving targets, and were suppressed during smooth pursuit of visible targets. When visual cues were increasingly available, less activation in the DLPFC and the ACC was observed. Additionally, there was a significant hemisphere effect in DLPFC, where the right DLPFC showed significantly greater responses than the left when pursuing occluded moving targets. Correlation results revealed that DLPFC, the right DLPFC in particular, communicates more with FEF during tracking of occluded moving targets (from memory). The ACC modulates FEF more during tracking of visible targets (likely related to visual attention). Our results suggest that DLPFC and ACC modulate FEF and cortical networks differentially during visible and memory-guided eye tracking of moving targets. PMID:19434603

  16. Calibration of visually guided reaching is driven by error-corrective learning and internal dynamics.

    PubMed

    Cheng, Sen; Sabes, Philip N

    2007-04-01

    The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for at least 20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalizes to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
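
    The linear dynamical system described above can be sketched as a minimal simulation; the function name and all parameter values below are illustrative assumptions, not the values the authors estimated from data:

```python
import random

def simulate_adaptation(n_trials=500, learning_rate=0.2, decay=0.98,
                        state_noise_sd=0.3, output_noise_sd=0.5,
                        shift_sd=1.0, seed=1):
    """Sketch of a linear state-space model of trial-to-trial reach adaptation.

    On each trial:
      error[t]   = shift[t] + state[t] + output_noise      (perceived reach error)
      state[t+1] = decay * state[t] - learning_rate * error[t] + state_noise
    """
    rng = random.Random(seed)
    state = 0.0
    errors, states = [], []
    for _ in range(n_trials):
        shift = rng.gauss(0.0, shift_sd)                 # random visual feedback shift
        error = shift + state + rng.gauss(0.0, output_noise_sd)
        # error-corrective update, decay toward baseline, accumulated state noise
        state = decay * state - learning_rate * error + rng.gauss(0.0, state_noise_sd)
        errors.append(error)
        states.append(state)
    return errors, states

errors, states = simulate_adaptation()
```

    Because the state noise term enters the state itself, it carries over across trials and induces temporal correlations in the error sequence, whereas the output noise is independent on every trial; this is the distinction the abstract exploits to separate the two noise sources.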

  17. Eye Movements Affect Postural Control in Young and Older Females

    PubMed Central

    Thomas, Neil M.; Bampouras, Theodoros M.; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli shows how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions. PMID:27695412

  18. Eye Movements Affect Postural Control in Young and Older Females.

    PubMed

    Thomas, Neil M; Bampouras, Theodoros M; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli shows how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions.

  19. Visual attention and stability

    PubMed Central

    Mathôt, Sebastiaan; Theeuwes, Jan

    2011-01-01

    In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world. PMID:21242140

  20. First saccadic eye movement reveals persistent attentional guidance by implicit learning

    PubMed Central

    Jiang, Yuhong V.; Won, Bo-Yeong; Swallow, Khena M.

    2014-01-01

    Implicit learning about where a visual search target is likely to appear often speeds up search. However, whether implicit learning guides spatial attention or affects post-search decisional processes remains controversial. Using eye tracking, this study provides compelling evidence that implicit learning guides attention. In a training phase, participants often found the target in a high-frequency, “rich” quadrant of the display. When subsequently tested in a phase during which the target was randomly located, participants were twice as likely to direct the first saccadic eye movement to the previously rich quadrant than to any of the sparse quadrants. The attentional bias persisted for nearly 200 trials after training and was unabated by explicit instructions to distribute attention evenly. We propose that implicit learning guides spatial attention but in a qualitatively different manner than goal-driven attention. PMID:24512610

  1. When viewing natural scenes, do abnormal colors impact on spatial or temporal parameters of eye movements?

    PubMed

    Ho-Phuoc, Tien; Guyader, Nathalie; Landragin, Frédéric; Guérin-Dugué, Anne

    2012-02-03

    Since Treisman's theory, it has been generally accepted that color is an elementary feature that guides eye movements when looking at natural scenes. Hence, most computational models of visual attention predict eye movements using color as an important visual feature. In this paper, using experimental data, we show that color does not affect where observers look when viewing natural scene images. Neither normal nor abnormal colors modified observers' fixation locations when compared to the same scenes in grayscale. Likewise, we did not find any significant difference between the scanpaths under grayscale, color, or abnormal color viewing conditions. However, we observed a decrease in fixation duration for color and abnormal color, and this was particularly true at the beginning of scene exploration. Finally, we found that abnormal color modifies saccade amplitude distribution.

  2. Goal-directed action is automatically biased towards looming motion

    PubMed Central

    Moher, Jeff; Sit, Jonathan; Song, Joo-Hyun

    2014-01-01

    It is known that looming motion can capture attention regardless of an observer’s intentions. Real-world behavior, however, frequently involves not just attentional selection, but selection for action. Thus, it is important to understand the impact of looming motion on goal-directed action to gain a broader perspective on how stimulus properties bias human behavior. We presented participants with a visually-guided reaching task in which they pointed to a target letter presented among non-target distractors. On some trials, one of the pre-masks at the location of the upcoming search objects grew rapidly in size, creating the appearance of a “looming” target or distractor. Even though looming motion did not predict the target location, the time required to reach to the target was shorter when the target loomed compared to when a distractor loomed. Furthermore, reach movement trajectories were pulled towards the location of a looming distractor when one was present, a pull that was greater still when the looming motion was on a collision path with the participant. We also contrast reaching data with data from a similarly designed visual search task requiring keypress responses. This comparison underscores the sensitivity of visually-guided reaching data, as some experimental manipulations, such as looming motion path, affected reach trajectories but not keypress measures. Together, the results demonstrate that looming motion biases visually-guided action regardless of an observer’s current behavioral goals, affecting not only the time required to reach to targets but also the path of the observer’s hand movement itself. PMID:25159287

  3. Interactive exploration of surveillance video through action shot summarization and trajectory visualization.

    PubMed

    Meghdadi, Amir H; Irani, Pourang

    2013-12-01

    We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activities while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through the video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely interact with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks. These generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed and a total of 120 min of surveillance video were recorded (indoor and outdoor locations recording movements of people and vehicles). The time-to-completion of these tasks were compared against a manual fast forward video browsing guided with movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. 
Reports from expert users identify positive aspects of our approach which we summarize in our recommendations for future video visual analytics systems.
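
    The spatial and temporal filtering described above can be sketched as a simple predicate over trajectory points: keep a movement only if it enters a region of interest within a time window. The data layout and names below are illustrative assumptions, not the sViSIT implementation.

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    t: float  # timestamp in seconds
    x: float  # image x-coordinate
    y: float  # image y-coordinate

def in_roi(p, x0, y0, x1, y1):
    """True if the point lies inside the rectangular region of interest."""
    return x0 <= p.x <= x1 and y0 <= p.y <= y1

def filter_movements(tracks, roi, t_start, t_end):
    """Keep only trajectories that enter the ROI within the time window."""
    x0, y0, x1, y1 = roi
    return [
        track for track in tracks
        if any(t_start <= p.t <= t_end and in_roi(p, x0, y0, x1, y1)
               for p in track)
    ]

# Hypothetical example: two tracked objects, only one passes through the ROI.
walker = [TrackPoint(0.0, 10, 10), TrackPoint(1.0, 50, 50)]
car = [TrackPoint(0.5, 200, 200), TrackPoint(1.5, 250, 250)]
hits = filter_movements([walker, car], roi=(40, 40, 60, 60),
                        t_start=0.0, t_end=2.0)
print(len(hits))  # 1: only the walker's track enters the ROI
```

    A real system would apply such a filter to indexed trajectories so the action-shot and timeline views only render matching movements.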

  4. Seeing your way to health: the visual pedagogy of Bess Mensendieck's physical culture system.

    PubMed

    Veder, Robin

    2011-01-01

    This essay examines the images and looking practices central to Bess M. Mensendieck's (c.1866-1959) 'functional exercise' system, as documented in physical culture treatises published in Germany and the United States between 1906 and 1937. Believing that muscular realignment could not occur without seeing how the body worked, Mensendieck taught adult non-athletes to see skeletal alignment and muscular movement in their own and others' bodies. Three levels of looking practices are examined: didactic sequences; penetrating inspection and appreciation of physiological structures; and ideokinetic visual metaphors for guiding movement. With these techniques, Mensendieck's work bridged the body cultures of German Nacktkultur (nudism), American labour efficiency and the emerging physical education profession. This case study demonstrates how sport historians could expand their analyses to include practices of looking as well as questions of visual representation.

  5. Involvement of the ventral premotor cortex in controlling image motion of the hand during performance of a target-capturing task.

    PubMed

    Ochiai, Tetsuji; Mushiake, Hajime; Tanji, Jun

    2005-07-01

    The ventral premotor cortex (PMv) has been implicated in the visual guidance of movement. To examine whether neuronal activity in the PMv is involved in controlling the direction of motion of a visual image of the hand or the actual movement of the hand, we trained a monkey to capture a target that was presented on a video display using the same side of its hand as was displayed on the video display. We found that PMv neurons predominantly exhibited premovement activity that reflected the image motion to be controlled, rather than the physical motion of the hand. We also found that the activity of half of such direction-selective PMv neurons depended on which side (left versus right) of the video image of the hand was used to capture the target. Furthermore, this selectivity for a portion of the hand was not affected by changing the starting position of the hand movement. These findings suggest that PMv neurons play a crucial role in determining which part of the body moves in which direction, at least under conditions in which a visual image of a limb is used to guide limb movements.

  6. Getting a grip on reality: Grasping movements directed to real objects and images rely on dissociable neural representations.

    PubMed

    Freud, Erez; Macdonald, Scott N; Chen, Juan; Quinlan, Derek J; Goodale, Melvyn A; Culham, Jody C

    2018-01-01

    In the current era of touchscreen technology, humans commonly execute visually guided actions directed to two-dimensional (2D) images of objects. Although real, three-dimensional (3D) objects and images of the same objects share a high degree of visual similarity, they differ fundamentally in the actions that can be performed on them. Indeed, previous behavioral studies have suggested that simulated grasping of images relies on different representations than actual grasping of real 3D objects. Yet the neural underpinnings of this phenomenon have not been investigated. Here we used functional magnetic resonance imaging (fMRI) to investigate how brain activation patterns differed for grasping and reaching actions directed toward real 3D objects compared to images. Multivoxel Pattern Analysis (MVPA) revealed that the left anterior intraparietal sulcus (aIPS), a key region for visually guided grasping, discriminates between both the format in which objects were presented (real/image) and the motor task performed on them (grasping/reaching). Interestingly, during action planning, the representations of real 3D objects versus images differed more for grasping movements than reaching movements, likely because grasping real 3D objects involves fine-grained planning and anticipation of the consequences of a real interaction. Importantly, this dissociation was evident in the planning phase, before movement initiation, and was not found in any other regions, including motor and somatosensory cortices. This suggests that the dissociable representations in the left aIPS were not based on haptic, motor or proprioceptive feedback. Together, these findings provide novel evidence that actions, particularly grasping, are affected by the realness of the target objects during planning, perhaps because real targets require a more elaborate forward model based on visual cues to predict the consequences of real manipulation.

  7. Neuronal responses to target onset in oculomotor and somatomotor parietal circuits differ markedly in a choice task.

    PubMed

    Kubanek, J; Wang, C; Snyder, L H

    2013-11-01

    We often look at and sometimes reach for visible targets. Looking at a target is fast and relatively easy. By comparison, reaching for an object is slower and is associated with a larger cost. We hypothesized that, as a result of these differences, abrupt visual onsets may drive the circuits involved in saccade planning more directly and with less intermediate regulation than the circuits involved in reach planning. To test this hypothesis, we recorded discharge activity of neurons in the parietal oculomotor system (area LIP) and in the parietal somatomotor system (area PRR) while monkeys performed a visually guided movement task and a choice task. We found that in the visually guided movement task LIP neurons show a prominent transient response to target onset. PRR neurons also show a transient response, although this response is reduced in amplitude, is delayed, and has a slower rise time compared with LIP. A more striking difference is observed in the choice task. The transient response of PRR neurons is almost completely abolished and replaced with a slow buildup of activity, while the LIP response is merely delayed and reduced in amplitude. Our findings suggest that the oculomotor system is more closely and obligatorily coupled to the visual system, whereas the somatomotor system operates in a more discriminating manner.

  8. Visual cortex activation in kinesthetic guidance of reaching.

    PubMed

    Darling, W G; Seitz, R J; Peltier, S; Tellmann, L; Butler, A J

    2007-06-01

    The purpose of this research was to determine the cortical circuit involved in encoding and controlling kinesthetically guided reaching movements. We used (15)O-butanol positron emission tomography in ten blindfolded able-bodied volunteers in a factorial experiment in which arm (left/right) used to encode target location and to reach back to the remembered location and hemispace of target location (left/right side of midsagittal plane) varied systematically. During encoding of a target the experimenter guided the hand to touch the index fingertip to an external target and then returned the hand to the start location. After a short delay the subject voluntarily moved the same hand back to the remembered target location. SPM99 analysis of the PET data contrasting left versus right hand reaching showed increased (P < 0.05, corrected) neural activity in the sensorimotor cortex, premotor cortex and posterior parietal lobule (PPL) contralateral to the moving hand. Additional neural activation was observed in prefrontal cortex and visual association areas of occipital and parietal lobes contralateral and ipsilateral to the reaching hand. There was no statistically significant effect of target location in left versus right hemispace nor was there an interaction of hand and hemispace effects. Structural equation modeling showed that parietal lobe visual association areas contributed to kinesthetic processing by both hands but occipital lobe visual areas contributed only during dominant hand kinesthetic processing. This visual processing may also involve visualization of kinesthetically guided target location and use of the same network employed to guide reaches to visual targets when reaching to kinesthetic targets. The present work clearly demonstrates a network for kinesthetic processing that includes higher visual processing areas in the PPL for both upper limbs and processing in occipital lobe visual areas for the dominant limb.

  9. The TINS Lecture. The parietal association cortex in depth perception and visual control of hand action.

    PubMed

    Sakata, H; Taira, M; Kusunoki, M; Murata, A; Tanaka, Y

    1997-08-01

    Recent neurophysiological studies in alert monkeys have revealed that the parietal association cortex plays a crucial role in depth perception and visually guided hand movement. The following five classes of parietal neurons covering various aspects of these functions have been identified: (1) depth-selective visual-fixation (VF) neurons of the inferior parietal lobule (IPL), representing egocentric distance; (2) depth-movement sensitive (DMS) neurons of V5A and the ventral intraparietal (VIP) area representing direction of linear movement in 3-D space; (3) depth-rotation-sensitive (RS) neurons of V5A and the posterior parietal (PP) area representing direction of rotary movement in space; (4) visually responsive manipulation-related neurons (visual-dominant or visual-and-motor type) of the anterior intraparietal (AIP) area, representing 3-D shape or orientation (or both) of objects for manipulation; and (5) axis-orientation-selective (AOS) and surface-orientation-selective (SOS) neurons in the caudal intraparietal sulcus (cIPS) sensitive to binocular disparity and representing the 3-D orientation of the longitudinal axes and flat surfaces, respectively. Some AOS and SOS neurons are selective in both orientation and shape. Thus the dorsal visual pathway is divided into at least two subsystems, V5A, PP and VIP areas for motion vision and V6, LIP and cIPS areas for coding position and 3-D features. The cIPS sends the signals of 3-D features of objects to the AIP area, which is reciprocally connected to the ventral premotor (F5) area and plays an essential role in matching hand orientation and shaping with 3-D objects for manipulation.

  10. The Effect of Guided Imagery and Internal Visualization on Learning

    DTIC Science & Technology

    1987-01-01

    existence) of memory traces, and how retrieval cues operate, to name a few. The lack of a single theory or a coherent approach has not deterred movement...and function (Subtask 3) to a 31% gain over the control group for information emphasizing the rote memory of sequential data (Subtask 1). Overall, the

  11. Saccadic Eye Movements in Adults with High-Functioning Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Zalla, Tiziana; Seassau, Magali; Cazalis, Fabienne; Gras, Doriane; Leboyer, Marion

    2018-01-01

    In this study, we examined the accuracy and dynamics of visually guided saccades in 20 adults with autism spectrum disorder, as compared to 20 typically developed adults using the Step/Overlap/Gap paradigms. Performances in participants with autistic spectrum disorder were characterized by preserved Gap/Overlap effect, but reduced gain and peak…

  12. Control of Visually Guided Saccades in Multiple Sclerosis: Disruption to Higher-Order Processes

    ERIC Educational Resources Information Center

    Fielding, Joanne; Kilpatrick, Trevor; Millist, Lynette; White, Owen

    2009-01-01

    Ocular motor abnormalities are a common feature of multiple sclerosis (MS), with more salient deficits reflecting tissue damage within brainstem and cerebellar circuits. However, MS may also result in disruption to higher level or cognitive control processes governing eye movement, including attentional processes that enhance the neural processing…

  13. Learning to Look: Probabilistic Variation and Noise Guide Infants' Eye Movements

    ERIC Educational Resources Information Center

    Tummeltshammer, Kristen Swan; Kirkham, Natasha Z.

    2013-01-01

    Young infants have demonstrated a remarkable sensitivity to probabilistic relations among visual features (Fiser & Aslin, 2002; Kirkham et al., 2002). Previous research has raised important questions regarding the usefulness of statistical learning in an environment filled with variability and noise, such as an infant's natural world. In…

  14. Use of Cognitive and Metacognitive Strategies in Online Search: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Zhou, Mingming; Ren, Jing

    2016-01-01

    This study used eye-tracking technology to track students' eye movements while searching information on the web. The research question guiding this study was "Do students with different search performance levels have different visual attention distributions while searching information online? If yes, what are the patterns for high and low…

  15. Creative Experiences: An Arts Curriculum for Young Children Including Those with Special Needs.

    ERIC Educational Resources Information Center

    Broughton, Belinda

    For use in any classroom or group setting for young children, this arts curriculum guide provides a total of 112 learning activities equally distributed across the areas of creative movement, drama, music, and visual arts. The activities are correlated with the Learning Accomplishment Profile (LAP), a developmental assessment instrument. Because…

  16. Using the Fine Arts to Teach Early Childhood Essential Elements.

    ERIC Educational Resources Information Center

    Education Service Center Region 11, Ft. Worth, TX.

    This extensive curriculum guide provides teachers of young children ages three to six with some specific lesson plans using the fine arts--music, drama, creative movement, and visual arts--to teach the "essential elements" in early childhood education. In addition, systematic, thorough evaluations of a variety of materials, kits, resource and…

  17. Four-dimensional in vivo X-ray microscopy with projection-guided gating

    NASA Astrophysics Data System (ADS)

    Mokso, Rajmund; Schwyn, Daniel A.; Walker, Simon M.; Doube, Michael; Wicklein, Martina; Müller, Tonya; Stampanoni, Marco; Taylor, Graham K.; Krapp, Holger G.

    2015-03-01

    Visualizing fast micrometer-scale internal movements of small animals is a key challenge for functional anatomy, physiology and biomechanics. We combine phase contrast tomographic microscopy (down to 3.3 μm voxel size) with retrospective, projection-based gating (on the order of hundreds of microseconds) to improve the spatiotemporal resolution by an order of magnitude over previous studies. We demonstrate our method by visualizing 20 three-dimensional snapshots through the 150 Hz oscillations of the blowfly flight motor.
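
    Retrospective, projection-based gating of a periodic motion amounts to assigning each projection's acquisition timestamp to a phase bin of the oscillation cycle and reconstructing one volume per bin. The 150 Hz frequency and 20 bins follow the abstract; the sampling rate, function names, and uniform sampling below are illustrative assumptions, not the authors' pipeline.

```python
def phase_bin(t, freq_hz, n_bins):
    """Map an acquisition timestamp to a phase bin of a periodic motion."""
    phase = (t * freq_hz) % 1.0          # fraction of the oscillation cycle
    return int(phase * n_bins) % n_bins  # bin index in [0, n_bins)

def gate_projections(timestamps, freq_hz=150.0, n_bins=20):
    """Group projection indices by oscillation phase for per-bin reconstruction."""
    bins = [[] for _ in range(n_bins)]
    for i, t in enumerate(timestamps):
        bins[phase_bin(t, freq_hz, n_bins)].append(i)
    return bins

# Hypothetical example: projections sampled at 3 kHz across many wingbeat
# cycles (timestamps offset to mid-interval to avoid bin-edge ambiguity).
ts = [(i + 0.5) / 3000.0 for i in range(600)]
bins = gate_projections(ts)
print([len(b) for b in bins[:4]])  # each of the 20 bins collects 30 projections
```

    Each bin's projections would then feed a standard tomographic reconstruction, yielding one 3D snapshot per phase of the cycle.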

  18. The effects of lesions of the superior colliculus on locomotor orientation and the orienting reflex in the rat.

    PubMed

    Goodale, M A; Murison, R C

    1975-05-02

    The effects of bilateral removal of the superior colliculus or visual cortex on visually guided locomotor movements in rats performing a brightness discrimination task were investigated directly with the use of cine film. Rats with collicular lesions showed patterns of locomotion comparable to or more efficient than those of normal animals when approaching one of 5 small doors located at one end of a large open area. In contrast, animals with large but incomplete lesions of visual cortex were distinctly impaired in their visual control of approach responses to the same stimuli. On the other hand, rats with collicular damage showed no orienting reflex or evidence of distraction in the same task when novel visual or auditory stimuli were presented. However, both normal and visual-decorticate rats showed various components of the orienting reflex and disturbance in task performance when the same novel stimuli were presented. These results suggest that although the superior colliculus does not appear to be essential to the visual control of locomotor orientation, this midbrain structure might participate in the mediation of shifts in visual fixation and attention. Visual cortex, while contributing to visuospatial guidance of locomotor movements, might not play a significant role in the control and integration of the orienting reflex.

  19. Design and test of a Microsoft Kinect-based system for delivering adaptive visual feedback to stroke patients during training of upper limb movement.

    PubMed

    Simonsen, Daniel; Popovic, Mirjana B; Spaich, Erika G; Andersen, Ole Kæseler

    2017-11-01

    The present paper describes the design and test of a low-cost Microsoft Kinect-based system for delivering adaptive visual feedback to stroke patients during the execution of an upper limb exercise. Eleven sub-acute stroke patients with varying degrees of upper limb function were recruited. Each subject participated in a control session (repeated twice) and a feedback session (repeated twice). In each session, the subjects were presented with a rectangular pattern displayed on a vertically mounted monitor embedded in the table in front of the patient. The subjects were asked to move a marker inside the rectangular pattern by using their most affected hand. During the feedback session, the thickness of the rectangular pattern was changed according to the performance of the subject, and the color of the marker changed according to its position, thereby guiding the subject's movements. In the control session, the thickness of the rectangular pattern and the color of the marker did not change. The results showed that the movement similarity and smoothness were higher in the feedback session than in the control session, while the duration of the movement was longer. The present study showed that adaptive visual feedback delivered by use of the Kinect sensor can increase the similarity and smoothness of upper limb movement in stroke patients.
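
    Movement smoothness of the kind measured above is commonly quantified with a jerk-based metric; one widely used variant is the log dimensionless jerk, sketched below. The metric choice, names, and example profiles are illustrative assumptions, not necessarily what this study computed.

```python
import math

def log_dimensionless_jerk(velocity, dt):
    """Negated log of the dimensionless squared-jerk integral.

    velocity: 1-D speed profile sampled every dt seconds.
    Values closer to zero indicate smoother movement.
    """
    # Numerical derivatives: acceleration, then jerk.
    accel = [(velocity[i + 1] - velocity[i]) / dt
             for i in range(len(velocity) - 1)]
    jerk = [(accel[i + 1] - accel[i]) / dt for i in range(len(accel) - 1)]
    duration = dt * (len(velocity) - 1)
    v_peak = max(abs(v) for v in velocity)
    # Integrate squared jerk, then normalize to make the measure dimensionless.
    integral = sum(j * j for j in jerk) * dt
    return -math.log(integral * duration ** 3 / v_peak ** 2)

# Hypothetical example: a smooth bell-shaped speed profile vs. a jittery one.
dt = 0.01
smooth = [math.sin(math.pi * i * dt) for i in range(101)]
jittery = [v + 0.05 * (-1) ** i for i, v in enumerate(smooth)]
print(log_dimensionless_jerk(smooth, dt) > log_dimensionless_jerk(jittery, dt))  # True
```

    Because the measure is normalized by duration and peak speed, it lets a slower feedback-session movement still score as smoother than a faster but jerkier control-session movement.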

  20. Truly hybrid interventional MR/X-ray system: investigation of in vivo applications.

    PubMed

    Fahrig, R; Butts, K; Wen, Z; Saunders, R; Kee, S T; Sze, D Y; Daniel, B L; Laerum, F; Pelc, N J

    2001-12-01

    The purpose of this study was to provide in vivo demonstrations of the functionality of a truly hybrid interventional x-ray/magnetic resonance (MR) system. A digital flat-panel x-ray system (1,024 × 1,024 array of 200 μm pixels, 30 frames per second) was integrated into an interventional 0.5-T magnet. The hybrid system is capable of MR and x-ray imaging of the same field of view without patient movement. Two intravascular procedures were performed in a 22-kg porcine model: placement of a transjugular intrahepatic portosystemic shunt (TIPS) (x-ray-guided catheterization of the hepatic vein, MR fluoroscopy-guided portal puncture, and x-ray-guided stent placement) and mock chemoembolization (x-ray-guided subselective catheterization of a renal artery branch and MR evaluation of perfused volume). The resolution and frame rate of the x-ray fluoroscopy images were sufficient to visualize and place devices, including nitinol guidewires (0.016-0.035-inch diameter) and stents and a 2.3-F catheter. Fifth-order branches of the renal artery could be seen. The quality of both real-time (3.5 frames per second) and standard MR images was not affected by the x-ray system. During MR-guided TIPS placement, the trocar and the portal vein could be easily visualized, allowing successful puncture from hepatic to portal vein. Switching back and forth between x-ray and MR imaging modalities without requiring movement of the patient was demonstrated. The integrated nature of the system could be especially beneficial when x-ray and MR image guidance are used iteratively.

  1. Evaluation of the Leap Motion Controller during the performance of visually-guided upper limb movements.

    PubMed

    Niechwiej-Szwedo, Ewa; Gonzalez, David; Nouredanesh, Mina; Tung, James

    2018-01-01

    Kinematic analysis of upper limb reaching provides insight into the central nervous system control of movements. Until recently, kinematic examination of motor control has been limited to studies conducted in traditional research laboratories because motion capture equipment used for data collection is not easily portable and expensive. A recently developed markerless system, the Leap Motion Controller (LMC), is a portable and inexpensive tracking device that allows recording of 3D hand and finger position. The main goal of this study was to assess the concurrent reliability and validity of the LMC as compared to the Optotrak, a criterion-standard motion capture system, for measures of temporal accuracy and peak velocity during the performance of upper limb, visually-guided movements. In experiment 1, 14 participants executed aiming movements to visual targets presented on a computer monitor. Bland-Altman analysis was conducted to assess the validity and limits of agreement for measures of temporal accuracy (movement time, duration of deceleration interval), peak velocity, and spatial accuracy (endpoint accuracy). In addition, a one-sample t-test was used to test the hypothesis that the error difference between measures obtained from Optotrak and LMC is zero. In experiment 2, 15 participants performed a Fitts' type aiming task in order to assess whether the LMC is capable of assessing a well-known speed-accuracy trade-off relationship. Experiment 3 assessed the temporal coordination pattern during the performance of a sequence consisting of a reaching, grasping, and placement task in 15 participants. Results from the t-test showed that the error difference in temporal measures was significantly different from zero. Based on the results from the 3 experiments, the average temporal error in movement time was 40±44 ms, and the error in peak velocity was 0.024±0.103 m/s. 
The limits of agreement between the LMC and Optotrak for spatial accuracy measures ranged from 2 to 5 cm. Although the LMC is a low-cost, highly portable system that could facilitate collection of kinematic data outside of traditional laboratory settings, these temporal and spatial errors may limit its use in some settings.
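
    The Bland-Altman limits of agreement used above to compare the LMC against the Optotrak are computed from the paired differences, assuming those differences are approximately normally distributed. A minimal sketch; the paired values below are invented for illustration and are not the study's data.

```python
import statistics

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement for paired measures."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    # Under approximate normality, 95% of differences fall within bias ± 1.96 SD.
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical movement times (s) measured by two motion-capture systems.
lmc      = [0.52, 0.61, 0.55, 0.70, 0.48]
optotrak = [0.50, 0.58, 0.52, 0.66, 0.47]
bias, (lo, hi) = bland_altman(lmc, optotrak)
print(round(bias, 3))  # 0.026 s mean bias
```

    The reported one-sample t-test then asks whether this bias differs significantly from zero, which the study found it did for the temporal measures.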

  3. Learning optimal eye movements to unusual faces

    PubMed Central

    Peterson, Matthew F.; Eckstein, Miguel P.

    2014-01-01

    Eye movements, which guide the fovea’s high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer’s default face identification eye movement behavior to the new optimal fixation point and the observer’s peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. PMID:24291712

  4. “Left Neglected,” but Only in Far Space: Spatial Biases in Healthy Participants Revealed in a Visually Guided Grasping Task

    PubMed Central

    de Bruin, Natalie; Bryant, Devon C.; Gonzalez, Claudia L. R.

    2014-01-01

    Hemispatial neglect is a common outcome of stroke that is characterized by the inability to orient toward, and attend to stimuli in contralesional space. It is established that hemispatial neglect has a perceptual component, however, the presence and severity of motor impairments is controversial. Establishing the nature of space use and spatial biases during visually guided actions amongst healthy individuals is critical to understanding the presence of visuomotor deficits in patients with neglect. Accordingly, three experiments were conducted to investigate the effect of object spatial location on patterns of grasping. Experiment 1 required right-handed participants to reach and grasp for blocks in order to construct 3D models. The blocks were scattered on a tabletop divided into equal size quadrants: left near, left far, right near, and right far. Identical sets of building blocks were available in each quadrant. Space use was dynamic, with participants initially grasping blocks from right near space and tending to “neglect” left far space until the final stages of the task. Experiment 2 repeated the protocol with left-handed participants. Remarkably, left-handed participants displayed a similar pattern of space use to right-handed participants. In Experiment 3 eye movements were examined to investigate whether “neglect” for grasping in left far reachable space had its origins in attentional biases. It was found that patterns of eye movements mirrored patterns of reach-to-grasp movements. We conclude that there are spatial biases during visually guided grasping, specifically, a tendency to neglect left far reachable space, and that this “neglect” is attentional in origin. 
The results raise the possibility that visuomotor impairments reported among patients with right hemisphere lesions when working in contralesional space may result in part from this inherent tendency to “neglect” left far space irrespective of the presence of unilateral visuospatial neglect. PMID:24478751

  5. Visual and haptic integration in the estimation of softness of deformable objects

    PubMed Central

    Cellini, Cristiano; Kaim, Lukas; Drewing, Knut

    2013-01-01

    Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
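
    The optimal-integration benchmark referred to above is the standard maximum-likelihood cue-combination model, in which each sense is weighted by its reliability (inverse variance). A minimal sketch with made-up numbers; the variable names are illustrative.

```python
def optimal_integration(est_v, var_v, est_h, var_h):
    """Maximum-likelihood fusion of visual and haptic softness estimates.

    Each cue is weighted by its reliability (1 / variance); the fused
    estimate has lower variance than either cue alone.
    """
    r_v, r_h = 1.0 / var_v, 1.0 / var_h
    w_v = r_v / (r_v + r_h)            # visual weight
    fused = w_v * est_v + (1 - w_v) * est_h
    fused_var = 1.0 / (r_v + r_h)
    return fused, w_v, fused_var

# Hypothetical case: haptics is 3x more reliable than vision for softness.
fused, w_v, var = optimal_integration(est_v=10.0, var_v=3.0,
                                      est_h=12.0, var_h=1.0)
print(w_v)  # 0.25: the model predicts a quarter of the weight goes to vision
```

    Against this benchmark, the study's observed visual contribution (∼35%) exceeded the reliability-based prediction, which is what motivates the conclusion that the integration is biased toward vision.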

  6. Sensory signals and neuronal groups involved in guiding the sea-ward motor behavior in turtle hatchlings of Chelonia agassizi

    NASA Astrophysics Data System (ADS)

    Fuentes, A. L.; Camarena, V.; Ochoa, G.; Urrutia, J.; Gutierrez, G.

    2007-05-01

    Turtle hatchlings display sea-ward oriented movements as soon as they emerge from the nest. Although most studies have emphasized the role of visual information in this process, less attention has been paid to other sensory modalities. Here, we evaluated the nature of sensory cues used by turtle hatchlings of Chelonia agassizi to orient their movements towards the ocean. We recorded the time they took to crawl from the nest to the beach front (120 m long) in control conditions and in visually, olfactory and magnetically deprived circumstances. Visually-deprived hatchlings displayed a high degree of disorientation. Olfactory deprivation and magnetic field distortion impaired, but did not abolish, sea-ward oriented movements. With regard to the neuronal mapping experiments, visual deprivation dramatically reduced c-fos expression in the whole brain. Hatchlings with their nares blocked revealed neurons with c-fos expression above control levels principally in the c and d areas, while those subjected to magnetic field distortion had a widespread activation of neurons throughout the brain, predominantly in the dorsal ventricular ridge. The present results support the view that Chelonia agassizi hatchlings use predominantly visual cues to orient their movements towards the sea. Olfactory and magnetic cues may also be used, but their influence on hatchlings' oriented motor behavior is not as clear as it is for vision. This conclusion is supported by the fact that in the absence of olfactory and magnetic cues, the brain turns on the expression of c-fos in neuronal groups that, in the intact hatchling, are not normally involved in accomplishing the task.

  7. Hand shape selection in pantomimed grasping: Interaction between the dorsal and the ventral visual streams and convergence on the ventral premotor area

    PubMed Central

    Makuuchi, Michiru; Someya, Yoshiaki; Ogawa, Seiji; Takayama, Yoshihiro

    2011-01-01

    In visually guided grasping, possible hand shapes are computed from the geometrical features of the object, while prior knowledge about the object and the goal of the action influence both the computation and the selection of the hand shape. We investigated the system dynamics of the human brain for the pantomiming of grasping with two aspects accentuated. One is object recognition, through the use of objects for daily use. The subjects mimed grasping movements appropriate for an object presented in a photograph, using either a precision or a power grip. The other is the selection of grip hand shape. We manipulated the selection demands for the grip hand shape by having the subjects use the same or a different grip type in the second presentation of the identical object. Effective connectivity analysis revealed that increased selection demands enhance the interaction between the anterior intraparietal sulcus (AIP) and the posterior inferior temporal gyrus (pITG), and drive converging causal influences from the AIP, pITG, and dorsolateral prefrontal cortex to the ventral premotor area (PMv). These results suggest that the dorsal and ventral visual areas interact in the pantomiming of grasping, while the PMv integrates the neural information of different regions to select the hand posture. The present study proposes system dynamics in visually guided movement toward meaningful objects, but further research is needed to examine whether the same dynamics also hold in real grasping. PMID:21739528

  8. Biological Motion Preference in Humans at Birth: Role of Dynamic and Configural Properties

    ERIC Educational Resources Information Center

    Bardi, Lara; Regolin, Lucia; Simion, Francesca

    2011-01-01

    The present study addresses the hypothesis that detection of biological motion is an intrinsic capacity of the visual system guided by a non-species-specific predisposition for the pattern of vertebrate movement and investigates the role of global vs. local information in biological motion detection. Two-day-old babies exposed to a biological…

  9. The interaction of Bayesian priors and sensory data and its neural circuit implementation in visually-guided movement

    PubMed Central

    Yang, Jin; Lee, Joonyeol; Lisberger, Stephen G.

    2012-01-01

    Sensory-motor behavior results from a complex interaction of noisy sensory data with priors based on recent experience. By varying the stimulus form and contrast for the initiation of smooth pursuit eye movements in monkeys, we show that visual motion inputs compete with two independent priors: one prior biases eye speed toward zero; the other prior attracts eye direction according to the past several days’ history of target directions. The priors bias the speed and direction of the initiation of pursuit for the weak sensory data provided by the motion of a low-contrast sine wave grating. However, the priors have relatively little effect on pursuit speed and direction when the visual stimulus arises from the coherent motion of a high-contrast patch of dots. For any given stimulus form, the mean and variance of eye speed co-vary in the initiation of pursuit, as expected for signal-dependent noise. This relationship suggests that pursuit implements a trade-off between movement accuracy and variation, reducing both when the sensory signals are noisy. The tradeoff is implemented as a competition of sensory data and priors that follows the rules of Bayesian estimation. Computer simulations show that the priors can be understood as direction specific control of the strength of visual-motor transmission, and can be implemented in a neural-network model that makes testable predictions about the population response in the smooth eye movement region of the frontal eye fields. PMID:23223286
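
The Bayesian competition described above can be illustrated with a Gaussian prior on eye speed centered at zero: the noisier the sensory estimate (e.g., a low-contrast grating), the more the posterior is pulled toward the prior. A minimal sketch under those assumptions (parameter values are illustrative, not the paper's fits):

```python
def posterior_speed(sensed_speed, sigma_sensory, sigma_prior):
    """Posterior-mean eye speed under a zero-mean Gaussian prior.

    Larger sigma_sensory (weaker sensory data, e.g. a low-contrast
    stimulus) shifts the estimate toward the prior mean of zero.
    """
    w_sensory = (1.0 / sigma_sensory ** 2) / (
        1.0 / sigma_sensory ** 2 + 1.0 / sigma_prior ** 2
    )
    return w_sensory * sensed_speed  # prior mean is zero

# A reliable (high-contrast) stimulus dominates the prior,
# while a noisy (low-contrast) stimulus is strongly attenuated.
high_contrast = posterior_speed(10.0, 1.0, 3.0)
low_contrast = posterior_speed(10.0, 6.0, 3.0)
```

The same weighting logic extends to direction, where the prior is centered on the recent history of target directions rather than on zero.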

  10. Independent development of the Reach and the Grasp in spontaneous self-touching by human infants in the first 6 months.

    PubMed

    Thomas, Brittany L; Karl, Jenni M; Whishaw, Ian Q

    2014-01-01

    The Dual Visuomotor Channel Theory proposes that visually guided reaching is a composite of two movements, a Reach that advances the hand to contact the target and a Grasp that shapes the digits for target purchase. The theory is supported by biometric analyses of adult reaching, evolutionary contrasts, and differential developmental patterns for the Reach and the Grasp in visually guided reaching in human infants. The present ethological study asked whether there is evidence for a dissociated development for the Reach and the Grasp in nonvisual hand use in very early infancy. The study documents a rich array of spontaneous self-touching behavior in infants during the first 6 months of life and subjected the Reach movements to an analysis in relation to body target, contact type, and Grasp. Video recordings were made of resting alert infants biweekly from birth to 6 months. In younger infants, self-touching targets included the head and trunk. As infants aged, targets became more caudal and included the hips, then legs, and eventually the feet. In younger infants hand contact was mainly made with the dorsum of the hand, but as infants aged, contacts included palmar contacts and eventually grasp and manipulation contacts with the body and clothes. The relative incidence of caudal contacts and palmar contacts increased concurrently and the two were significantly correlated throughout the period of study. Developmental increases in self-grasping contacts occurred a few weeks after the increase in caudal and palmar contacts. The behavioral and temporal pattern of these spontaneous self-touching movements suggests that the Reach, in which the hand extends to make a palmar self-contact, and the Grasp, in which the digits close and make manipulatory movements, have partially independent developmental profiles. The results additionally suggest that self-touching behavior is an important developmental phase that allows the coordination of the Reach and the Grasp prior to and concurrent with their use under visual guidance.

  11. Independence of Movement Preparation and Movement Initiation.

    PubMed

    Haith, Adrian M; Pakpoor, Jina; Krakauer, John W

    2016-03-09

    Initiating a movement in response to a visual stimulus takes significantly longer than might be expected on the basis of neural transmission delays, but it is unclear why. In a visually guided reaching task, we forced human participants to move at lower-than-normal reaction times to test whether normal reaction times are strictly necessary for accurate movement. We found that participants were, in fact, capable of moving accurately ∼80 ms earlier than their reaction times would suggest. Reaction times thus include a seemingly unnecessary delay that accounts for approximately one-third of their duration. Close examination of participants' behavior in conventional reaction-time conditions revealed that they generated occasional, spontaneous errors in trials in which their reaction time was unusually short. The pattern of these errors could be well accounted for by a simple model in which the timing of movement initiation is independent of the timing of movement preparation. This independence provides an explanation for why reaction times are usually so sluggish: delaying the mean time of movement initiation relative to preparation reduces the risk that a movement will be initiated before it has been appropriately prepared. Our results suggest that preparation and initiation of movement are mechanistically independent and may have a distinct neural basis. The results also demonstrate that, even in strongly stimulus-driven tasks, presentation of a stimulus does not directly trigger a movement. Rather, the stimulus appears to trigger an internal decision whether to make a movement, reflecting a volitional rather than reactive mode of control. Copyright © 2016 the authors.
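
The independence model described above can be sketched as a Monte Carlo simulation: preparation-completion and initiation times are drawn independently, and any movement initiated before its preparation completes comes out as an error. The Gaussian distributions and parameter values below are illustrative assumptions, not the paper's fitted model:

```python
import random

def simulated_error_rate(mean_prep, sd_prep, mean_init, sd_init,
                         n_trials=100_000, seed=1):
    """Fraction of trials initiated before preparation completes.

    Because the two times are sampled independently, delaying mean
    initiation relative to mean preparation lowers the risk of
    launching an unprepared (erroneous) movement.
    """
    rng = random.Random(seed)
    errors = sum(
        rng.gauss(mean_init, sd_init) < rng.gauss(mean_prep, sd_prep)
        for _ in range(n_trials)
    )
    return errors / n_trials
```

Shrinking the initiation delay in the simulation (as the forced low-reaction-time condition does experimentally) raises the error rate, mirroring the occasional spontaneous errors observed at unusually short reaction times.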

  12. Octopus vulgaris uses visual information to determine the location of its arm.

    PubMed

    Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael

    2011-03-22

    Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. A neural model of motion processing and visual navigation by cortical area MST.

    PubMed

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  14. A self-organizing model of perisaccadic visual receptive field dynamics in primate visual and oculomotor system.

    PubMed

    Mender, Bedeho M W; Stringer, Simon M

    2015-01-01

    We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions.

  16. Choice reaching with a LEGO arm robot (CoRLEGO): The motor system guides visual attention to movement-relevant information

    PubMed Central

    Strauss, Soeren; Woodgate, Philip J.W.; Sami, Saber A.; Heinke, Dietmar

    2015-01-01

    We present an extension of a neurobiologically inspired robotics model, termed CoRLEGO (Choice reaching with a LEGO arm robot). CoRLEGO models experimental evidence from choice reaching tasks (CRT). In a CRT, participants are asked to rapidly reach and touch an item presented on the screen. These experiments show that non-target items can divert the reaching movement away from the ideal trajectory to the target item. This is seen as evidence that attentional selection of reaching targets can leak into the motor system. Using competitive target selection and topological representations of motor parameters (dynamic neural fields), CoRLEGO is able to mimic this leakage effect. Furthermore, if the reaching target is determined by its colour oddity (i.e., a green square among red squares or vice versa), the reaching trajectories become straighter with repetitions of the target colour (colour streaks). This colour priming effect can also be modelled with CoRLEGO. The paper also presents an extension of CoRLEGO. This extension mimics findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). The results with the new CoRLEGO suggest that feedback connections from the motor system to the brain’s attentional system (parietal cortex) guide visual attention to extract movement-relevant information (i.e., colour) from visual stimuli. This paper adds to growing evidence that there is a close interaction between the motor system and the attention system. This evidence contradicts the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages. At the end of the paper we discuss CoRLEGO’s predictions and also lessons for neurobiologically inspired robotics emerging from this work. PMID:26667353

  17. Role of the Visuomotor System in On-Line Attenuation of a Premovement Illusory Bias in Grip Aperture

    ERIC Educational Resources Information Center

    Heath, M.; Rival, C.

    2005-01-01

    In this investigation participants formulated a grip aperture (GA) consistent with the size of an object embedded within a Muller-Lyer (ML) figure prior to initiating visually guided grasping movements. The accuracy of the grasping response was emphasized to determine whether or not the visuomotor system might resolve the premovement bias in GA…

  18. More than Just Finding Color: Strategy in Global Visual Search Is Shaped by Learned Target Probabilities

    ERIC Educational Resources Information Center

    Williams, Carrick C.; Pollatsek, Alexander; Cave, Kyle R.; Stroud, Michael J.

    2009-01-01

    In 2 experiments, eye movements were examined during searches in which elements were grouped into four 9-item clusters. The target (a red or blue "T") was known in advance, and each cluster contained different numbers of target-color elements. Rather than color composition of a cluster invariantly guiding the order of search though…

  19. Cat and mouse search: the influence of scene and object analysis on eye movements when targets change locations during search.

    PubMed

    Hillstrom, Anne P; Segabinazi, Joice D; Godwin, Hayward J; Liversedge, Simon P; Benson, Valerie

    2017-02-19

    We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown sequentially either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target, and the scene, now including the target at a likely location. During the participant's first saccade of the search, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more quickly when it appeared in a likely location than when it appeared in an unlikely or implausible location. The findings show that both scene gist and object properties are extracted rapidly, and are used in conjunction to guide saccadic eye movements during visual search. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  20. Getting a grip: different actions and visual guidance of the thumb and finger in precision grasping.

    PubMed

    Melmoth, Dean R; Grant, Simon

    2012-10-01

    We manipulated the visual information available for grasping to examine what is visually guided when subjects get a precision grip on a common class of object (upright cylinders). In Experiment 1, objects (2 sizes) were placed at different eccentricities to vary the relative proximity to the participant's (n = 6) body of their thumb and finger contact positions in the final grip orientations, with vision available throughout or only for movement programming. Thumb trajectories were straighter and less variable than finger paths, and the thumb normally made initial contact with the objects at a relatively invariant landing site, but consistent thumb first-contacts were disrupted without visual guidance. Finger deviations were more affected by the object's properties and increased when vision was unavailable after movement onset. In Experiment 2, participants (n = 12) grasped 'glow-in-the-dark' objects wearing different luminous gloves in which the whole hand was visible or the thumb or the index finger was selectively occluded. Grip closure times were prolonged and thumb first-contacts disrupted when subjects could not see their thumb, whereas occluding the finger resulted in wider grips at contact because this digit remained distant from the object. Results were together consistent with visual feedback guiding the thumb in the period just prior to contacting the object, with the finger more involved in opening the grip and avoiding collision with the opposite contact surface. As people can overtly fixate only one object contact point at a time, we suggest that selecting one digit for online guidance represents an optimal strategy for initial grip placement. Other grasping tasks, in which the finger appears to be used for this purpose, are discussed.

  1. The effects of task difficulty on visual search strategy in virtual 3D displays.

    PubMed

    Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa

    2013-08-28

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios.

  2. A comparative analysis of speed profile models for wrist pointing movements.

    PubMed

    Vaisman, Lev; Dipietro, Laura; Krebs, Hermano Igo

    2013-09-01

    Following two decades of design and clinical research on robot-mediated therapy for the shoulder and elbow, therapeutic robotic devices for other joints are being proposed: several research groups, including ours, have designed robots for the wrist, either to be used as stand-alone devices or in conjunction with shoulder and elbow devices. However, in contrast with robots for the shoulder and elbow, which were able to take advantage of descriptive kinematic models developed in neuroscience over the past 30 years, the design of wrist robot controllers cannot rely on similar prior art: wrist movement kinematics has been largely unexplored. This study aimed at examining speed profiles of fast, visually evoked, visually guided, target-directed human wrist pointing movements. One thousand three hundred ninety-eight (1398) trials were recorded from seven unimpaired subjects who performed center-out flexion/extension and abduction/adduction wrist movements, and were fitted with 19 models previously proposed for describing reaching speed profiles. A nonlinear least-squares optimization procedure extracted parameter sets that minimized the error between experimental and reconstructed data. Models' performances were compared based on their ability to reconstruct experimental data. Results suggest that the support-bounded lognormal is the best model for speed profiles of fast wrist pointing movements. Applications include the design of control algorithms for therapeutic wrist robots and quantitative metrics of motor recovery.
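
The winning model, the support-bounded lognormal, treats the log of the normalized elapsed-time ratio as Gaussian on a bounded interval. The parameterization below is one common form, stated here as an assumption for illustration rather than taken verbatim from the study; its Jacobian factor makes the profile integrate to the movement amplitude D over the support:

```python
import math

def sb_lognormal_speed(t, D, mu, sigma, t1, t2):
    """Support-bounded lognormal speed profile on the interval (t1, t2).

    u = ln((t - t1) / (t2 - t)) is modeled as Gaussian(mu, sigma); the
    Jacobian of that change of variable makes the speed profile
    integrate to the amplitude D over (t1, t2), and the profile is
    zero outside the support.
    """
    if not t1 < t < t2:
        return 0.0
    u = math.log((t - t1) / (t2 - t))
    jacobian = (t2 - t1) / ((t - t1) * (t2 - t))
    gauss = math.exp(-(u - mu) ** 2 / (2.0 * sigma ** 2))
    return D * jacobian * gauss / (sigma * math.sqrt(2.0 * math.pi))
```

Fitting then reduces to nonlinear least squares over (D, mu, sigma, t1, t2) against each measured speed profile, in the spirit of the optimization procedure used in the study.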

  3. Modelling eye movements in a categorical search task

    PubMed Central

    Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris

    2013-01-01

    We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
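
The step from classifier distances to a probability map can be sketched with Platt-style sigmoid scaling of the decision values. The coefficients below are placeholders that would normally be fit on held-out data; this is an illustrative assumption, not TAM's exact calibration:

```python
import math

def platt_probability(distance, a=-1.7, b=0.0):
    """Sigmoid mapping of a decision-boundary distance to p(target).

    Platt scaling: p = 1 / (1 + exp(a * d + b)). With a < 0, pixels
    farther on the target side of the boundary get higher probability.
    """
    return 1.0 / (1.0 + math.exp(a * distance + b))

def probability_map(distance_map):
    """Pixel-by-pixel evidence map for the target category."""
    return [[platt_probability(d) for d in row] for row in distance_map]
```

In the model's terms, fixations can then be prioritized toward the peaks of this evidence map when selecting the next saccade target.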

  4. Predictors of verb-mediated anticipatory eye movements in the visual world.

    PubMed

    Hintz, Florian; Meyer, Antje S; Huettig, Falk

    2017-09-01

    Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we investigated the influence of 5 potential predictors of this behavior: functional associations and general associations between verb and target object, as well as the listeners' production fluency, receptive vocabulary knowledge, and nonverbal intelligence. In 3 eye-tracking experiments, participants looked at sets of 4 objects and listened to sentences where the final word was predictable or not predictable (e.g., "The man peels/draws an apple"). On predictable trials only the target object, but not the distractors, were functionally and associatively related to the verb. In Experiments 1 and 2, objects were presented before the verb was heard. In Experiment 3, participants were given a short preview of the display after the verb was heard. Functional associations and receptive vocabulary were found to be important predictors of verb-mediated anticipatory eye gaze independent of the amount of contextual visual input. General word associations did not and nonverbal intelligence was only a very weak predictor of anticipatory eye movements. Participants' production fluency correlated positively with the likelihood of anticipatory eye movements when participants were given the long but not the short visual display preview. These findings fit best with a pluralistic approach to predictive language processing in which multiple mechanisms, mediating factors, and situational context dynamically interact. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Preliminary study of ergonomic behavior during simulated ultrasound-guided regional anesthesia using a head-mounted display.

    PubMed

    Udani, Ankeet D; Harrison, T Kyle; Howard, Steven K; Kim, T Edward; Brock-Utne, John G; Gaba, David M; Mariano, Edward R

    2012-08-01

    A head-mounted display provides continuous real-time imaging within the practitioner's visual field. We evaluated the feasibility of using head-mounted display technology to improve ergonomics in ultrasound-guided regional anesthesia in a simulated environment. Two anesthesiologists performed an equal number of ultrasound-guided popliteal-sciatic nerve blocks using the head-mounted display on a porcine hindquarter, and an independent observer assessed each practitioner's ergonomics (eg, head turning, arching, eye movements, and needle manipulation) and the overall block quality based on the injectate spread around the target nerve for each procedure. Both practitioners performed their procedures without directly viewing the ultrasound monitor, and neither practitioner showed poor ergonomic behavior. Head-mounted display technology may offer potential advantages during ultrasound-guided regional anesthesia.

  6. For a Child, Life is a Creative Adventure: Supporting Development and Learning through Art, Music, Movement, and Dialogue. A Guide for Parents and Professionals. = Para los ninos, la vida es una aventura creativa: Como estimular el desarrollo y el aprendizaje por medio de las artes visuales, la musica, el movimiento y el dialogo. Guia para padres de familia y profesionales.

    ERIC Educational Resources Information Center

    Cohen, Elena

    Recognizing that creativity facilitates children's learning and development, the Head Start Program Performance Standards require Head Start programs to include opportunities for creative self-expression. This guide with accompanying videotape, both in English- and Spanish- language versions, encourages and assists adults to support children's…

  7. [Attention to speed and guide traffic signs with eye movements].

    PubMed

    Conchillo Jiménez, Ángela; Pérez-Moreno, Elisa; Recarte Goldaracena, Miguel Ángel

    2010-11-01

    The goal of this research is to describe the visual search patterns for diverse traffic signs. Twelve drivers of both genders and different driving experience levels took part in real driving research with an instrumented car equipped with an eye-tracking system. Looking at signs was only weakly related to speed reduction in cases where the actual driving speed was above the limit. Nevertheless, among the people who looked at the sign, the percentage of those who reduced their speed below the limit is greater than that of those who did not look at the sign. Guide traffic signs, particularly those mounted over the road, are glanced at more frequently than speed limit signs, with a glance duration of more than one second, in sequences of more than two consecutive fixations. Implications for driving and the possibilities and limitations of eye movement analysis for traffic sign research are discussed.

  8. Similar prevalence and magnitude of auditory-evoked and visually evoked activity in the frontal eye fields: implications for multisensory motor control.

    PubMed

    Caruso, Valeria C; Pages, Daniel S; Sommer, Marc A; Groh, Jennifer M

    2016-06-01

    Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75% and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals are tailored to roughly match the strength of visual signals present in the FEF, facilitating access to a common motor output pathway. Copyright © 2016 the American Physiological Society.

  9. Neglect assessment as an application of virtual reality.

    PubMed

    Broeren, J; Samuelsson, H; Stibrant-Sunnerhagen, K; Blomstrand, C; Rydmark, M

    2007-09-01

    In this study a cancellation task in a virtual environment was applied to describe the pattern of search and the kinematics of hand movements in eight patients with right hemisphere stroke. Four of these patients had visual neglect and four had recovered clinically from initial symptoms of neglect. The performance of the patients was compared with that of a control group consisting of eight subjects with no history of neurological deficits. Patients with neglect as well as patients clinically recovered from neglect showed aberrant search performance in the virtual reality (VR) task, such as mixed search pattern, repeated target pressures and deviating hand movements. The results indicate that in patients with a right hemispheric stroke, this VR application can provide an additional tool for assessment that can identify small variations otherwise not detectable with standard paper-and-pencil tests. VR technology seems to be well suited for the assessment of visually guided manual exploration in space.

  10. Eye movement difficulties in autism spectrum disorder: implications for implicit contextual learning.

    PubMed

    Kourkoulou, Anastasia; Kuhn, Gustav; Findlay, John M; Leekam, Susan R

    2013-06-01

    It is widely accepted that we use contextual information to guide our gaze when searching for an object. People with autism spectrum disorder (ASD) also utilise contextual information in this way; yet, their visual search in tasks of this kind is much slower compared with people without ASD. The aim of the current study was to explore the reason for this by measuring eye movements. Eye movement analyses revealed that the slowing of visual search was not caused by making a greater number of fixations. Instead, participants in the ASD group were slower to launch their first saccade, and the duration of their fixations was longer. These results indicate that slowed search in ASD in contextual learning tasks is not due to differences in the spatial allocation of attention but to temporal delays in the initial, reflexive orienting of attention and in subsequent focused attention. These results have broader implications for understanding the unusual attention profile of individuals with ASD and how their attention may be shaped by learning. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  11. Planning of visually guided reach-to-grasp movements: inference from reaction time and contingent negative variation (CNV).

    PubMed

    Zaepffel, Manuel; Brochier, Thomas

    2012-01-01

    We performed electroencephalogram (EEG) recording in a precuing task to investigate the planning processes of reach-to-grasp movements in humans. In this reaction time (RT) task, subjects had to reach, grasp, and pull an object as fast as possible after a visual GO signal. We manipulated two parameters: the hand shape for grasping (precision grip or side grip) and the force required to pull the object (high or low). Three seconds before the GO onset, a cue provided advance information about force, grip, both parameters, or no information at all. EEG data show that reach-to-grasp movements generate differences in the topographic distribution of the late contingent negative variation (lCNV) amplitude between the 4 precuing conditions. Together with the RT data, this confirms that two distinct functional networks are involved, with different time courses, in the planning of grip and force. Finally, we outline the composite nature of the lCNV, which might reflect both high- and low-level planning processes. Copyright © 2011 Society for Psychophysiological Research.

  12. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    PubMed

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.
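
    The fixation-saliency comparison described above can be sketched in a standardised form: score each fixation by the model-predicted saliency at its location, normalised by the map's mean and standard deviation (the "normalised scanpath saliency" idea). Scores above zero mean fixations land on more-salient-than-average regions. The tiny map and fixation list below are illustrative assumptions, not the study's data.

    ```python
    # Normalised saliency at fixated locations: values > 0 indicate fixations
    # on regions more salient than the map average. Illustrative sketch only.

    def normalised_saliency_at_fixations(saliency_map, fixations):
        """fixations is a list of (x, y) grid coordinates."""
        values = [v for row in saliency_map for v in row]
        n = len(values)
        mean = sum(values) / n
        var = sum((v - mean) ** 2 for v in values) / n
        sd = var ** 0.5
        return [(saliency_map[y][x] - mean) / sd for x, y in fixations]

    # A 3x3 toy saliency map with one highly salient centre cell.
    salmap = [
        [0.1, 0.2, 0.1],
        [0.2, 0.9, 0.2],
        [0.1, 0.2, 0.1],
    ]
    scores = normalised_saliency_at_fixations(salmap, [(1, 1), (0, 0)])
    print(scores[0] > 0 > scores[1])   # centre is salient, corner is not
    ```

    Averaging such scores over all of a viewer's fixations gives a single number that can be compared between a patient and controls, as in the analysis above.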

  13. MR-eyetracker: a new method for eye movement recording in functional magnetic resonance imaging.

    PubMed

    Kimmig, H; Greenlee, M W; Huethe, F; Mergner, T

    1999-06-01

    We present a method for recording saccadic and pursuit eye movements in the magnetic resonance tomograph designed for visual functional magnetic resonance imaging (fMRI) experiments. To reliably classify brain areas as pursuit or saccade related it is important to carefully measure the actual eye movements. For this purpose, infrared light, created outside the scanner by light-emitting diodes (LEDs), is guided via optic fibers into the head coil and onto the eye of the subject. Two additional fiber optical cables pick up the light reflected by the iris. The illuminating and detecting cables are mounted in a plastic eyepiece that is manually lowered to the level of the eye. By means of differential amplification, we obtain a signal that covaries with the horizontal position of the eye. Calibration of eye position within the scanner yields an estimate of eye position with a resolution of 0.2 degrees at a sampling rate of 1000 Hz. Experiments are presented that employ echoplanar imaging with 12 image planes through visual, parietal and frontal cortex while subjects performed saccadic and pursuit eye movements. The distribution of BOLD (blood oxygen level dependent) responses is shown to depend on the type of eye movement performed. Our method yields high temporal and spatial resolution of the horizontal component of eye movements during fMRI scanning. Since the signal is purely optical, there is no interaction between the eye movement signals and the echoplanar images. This reasonably priced eye tracker can be used to control eye position and monitor eye movements during fMRI.
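
    The differential-amplification scheme described above can be sketched as follows: the two fibre-optic channels yield a normalised difference signal that covaries with horizontal eye position, and recordings at two known fixation angles give a linear calibration to degrees. All names and numbers here are illustrative assumptions, not the authors' implementation.

    ```python
    # Sketch of a differential eye-position estimate: two reflected-light
    # channels are combined into a normalised difference, then a two-point
    # linear calibration maps the raw signal to degrees of eye rotation.

    def differential_signal(left_intensity, right_intensity):
        """Normalised difference of the two reflected-light channels."""
        return (left_intensity - right_intensity) / (left_intensity + right_intensity)

    def calibrate(raw_at_deg_a, deg_a, raw_at_deg_b, deg_b):
        """Return a function mapping raw signal -> eye position (deg),
        built from recordings at two known fixation targets."""
        gain = (deg_b - deg_a) / (raw_at_deg_b - raw_at_deg_a)
        offset = deg_a - gain * raw_at_deg_a
        return lambda raw: gain * raw + offset

    # Assumed example: fixating -9 deg gives raw -0.3, +9 deg gives raw +0.3.
    to_degrees = calibrate(-0.3, -9.0, 0.3, 9.0)
    print(round(to_degrees(0.0), 6))   # central fixation -> 0.0
    print(round(to_degrees(0.1), 6))   # -> 3.0
    ```

    In practice the calibration would be repeated inside the scanner, since head position and eyepiece placement change the gain and offset from session to session.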

  14. Dynamic Stimuli And Active Processing In Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have treated processing as passive and as a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted to newer ones that utilize dynamic definitions of stimulation, and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  15. Temporal and peripheral extraction of contextual cues from scenes during visual search.

    PubMed

    Koehler, Kathryn; Eckstein, Miguel P

    2017-02-01

    Scene context is known to facilitate object recognition and guide visual search, but little work has focused on isolating image-based cues and evaluating their contributions to eye movement guidance and search performance. Here, we explore three types of contextual cues (a co-occurring object, the configuration of other objects, and the superordinate category of background elements) and assess their joint contributions to search performance in the framework of cue-combination and the temporal unfolding of their extraction. We also assess whether observers' ability to extract each contextual cue in the visual periphery is a bottleneck that determines the utilization and contribution of each cue to search guidance and decision accuracy. We find that during the first four fixations of a visual search task observers first utilize the configuration of objects for coarse eye movement guidance and later use co-occurring object information for finer guidance. In the absence of contextual cues, observers were suboptimally biased to report the target object as being absent. The presence of the co-occurring object was the only contextual cue that had a significant effect in reducing decision bias. The early influence of object-based cues on eye movements is corroborated by a clear demonstration of observers' ability to extract object cues up to 16° into the visual periphery. The joint contributions of the cues to decision search accuracy approximates that expected from the combination of statistically independent cues and optimal cue combination. Finally, the lack of utilization and contribution of the background-based contextual cue to search guidance cannot be explained by the availability of the contextual cue in the visual periphery; instead it is related to background cues providing the least inherent information about the precise location of the target in the scene.
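
    The optimal-combination benchmark mentioned above has a simple form: for statistically independent Gaussian cues, the predicted combined sensitivity is the quadrature sum of the single-cue d' values. The numbers below are illustrative, not data from the study.

    ```python
    import math

    # Optimal combination of statistically independent cues: the combined
    # sensitivity is the square root of the sum of squared single-cue d's.

    def combined_dprime(dprimes):
        """Predicted combined d' for statistically independent cues."""
        return math.sqrt(sum(d * d for d in dprimes))

    # Hypothetical single-cue sensitivities, e.g. co-occurring object,
    # object configuration, and background category.
    single_cues = [1.2, 0.9, 0.4]
    print(round(combined_dprime(single_cues), 3))   # -> 1.552
    ```

    Comparing observers' measured search accuracy against this quadrature-sum prediction is one way to test whether cues contribute independently, as the abstract reports.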

  16. Functional neural substrates of posterior cortical atrophy patients.

    PubMed

    Shames, H; Raz, N; Levin, Netta

    2015-07-01

    Posterior cortical atrophy (PCA) is a neurodegenerative syndrome in which the most pronounced pathologic involvement is in the occipito-parietal visual regions. Herein, we aimed to better define the cortical reflection of this unique syndrome using a thorough battery of behavioral and functional MRI (fMRI) tests. Eight PCA patients underwent extensive testing to map their visual deficits. Assessments included visual functions associated with lower and higher components of the cortical hierarchy, as well as dorsal- and ventral-related cortical functions. fMRI was performed on five patients to examine the neuronal substrate of their visual functions. The PCA patient cohort exhibited impairments in stereopsis, saccadic eye movements and higher dorsal stream-related functions, including simultaneous perception, image orientation, figure-from-ground segregation, closure and spatial orientation. In accordance with the behavioral findings, fMRI revealed intact activation in the ventral visual regions of face and object perception while more dorsal aspects of perception, including motion and gestalt perception, revealed impaired patterns of activity. In most of the patients, there was a lack of activity in the word form area, which is known to be linked to reading disorders. Finally, there was evidence of reduced cortical representation of the peripheral visual field, corresponding to the behaviorally assessed peripheral visual deficit. The findings are discussed in the context of networks extending from parietal regions, which mediate navigationally related processing, visually guided actions, eye movement control and working memory, suggesting that damage to these networks might explain the wide range of deficits in PCA patients.

  17. Extensive video-game experience alters cortical networks for complex visuomotor transformations.

    PubMed

    Granek, Joshua A; Gorbet, Diana J; Sergio, Lauren E

    2010-10-01

    Using event-related functional magnetic resonance imaging (fMRI), we examined the effect of video-game experience on the neural control of increasingly complex visuomotor tasks. Previously, skilled individuals have demonstrated the use of a more efficient movement control brain network, including the prefrontal, premotor, primary sensorimotor and parietal cortices. Our results extend and generalize this finding by documenting additional prefrontal cortex activity in experienced video gamers planning for complex eye-hand coordination tasks that are distinct from actual video-game play. These changes in activation between non-gamers and extensive gamers are putatively related to the increased online control and spatial attention required for complex visually guided reaching. These data suggest that the basic cortical network for processing complex visually guided reaching is altered by extensive video-game play. Crown Copyright © 2009. Published by Elsevier Srl. All rights reserved.

  18. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: evidence from eye movements.

    PubMed

    Hout, Michael C; Goldinger, Stephen D

    2012-02-01

    When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: 1) under what conditions does such incidental learning occur? And 2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent.

  19. A direct comparison of short-term audiomotor and visuomotor memory.

    PubMed

    Ward, Amanda M; Loucks, Torrey M; Ofori, Edward; Sosnoff, Jacob J

    2014-04-01

    Audiomotor and visuomotor short-term memory are required for an important variety of skilled movements but have not been compared in a direct manner previously. Audiomotor memory capacity might be greater to accommodate auditory goals that are less directly related to movement outcome than for visually guided tasks. Subjects produced continuous isometric force with the right index finger under auditory and visual feedback. During the first 10 s of each trial, subjects received continuous auditory or visual feedback. For the following 15 s, feedback was removed but the force had to be maintained accurately. An internal effort condition was included to test memory capacity in the same manner but without external feedback. Similar decay times of ~5-6 s were found for vision and audition but the decay time for internal effort was ~4 s. External feedback thus provides an advantage in maintaining a force level after feedback removal, but may not exclude some contribution from a sense of effort. Short-term memory capacity appears longer than certain previous reports but there may not be strong distinctions in capacity across different sensory modalities, at least for isometric force.

  20. Müller-Lyer figures influence the online reorganization of visually guided grasping movements.

    PubMed

    Heath, Matthew; Rival, Christina; Neely, Kristina; Krigolson, Olav

    2006-03-01

    In advance of grasping a visual object embedded within fins-in and fins-out Müller-Lyer (ML) configurations, participants formulated a premovement grip aperture (GA) based on the size of a neutral preview object. Preview objects were smaller, veridical, or larger than the size of the to-be-grasped target object. As a result, premovement GA associated with the small and large preview objects required significant online reorganization to appropriately grasp the target object. We reasoned that such a manipulation would provide an opportunity to examine the extent to which the visuomotor system engages egocentric and/or allocentric visual cues for the online, feedback-based control of action. It was found that the online reorganization of GA was reliably influenced by the ML figures (i.e., from 20 to 80% of movement time), regardless of the size of the preview object, albeit the small and large preview objects elicited more robust illusory effects than the veridical preview object. These results counter the view that online grasping control is mediated by absolute visual information computed with respect to the observer (e.g., Glover in Behav Brain Sci 27:3-78, 2004; Milner and Goodale in The visual brain in action 1995). Instead, the impact of the ML figures suggests a level of interaction between egocentric and allocentric visual cues in online action control.

  1. Monkey primary somatosensory cortical activity during the early reaction time period differs with cues that guide movements

    PubMed Central

    Liu, Yu; Denton, John M.; Nelson, Randall J.

    2009-01-01

    Vibration-related neurons in monkey primary somatosensory cortex (SI) discharge rhythmically when vibratory stimuli are presented. It remains unclear how functional information carried by vibratory inputs is coded in rhythmic neuronal activity. In the present study, we compared neuronal activity during wrist movements in response to two sets of cues. In the first, movements were guided by vibratory cue only (VIB trials). In the second, movements were guided by simultaneous presentation of both vibratory and visual cues (COM trials). SI neurons were recorded extracellularly during both wrist extensions and flexions. Neuronal activity during the instructed delay period (IDP) and the early reaction time period (RTP) were analyzed. A total of 96 cases from 48 neurons (each neuron contributed two cases, one each for extension and flexion) showed significant vibration entrainment during the early RTPs, as determined by circular statistics (Rayleigh test). Of these, 50 cases had cutaneous (CUTA) and 46 had deep (DEEP) receptive fields. The CUTA neurons showed lower firing rates during the IDPs and greater firing rate changes during the early RTPs when compared with the DEEP neurons. The CUTA neurons also demonstrated decreases in activity entrainment during VIB trials when compared with COM trials. For the DEEP neurons, the difference of entrainment between VIB and COM trials was not statistically significant. The results suggest that somatic vibratory input is coded by both the firing rate and the activity entrainment of the CUTA neurons in SI. The results also suggest that when vibratory inputs are required for successful task completion, the activity of the CUTA neurons increases but the entrainment degrades. The DEEP neurons may be tuned before movement initiation for processing information encoded by proprioceptive afferents. PMID:18288475
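
    The entrainment analysis above rests on the Rayleigh test for circular uniformity: each spike is converted to a phase of the vibratory cycle, and a large mean resultant length rejects the hypothesis of uniform (unentrained) phases. A minimal sketch, using the common first-order approximation p ≈ exp(−nR²) and illustrative phase data rather than the recorded spikes:

    ```python
    import math

    # Rayleigh test of circular uniformity: compute the mean resultant
    # length R of the spike phases; z = n * R^2 is the test statistic,
    # and p ~ exp(-z) is the standard first-order approximation.

    def rayleigh_test(phases_rad):
        n = len(phases_rad)
        c = sum(math.cos(p) for p in phases_rad) / n
        s = sum(math.sin(p) for p in phases_rad) / n
        r = math.hypot(c, s)      # mean resultant length, in [0, 1]
        z = n * r * r             # Rayleigh statistic
        p = math.exp(-z)          # approximate p-value
        return r, p

    # Phases tightly clustered around 0 rad -> strong entrainment (small p).
    clustered = [0.1 * k for k in range(-5, 6)]
    r, p = rayleigh_test(clustered)
    print(r > 0.9, p < 0.01)   # -> True True
    ```

    A neuron whose early-RTP spike phases pass this test (small p) would count as significantly vibration-entrained in the sense used by the abstract.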

  3. The effects of task difficulty on visual search strategy in virtual 3D displays

    PubMed Central

    Pomplun, Marc; Garaas, Tyler W.; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an “easy” conjunction search task and a “difficult” shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x−y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the “easy” task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the “difficult” task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539

  4. Eye movements reveal epistemic curiosity in human observers.

    PubMed

    Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline

    2015-12-01

    Saccadic (rapid) eye movements are primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Saccades to remembered targets: the effects of smooth pursuit and illusory stimulus motion

    NASA Technical Reports Server (NTRS)

    Zivotofsky, A. Z.; Rottach, K. G.; Averbuch-Heller, L.; Kori, A. A.; Thomas, C. W.; Dell'Osso, L. F.; Leigh, R. J.

    1996-01-01

    1. Measurements were made in four normal human subjects of the accuracy of saccades to remembered locations of targets that were flashed on a 20 x 30 deg random dot display that was either stationary or moving horizontally and sinusoidally at +/-9 deg at 0.3 Hz. During the interval between the target flash and the memory-guided saccade, the "memory period" (1.4 s), subjects either fixated a stationary spot or pursued a spot moving vertically sinusoidally at +/-9 deg at 0.3 Hz. 2. When saccades were made toward the location of targets previously flashed on a stationary background as subjects fixated the stationary spot, median saccadic error was 0.93 deg horizontally and 1.1 deg vertically. These errors were greater than for saccades to visible targets, which had median values of 0.59 deg horizontally and 0.60 deg vertically. 3. When targets were flashed as subjects smoothly pursued a spot that moved vertically across the stationary background, median saccadic error was 1.1 deg horizontally and 1.2 deg vertically, thus being of similar accuracy to when targets were flashed during fixation. In addition, the vertical component of the memory-guided saccade was much more closely correlated with the "spatial error" than with the "retinal error"; this indicated that, when programming the saccade, the brain had taken into account eye movements that occurred during the memory period. 4. When saccades were made to targets flashed during attempted fixation of a stationary spot on a horizontally moving background, a condition that produces a weak Duncker-type illusion of horizontal movement of the primary target, median saccadic error increased horizontally to 3.2 deg but was 1.1 deg vertically. 5. 
When targets were flashed as subjects smoothly pursued a spot that moved vertically on the horizontally moving background, a condition that induces a strong illusion of diagonal target motion, median saccadic error was 4.0 deg horizontally and 1.5 deg vertically; thus the horizontal error was greater than under any other experimental condition. 6. In most trials, the initial saccade to the remembered target was followed by additional saccades while the subject was still in darkness. These secondary saccades, which were executed in the absence of visual feedback, brought the eye closer to the target location. During paradigms involving horizontal background movement, these corrections were more prominent horizontally than vertically. 7. Further measurements were made in two subjects to determine whether inaccuracy of memory-guided saccades, in the horizontal plane, was due to mislocalization at the time that the target flashed, misrepresentation of the trajectory of the pursuit eye movement during the memory period, or both. 8. The magnitude of the saccadic error, both with and without corrections made in darkness, was mislocalized by approximately 30% of the displacement of the background at the time that the target flashed. The magnitude of the saccadic error also was influenced by net movement of the background during the memory period, corresponding to approximately 25% of net background movement for the initial saccade and approximately 13% for the final eye position achieved in darkness. 9. We formulated simple linear models to test specific hypotheses about which combinations of signals best describe the observed saccadic amplitudes. We tested the possibilities that the brain made an accurate memory of target location and a reliable representation of the eye movement during the memory period, or that one or both of these was corrupted by the illusory visual stimulus. 
Our data were best accounted for by a model in which both the working memory of target location and the internal representation of the horizontal eye movements were corrupted by the illusory visual stimulus. We conclude that extraretinal signals played only a minor role, in comparison with visual estimates of the direction of gaze, in planning eye movements to remembered targets.
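
    The kind of linear model described in the abstract can be sketched as follows: horizontal saccadic error is modelled as a weighted sum of the background displacement at the time of the flash and the net background movement during the memory period, with the weights recovered by ordinary least squares. This is an illustrative reconstruction, not the authors' model; the synthetic data use the ~30% and ~25% figures from the abstract purely as assumed ground truth.

    ```python
    # Two-regressor least-squares fit via the 2x2 normal equations:
    # error ~ w1 * (displacement at flash) + w2 * (net background movement).

    def fit_two_weights(x1, x2, y):
        """Least-squares fit of y ~ w1*x1 + w2*x2 (no intercept)."""
        s11 = sum(a * a for a in x1)
        s22 = sum(b * b for b in x2)
        s12 = sum(a * b for a, b in zip(x1, x2))
        sy1 = sum(a * c for a, c in zip(x1, y))
        sy2 = sum(b * c for b, c in zip(x2, y))
        det = s11 * s22 - s12 * s12
        w1 = (sy1 * s22 - sy2 * s12) / det
        w2 = (s11 * sy2 - s12 * sy1) / det
        return w1, w2

    # Synthetic trials (deg): background displacement at flash, net movement.
    disp_at_flash = [2.0, -1.5, 3.0, 0.5, -2.5]
    net_movement  = [1.0,  2.0, -1.0, 3.0,  0.5]
    # Noiseless errors generated from the assumed 30% / 25% weights.
    errors = [0.30 * a + 0.25 * b for a, b in zip(disp_at_flash, net_movement)]
    w1, w2 = fit_two_weights(disp_at_flash, net_movement, errors)
    print(round(w1, 2), round(w2, 2))   # recovers the assumed weights
    ```

    With real trial data, the fitted weights quantify how strongly each illusory signal corrupts the memory of target location versus the representation of the intervening eye movement.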

  6. Choice reaching with a LEGO arm robot (CoRLEGO): The motor system guides visual attention to movement-relevant information.

    PubMed

    Strauss, Soeren; Woodgate, Philip J W; Sami, Saber A; Heinke, Dietmar

    2015-12-01

    We present an extension of a neurobiologically inspired robotics model, termed CoRLEGO (Choice reaching with a LEGO arm robot). CoRLEGO models experimental evidence from choice reaching tasks (CRT). In a CRT participants are asked to rapidly reach and touch an item presented on the screen. These experiments show that non-target items can divert the reaching movement away from the ideal trajectory to the target item. This is seen as evidence that attentional selection of reaching targets can leak into the motor system. Using competitive target selection and topological representations of motor parameters (dynamic neural fields) CoRLEGO is able to mimic this leakage effect. Furthermore, if the reaching target is determined by its colour oddity (i.e. a green square among red squares or vice versa), the reaching trajectories become straighter with repetitions of the target colour (colour streaks). This colour priming effect can also be modelled with CoRLEGO. The paper also presents an extension of CoRLEGO. This extension mimics findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). The results with the new CoRLEGO suggest that feedback connections from the motor system to the brain's attentional system (parietal cortex) guide visual attention to extract movement-relevant information (i.e. colour) from visual stimuli. This paper adds to growing evidence that there is a close interaction between the motor system and the attention system. This evidence contradicts the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages. At the end of the paper we discuss CoRLEGO's predictions and also lessons for neurobiologically inspired robotics emerging from this work. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  7. Effects of aging on pointing movements under restricted visual feedback conditions.

    PubMed

    Zhang, Liancun; Yang, Jiajia; Inai, Yoshinobu; Huang, Qiang; Wu, Jinglong

    2015-04-01

    The goal of this study was to investigate the effects of aging on pointing movements under restricted visual feedback of hand movement and target location. Fifteen young subjects and fifteen elderly subjects performed pointing movements under four restricted visual feedback conditions: full visual feedback of hand movement and target location (FV), no visual feedback of hand movement or target location (NV), no visual feedback of hand movement (NM) and no visual feedback of target location (NT). This study suggested that Fitts' law applied to pointing movements of the elderly adults under the different visual restriction conditions. Moreover, a significant main effect of aging on movement time was found in all four tasks. Peripheral and central changes may be the key factors underlying these differences. Furthermore, no significant main effects of age on the mean accuracy rate under conditions of restricted visual feedback were found. The present study suggested that the elderly subjects made a very similar use of the available sensory information as young subjects under restricted visual feedback conditions. In addition, during the pointing movement, information about the hand's movement was more useful than information about the target location for young and elderly subjects. Copyright © 2014 Elsevier B.V. All rights reserved.
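
    Fitts' law, which the study above found to hold under restricted visual feedback, states that movement time grows linearly with the index of difficulty ID = log2(2D/W), where D is the movement amplitude and W the target width. A short sketch; the intercept and slope values are arbitrary illustrative assumptions, not coefficients from the study.

    ```python
    import math

    # Fitts' law: MT = a + b * log2(2D / W). The index of difficulty (in
    # bits) rises as targets get farther away (larger D) or smaller (W).

    def index_of_difficulty(amplitude, width):
        return math.log2(2.0 * amplitude / width)

    def movement_time(amplitude, width, a=0.2, b=0.15):
        """Predicted movement time (s) for hypothetical coefficients a, b."""
        return a + b * index_of_difficulty(amplitude, width)

    print(index_of_difficulty(8.0, 1.0))        # ID = 4.0 bits
    print(round(movement_time(8.0, 1.0), 2))    # 0.2 + 0.15*4 = 0.8 s
    ```

    An aging effect of the kind reported above would appear as a larger intercept a and/or slope b for the elderly group, while the linear form of the law itself is preserved.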

  8. Perceptual integration of motion and form information: evidence of parallel-continuous processing.

    PubMed

    von Mühlenen, A; Müller, H J

    2000-04-01

    In three visual search experiments, the processes involved in the efficient detection of motion-form conjunction targets were investigated. Experiment 1 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiments 2 and 3, the effects of display factors influencing the visual (form) quality of moving items (movement speed and item size) were examined. Increasing the speed of the moving items (> 1.5 degrees/sec) facilitated target detection when the task required segregation of the moving from the stationary items. When no segregation was necessary, increasing the movement speed impaired performance: With large display items, motion speed had little effect on target detection, but with small items, search efficiency declined when items moved faster than 1.5 degrees/sec. This pattern indicates that moving nontargets exert a strong effect on the search rate (Experiment 1) because of the loss of visual quality for moving items above a certain movement speed. A parallel-continuous processing account of motion-form conjunction search is proposed, which combines aspects of Guided Search (Wolfe, 1994) and attentional engagement theory (Duncan & Humphreys, 1989).

  9. Visuomotor sensitivity to visual information about surface orientation.

    PubMed

    Knill, David C; Kersten, Daniel

    2004-03-01

    We measured human visuomotor sensitivity to visual information about three-dimensional surface orientation by analyzing movements made to place an object on a slanted surface. We applied linear discriminant analysis to the kinematics of subjects' movements to surfaces with differing slants (angle away from the fronto-parallel) to derive visuomotor d's for discriminating surfaces differing in slant by 5 degrees. Subjects' visuomotor sensitivity to information about surface orientation was very high, with discrimination "thresholds" ranging from 2 to 3 degrees. In a first experiment, we found that subjects performed only slightly better using binocular cues alone than monocular texture cues and that they showed only weak evidence for combining the cues when both were available, suggesting that monocular cues can be just as effective in guiding motor behavior in depth as binocular cues. In a second experiment, we measured subjects' perceptual discrimination and visuomotor thresholds in equivalent stimulus conditions to decompose visuomotor sensitivity into perceptual and motor components. Subjects' visuomotor thresholds were found to be slightly greater than their perceptual thresholds for a range of memory delays, from 1 to 3 s. The data were consistent with a model in which perceptual noise increases with increasing delay between stimulus presentation and movement initiation, but motor noise remains constant. This result suggests that visuomotor and perceptual systems rely on the same visual estimates of surface slant for memory delays ranging from 1 to 3 s.
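
The visuomotor d' described above comes from projecting movement kinematics onto a linear discriminant axis and comparing the resulting score distributions for the two slants. A hypothetical sketch of the final step, assuming Gaussian, equal-variance score distributions (this is not the authors' full kinematic analysis):

```python
from statistics import mean, variance

def dprime_from_scores(scores_a, scores_b):
    """Equal-variance d' between two sets of 1-D discriminant scores.

    d' = (mean_b - mean_a) / pooled_std. In the study, each score would
    be a movement's projection onto the discriminant axis; here the
    scores are passed in directly.
    """
    pooled_var = (variance(scores_a) + variance(scores_b)) / 2
    return (mean(scores_b) - mean(scores_a)) / pooled_var ** 0.5

# Hypothetical discriminant scores for movements to two surfaces
# differing in slant by 5 degrees:
d = dprime_from_scores([0.1, -0.2, 0.3, 0.0, -0.1],
                       [1.0, 0.7, 1.2, 0.9, 0.8])
```

A "threshold" in the paper's sense would then be the slant difference at which this d' reaches a criterion value (e.g. d' = 1).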

  10. Effects of Anisometropic Amblyopia on Visuomotor Behavior, Part 2: Visually Guided Reaching

    PubMed Central

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C.; Chandrakumar, Manokaraananthan; Hirji, Zahra; Crawford, J. Douglas; Wong, Agnes M. F.

    2016-01-01

    Purpose: The effects of impaired spatiotemporal vision in amblyopia on visuomotor skills have rarely been explored in detail. The goal of this study was to examine the influences of amblyopia on visually guided reaching. Methods: Fourteen patients with anisometropic amblyopia and 14 control subjects were recruited. Participants executed reach-to-touch movements toward targets presented randomly 5° or 10° to the left or right of central fixation in three viewing conditions: binocular, monocular amblyopic eye, and monocular fellow eye viewing (left and right monocular viewing for control subjects). Visual feedback of the target was removed on 50% of the trials at the initiation of reaching. Results: Reaching accuracy was comparable between patients and control subjects during all three viewing conditions. Patients’ reaching responses were slightly less precise during amblyopic eye viewing, but their precision was normal during binocular or fellow eye viewing. Reaching reaction time was not affected by amblyopia. The duration of the acceleration phase was longer in patients than in control subjects under all viewing conditions, whereas the duration of the deceleration phase was unaffected. Peak acceleration and peak velocity were also reduced in patients. Conclusions: Amblyopia affects both the programming and the execution of visually guided reaching. The increased duration of the acceleration phase, as well as the reduced peak acceleration and peak velocity, might reflect a strategy or adaptation of feedforward/feedback control of the visuomotor system to compensate for degraded spatiotemporal vision in amblyopia, allowing patients to optimize their reaching performance. PMID:21051723

  11. The Dorsal Visual System Predicts Future and Remembers Past Eye Position

    PubMed Central

    Morris, Adam P.; Bremmer, Frank; Krekelberg, Bart

    2016-01-01

    Eye movements are essential to primate vision but introduce potentially disruptive displacements of the retinal image. To maintain stable vision, the brain is thought to rely on neurons that carry both visual signals and information about the current direction of gaze in their firing rates. We have shown previously that these neurons provide an accurate representation of eye position during fixation, but whether they are updated fast enough during saccadic eye movements to support real-time vision remains controversial. Here we show that not only do these neurons carry a fast and accurate eye-position signal, but also that they support in parallel a range of time-lagged variants, including predictive and postdictive signals. We recorded extracellular activity in four areas of the macaque dorsal visual cortex during a saccade task, including the lateral and ventral intraparietal areas (LIP, VIP), and the middle temporal (MT) and medial superior temporal (MST) areas. As reported previously, neurons showed tonic eye-position-related activity during fixation. In addition, they showed a variety of transient changes in activity around the time of saccades, including relative suppression, enhancement, and pre-saccadic bursts for one saccade direction over another. We show that a hypothetical neuron that pools this rich population activity through a weighted sum can produce an output that mimics the true spatiotemporal dynamics of the eye. Further, with different pooling weights, this downstream eye position signal (EPS) could be updated long before (<100 ms) or after (<200 ms) an eye movement. The results suggest a flexible coding scheme in which downstream computations have access to past, current, and future eye positions simultaneously, providing a basis for visual stability and delay-free visually-guided behavior. PMID:26941617

  12. No Evidence for a Saccadic Range Effect for Visually Guided and Memory-Guided Saccades in Simple Saccade-Targeting Tasks

    PubMed Central

    Vitu, Françoise; Engbert, Ralf; Kliegl, Reinhold

    2016-01-01

    Saccades to single targets in peripheral vision are typically characterized by an undershoot bias. Putting this bias to a test, Kapoula [1] used a paradigm in which observers were presented with two different sets of target eccentricities that partially overlapped each other. Her data were suggestive of a saccadic range effect (SRE): There was a tendency for saccades to overshoot close targets and undershoot far targets in a block, suggesting that there was a response bias towards the center of eccentricities in a given block. Our Experiment 1 was a close replication of the original study by Kapoula [1]. In addition, we tested whether the SRE is sensitive to top-down requirements associated with the task, and we also varied the target presentation duration. In Experiments 1 and 2, we expected to replicate the SRE for a visual discrimination task. The simple visual saccade-targeting task in Experiment 3, entailing minimal top-down influence, was expected to elicit a weaker SRE. Voluntary saccades to remembered target locations in Experiment 4 were expected to elicit the strongest SRE. Contrary to these predictions, we did not observe an SRE in any of the tasks. Our findings complement the results reported by Gillen et al. [2] who failed to find the effect in a saccade-targeting task with a very brief target presentation. Together, these results suggest that unlike arm movements, saccadic eye movements are not biased towards making saccades of a constant, optimal amplitude for the task. PMID:27658191

  13. Move faster, think later: Women who play action video games have quicker visually-guided responses with later onset visuomotor-related brain activity.

    PubMed

    Gorbet, Diana J; Sergio, Lauren E

    2018-01-01

    A history of action video game (AVG) playing is associated with improvements in several visuospatial and attention-related skills and these improvements may be transferable to unrelated tasks. These facts make video games a potential medium for skill-training and rehabilitation. However, examinations of the neural correlates underlying these observations are almost non-existent in the visuomotor system. Further, the vast majority of studies on the effects of a history of AVG play have been done using almost exclusively male participants. Therefore, to begin to fill these gaps in the literature, we present findings from two experiments. In the first, we use functional MRI to examine brain activity in experienced, female AVG players during visually-guided reaching. In the second, we examine the kinematics of visually-guided reaching in this population. Imaging data demonstrate that relative to women who do not play, AVG players have less motor-related preparatory activity in the cuneus, middle occipital gyrus, and cerebellum. This decrease is correlated with estimates of time spent playing. Further, these correlations are strongest during the performance of a visuomotor mapping that spatially dissociates eye and arm movements. However, further examinations of the full time-course of visuomotor-related activity in the AVG players revealed that the decreased activity during motor preparation likely results from a later onset of activity in AVG players, which occurs closer to beginning motor execution relative to the non-playing group. Further, the data presented here suggest that this later onset of preparatory activity represents greater neural efficiency that is associated with faster visually-guided responses.

  14. Move faster, think later: Women who play action video games have quicker visually-guided responses with later onset visuomotor-related brain activity

    PubMed Central

    Gorbet, Diana J.; Sergio, Lauren E.

    2018-01-01

    A history of action video game (AVG) playing is associated with improvements in several visuospatial and attention-related skills and these improvements may be transferable to unrelated tasks. These facts make video games a potential medium for skill-training and rehabilitation. However, examinations of the neural correlates underlying these observations are almost non-existent in the visuomotor system. Further, the vast majority of studies on the effects of a history of AVG play have been done using almost exclusively male participants. Therefore, to begin to fill these gaps in the literature, we present findings from two experiments. In the first, we use functional MRI to examine brain activity in experienced, female AVG players during visually-guided reaching. In the second, we examine the kinematics of visually-guided reaching in this population. Imaging data demonstrate that relative to women who do not play, AVG players have less motor-related preparatory activity in the cuneus, middle occipital gyrus, and cerebellum. This decrease is correlated with estimates of time spent playing. Further, these correlations are strongest during the performance of a visuomotor mapping that spatially dissociates eye and arm movements. However, further examinations of the full time-course of visuomotor-related activity in the AVG players revealed that the decreased activity during motor preparation likely results from a later onset of activity in AVG players, which occurs closer to beginning motor execution relative to the non-playing group. Further, the data presented here suggest that this later onset of preparatory activity represents greater neural efficiency that is associated with faster visually-guided responses. PMID:29364891

  15. Computational motor control: feedback and accuracy.

    PubMed

    Guigon, Emmanuel; Baraduc, Pierre; Desmurget, Michel

    2008-02-01

    Speed/accuracy trade-off is a ubiquitous phenomenon in motor behaviour, which has been ascribed to the presence of signal-dependent noise (SDN) in motor commands. Although this explanation can provide a quantitative account of many aspects of motor variability, including Fitts' law, the fact that this law is frequently violated, e.g. during the acquisition of new motor skills, remains unexplained. Here, we describe a principled approach to the influence of noise on motor behaviour, in which motor variability results from the interplay between sensory and motor execution noises in an optimal feedback-controlled system. In this framework, we first show that Fitts' law arises due to signal-dependent motor noise (SDN(m)) when sensory (proprioceptive) noise is low, e.g. under visual feedback. Then we show that the terminal variability of non-visually guided movement can be explained by the presence of signal-dependent proprioceptive noise. Finally, we show that movement accuracy can be controlled by opposite changes in signal-dependent sensory (SDN(s)) and SDN(m), a phenomenon that could be ascribed to muscular co-contraction. As the model also explains kinematics, kinetics, muscular and neural characteristics of reaching movements, it provides a unified framework to address motor variability.
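
The core ingredient of the account above is signal-dependent noise: motor commands are corrupted by noise whose standard deviation scales with command magnitude, so larger (faster) movements end with more scatter. A hypothetical simulation of that ingredient alone (not the authors' optimal feedback-control model):

```python
import random
from statistics import stdev

def endpoint_scatter(amplitude, n_trials=20_000, noise_coeff=0.05, seed=1):
    """Endpoint standard deviation under signal-dependent motor noise (SDN).

    Each command is corrupted by zero-mean Gaussian noise whose standard
    deviation scales with the command amplitude, so larger commands
    produce proportionally larger endpoint variability -- the basic
    mechanism behind the speed/accuracy trade-off.
    """
    rng = random.Random(seed)
    endpoints = [amplitude * (1 + noise_coeff * rng.gauss(0, 1))
                 for _ in range(n_trials)]
    return stdev(endpoints)

small = endpoint_scatter(10.0)  # scatter near 10 * 0.05 = 0.5
large = endpoint_scatter(40.0)  # same relative noise, ~4x the scatter
```

In the full model, this motor noise interacts with signal-dependent sensory noise inside a feedback loop, which is what lets the authors reproduce both Fitts' law and its violations.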

  16. Independent digit movements and precision grip patterns in 1-5-month-old human infants: hand-babbling, including vacuous then self-directed hand and digit movements, precedes targeted reaching.

    PubMed

    Wallace, Patricia S; Whishaw, Ian Q

    2003-01-01

    Previous work has described human reflexive grasp patterns in early infancy and visually guided reaching and grasping in late infancy. There has been no examination of hand movements in the intervening period. This was the purpose of the present study. We video recorded the spontaneous hand and digit movements made by alert infants over their first 5 months of age. Over this period, spontaneous hand and digit movements developed from fists to almost continuous, vacuous movements and then to self-directed grasping movements. Amongst the many hand and digit movements observed, four grasping patterns emerged during this period: fists, pre-precision grips associated with numerous digit postures, precision grips including the pincer grasp, and self-directed grasps. The finding that a wide range of independent digit movements and grasp patterns are displayed spontaneously by infants within their first 5 months of age is discussed in relation to the development of the motor system, including the suggestion that direct connections of the pyramidal tract are functional relatively early in infancy. It is also suggested that hand babbling, consisting of first vacuous and then self-directed movements, is preparatory to targeted reaching.

  17. How (and why) the visual control of action differs from visual perception

    PubMed Central

    Goodale, Melvyn A.

    2014-01-01

    Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral ‘perceptual’ stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal ‘action’ stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal—one that we are unaware of—is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions. PMID:24789899

  18. Limitations of gaze transfer: without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do.

    PubMed

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian

    2014-10-01

    Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Behavioural benefits of multisensory processing in ferrets.

    PubMed

    Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R

    2017-01-01

    Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
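
The race model inequality analysis mentioned above (Miller, 1982) tests whether multisensory reaction times are faster than any race between unisensory processes could allow: the race model bounds the multisensory CDF by F_AV(t) <= F_A(t) + F_V(t). A minimal sketch of that test on simulated reaction times (illustrative values, not the ferret data):

```python
import random
from bisect import bisect_right

def ecdf(rts, t):
    """Empirical CDF of a sorted list of reaction times, evaluated at t."""
    return bisect_right(rts, t) / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t):
    """Miller's race model inequality at time t.

    Returns F_AV(t) - min(F_A(t) + F_V(t), 1). A positive value means
    the race-model bound is violated, implying neural integration
    rather than mere probability summation.
    """
    rt_av, rt_a, rt_v = sorted(rt_av), sorted(rt_a), sorted(rt_v)
    bound = min(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return ecdf(rt_av, t) - bound

rng = random.Random(2)
rt_av = [rng.gauss(250, 40) for _ in range(500)]  # faster multisensory RTs
rt_a = [rng.gauss(320, 50) for _ in range(500)]
rt_v = [rng.gauss(340, 50) for _ in range(500)]
violation_at_250 = race_model_violation(rt_av, rt_a, rt_v, 250.0)
```

In practice the inequality is evaluated across the fast quantiles of the RT distributions; violations at any of them argue for integration, while RTs consistent with the bound are compatible with probability summation, the pattern the ferret head-orienting data showed.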

  20. Visual rehabilitation: visual scanning, multisensory stimulation and vision restoration trainings

    PubMed Central

    Dundon, Neil M.; Bertini, Caterina; Làdavas, Elisabetta; Sabel, Bernhard A.; Gall, Carolin

    2015-01-01

    Neuropsychological training methods of visual rehabilitation for homonymous vision loss caused by postchiasmatic damage fall into two fundamental paradigms: “compensation” and “restoration”. Existing methods can be classified into three groups: Visual Scanning Training (VST), Audio-Visual Scanning Training (AViST) and Vision Restoration Training (VRT). VST and AViST aim at compensating vision loss by training eye scanning movements, whereas VRT aims at improving lost vision by activating residual visual functions by training light detection and discrimination of visual stimuli. This review discusses the rationale underlying these paradigms and summarizes the available evidence with respect to treatment efficacy. The issues raised in our review should help guide clinical care and stimulate new ideas for future research uncovering the underlying neural correlates of the different treatment paradigms. We propose that both local “within-system” interactions (i.e., relying on plasticity within peri-lesional spared tissue) and changes in more global “between-system” networks (i.e., recruiting alternative visual pathways) contribute to both vision restoration and compensatory rehabilitation, which ultimately have implications for the rehabilitation of cognitive functions. PMID:26283935

  1. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search.

    PubMed

    Hout, Michael C; Goldinger, Stephen D

    2015-01-01

    When people look for things in the environment, they use target templates-mental representations of the objects they are attempting to locate-to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.

  2. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task.

    PubMed

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-16

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their pre-bounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.

  3. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-01

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their pre-bounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time. PMID:23325347

  4. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2014-01-01

    When people look for things in the environment, they use target templates—mental representations of the objects they are attempting to locate—to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers’ templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search. PMID:25214306

  5. Attention, Intention, and Priority in the Parietal Lobe

    PubMed Central

    Bisley, James W.; Goldberg, Michael E.

    2013-01-01

    For many years there has been a debate about the role of the parietal lobe in the generation of behavior. Does it generate movement plans (intention) or choose objects in the environment for further processing? To answer this, we focus on the lateral intraparietal area (LIP), an area that has been shown to play independent roles in target selection for saccades and the generation of visual attention. Based on results from a variety of tasks, we propose that LIP acts as a priority map in which objects are represented by activity proportional to their behavioral priority. We present evidence to show that the priority map combines bottom-up inputs like a rapid visual response with an array of top-down signals like a saccade plan. The spatial location representing the peak of the map is used by the oculomotor system to target saccades and by the visual system to guide visual attention. PMID:20192813

  6. Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments.

    PubMed

    Andrews, T J; Coppola, D M

    1999-08-01

    Eye position was recorded in different viewing conditions to assess whether the temporal and spatial characteristics of saccadic eye movements in different individuals are idiosyncratic. Our aim was to determine the degree to which oculomotor control is based on endogenous factors. A total of 15 naive subjects viewed five visual environments: (1) The absence of visual stimulation (i.e. a dark room); (2) a repetitive visual environment (i.e. simple textured patterns); (3) a complex natural scene; (4) a visual search task; and (5) reading text. Although differences in visual environment had significant effects on eye movements, idiosyncrasies were also apparent. For example, the mean fixation duration and size of an individual's saccadic eye movements when passively viewing a complex natural scene covaried significantly with those same parameters in the absence of visual stimulation and in a repetitive visual environment. In contrast, an individual's spatio-temporal characteristics of eye movements during active tasks such as reading text or visual search covaried together, but did not correlate with the pattern of eye movements detected when viewing a natural scene, simple patterns or in the dark. These idiosyncratic patterns of eye movements in normal viewing reveal an endogenous influence on oculomotor control. The independent covariance of eye movements during different visual tasks shows that saccadic eye movements during active tasks like reading or visual search differ from those engaged during the passive inspection of visual scenes.

  7. A kinematic analysis of visually-guided movement in Williams syndrome.

    PubMed

    Hocking, Darren R; Rinehart, Nicole J; McGinley, Jennifer L; Moss, Simon A; Bradshaw, John L

    2011-02-15

    Previous studies have reported that people with the neurodevelopmental disorder Williams syndrome exhibit difficulties with visuomotor control. In the current study, we examined the extent to which visuomotor deficits were associated with movement planning or feedback-based on-line control. We used a variant of the Fitts' reciprocal aiming task on a computerized touchscreen in adults with WS, IQ-matched individuals with Down syndrome (DS), and typically developing controls. By manipulating task difficulty as a function of both target size and amplitude, we were able to vary the requirements for accuracy to examine processes associated with dorsal visual stream and cerebellar functioning. Although a greater increase in movement time as a function of task difficulty was observed in the two clinical groups with WS and DS, a greater magnitude in the late kinematic components of movement, specifically time after peak velocity, was revealed in the WS group during increased demands for accuracy. In contrast, the DS group showed a greater speed-accuracy trade-off with significantly reduced and more variable endpoint accuracy, which may be associated with cerebellar deficits. In addition, the WS group spent more time stationary in the target when task-related features reflected a higher level of difficulty, suggestive of specific deficits in movement planning. Our results indicate that the visuomotor coordination deficits in WS may reflect known impairments of the dorsal stream, but may also indicate a role for the cerebellum in dynamic feed-forward motor control. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.

    PubMed

    Coco, Moreno I; Malcolm, George L; Keller, Frank

    2014-01-01

    An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence require longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

  9. The influence of stimulus format on drawing—a functional imaging study of decision making in portrait drawing

    PubMed Central

    Miall, R.C.; Nam, Se-Ho; Tchalenko, J.

    2014-01-01

    To copy a natural visual image as a line drawing, visual identification and extraction of features in the image must be guided by top-down decisions, and is usually influenced by prior knowledge. In parallel with other behavioral studies testing the relationship between eye and hand movements when drawing, we report here a functional brain imaging study in which we compared drawing of faces and abstract objects: the former can be strongly guided by prior knowledge, the latter less so. To manipulate the difficulty in extracting features to be drawn, each original image was presented in four formats including high contrast line drawings and silhouettes, and as high and low contrast photographic images. We confirmed the detailed eye–hand interaction measures reported in our other behavioral studies by using in-scanner eye-tracking and recording of pen movements with a touch screen. We also show that the brain activation pattern reflects the changes in presentation formats. In particular, by identifying the ventral and lateral occipital areas that were more highly activated during drawing of faces than abstract objects, we found a systematic increase in differential activation for the face-drawing condition, as the presentation format made the decisions more challenging. This study therefore supports theoretical models of how prior knowledge may influence perception in untrained participants, and lead to experience-driven perceptual modulation by trained artists. PMID:25128710

  10. Impaired Oculomotor Behavior of Children with Developmental Dyslexia in Antisaccades and Predictive Saccades Tasks

    PubMed Central

    Lukasova, Katerina; Silva, Isadora P.; Macedo, Elizeu C.

    2016-01-01

    Analysis of eye movement patterns during tracking tasks represents a potential way to identify differences in the cognitive processing and motor mechanisms underlying reading in dyslexic children before the occurrence of school failure. The current study aimed to evaluate the pattern of eye movements in antisaccades, predictive saccades and visually guided saccades in typical readers and readers with developmental dyslexia. The study included 30 children (age M = 11; SD = 1.67), 15 diagnosed with developmental dyslexia (DG) and 15 regular readers (CG), matched by age, gender and school grade. Cognitive assessment was performed prior to the eye-tracking task during which both eyes were registered using the Tobii® 1750 eye-tracking device. The results demonstrated a lower correct antisaccades rate in dyslexic children compared to the controls (p < 0.001, DG = 25%, CG = 37%). Dyslexic children also made fewer saccades with predictive latencies (p < 0.001, DG = 34%, CG = 46%; predictive latency defined as −300 to 120 ms relative to target onset). No between-group difference was found for visually guided saccades. In this task, both groups showed shorter latency for right-side targets. The results indicated altered oculomotor behavior in dyslexic children, which has been reported in previous studies. We extend these findings by demonstrating impaired implicit learning of target's time/position patterns in dyslexic children. PMID:27445945

  11. Unfolding Visual Lexical Decision in Time

    PubMed Central

    Barca, Laura; Pezzulo, Giovanni

    2012-01-01

    Visual lexical decision is a classical paradigm in psycholinguistics, and numerous studies have assessed the so-called “lexicality effect” (i.e., better performance with lexical than non-lexical stimuli). Far less is known about the dynamics of choice, because many studies measured overall reaction times, which are not informative about underlying processes. To unfold visual lexical decision in (over) time, we measured participants' hand movements toward one of two item alternatives by recording the streaming x,y coordinates of the computer mouse. Participants categorized four kinds of stimuli as “lexical” or “non-lexical”: high- and low-frequency words, pseudowords, and letter strings. Spatial attraction toward the opposite category was present for low-frequency words and pseudowords. Increasing the ambiguity of the stimuli led to greater movement complexity and trajectory attraction to competitors, whereas no such effect was present for high-frequency words and letter strings. Results fit well with dynamic models of perceptual decision-making, which describe the process as a competition between alternatives guided by the continuous accumulation of evidence. More broadly, our results point to a key role of statistical decision theory in studying linguistic processing in terms of dynamic and non-modular mechanisms. PMID:22563419

  12. Incidental learning speeds visual search by lowering response thresholds, not by improving efficiency: Evidence from eye movements

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2011-01-01

    When observers search for a target object, they incidentally learn the identities and locations of “background” objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays (Hout & Goldinger, 2010). Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory; Wolfe, 2007). In the current study, we asked two questions: (1) under what conditions does such incidental learning occur, and (2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated non-target objects. Across conditions, the consistency of search sets and spatial layouts was manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent. PMID:21574743
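    The lowered-threshold account can be illustrated with a toy evidence-accumulation model: reducing the decision bound speeds responses even when the rate of evidence accumulation (search efficiency) is held fixed. This is a hypothetical sketch, not the authors' model:

```python
import random

def search_decision_steps(drift=1.0, threshold=10.0, noise=0.5, dt=0.01):
    """Accumulate noisy evidence until it reaches the bound; return step count."""
    rng = random.Random(0)  # fixed seed so both conditions see the same noise
    evidence, steps = 0.0, 0
    while evidence < threshold:
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        steps += 1
    return steps

# Same drift (identical search efficiency), lower bound for familiar displays.
novel = search_decision_steps(threshold=10.0)
familiar = search_decision_steps(threshold=6.0)
assert familiar < novel  # faster decisions without faster evidence accumulation
```

    On this view, repeated displays shorten reaction times by shrinking the bound, not by steepening the drift.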

  13. Anisotropy of Human Horizontal and Vertical Navigation in Real Space: Behavioral and PET Correlates.

    PubMed

    Zwergal, Andreas; Schöberl, Florian; Xiong, Guoming; Pradhan, Cauchy; Covic, Aleksandar; Werner, Philipp; Trapp, Christoph; Bartenstein, Peter; la Fougère, Christian; Jahn, Klaus; Dieterich, Marianne; Brandt, Thomas

    2016-10-17

    Spatial orientation was tested during a horizontal and vertical real navigation task in humans. Video tracking of eye movements was used to analyse the behavioral strategy and combined with simultaneous measurements of brain activation and metabolism ([18F]-FDG-PET). Spatial navigation performance was significantly better during horizontal navigation. Horizontal navigation was predominantly visually and landmark-guided. PET measurements indicated that glucose metabolism increased in the right hippocampus, bilateral retrosplenial cortex, and pontine tegmentum during horizontal navigation. In contrast, vertical navigation was less reliant on visual and landmark information. In PET, vertical navigation activated the bilateral hippocampus and insula. Direct comparison revealed a relative activation in the pontine tegmentum and visual cortical areas during horizontal navigation and in the flocculus, insula, and anterior cingulate cortex during vertical navigation. In conclusion, these data indicate a functional anisotropy of human 3D-navigation in favor of the horizontal plane. There are common brain areas for both forms of navigation (hippocampus) as well as unique areas such as the retrosplenial cortex, visual cortex (horizontal navigation), flocculus, and vestibular multisensory cortex (vertical navigation). Visually guided landmark recognition seems to be more important for horizontal navigation, while distance estimation based on vestibular input might be more relevant for vertical navigation. © The Author 2015. Published by Oxford University Press. All rights reserved.

  14. Eye movement-invariant representations in the human visual system.

    PubMed

    Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L

    2017-01-01

    During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.

  15. Online Control of Prehension Predicts Performance on a Standardized Motor Assessment Test in 8- to 12-Year-Old Children

    PubMed Central

    Blanchard, Caroline C. V.; McGlashan, Hannah L.; French, Blandine; Sperring, Rachel J.; Petrocochino, Bianca; Holmes, Nicholas P.

    2017-01-01

    Goal-directed hand movements are guided by sensory information and may be adjusted ‘online’ during the movement. If the target of a movement unexpectedly changes position, trajectory corrections can be initiated in as little as 100 ms in adults. This rapid visual online control is impaired in children with developmental coordination disorder (DCD), and potentially in other neurodevelopmental conditions. We investigated the visual control of hand movements in children in a ‘center-out’ double-step reaching and grasping task, and examined how parameters of this visuomotor control co-vary with performance on standardized motor tests often used with typically and atypically developing children. Two groups of children aged 8–12 years were asked to reach and grasp an illuminated central ball on a vertically oriented board. On a proportion of trials, and at movement onset, the illumination switched unpredictably to one of four other balls in a center-out configuration (left, right, up, or down). When the target moved, all but one of the children were able to correct their movements before reaching the initial target, at least on some trials, but the latencies to initiate these corrections were longer than those typically reported in the adult literature, ranging from 211 to 581 ms. These later corrections may be due to less developed motor skills in children, or to the increased cognitive and biomechanical complexity of switching movements in four directions. In the first group (n = 187), reaching and grasping parameters significantly predicted standardized movement scores on the MABC-2, most strongly for the aiming and catching component. In the second group (n = 85), these same parameters did not significantly predict scores on the DCDQ'07 parent questionnaire. Our reaching and grasping task provides a sensitive and continuous measure of movement skill that predicts scores on standardized movement tasks used to screen for DCD. PMID:28360874

  16. Agreement Between Visual Assessment and 2-Dimensional Analysis During Jump Landing Among Healthy Female Athletes.

    PubMed

    Rabin, Alon; Einstein, Ofira; Kozol, Zvi

    2018-04-01

    Context: Altered movement patterns, including increased frontal-plane knee movement and decreased sagittal-plane hip and knee movement, have been associated with several knee disorders. Nevertheless, the ability of clinicians to visually detect such altered movement patterns during high-speed athletic tasks is relatively unknown. Objective: To explore the association between visual assessment and 2-dimensional (2D) analysis of frontal-plane knee movement and sagittal-plane hip and knee movement during a jump-landing task among healthy female athletes. Design: Cross-sectional study. Setting: Gymnasiums of participating volleyball teams. Participants: A total of 39 healthy female volleyball players (age = 21.0 ± 5.2 years, height = 172.0 ± 8.6 cm, mass = 64.2 ± 7.2 kg) from Divisions I and II of the Israeli Volleyball Association. Main Outcome Measures: Frontal-plane knee movement and sagittal-plane hip and knee movement during jump landing were visually rated as good, moderate, or poor based on previously established criteria. Frontal-plane knee excursion and sagittal-plane hip and knee excursions were measured using free motion-analysis software and compared among athletes with different visual ratings of the corresponding movements. Results: Participants with different visual ratings of frontal-plane knee movement displayed differences in 2D frontal-plane knee excursion (P < .01), whereas participants with different visual ratings of sagittal-plane hip and knee movement displayed differences in 2D sagittal-plane hip and knee excursions (P < .01). Conclusions: Visual ratings of frontal-plane knee movement and sagittal-plane hip and knee movement were associated with differences in the corresponding 2D hip and knee excursions. Visual rating of these movements may serve as an initial screening tool for detecting altered movement patterns during jump landings.

  17. Action generation and action perception in imitation: an instance of the ideomotor principle.

    PubMed Central

    Wohlschläger, Andreas; Gattis, Merideth; Bekkering, Harold

    2003-01-01

    We review a series of behavioural experiments on imitation in children and adults that test the predictions of a new theory of imitation. Most of the recent theories of imitation assume a direct visual-to-motor mapping between perceived and imitated movements. Based on our findings of systematic errors in imitation, the new theory of goal-directed imitation (GOADI) instead assumes that imitation is guided by cognitively specified goals. According to GOADI, the imitator does not imitate the observed movement as a whole, but rather decomposes it into its separate aspects. These aspects are hierarchically ordered, and the highest aspect becomes the imitator's main goal. Other aspects become sub-goals. In accordance with the ideomotor principle, the main goal activates the motor programme that is most strongly associated with the achievement of that goal. When executed, this motor programme sometimes matches, and sometimes does not, the model's movement. However, the main goal extracted from the model movement is almost always imitated correctly. PMID:12689376

  18. A model of attention-guided visual perception and recognition.

    PubMed

    Rybak, I A; Gusakova, V I; Golovan, A V; Podladchikova, L N; Shevtsova, N A

    1998-08-01

    A model of visual perception and recognition is described. The model contains: (i) a low-level subsystem which performs both a fovea-like transformation and detection of primary features (edges), and (ii) a high-level subsystem which includes separated 'what' (sensory memory) and 'where' (motor memory) structures. Image recognition occurs during the execution of a 'behavioral recognition program' formed during the primary viewing of the image. The recognition program contains both programmed attention window movements (stored in the motor memory) and predicted image fragments (stored in the sensory memory) for each consecutive fixation. The model shows the ability to recognize complex images (e.g. faces) invariantly with respect to shift, rotation and scale.

  19. Benefits of Motion in Animated Storybooks for Children’s Visual Attention and Story Comprehension. An Eye-Tracking Study

    PubMed Central

    Takacs, Zsofia K.; Bus, Adriana G.

    2016-01-01

    The present study provides experimental evidence regarding 4–6-year-old children’s visual processing of animated versus static illustrations in storybooks. Thirty-nine participants listened to an animated and a static book, both three times, while eye movements were registered with an eye-tracker. Outcomes corroborate the hypothesis that specifically motion is what attracts children’s attention while looking at illustrations. It is proposed that animated illustrations that are well matched to the text of the story guide children to those parts of the illustration that are important for understanding the story. This may explain why animated books resulted in better comprehension than static books. PMID:27790183

  20. Recognition and attention guidance during contextual cueing in real-world scenes: evidence from eye movements.

    PubMed

    Brockmole, James R; Henderson, John M

    2006-07-01

    When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

  1. Effects of directional uncertainty on visually-guided joystick pointing.

    PubMed

    Berryhill, Marian; Kveraga, Kestutis; Hughes, Howard C

    2005-02-01

    Reaction times generally follow the predictions of Hick's law as stimulus-response uncertainty increases, although notable exceptions include the oculomotor system. Saccadic and smooth pursuit eye movement reaction times are independent of stimulus-response uncertainty. Previous research showed that joystick pointing to targets, a motor analog of saccadic eye movements, is only modestly affected by increased stimulus-response uncertainty; however, a no-uncertainty condition (simple reaction time to 1 possible target) was not included. Here, we re-evaluate manual joystick pointing including a no-uncertainty condition. Analysis indicated simple joystick pointing reaction times were significantly faster than choice reaction times. Choice reaction times (2, 4, or 8 possible target locations) only slightly increased as the number of possible targets increased. These data suggest that, as with joystick tracking (a motor analog of smooth pursuit eye movements), joystick pointing is more closely approximated by a simple/choice step function than the log function predicted by Hick's law.
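    The contrast drawn above is between Hick's law, where reaction time grows with the log of the number of stimulus-response alternatives, and a simple/choice step function. A hypothetical comparison of the two predictions with illustrative coefficients:

```python
import math

def hick_rt(n_alternatives, a=0.2, b=0.15):
    """Hick's law: RT = a + b * log2(n + 1), in seconds (a, b illustrative)."""
    return a + b * math.log2(n_alternatives + 1)

def step_rt(n_alternatives, simple=0.25, choice=0.40, slope=0.005):
    """Step model: one fast simple RT, then a nearly flat choice RT."""
    if n_alternatives == 1:
        return simple
    return choice + slope * math.log2(n_alternatives)

for n in (1, 2, 4, 8):
    print(n, round(hick_rt(n), 3), round(step_rt(n), 3))
```

    Under the step model, the 2-, 4-, and 8-alternative conditions differ only slightly, matching the modest increase in choice reaction time reported for joystick pointing.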

  2. Gesture helps learners learn, but not merely by guiding their visual attention.

    PubMed

    Wakefield, Elizabeth; Novack, Miriam A; Congdon, Eliza L; Franconeri, Steven; Goldin-Meadow, Susan

    2018-04-16

    Teaching a new concept through gestures, hand movements that accompany speech, facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, ). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often proposed mechanism: gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture: they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning-following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech. © 2018 John Wiley & Sons Ltd.

  3. Prediction of Imagined Single-Joint Movements in a Person with High Level Tetraplegia

    PubMed Central

    Simeral, John D.; Donoghue, John P.; Hochberg, Leigh R.; Kirsch, Robert F.

    2013-01-01

    Cortical neuroprostheses for movement restoration require developing models for relating neural activity to desired movement. Previous studies have focused on correlating single-unit activities (SUA) in primary motor cortex to volitional arm movements in able-bodied primates. The extent of the cortical information relevant to arm movements remaining in severely paralyzed individuals is largely unknown. We record intracortical signals using a microelectrode array chronically implanted in the precentral gyrus of a person with tetraplegia, and estimate positions of imagined single-joint arm movements. Using visually guided motor imagery, the participant imagined performing eight distinct single-joint arm movements while SUA, multi-spike trains (MSP), multi-unit activity (MUA), and local field potential time-domain (LFPrms) and frequency-domain (LFPstft) signals were recorded. Using linear system identification, imagined joint trajectories were estimated with 20–60% variance explained, with wrist flexion/extension predicted best and pronation/supination poorest. Statistically, decoding of MSP and LFPstft yielded estimates that equaled those of SUA. Including multiple signal types in a decoder increased prediction accuracy in all cases. We conclude that signals recorded from a single restricted region of the precentral gyrus in this person with tetraplegia contained useful information regarding the intended movements of upper extremity joints. PMID:22851229
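    The linear system identification used for trajectory estimation can be sketched as an ordinary least-squares mapping from a neural feature to joint angle, scored by variance explained. The data below are synthetic stand-ins, not recordings from the study:

```python
import random

rng = random.Random(0)

# Synthetic stand-in: one neural feature predicting one joint angle.
firing_rate = [rng.gauss(10.0, 2.0) for _ in range(500)]
joint_angle = [0.8 * f + 3.0 + rng.gauss(0.0, 0.1) for f in firing_rate]

# Ordinary least-squares fit of angle = slope * rate + intercept.
n = len(firing_rate)
mean_f = sum(firing_rate) / n
mean_j = sum(joint_angle) / n
cov = sum((f - mean_f) * (j - mean_j) for f, j in zip(firing_rate, joint_angle))
var = sum((f - mean_f) ** 2 for f in firing_rate)
slope = cov / var
intercept = mean_j - slope * mean_f

# Variance explained, the same figure of merit the study reports (20-60%).
predicted = [slope * f + intercept for f in firing_rate]
ss_res = sum((j - p) ** 2 for j, p in zip(joint_angle, predicted))
ss_tot = sum((j - mean_j) ** 2 for j in joint_angle)
variance_explained = 1.0 - ss_res / ss_tot
assert variance_explained > 0.9  # near-noiseless synthetic data decodes well
```

    Real decoding pools many channels and lagged features, but the fitting and scoring logic is the same.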

  4. Foot placement relies on state estimation during visually guided walking.

    PubMed

    Maeda, Rodrigo S; O'Connor, Shawn M; Donelan, J Maxwell; Marigold, Daniel S

    2017-02-01

    As we walk, we must accurately place our feet to stabilize our motion and to navigate our environment. We must also achieve this accuracy despite imperfect sensory feedback and unexpected disturbances. In this study we tested whether the nervous system uses state estimation to beneficially combine sensory feedback with forward model predictions to compensate for these challenges. Specifically, subjects wore prism lenses during a visually guided walking task, and we used trial-by-trial variation in prism lenses to add uncertainty to visual feedback and induce a reweighting of this input. To expose altered weighting, we added a consistent prism shift that required subjects to adapt their estimate of the visuomotor mapping relationship between a perceived target location and the motor command necessary to step to that position. With added prism noise, subjects responded to the consistent prism shift with smaller initial foot placement error but took longer to adapt, compatible with our mathematical model of the walking task that leverages state estimation to compensate for noise. Much like when we perform voluntary and discrete movements with our arms, it appears our nervous system uses state estimation during walking to guide our foot accurately to the ground. Accurate foot placement is essential for safe walking. We used computational models and human walking experiments to test how our nervous system achieves this accuracy. We find that our control of foot placement beneficially combines sensory feedback with internal forward model predictions to accurately estimate the body's state. Our results match recent computational neuroscience findings for reaching movements, suggesting that state estimation is a general mechanism of human motor control. Copyright © 2017 the American Physiological Society.
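    State estimation of this kind can be sketched as a one-dimensional Kalman-style update: prediction and observation are combined in proportion to their reliabilities, so adding visual noise (as the prism-noise manipulation does) shifts weight toward the internal forward-model prediction. The numbers below are illustrative, not the authors' model parameters:

```python
def estimate_position(prediction, pred_var, observation, obs_var):
    """Combine a forward-model prediction with sensory feedback,
    weighting each by its reliability (inverse variance)."""
    gain = pred_var / (pred_var + obs_var)          # Kalman gain
    estimate = prediction + gain * (observation - prediction)
    new_var = (1.0 - gain) * pred_var
    return estimate, new_var

# Target truly at 1.0 m; the forward model predicts 0.9 m.
reliable, _ = estimate_position(0.9, 0.01, observation=1.0, obs_var=0.01)
noisy, _ = estimate_position(0.9, 0.01, observation=1.0, obs_var=0.09)

# With noisier vision, the estimate stays closer to the internal prediction,
# mirroring the smaller initial response to the prism shift under added noise.
assert abs(noisy - 0.9) < abs(reliable - 0.9)
```

    Down-weighting vision reduces the immediate effect of a visual perturbation but also slows adaptation to a consistent shift, which is the trade-off the study reports.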

  5. Eye movements and attention in reading, scene perception, and visual search.

    PubMed

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  6. Investigations of the pathogenesis of acquired pendular nystagmus

    NASA Technical Reports Server (NTRS)

    Averbuch-Heller, L.; Zivotofsky, A. Z.; Das, V. E.; DiScenna, A. O.; Leigh, R. J.

    1995-01-01

    We investigated the pathogenesis of acquired pendular nystagmus (APN) in six patients, three of whom had multiple sclerosis. First, we tested the hypothesis that the oscillations of APN are due to a delay in visual feedback secondary, for example, to demyelination of the optic nerves. We manipulated the latency to onset of visually guided eye movements using an electronic technique that induces sinusoidal oscillations in normal subjects. This manipulation did not change the characteristics of the APN, but did superimpose lower-frequency oscillations similar to those induced in normal subjects. These results are consistent with current models for smooth (non-saccadic) eye movements, which predict that prolongation of visual feedback could not account for the high-frequency oscillations that often characterize APN. Secondly, we attempted to determine whether an increase in the gain of the visually-enhanced vestibulo-ocular reflex (VOR), produced by viewing a near target, was accompanied by a commensurate increase in the amplitude of APN. Increases in horizontal or vertical VOR gain during near viewing occurred in four patients, but only two of them showed a parallel increase in APN amplitude. On the other hand, APN amplitude decreased during viewing of the near target in the two patients who showed no change in VOR gain. Taken together, these data suggest that neither delayed visual feedback nor a disorder of central vestibular mechanisms is primarily responsible for APN. More likely, these ocular oscillations are produced by abnormalities of internal feedback circuits, such as the reciprocal connections between brainstem nuclei and cerebellum.

  7. Modulation of neuronal responses during covert search for visual feature conjunctions

    PubMed Central

    Buracas, Giedrius T.; Albright, Thomas D.

    2009-01-01

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions. PMID:19805385

  9. A Comparative Analysis of Speed Profile Models for Ankle Pointing Movements: Evidence that Lower and Upper Extremity Discrete Movements are Controlled by a Single Invariant Strategy

    PubMed Central

    Michmizos, Konstantinos P.; Vaisman, Lev; Krebs, Hermano Igo

    2014-01-01

    Little is known about whether, and to what extent, our knowledge of how the central nervous system controls the upper extremities (UE) generalizes to the lower limbs. Our continuous efforts to design the ideal adaptive robotic therapy for the lower limbs of stroke patients and children with cerebral palsy highlighted the importance of analyzing and modeling the kinematics of the lower limbs, in general, and those of the ankle joints, in particular. We recruited 15 young healthy adults who performed in total 1,386 visually evoked, visually guided, and target-directed discrete pointing movements with their ankle in dorsal–plantar and inversion–eversion directions. Using a non-linear, least-squares error-minimization procedure, we estimated the parameters for 19 models, which were initially designed to capture the dynamics of upper limb movements of various complexity. We validated our models based on their ability to reconstruct the experimental data. Our results suggest a remarkable similarity between the top-performing models that described the speed profiles of ankle pointing movements and the ones previously found for the UE both during arm reaching and wrist pointing movements. Among the top performers were the support-bounded lognormal and the beta models that have a neurophysiological basis and have been successfully used in upper extremity studies with normal subjects and patients. Our findings suggest that the same model can be applied to different “human” hardware, perhaps revealing a key invariant in human motor control. These findings have a great potential to enhance our rehabilitation efforts in any population with lower extremity deficits by, for example, assessing the level of motor impairment and improvement as well as informing the design of control algorithms for therapeutic ankle robots. PMID:25505881
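
    The kind of speed-profile fitting described above can be illustrated with the classic minimum-jerk model, a bell-shaped profile of the same family as the models tested (used here purely as an illustration, not as the study's own model set). Assuming the standard form v(t) = (30D/T) τ²(1−τ)² with τ = t/T, the movement amplitude D and duration T can be recovered from a noisy speed trace by least squares:

```python
import numpy as np

def min_jerk_speed(t, D, T):
    """Speed profile of a minimum-jerk point-to-point movement of
    amplitude D (m) and duration T (s); zero outside [0, T]."""
    tau = np.clip(t / T, 0.0, 1.0)
    return (30.0 * D / T) * tau**2 * (1.0 - tau)**2

# Synthetic noisy speed trace: a 15 cm movement lasting 1 s.
t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
v_obs = min_jerk_speed(t, 0.15, 1.0) + rng.normal(0.0, 0.002, t.size)

# Grid-search T; for each candidate T the amplitude D enters the model
# linearly, so the least-squares D has a closed form (a projection).
best = None
for T in np.linspace(0.5, 1.5, 101):
    basis = min_jerk_speed(t, 1.0, T)               # profile for D = 1
    D = float(basis @ v_obs / (basis @ basis))      # least-squares amplitude
    sse = float(np.sum((v_obs - D * basis) ** 2))   # residual error
    if best is None or sse < best[0]:
        best = (sse, D, T)
_, D_hat, T_hat = best
```

    The study's nonlinear fits (e.g., for the support-bounded lognormal or beta profiles) follow the same logic with more parameters and a general-purpose optimizer in place of the grid.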

  10. Location memory biases reveal the challenges of coordinating visual and kinesthetic reference frames

    PubMed Central

    Simmering, Vanessa R.; Peterson, Clayton; Darling, Warren; Spencer, John P.

    2008-01-01

    Five experiments explored the influence of visual and kinesthetic/proprioceptive reference frames on location memory. Experiments 1 and 2 compared visual and kinesthetic reference frames in a memory task using visually-specified locations and a visually-guided response. When the environment was visible, results replicated previous findings of biases away from the midline symmetry axis of the task space, with stability for targets aligned with this axis. When the environment was not visible, results showed some evidence of bias away from a kinesthetically-specified midline (trunk anterior–posterior [a–p] axis), but there was little evidence of stability when targets were aligned with body midline. This lack of stability may reflect the challenges of coordinating visual and kinesthetic information in the absence of an environmental reference frame. Thus, Experiments 3–5 examined kinesthetic guidance of hand movement to kinesthetically-defined targets. Performance in these experiments was generally accurate with no evidence of consistent biases away from the trunk a–p axis. We discuss these results in the context of the challenges of coordinating reference frames within versus between multiple sensori-motor systems. PMID:17703284

  11. Visuomotor control, eye movements, and steering: A unified approach for incorporating feedback, feedforward, and internal models.

    PubMed

    Lappi, Otto; Mole, Callum

    2018-06-11

    The authors present an approach to the coordination of eye movements and locomotion in naturalistic steering tasks. It is based on recent empirical research, in particular on driver eye movements, that poses challenges for existing accounts of how we visually steer a course. They first analyze how the ideas of feedback and feedforward processes and internal models are treated in control theoretical steering models within vision science and engineering, which share an underlying architecture but have historically developed in very separate ways. The authors then show how these traditions can be naturally (re)integrated with each other and with contemporary neuroscience, to better understand the skill and gaze strategies involved. They then propose a conceptual model that (a) gives a unified account of the coordination of gaze and steering control, (b) incorporates higher-level path planning, and (c) draws on the literature on paired forward and inverse models in predictive control. Although each of these (a-c) has been considered before (also in the context of driving), integrating them into a single framework and the authors' multiple waypoint identification hypothesis within that framework are novel. The proposed hypothesis is relevant to all forms of visually guided locomotion. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
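
    The feedback component of such steering models can be caricatured as a unicycle whose turn rate is proportional to the heading error toward the current waypoint. The following is a minimal feedback-only sketch (the waypoints, gains, and vehicle are hypothetical), deliberately omitting the feedforward prediction and paired forward/inverse models the authors add on top:

```python
import math

def steer_through_waypoints(waypoints, k=4.0, v=1.0, dt=0.01, tol=0.15):
    """Pure feedback steering: turn at a rate proportional to the heading
    error toward the current waypoint, advancing at constant speed v."""
    x, y, heading = 0.0, 0.0, 0.0
    for wx, wy in waypoints:
        for _ in range(5000):                  # safety cap on steps
            if math.hypot(wx - x, wy - y) < tol:
                break                          # waypoint reached; take the next
            desired = math.atan2(wy - y, wx - x)
            # Wrap the heading error into (-pi, pi]
            err = math.atan2(math.sin(desired - heading),
                             math.cos(desired - heading))
            heading += k * err * dt
            x += v * math.cos(heading) * dt
            y += v * math.sin(heading) * dt
    return x, y

xf, yf = steer_through_waypoints([(2.0, 1.0), (4.0, 0.0)])
```

    Sequencing multiple waypoints this way is the feedback skeleton onto which a multiple-waypoint, predictive account of gaze and steering would graft anticipatory control.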

  12. Evaluating the influence of motor control on selective attention through a stochastic model: the paradigm of motor control dysfunction in cerebellar patient.

    PubMed

    Veneri, Giacomo; Federico, Antonio; Rufa, Alessandra

    2014-01-01

    Attention allows us to selectively process the vast amount of information with which we are confronted, prioritizing some aspects of information and ignoring others by focusing on a certain location or aspect of the visual scene. Selective attention is guided by two cognitive mechanisms: saliency of the image (bottom up) and endogenous mechanisms (top down). These two mechanisms interact to direct attention and plan eye movements; then, the movement profile is sent to the motor system, which must constantly update the command needed to produce the desired eye movement. A new approach is described here to study how the eye motor control could influence this selection mechanism in clinical behavior: two groups of patients (SCA2 and late onset cerebellar ataxia LOCA) with well-known problems of motor control were studied; patients performed a cognitively demanding task; the results were compared to a stochastic model based on Monte Carlo simulations and a group of healthy subjects. The analytical procedure evaluated some energy functions for understanding the process. The implemented model suggested that patients performed an optimal visual search, reducing intrinsic noise sources. Our findings theorize a strict correlation between the "optimal motor system" and the "optimal stimulus encoders."

  13. Using Multiple Ways to Investigate Cognitive Load Theory in the Context of Physics Instruction

    NASA Astrophysics Data System (ADS)

    Zu, Tianlong

    Cognitive load theory (CLT) (Sweller 1988, 1998, 2010) provides a guiding framework for designing instructional materials. CLT differentiates three subtypes of cognitive load: intrinsic, extraneous, and germane cognitive load. The three cognitive loads are theorized based on the number of simultaneously processed elements in working memory. Intrinsic cognitive load depends upon the number of interacting elements in the instructional material that are related to the learning objective. Extraneous cognitive load comprises the mental resources allocated to processing unnecessary information that does not contribute to learning, as caused by a non-optimal instructional procedure. It is determined by the number of interacting elements which are not related to the learning goal. Both intrinsic and extraneous load vary according to the prior knowledge of learners. Germane cognitive load is indirectly related to interacting elements. It represents the cognitive resources deployed for processing intrinsic load, chunking information, and constructing and automating schema. Germane cognitive load is related to the level of motivation of the learner. Given this triarchic model of cognitive load and the different roles these loads play in learning activities, different learning outcomes can be expected depending upon the characteristics of the educational materials, learner characteristics, and instructional setting. In three experiments, we investigated cognitive load theory following different approaches. Given the triarchic nature of the cognitive load construct, it is critical to find non-intrusive ways to measure cognitive load. In study one, we replicated and extended a previous landmark study to investigate the use of eye-movement-related metrics to measure the three kinds of cognitive load independently. We also measured students' working memory capacity using a cognitive operation-span task. Two of the three types of cognitive load (intrinsic and extraneous) were directly manipulated, and the third type of cognitive load (germane) was indirectly ascertained. We found that different eye-movement based parameters were most sensitive to different types of cognitive load. These results indicate that it is possible to monitor the three kinds of cognitive load separately using eye movement parameters. We also compared the up-to-date cognitive load theory model with an alternative model using a multi-level model analysis and we found that Sweller's (2010) up-to-date model is supported by our data. In educational settings, active learning based methodologies such as peer instruction have been shown to be effective in facilitating students' conceptual understanding. In study two, we discussed the effect of peer interaction on conceptual test performance of students from a cognitive load perspective. Based on the literature, a self-reported cognitive load survey was developed to measure each type of cognitive load. We found that a certain level of prior knowledge is necessary for peer interaction to work and that peer interaction is effective mainly through significantly decreasing the intrinsic load experienced by students, even though it may increase the extraneous load. In study three, we compared the effect of guided instruction in the form of worked examples using narrated-animated video solutions and semi-guided instruction using visual cues on students' performance, shift of visual attention during transfer, and extraneous cognitive load during learning. We found that multimedia video solutions can be more effective in promoting transfer performance of learners than visual cues. We also found evidence that guided instruction in the form of multimedia video solutions can decrease extraneous cognitive load of students during learning, more so than semi-guided instruction using visual cues.

  14. Visual Impairment Screening Assessment (VISA) tool: pilot validation.

    PubMed

    Rowe, Fiona J; Hepworth, Lauren R; Hanna, Kerry L; Howard, Claire

    2018-03-06

    To report and evaluate a new Vision Impairment Screening Assessment (VISA) tool intended for use by the stroke team to improve identification of visual impairment in stroke survivors. Prospective case cohort comparative study. Stroke units at two secondary care hospitals and one tertiary centre. 116 stroke survivors were screened, 62 by naïve and 54 by non-naïve screeners. Both the VISA screening tool and the comprehensive specialist vision assessment measured case history, visual acuity, eye alignment, eye movements, visual field and visual inattention. Full completion of VISA tool and specialist vision assessment was achieved for 89 stroke survivors. Missing data for one or more sections typically related to patient's inability to complete the assessment. Sensitivity and specificity of the VISA screening tool were 90.24% and 85.29%, respectively; the positive and negative predictive values were 93.67% and 78.36%, respectively. Overall agreement was significant; k=0.736. Lowest agreement was found for screening of eye movement and visual inattention deficits. This early validation of the VISA screening tool shows promise in improving detection accuracy for clinicians involved in stroke care who are not specialists in vision problems and lack formal eye training, with potential to lead to more prompt referral with fewer false positives and negatives. Pilot validation indicates acceptability of the VISA tool for screening of visual impairment in stroke survivors. Sensitivity and specificity were high indicating the potential accuracy of the VISA tool for screening purposes. Results of this study have guided the revision of the VISA screening tool ahead of full clinical validation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
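
    The accuracy figures reported here all derive from a standard 2×2 screening table. A short sketch with hypothetical counts (illustrative only, not the study's own data) shows how sensitivity, specificity, the predictive values, and Cohen's kappa are computed:

```python
def screening_metrics(tp, fp, fn, tn):
    """Screening accuracy from a 2x2 table: rows = screen result,
    columns = reference (specialist assessment) result."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                     # positive predictive value
    npv = tn / (tn + fn)                     # negative predictive value
    p_observed = (tp + tn) / n               # raw agreement
    # Chance agreement from the row/column marginals (Cohen's kappa)
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return sensitivity, specificity, ppv, npv, kappa

# Hypothetical counts for 89 completed assessments
sens, spec, ppv, npv, kappa = screening_metrics(tp=45, fp=5, fn=5, tn=34)
```

    With these made-up counts, sensitivity is 0.90, specificity about 0.87, and kappa about 0.77, in the same range the study reports.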

  15. Congruent representation of visual and acoustic space in the superior colliculus of the echolocating bat Phyllostomus discolor.

    PubMed

    Hoffmann, Susanne; Vega-Zuniga, Tomas; Greiter, Wolfgang; Krabichler, Quirin; Bley, Alexandra; Matthes, Mariana; Zimmer, Christiane; Firzlaff, Uwe; Luksch, Harald

    2016-11-01

    The midbrain superior colliculus (SC) commonly features a retinotopic representation of visual space in its superficial layers, which is congruent with maps formed by multisensory neurons and motor neurons in its deep layers. Information flow between layers is suggested to enable the SC to mediate goal-directed orienting movements. While most mammals strongly rely on vision for orienting, some species such as echolocating bats have developed alternative strategies, which raises the question how sensory maps are organized in these animals. We probed the visual system of the echolocating bat Phyllostomus discolor and found that binocular high acuity vision is frontally oriented and thus aligned with the biosonar system, whereas monocular visual fields cover a large area of peripheral space. For the first time in echolocating bats, we could show that in contrast with other mammals, visual processing is restricted to the superficial layers of the SC. The topographic representation of visual space, however, followed the general mammalian pattern. In addition, we found a clear topographic representation of sound azimuth in the deeper collicular layers, which was congruent with the superficial visual space map and with a previously documented map of orienting movements. Especially for bats navigating at high speed in densely structured environments, it is vitally important to transfer and coordinate spatial information between sensors and motor systems. Here, we demonstrate first evidence for the existence of congruent maps of sensory space in the bat SC that might serve to generate a unified representation of the environment to guide motor actions. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. Looking around: 35 years of oculomotor modeling

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1995-01-01

    Eye movements have attracted an unusually large number of researchers from many disparate fields, especially over the past 35 years. The lure of this system stemmed from its apparent simplicity of description, measurement, and analysis, as well as the promise of providing a "window in the mind." Investigators in areas ranging from biological control systems and neurological diagnosis to applications in advertising and flight simulation expected eye movements to provide clear indicators of what the sensory-motor system was accomplishing and what the brain found to be of interest. The parallels between compensatory eye movements and perception of spatial orientation have been a subject for active study in visual-vestibular interaction, where substantial knowledge has accumulated through experiments largely guided by the challenge of proving or disproving model predictions. Even though oculomotor control has arguably benefited more from systems theory than any other branch of motor control, many of the original goals remain largely unfulfilled. This paper considers some of the promising potential benefits of eye movement research and compares accomplishments with anticipated results. Four topics are considered in greater detail: (i) the definition of oculomotor system input and output, (ii) optimization of the eye movement system, (iii) the relationship between compensatory eye movements and spatial orientation through the "internal model," and (iv) the significance of eye movements as measured in (outer) space.

  18. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983

  19. The influence of stimulus format on drawing--a functional imaging study of decision making in portrait drawing.

    PubMed

    Miall, R C; Nam, Se-Ho; Tchalenko, J

    2014-11-15

    To copy a natural visual image as a line drawing, visual identification and extraction of features in the image must be guided by top-down decisions, and is usually influenced by prior knowledge. In parallel with other behavioral studies testing the relationship between eye and hand movements when drawing, we report here a functional brain imaging study in which we compared drawing of faces and abstract objects: the former can be strongly guided by prior knowledge, the latter less so. To manipulate the difficulty in extracting features to be drawn, each original image was presented in four formats including high contrast line drawings and silhouettes, and as high and low contrast photographic images. We confirmed the detailed eye-hand interaction measures reported in our other behavioral studies by using in-scanner eye-tracking and recording of pen movements with a touch screen. We also show that the brain activation pattern reflects the changes in presentation formats. In particular, by identifying the ventral and lateral occipital areas that were more highly activated during drawing of faces than abstract objects, we found a systematic increase in differential activation for the face-drawing condition, as the presentation format made the decisions more challenging. This study therefore supports theoretical models of how prior knowledge may influence perception in untrained participants, and lead to experience-driven perceptual modulation by trained artists. Copyright © 2014. Published by Elsevier Inc.

  20. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    PubMed Central

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  2. A subanesthetic dose of ketamine in the Rhesus monkey reduces the occurrence of anticipatory saccades.

    PubMed

    Ameqrane, Ilhame; Wattiez, Nicolas; Pouget, Pierre; Missal, Marcus

    2015-10-01

    It has been shown that antagonism of the glutamatergic N-methyl-D-aspartate (NMDA) receptor with subanesthetic doses of ketamine perturbs the perception of elapsed time. Anticipatory eye movements are based on an internal representation of elapsed time. Therefore, the occurrence of anticipatory saccades could be a particularly sensitive indicator of abnormal time perception due to NMDA receptor blockade. The objective of this study was to determine whether the occurrence of anticipatory saccades could be selectively altered by a subanesthetic dose of ketamine. Three Rhesus monkeys were trained in a simple visually guided saccadic task with a variable delay. Monkeys were rewarded for making a visually guided saccade at the end of the delay. Premature anticipatory saccades to the future position of the eccentric target initiated before the end of the delay were not rewarded. A subanesthetic dose of ketamine (0.25 mg/kg) or a saline solution of the same volume was injected i.m. during the task. We found that the injected dose of ketamine did not induce sedation or abnormal behavior. However, within ∼4 min, ketamine induced a strong reduction of the occurrence of anticipatory saccades but did not reduce the occurrence of visually guided saccades. This unexpected reduction of anticipatory saccade occurrence could be interpreted as resulting from an altered use of the perception of elapsed time during the delay period induced by NMDA receptor antagonism.

  3. Oculomotor evidence for neocortical systems but not cerebellar dysfunction in autism

    PubMed Central

    Minshew, Nancy J.; Luna, Beatriz; Sweeney, John A.

    2010-01-01

    Objective To investigate the functional integrity of cerebellar and frontal systems in autism using oculomotor paradigms. Background Cerebellar and neocortical systems models of autism have been proposed. Courchesne and colleagues have argued that cognitive deficits such as shifting attention disturbances result from dysfunction of vermal lobules VI and VII. Such a vermal deficit should be associated with dysmetric saccadic eye movements because of the major role these areas play in guiding the motor precision of saccades. In contrast, neocortical models of autism predict intact saccade metrics, but impairments on tasks requiring the higher cognitive control of saccades. Methods A total of 26 rigorously diagnosed nonmentally retarded autistic subjects and 26 matched healthy control subjects were assessed with a visually guided saccade task and two volitional saccade tasks, the oculomotor delayed-response task and the antisaccade task. Results Metrics and dynamics of the visually guided saccades were normal in autistic subjects, documenting the absence of disturbances in cerebellar vermal lobules VI and VII and in automatic shifts of visual attention. Deficits were demonstrated on both volitional saccade tasks, indicating dysfunction in the circuitry of prefrontal cortex and its connections with the parietal cortex, and associated cognitive impairments in spatial working memory and in the ability to voluntarily suppress context-inappropriate responses. Conclusions These findings demonstrate intrinsic neocortical, not cerebellar, dysfunction in autism, with parallel deficits in higher order cognitive mechanisms and not in elementary attentional and sensorimotor systems. PMID:10102406

  4. Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives.

    PubMed

    Zhong, Junpei; Cangelosi, Angelo; Wermter, Stefan

    2014-01-01

    The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of the higher-level information from visual stimuli to the development of ventral/dorsal visual streams. This model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., observing a set of trajectories of arm movements and its oriented object features, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent learning context.

  6. An indoor navigation system for the visually impaired.

    PubMed

    Guerrero, Luis A; Vasquez, Francisco; Ochoa, Sergio F

    2012-01-01

    Navigation in indoor environments is highly challenging for severely visually impaired people, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they require a substantial deployment effort or rely on artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed with usability as the quality requirement to be maximized. The solution identifies the position of a person and calculates the velocity and direction of his or her movements. Using this information, the system determines the user's trajectory, locates possible obstacles on that route, and offers navigation information to the user. The solution has been evaluated in two experimental scenarios. Although the results are not yet sufficient to support strong conclusions, they indicate that the system is suitable for guiding visually impaired people through an unknown built environment.
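
    The core computation described above, deriving walking speed and heading from successive position fixes, can be sketched as follows. The function name and 2-D coordinate representation are assumptions for illustration, not the paper's implementation:

    ```python
    import math

    def velocity_and_heading(p0, p1, dt):
        """Estimate walking speed (m/s) and heading (degrees, 0 = +x axis)
        from two successive 2-D position fixes taken dt seconds apart."""
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]
        speed = math.hypot(dx, dy) / dt
        heading = math.degrees(math.atan2(dy, dx))
        return speed, heading

    # A user moving 1.2 m along +x in 1 s walks at 1.2 m/s, heading 0 degrees.
    speed, heading = velocity_and_heading((0.0, 0.0), (1.2, 0.0), 1.0)
    ```

    Chaining such estimates over consecutive fixes yields the user's trajectory, against which obstacles can then be located.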

  7. Orientation selectivity sharpens motion detection in Drosophila

    PubMed Central

    Fisher, Yvette E.; Silies, Marion; Clandinin, Thomas R.

    2015-01-01

    Detecting the orientation and movement of edges in a scene is critical to visually guided behaviors of many animals. What are the circuit algorithms that allow the brain to extract such behaviorally vital visual cues? Using in vivo two-photon calcium imaging in Drosophila, we describe direction selective signals in the dendrites of T4 and T5 neurons, detectors of local motion. We demonstrate that this circuit performs selective amplification of local light inputs, an observation that constrains motion detection models and confirms a core prediction of the Hassenstein-Reichardt Correlator (HRC). These neurons are also orientation selective, responding strongly to static features that are orthogonal to their preferred axis of motion, a tuning property not predicted by the HRC. This coincident extraction of orientation and direction sharpens directional tuning through surround inhibition and reveals a striking parallel between visual processing in flies and vertebrate cortex, suggesting a universal strategy for motion processing. PMID:26456048

  8. Workflows and individual differences during visually guided routine tasks in a road traffic management control room.

    PubMed

    Starke, Sandra D; Baber, Chris; Cooke, Neil J; Howes, Andrew

    2017-05-01

    Road traffic control rooms rely on human operators to monitor and interact with information presented on multiple displays. Past studies have found inconsistent use of available visual information sources in such settings across different domains. In this study, we aimed to broaden the understanding of observer behaviour in control rooms by analysing a case study in road traffic control. We conducted a field study in a live road traffic control room where five operators responded to incidents while wearing a mobile eye tracker. Using qualitative and quantitative approaches, we investigated the operators' workflow using ergonomics methods and quantified visual information sampling. We found that individuals showed differing preferences for viewing modalities and weighting of task components, with a strong coupling between eye and head movement. For the quantitative analysis of the eye tracking data, we propose a number of metrics which may prove useful to compare visual sampling behaviour across domains in future. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Initiation and inhibitory control of saccades with the progression of Parkinson's disease - changes in three major drives converging on the superior colliculus.

    PubMed

    Terao, Yasuo; Fukuda, Hideki; Yugeta, Akihiro; Hikosaka, Okihide; Nomura, Yoshiko; Segawa, Masaya; Hanajima, Ritsuko; Tsuji, Shoji; Ugawa, Yoshikazu

    2011-06-01

    The cardinal pathophysiology of Parkinson's disease (PD) is considered to be the increase in the activities of basal ganglia (BG) output nuclei, which excessively inhibits the thalamus and superior colliculus (SC) and causes preferential impairment of internal over external movements. Here we recorded saccade performance in 66 patients with PD and 87 age-matched controls, and studied how the abnormality changed with disease progression. PD patients were impaired not only in memory guided saccades, but also in visually guided saccades, beginning in the relatively early stages of the disease. On the other hand, they were impaired in suppressing reflexive saccades (saccades to cue). All these changes deteriorated with disease progression. The frequency of reflexive saccades showed a negative correlation with the latency of visually guided saccades and Unified Parkinson's Disease Rating Scale motor subscores reflecting dopaminergic function. We suggest that three major drives converging on SC determine the saccade abnormalities in PD. The impairment in visually and memory guided saccades may be caused by the excessive inhibition of the SC due to the increased BG output and the decreased activity of the frontal cortex-BG circuit. The impaired suppression of reflexive saccades may be explained if the excessive inhibition of SC is "leaky." Changes in saccade parameters suggest that frontal cortex-BG circuit activity decreases with disease progression, whereas SC inhibition stays relatively mild in comparison throughout the course of the disease. Finally, SC disinhibition due to leaky suppression may represent functional compensation from neural structures outside BG, leading to hyper-reflexivity of saccades and milder clinical symptoms. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Krauzlis, Rich; Stone, Leland; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    When viewing objects, primates use a combination of saccadic and pursuit eye movements to stabilize the retinal image of the object of regard within the high-acuity region near the fovea. Although these movements involve widespread regions of the nervous system, they mix seamlessly in normal behavior. Saccades are discrete movements that quickly direct the eyes toward a visual target, thereby translating the image of the target from an eccentric retinal location to the fovea. In contrast, pursuit is a continuous movement that slowly rotates the eyes to compensate for the motion of the visual target, minimizing the blur that can compromise visual acuity. While other mammalian species can generate smooth optokinetic eye movements - which track the motion of the entire visual surround - only primates can smoothly pursue a single small element within a complex visual scene, regardless of the motion elsewhere on the retina. This ability likely reflects the greater ability of primates to segment the visual scene, to identify individual visual objects, and to select a target of interest.
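
    The complementary roles of the two movements can be caricatured in a toy controller: a saccade discretely cancels a large position error, while pursuit continuously drives eye velocity toward target velocity. This is purely illustrative, with made-up gains and thresholds, not a physiological model:

    ```python
    def track(target_pos, target_vel, eye_pos, eye_vel, dt=0.01,
              saccade_threshold=2.0, pursuit_gain=0.9):
        """Toy hybrid tracker (positions in deg, velocities in deg/s):
        a 'saccade' discretely jumps the eye to a distant target, while
        'pursuit' smoothly matches eye velocity to target velocity."""
        error = target_pos - eye_pos
        if abs(error) > saccade_threshold:       # discrete jump to the target
            eye_pos = target_pos
        eye_vel += pursuit_gain * (target_vel - eye_vel)  # smooth velocity match
        eye_pos += eye_vel * dt
        return eye_pos, eye_vel

    # Target 10 deg away moving at 5 deg/s: one saccade, then pursuit takes over.
    pos, vel = track(target_pos=10.0, target_vel=5.0, eye_pos=0.0, eye_vel=0.0)
    ```

    The separation of a discrete position-error corrector from a continuous velocity-error corrector mirrors the functional distinction drawn in the abstract, without implying anything about the underlying neural circuitry.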

  11. The control of voluntary eye movements: new perspectives.

    PubMed

    Krauzlis, Richard J

    2005-04-01

    Primates use two types of voluntary eye movements to track objects of interest: pursuit and saccades. Traditionally, these two eye movements have been viewed as distinct systems that are driven automatically by low-level visual inputs. However, two sets of findings argue for a new perspective on the control of voluntary eye movements. First, recent experiments have shown that pursuit and saccades are not controlled by entirely different neural pathways but are controlled by similar networks of cortical and subcortical regions and, in some cases, by the same neurons. Second, pursuit and saccades are not automatic responses to retinal inputs but are regulated by a process of target selection that involves a basic form of decision making. The selection process itself is guided by a variety of complex processes, including attention, perception, memory, and expectation. Together, these findings indicate that pursuit and saccades share a similar functional architecture. These points of similarity may hold the key for understanding how neural circuits negotiate the links between the many higher order functions that can influence behavior and the singular and coordinated motor actions that follow.

  12. Rapid Simultaneous Enhancement of Visual Sensitivity and Perceived Contrast during Saccade Preparation

    PubMed Central

    Rolfs, Martin; Carrasco, Marisa

    2012-01-01

    Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086

  13. Acting without seeing: eye movements reveal visual processing without awareness.

    PubMed

    Spering, Miriam; Carrasco, Marisa

    2015-04-01

    Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. Here, we review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movement. Such dissociations reveal situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports. We also discuss neurophysiological, neuroimaging, and clinical studies supporting the role of subcortical pathways for visual processing without awareness. Our review links awareness to perceptual-eye movement dissociations and furthers our understanding of the brain pathways underlying vision and movement with and without awareness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Acting without seeing: Eye movements reveal visual processing without awareness

    PubMed Central

    Spering, Miriam; Carrasco, Marisa

    2015-01-01

    Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. We review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movements. Such dissociations reveal situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports. We also discuss neurophysiological, neuroimaging and clinical studies supporting the role of subcortical pathways for visual processing without awareness. Our review links awareness to perceptual-eye movement dissociations and furthers our understanding of the brain pathways underlying vision and movement with and without awareness. PMID:25765322

  15. An electrooculogram-based binary saccade sequence classification (BSSC) technique for augmentative communication and control.

    PubMed

    Keegan, Johnalan; Burke, Edward; Condron, James

    2009-01-01

    In the field of assistive technology, the electrooculogram (EOG) can be used as a channel of communication and the basis of a man-machine interface. For many people with severe motor disabilities, simple actions such as changing the TV channel require assistance. This paper describes a method of detecting saccadic eye movements and the use of a saccade sequence classification algorithm to facilitate communication and control. Saccades are fast eye movements that occur when a person's gaze jumps from one fixation point to another. The classification is based on pre-defined sequences of saccades, guided by a static visual template (e.g. a page or poster). The template, consisting of a table of symbols each having a clearly identifiable fixation point, is situated within view of the user. To execute a particular command, the user moves his or her gaze through a pre-defined path of eye movements. This results in a well-formed sequence of saccades which are translated into a command if a match is found in a library of predefined sequences. A coordinate transformation algorithm is applied to each candidate sequence of recorded saccades to mitigate the effect of changes in the user's position and orientation relative to the visual template. Upon recognition of a saccade sequence from the library, its associated command is executed. A preliminary experiment, in which two subjects were instructed to perform a series of command sequences consisting of 8 different commands, is presented in the final sections. The system is also shown to be extensible to facilitate convenient text entry via an alphabetic visual template.
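
    The final classification step amounts to matching a recognized symbol sequence against a library of predefined saccade sequences. A minimal sketch, with hypothetical symbol names and commands (the paper's actual template and command set are not reproduced here):

    ```python
    def classify_saccade_sequence(observed, library):
        """Match an observed sequence of fixated symbols against a library
        of predefined saccade sequences; return the associated command,
        or None if no sequence matches."""
        for command, sequence in library.items():
            if observed == sequence:
                return command
        return None

    # Hypothetical template: each symbol is a fixation point on a printed poster.
    library = {
        "TV_CHANNEL_UP":   ["A1", "B2", "A1"],
        "TV_CHANNEL_DOWN": ["A1", "C3", "A1"],
    }
    cmd = classify_saccade_sequence(["A1", "B2", "A1"], library)
    ```

    In the described system this matching runs only after the coordinate transformation has normalized the recorded saccades for the user's position and orientation relative to the template.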

  16. Neural Substrates of Visual Spatial Coding and Visual Feedback Control for Hand Movements in Allocentric and Target-Directed Tasks

    PubMed Central

    Thaler, Lore; Goodale, Melvyn A.

    2011-01-01

    Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements. PMID:21941474

  17. Gravity and perceptual stability during translational head movement on earth and in microgravity.

    PubMed

    Jaekl, P; Zikovitz, D C; Jenkin, M R; Jenkin, H L; Zacher, J E; Harris, L R

    2005-01-01

    We measured the amount of visual movement judged consistent with translational head movement under normal and microgravity conditions. Subjects wore a virtual reality helmet in which the ratio of the movement of the world to the movement of the head (visual gain) was variable. Using the method of adjustment under normal gravity 10 subjects adjusted the visual gain until the visual world appeared stable during head movements that were either parallel or orthogonal to gravity. Using the method of constant stimuli under normal gravity, seven subjects moved their heads and judged whether the virtual world appeared to move "with" or "against" their movement for several visual gains. One subject repeated the constant stimuli judgements in microgravity during parabolic flight. The accuracy of judgements appeared unaffected by the direction or absence of gravity. Only the variability appeared affected by the absence of gravity. These results are discussed in relation to discomfort during head movements in microgravity. © 2005 Elsevier Ltd. All rights reserved.

  18. Comparing visual search and eye movements in bilinguals and monolinguals

    PubMed Central

    Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.

    2017-01-01

    Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116

  19. The Initiation of Smooth Pursuit is Delayed in Anisometropic Amblyopia.

    PubMed

    Raashid, Rana Arham; Liu, Ivy Ziqian; Blakeman, Alan; Goltz, Herbert C; Wong, Agnes M F

    2016-04-01

    Several behavioral studies have shown that the reaction times of visually guided movements are slower in people with amblyopia, particularly during amblyopic eye viewing. Here, we tested the hypothesis that the initiation of smooth pursuit eye movements, which are responsible for accurately keeping moving objects on the fovea, is delayed in people with anisometropic amblyopia. Eleven participants with anisometropic amblyopia and 14 visually normal observers were asked to track a step-ramp target moving at ±15°/s horizontally as quickly and as accurately as possible. The experiment was conducted under three viewing conditions: amblyopic/nondominant eye, binocular, and fellow/dominant eye viewing. Outcome measures were smooth pursuit latency, open-loop gain, steady state gain, and catch-up saccade frequency. Participants with anisometropic amblyopia initiated smooth pursuit significantly more slowly during amblyopic eye viewing (206 ± 20 ms) than visually normal observers viewing with their nondominant eye (183 ± 17 ms, P = 0.002). However, mean pursuit latency in the anisometropic amblyopia group during binocular and monocular fellow eye viewing was comparable to the visually normal group. Mean open-loop gain, steady state gain, and catch-up saccade frequency were similar between the two groups, but participants with anisometropic amblyopia exhibited more variable steady state gain (P = 0.045). This study provides evidence of temporally delayed smooth pursuit initiation in anisometropic amblyopia. After initiation, the smooth pursuit velocity profile in anisometropic amblyopia participants is similar to that of visually normal controls. This finding differs from what has been observed previously in participants with strabismic amblyopia, who exhibit reduced smooth pursuit velocity gains with more catch-up saccades.
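
    Pursuit latency of the kind reported above is commonly estimated by finding when eye velocity first departs from baseline after target motion onset. A minimal threshold-crossing sketch follows; the study's actual detection method may differ (regression-based onset fitting is also common), and the threshold value is an assumption:

    ```python
    import numpy as np

    def pursuit_latency(eye_velocity, fs, threshold=2.0):
        """Estimate smooth-pursuit latency (ms) as the first time eye velocity
        (deg/s) exceeds a fixed threshold, with target motion onset at t = 0.
        Returns None if the threshold is never crossed."""
        idx = np.argmax(np.abs(eye_velocity) > threshold)
        if not np.abs(eye_velocity[idx]) > threshold:
            return None
        return 1000.0 * idx / fs

    # Synthetic trace at 1 kHz: velocity stays at 0 for 200 ms, then ramps up.
    fs = 1000
    vel = np.concatenate([np.zeros(200), np.linspace(0.0, 15.0, 300)])
    latency = pursuit_latency(vel, fs)
    ```

    A fixed threshold overestimates latency slightly on slow ramps, which is one reason regression-to-baseline methods are often preferred in practice.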

  20. Combined visual illusion effects on the perceived index of difficulty and movement outcomes in discrete and continuous Fitts' tapping.

    PubMed

    Alphonsa, Sushma; Dai, Boyi; Benham-Deal, Tami; Zhu, Qin

    2016-01-01

    The speed-accuracy trade-off is a fundamental movement problem that has been extensively investigated. It has been established that the speed at which one can move to tap targets depends on how large the targets are and how far apart they are. These spatial properties of the targets can be quantified by the index of difficulty (ID). Two visual illusions are known to affect the perception of target size and movement amplitude: the Ebbinghaus illusion and the Müller-Lyer illusion. We created visual images that combined these two visual illusions to manipulate the perceived ID, and then examined people's visual perception of the targets in illusory context as well as their performance in tapping those targets in both discrete and continuous manners. The findings revealed that the combined visual illusions affected the perceived ID similarly in both discrete and continuous judgment conditions. However, the movement outcomes were affected by the combined visual illusions according to the tapping mode. In discrete tapping, the combined visual illusions affected both movement accuracy and movement amplitude such that the effective ID resembled the perceived ID. In continuous tapping, none of the movement outcomes were affected by the combined visual illusions. Participants tapped the targets with higher speed and accuracy in all visual conditions. Based on these findings, we concluded that distinct visual-motor control mechanisms were responsible for execution of discrete and continuous Fitts' tapping. Although discrete tapping relies on allocentric information (object-centered) to plan for action, continuous tapping relies on egocentric information (self-centered) to control for action. The planning-control model for rapid aiming movements is supported.
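
    The index of difficulty referred to above is, in the classic Fitts (1954) formulation, ID = log2(2A/W) for movement amplitude A and target width W (some studies instead use the Shannon form log2(A/W + 1)):

    ```python
    import math

    def index_of_difficulty(amplitude, width):
        """Classic Fitts (1954) index of difficulty in bits: ID = log2(2A / W),
        where A is movement amplitude and W is target width (same units)."""
        return math.log2(2 * amplitude / width)

    # Doubling the distance, or halving the target, adds one bit of difficulty.
    id_near = index_of_difficulty(amplitude=160, width=40)   # log2(8)  = 3 bits
    id_far  = index_of_difficulty(amplitude=320, width=40)   # log2(16) = 4 bits
    ```

    Illusions that alter perceived A or W therefore shift the perceived ID even though the effective (physical) ID is unchanged, which is the manipulation the study exploits.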

  1. Properties of visual evoked potentials to onset of movement on a television screen.

    PubMed

    Kubová, Z; Kuba, M; Hubacek, J; Vít, F

    1990-08-01

    In 80 subjects the dependence of movement-onset visual evoked potentials on some measures of stimulation was examined, and these responses were compared with pattern-reversal visual evoked potentials to verify the effectiveness of pattern movement application for visual evoked potential acquisition. Horizontally moving vertical gratings were generated on a television screen. The typical movement-onset reactions were characterized by one marked negative peak only, with a peak time between 140 and 200 ms. In all subjects the sufficient stimulus duration for acquisition of movement-onset-related visual evoked potentials was 100 ms; in some cases it was only 20 ms. Higher velocity (5.6 degrees/s) produced higher amplitudes of movement-onset visual evoked potentials than did the lower velocity (2.8 degrees/s). In 80% of subjects, the more distinct reactions were found in the leads from lateral occipital areas (in 60% from the right hemisphere), with no correlation to handedness of subjects. Unlike pattern-reversal visual evoked potentials, the movement-onset responses tended to be larger to extramacular stimulation (annular target of 5 degrees-9 degrees) than to macular stimulation (circular target of 5 degrees diameter).

  2. Visuomotor Dissociation in Cerebral Scaling of Size.

    PubMed

    Potgieser, Adriaan R E; de Jong, Bauke M

    2016-01-01

    Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in which 16 right-handed subjects copied geometric figures while the result of drawing remained out of sight. Either the size of the example figure varied while maintaining a constant size of drawing (visual incongruity) or the size of the examples remained constant while subjects were instructed to make changes in size (motor incongruity). These incongruent conditions were compared with congruent conditions. Statistical Parametric Mapping (SPM8) revealed brain activations related to size incongruity in the dorsolateral prefrontal and inferior parietal cortex, pre-SMA / anterior cingulate and anterior insula, dominant in the right hemisphere. This pattern represented simultaneous use of a 'resized' virtual template and actual picture information requiring spatial working memory, early-stage attention shifting and inhibitory control. Activations were strongest in motor incongruity, while right pre-dorsal premotor activation specifically occurred in this condition. Visual incongruity additionally relied on a ventral visual pathway. Left ventral premotor activation occurred during all variably sized drawing, while constant visuomotor size, compared to congruent size variation, uniquely activated the lateral occipital cortex in addition to superior parietal regions. These results highlight size as a fundamental parameter in both general hand movement and movement guided by objects perceived in the context of surrounding 3D space.

  3. Psychomotor control in a virtual laparoscopic surgery training environment: gaze control parameters differentiate novices from experts.

    PubMed

    Wilson, Mark; McGrath, John; Vine, Samuel; Brewer, James; Defriend, David; Masters, Richard

    2010-10-01

    Surgical simulation is increasingly used to facilitate the adoption of technical skills during surgical training. This study sought to determine if gaze control parameters could differentiate between the visual control of experienced and novice operators performing an eye-hand coordination task on a virtual reality laparoscopic surgical simulator (LAP Mentor™). Typically adopted hand movement metrics reflect only one half of the eye-hand coordination relationship; therefore, little is known about how hand movements are guided and controlled by vision. A total of 14 right-handed surgeons were categorised as being either experienced (having led more than 70 laparoscopic procedures) or novice (having performed fewer than 10 procedures) operators. The eight experienced and six novice surgeons completed the eye-hand coordination task from the LAP Mentor basic skills package while wearing a gaze registration system. A variety of performance, movement, and gaze parameters were recorded and compared between groups. The experienced surgeons completed the task significantly more quickly than the novices, but only the economy of movement of the left tool differentiated skill level from the LAP Mentor parameters. Gaze analyses revealed that experienced surgeons spent significantly more time fixating the target locations than novices, who split their time between focusing on the targets and tracking the tools. The findings of the study provide support for the utility of assessing strategic gaze behaviour to better understand the way in which surgeons utilise visual information to plan and control tool movements in a virtual reality laparoscopic environment. It is hoped that by better understanding the limitations of the psychomotor system, effective gaze training programs may be developed.

  4. Psychomotor control in a virtual laparoscopic surgery training environment: gaze control parameters differentiate novices from experts

    PubMed Central

    McGrath, John; Vine, Samuel; Brewer, James; Defriend, David; Masters, Richard

    2010-01-01

    Background Surgical simulation is increasingly used to facilitate the adoption of technical skills during surgical training. This study sought to determine if gaze control parameters could differentiate between the visual control of experienced and novice operators performing an eye-hand coordination task on a virtual reality laparoscopic surgical simulator (LAP Mentor™). Typically adopted hand movement metrics reflect only one half of the eye-hand coordination relationship; therefore, little is known about how hand movements are guided and controlled by vision. Methods A total of 14 right-handed surgeons were categorised as being either experienced (having led more than 70 laparoscopic procedures) or novice (having performed fewer than 10 procedures) operators. The eight experienced and six novice surgeons completed the eye-hand coordination task from the LAP Mentor basic skills package while wearing a gaze registration system. A variety of performance, movement, and gaze parameters were recorded and compared between groups. Results The experienced surgeons completed the task significantly more quickly than the novices, but only the economy of movement of the left tool differentiated skill level from the LAP Mentor parameters. Gaze analyses revealed that experienced surgeons spent significantly more time fixating the target locations than novices, who split their time between focusing on the targets and tracking the tools. Conclusion The findings of the study provide support for the utility of assessing strategic gaze behaviour to better understand the way in which surgeons utilise visual information to plan and control tool movements in a virtual reality laparoscopic environment. It is hoped that by better understanding the limitations of the psychomotor system, effective gaze training programs may be developed. PMID:20333405

  5. The effect of aborting ongoing movements on end point position estimation.

    PubMed

    Itaguchi, Yoshihiro; Fukuzawa, Kazuyoshi

    2013-11-01

    The present study investigated the impact of motor commands to abort ongoing movement on position estimation. Participants carried out visually guided reaching movements on a horizontal plane with their eyes open. By setting a mirror above their arm, however, they could not see the arm, only the start and target points. They estimated the position of their fingertip based solely on proprioception after their reaching movement was stopped before reaching the target. The participants stopped reaching as soon as they heard an auditory cue or were mechanically prevented from moving any further by an obstacle in their path. These reaching movements were carried out at two different speeds (fast or slow). It was assumed that additional motor commands to abort ongoing movement were required and that their magnitude was high, low, and zero, in the auditory-fast condition, the auditory-slow condition, and both obstacle conditions, respectively. There were two main results. (1) When the participants voluntarily stopped a fast movement in response to the auditory cue (the auditory-fast condition), they showed more underestimates than in the other three conditions. This underestimate effect was positively related to movement velocity. (2) An inverted-U-shaped bias pattern as a function of movement distance was observed consistently, except in the auditory-fast condition. These findings indicate that voluntarily stopping fast ongoing movement created a negative bias in the position estimate, supporting the idea that additional motor commands or efforts to abort planned movement are involved in the position estimation system. In addition, spatially probabilistic inference and signal-dependent noise may explain the underestimate effect of aborting ongoing movement.

  6. Interventional MRI-guided catheter placement and real time drug delivery to the central nervous system.

    PubMed

    Han, Seunggu J; Bankiewicz, Krystof; Butowski, Nicholas A; Larson, Paul S; Aghi, Manish K

    2016-06-01

    Local delivery of therapeutic agents into the brain has many advantages; however, the inability to predict, visualize and confirm the infusion into the intended target has been a major hurdle in its clinical development. Here, we describe the current workflow and application of the interventional MRI (iMRI) system for catheter placement and real time visualization of infusion. We have applied real time convection-enhanced delivery (CED) of therapeutic agents with iMRI across a number of different clinical trials settings in neuro-oncology and movement disorders. Ongoing developments and accumulating experience with the technique and technology of drug formulations, CED platforms, and iMRI systems will continue to make local therapeutic delivery into the brain more accurate, efficient, effective and safer.

  7. Gaze anchoring guides real but not pantomime reach-to-grasp: support for the action-perception theory.

    PubMed

    Kuntz, Jessica R; Karl, Jenni M; Doan, Jon B; Whishaw, Ian Q

    2018-04-01

    Reach-to-grasp movements feature the integration of a reach directed by the extrinsic (location) features of a target and a grasp directed by the intrinsic (size, shape) features of a target. The action-perception theory suggests that integration and scaling of a reach-to-grasp movement, including its trajectory and the concurrent digit shaping, are features that depend upon online action pathways of the dorsal visuomotor stream. Scaling is much less accurate for a pantomime reach-to-grasp movement, a pretend reach with the target object absent. Thus, the action-perception theory proposes that pantomime movement is mediated by perceptual pathways of the ventral visuomotor stream. A distinguishing visual feature of a real reach-to-grasp movement is gaze anchoring, in which a participant visually fixates the target throughout the reach and disengages, often by blinking or looking away/averting the head, at about the time that the target is grasped. The present study examined whether gaze anchoring is associated with pantomime reaching. The eye and hand movements of participants were recorded as they reached for a ball of one of three sizes, located on a pedestal at arm's length, or pantomimed the same reach with the ball and pedestal absent. The kinematic measures for real reach-to-grasp movements were coupled to the location and size of the target, whereas the kinematic measures for pantomime reach-to-grasp, although grossly reflecting target features, were significantly altered. Gaze anchoring was also tightly coupled to the target for real reach-to-grasp movements, but there was no systematic focus for gaze in pantomime reach-to-grasp, whether in relation to the virtual target, the previous location of the target, or the participant's reaching hand. The presence of gaze anchoring during real reach-to-grasp, and its absence in pantomime reach-to-grasp, supports the action-perception theory that real, but not pantomime, reaches are online visuomotor actions, and is discussed in relation to the neural control of real and pantomime reach-to-grasp movements.

  8. Instrument Display Visual Angles for Conventional Aircraft and the MQ-9 Ground Control Station

    NASA Technical Reports Server (NTRS)

    Bendrick, Gregg A.; Kamine, Tovy Haber

    2008-01-01

    Aircraft instrument panels should be designed such that primary displays are in the optimal viewing location to minimize pilot perception and response time. Human Factors engineers define three zones (i.e. "cones") of visual location: 1) "Easy Eye Movement" (foveal vision); 2) "Maximum Eye Movement" (peripheral vision with saccades), and 3) "Head Movement" (head movement required). Instrument display visual angles were measured to determine how well conventional aircraft (T-34, T-38, F-15B, F-16XL, F/A-18A, U-2D, ER-2, King Air, G-III, B-52H, DC-10, B747-SCA) and the MQ-9 ground control station (GCS) complied with these standards, and how they compared with each other. Methods: Selected instrument parameters included: attitude, pitch, bank, power, airspeed, altitude, vertical speed, heading, turn rate, slip/skid, AOA, flight path, latitude, longitude, course, bearing, range and time. Vertical and horizontal visual angles for each component were measured from the pilot's eye position in each system. Results: The vertical visual angles of displays in conventional aircraft lay within the cone of "Easy Eye Movement" for all but three of the parameters measured, and almost all of the horizontal visual angles fell within this range. All conventional vertical and horizontal visual angles lay within the cone of "Maximum Eye Movement". However, most instrument vertical visual angles of the MQ-9 GCS lay outside the cone of "Easy Eye Movement", though all were within the cone of "Maximum Eye Movement". All the horizontal visual angles for the MQ-9 GCS were within the cone of "Easy Eye Movement". Discussion: Most instrument displays in conventional aircraft lay within the cone of "Easy Eye Movement", though mission-critical instruments sometimes displaced less important instruments outside this area. Many of the MQ-9 GCS systems lay outside this area. Specific training for MQ-9 pilots may be needed to avoid increased response time and potential error during flight.
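
    The angular measurements described above follow directly from display geometry. As a rough illustration (not the study's actual method), a display's visual angle can be computed from its offset from the pilot's eye position and classified into one of the three cones; the 15° and 35° cutoffs below are commonly cited human-factors rules of thumb and are assumptions here, not values taken from this report:

```python
import math

def offset_angle_deg(offset_m, distance_m):
    """Angular offset of a display's center from the line of sight,
    given its linear offset and the viewing distance (both in meters)."""
    return math.degrees(math.atan2(offset_m, distance_m))

def classify_zone(angle_deg, easy=15.0, maximum=35.0):
    """Classify an angle into one of the three physiologic cones.

    The 15/35 degree thresholds are illustrative assumptions,
    not the standards used in the study.
    """
    if abs(angle_deg) <= easy:
        return "Easy Eye Movement"
    if abs(angle_deg) <= maximum:
        return "Maximum Eye Movement"
    return "Head Movement"

# A display 0.2 m below the sightline at 0.8 m viewing distance:
angle = offset_angle_deg(0.2, 0.8)   # about 14 degrees
zone = classify_zone(angle)          # falls in "Easy Eye Movement"
```

The same classification can then be repeated per instrument to reproduce the kind of per-parameter tally the abstract reports.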

  9. Instrument Display Visual Angles for Conventional Aircraft and the MQ-9 Ground Control Station

    NASA Technical Reports Server (NTRS)

    Kamine, Tovy Haber; Bendrick, Gregg A.

    2008-01-01

    Aircraft instrument panels should be designed such that primary displays are in the optimal viewing location to minimize pilot perception and response time. Human Factors engineers define three zones (i.e. "cones") of visual location: 1) "Easy Eye Movement" (foveal vision); 2) "Maximum Eye Movement" (peripheral vision with saccades), and 3) "Head Movement" (head movement required). Instrument display visual angles were measured to determine how well conventional aircraft (T-34, T-38, F-15B, F-16XL, F/A-18A, U-2D, ER-2, King Air, G-III, B-52H, DC-10, B747-SCA) and the MQ-9 ground control station (GCS) complied with these standards, and how they compared with each other. Selected instrument parameters included: attitude, pitch, bank, power, airspeed, altitude, vertical speed, heading, turn rate, slip/skid, AOA, flight path, latitude, longitude, course, bearing, range and time. Vertical and horizontal visual angles for each component were measured from the pilot's eye position in each system. The vertical visual angles of displays in conventional aircraft lay within the cone of "Easy Eye Movement" for all but three of the parameters measured, and almost all of the horizontal visual angles fell within this range. All conventional vertical and horizontal visual angles lay within the cone of "Maximum Eye Movement". However, most instrument vertical visual angles of the MQ-9 GCS lay outside the cone of "Easy Eye Movement", though all were within the cone of "Maximum Eye Movement". All the horizontal visual angles for the MQ-9 GCS were within the cone of "Easy Eye Movement". Most instrument displays in conventional aircraft lay within the cone of "Easy Eye Movement", though mission-critical instruments sometimes displaced less important instruments outside this area. Many of the MQ-9 GCS systems lay outside this area. Specific training for MQ-9 pilots may be needed to avoid increased response time and potential error during flight.
    The learning objectives include: 1) Know the three physiologic cones of eye/head movement; 2) Understand how instrument displays comply with these design principles in conventional aircraft and an uninhabited aerial vehicle system. Which of the following is NOT a recognized physiologic principle of instrument display design? 1) Cone of Easy Eye Movement; 2) Cone of Binocular Eye Movement; 3) Cone of Maximum Eye Movement; 4) Cone of Head Movement; 5) None of the above. Answer: 2) Cone of Binocular Eye Movement

  10. Experience, Context, and the Visual Perception of Human Movement

    ERIC Educational Resources Information Center

    Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie

    2004-01-01

    Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…

  11. Bilateral Activity-Dependent Interactions in the Developing Corticospinal System

    PubMed Central

    Friel, Kathleen M.; Martin, John H.

    2009-01-01

    Activity-dependent competition between the corticospinal (CS) systems in each hemisphere drives postnatal development of motor skills and stable CS tract connections with contralateral spinal motor circuits. Unilateral restriction of motor cortex (M1) activity during an early postnatal critical period impairs contralateral visually guided movements later in development and in maturity. Silenced M1 develops aberrant connections with the contralateral spinal cord whereas the initially active M1, in the other hemisphere, develops bilateral connections. In this study, we determined whether the aberrant pattern of CS tract terminations and motor impairments produced by early postnatal M1 activity restriction could be abrogated by reducing activity-dependent synaptic competition from the initially active M1 later in development. We first inactivated M1 unilaterally between postnatal weeks 5–7. We next inactivated M1 on the other side from weeks 7–11 (alternate inactivation), to reduce the competitive advantage that this side may have over the initially inactivated side. Alternate inactivation redirected aberrant contralateral CS tract terminations from the initially silenced M1 to their normal spinal territories and reduced the density of aberrant ipsilateral terminations from the initially active side. Normal movement endpoint control during visually guided locomotion was fully restored. This reorganization of CS terminals reveals an unsuspected late plasticity after the critical period for establishing the pattern of CS terminations in the spinal cord. Our findings show that robust bilateral interactions between the developing CS systems on each side are important for achieving balance between contralateral and ipsilateral CS tract connections and visuomotor control. PMID:17928450

  12. Visual and non-visual motion information processing during pursuit eye tracking in schizophrenia and bipolar disorder.

    PubMed

    Trillenberg, Peter; Sprenger, Andreas; Talamo, Silke; Herold, Kirsten; Helmchen, Christoph; Verleger, Rolf; Lencer, Rebekka

    2017-04-01

    Despite many reports on visual processing deficits in psychotic disorders, studies are needed on the integration of visual and non-visual components of eye movement control to improve the understanding of sensorimotor information processing in these disorders. Non-visual inputs to eye movement control include prediction of future target velocity from extrapolation of past visual target movement, and anticipation of future target movements. It is unclear whether non-visual input is impaired in patients with schizophrenia. We recorded smooth pursuit eye movements in 21 patients with schizophrenia spectrum disorder, 22 patients with bipolar disorder, and 24 controls. In a foveo-fugal ramp task, the target was either continuously visible or was blanked during movement. We determined peak gain (measuring overall performance), initial eye acceleration (measuring visually driven pursuit), deceleration after target extinction (measuring prediction), eye velocity drift before onset of target visibility (measuring anticipation), and residual gain during blanking intervals (measuring anticipation and prediction). In both patient groups, initial eye acceleration was decreased and the ability to adjust eye acceleration to increasing target acceleration was impaired. In contrast, neither deceleration nor eye drift velocity was reduced in patients, implying unimpaired non-visual contributions to pursuit drive. Disturbances of eye movement control in psychotic disorders appear to be a consequence of deficits in sensorimotor transformation rather than a pure failure in adding cognitive contributions to pursuit drive in higher-order cortical circuits. More generally, this deficit might reflect a fundamental imbalance between processing external input and acting according to internal preferences.
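
    The pursuit metrics named above have simple operational definitions. As a minimal sketch (assuming eye-velocity traces sampled at known times; the 100 ms analysis window and onset handling are placeholders, not the study's exact parameters):

```python
import numpy as np

def peak_gain(eye_velocity, target_velocity):
    """Peak pursuit gain: peak absolute eye velocity divided by
    (constant) target velocity."""
    return np.max(np.abs(np.asarray(eye_velocity, float))) / abs(target_velocity)

def initial_acceleration(eye_velocity, t, onset_s, window_s=0.1):
    """Initial eye acceleration: least-squares slope of the eye-velocity
    trace over a short window after pursuit onset."""
    t = np.asarray(t, float)
    v = np.asarray(eye_velocity, float)
    mask = (t >= onset_s) & (t < onset_s + window_s)
    slope, _intercept = np.polyfit(t[mask], v[mask], 1)
    return slope

# Synthetic check: a velocity ramp of 50 deg/s^2 sampled over 200 ms.
t = np.linspace(0.0, 0.2, 201)
v = 50.0 * t
accel = initial_acceleration(v, t, onset_s=0.0)  # recovers ~50 deg/s^2
```

Residual gain during blanking would use the same ratio as `peak_gain`, restricted to samples inside the blanking interval.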

  13. The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task

    PubMed Central

    Roverud, Elin; Streeter, Timothy; Mason, Christine R.; Kidd, Gerald

    2017-01-01

    The aim of this study was to evaluate the performance of a visually guided hearing aid (VGHA) under conditions designed to capture some aspects of “real-world” communication settings. The VGHA uses eye gaze to steer the acoustic look direction of a highly directional beamforming microphone array. Although the VGHA has been shown to enhance speech intelligibility for fixed-location, frontal targets, it is currently not known whether these benefits persist in the face of frequent changes in location of the target talker that are typical of conversational turn-taking. Participants were 14 young adults, 7 with normal hearing and 7 with bilateral sensorineural hearing impairment. Target stimuli were sequences of 12 question–answer pairs that were embedded in a mixture of competing conversations. The participant’s task was to respond via a key press after each answer indicating whether it was correct or not. Spatialization of the stimuli and microphone array processing were done offline using recorded impulse responses, before presentation over headphones. The look direction of the array was steered according to the eye movements of the participant as they followed a visual cue presented on a widescreen monitor. Performance was compared for a “dynamic” condition in which the target stimulus moved between three locations, and a “fixed” condition with a single target location. The benefits of the VGHA over natural binaural listening observed in the fixed condition were reduced in the dynamic condition, largely because visual fixation was less accurate. PMID:28758567

  14. Image-guided robotic surgery.

    PubMed

    Marescaux, Jacques; Soler, Luc

    2004-06-01

    Medical image processing leads to an improvement in patient care by guiding the surgical gesture. Three-dimensional models of patients that are generated from computed tomographic scans or magnetic resonance imaging allow improved surgical planning and surgical simulation that offers the opportunity for a surgeon to train the surgical gesture before performing it for real. These two preoperative steps can be used intra-operatively because of the development of augmented reality, which consists of superimposing the preoperative three-dimensional model of the patient onto the real intraoperative view. Augmented reality provides the surgeon with a view of the patient in transparency and can also guide the surgeon, thanks to the real-time tracking of surgical tools during the procedure. When adapted to robotic surgery, this tool tracking enables visual servoing, with the ability to automatically position and control surgical robotic arms in three dimensions. It is also now possible to filter physiologic movements such as breathing or the heartbeat. In the future, by combining augmented reality and robotics, these image-guided robotic systems will enable automation of the surgical procedure, which will be the next revolution in surgery.

  15. Effects of Peripheral Visual Field Loss on Eye Movements During Visual Search

    PubMed Central

    Wiecek, Emily; Pasquale, Louis R.; Fiser, Jozsef; Dakin, Steven; Bex, Peter J.

    2012-01-01

    Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affects eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A χ2 test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed that patients with PVFL show a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search. PMID:23162511
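
    The χ2 comparison of saccade-direction distributions can be sketched by binning directions into angular sectors and testing the two histograms for independence. The eight-sector binning below is an illustrative assumption, not necessarily the analysis used in the study:

```python
import numpy as np
from scipy.stats import chi2_contingency

def direction_histogram(angles_deg, n_bins=8):
    """Counts of saccade directions in equal angular sectors over 0-360 deg."""
    counts, _edges = np.histogram(np.mod(angles_deg, 360.0),
                                  bins=np.linspace(0.0, 360.0, n_bins + 1))
    return counts

def compare_directions(angles_a, angles_b, n_bins=8):
    """Chi-square test of independence between two saccade-direction
    histograms; returns the test statistic and p-value."""
    table = np.vstack([direction_histogram(angles_a, n_bins),
                       direction_histogram(angles_b, n_bins)])
    chi2, p, _dof, _expected = chi2_contingency(table)
    return chi2, p

# Example: a roughly uniform distribution vs. one clustered near 90 deg.
rng = np.random.default_rng(0)
uniform = rng.uniform(0.0, 360.0, 1000)
clustered = rng.normal(90.0, 15.0, 1000)
chi2, p = compare_directions(uniform, clustered)  # p is very small here
```

A per-subject directional bias could then be read off the normalized histogram rather than the omnibus test.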

  16. On the possible roles of microsaccades and drifts in visual perception.

    PubMed

    Ahissar, Ehud; Arieli, Amos; Fried, Moshe; Bonneh, Yoram

    2016-01-01

    During natural viewing large saccades shift the visual gaze from one target to another every few hundreds of milliseconds. The role of microsaccades (MSs), small saccades that show up during long fixations, is still debated. A major debate is whether MSs are used to redirect the visual gaze to a new location or to encode visual information through their movement. We argue that these two functions cannot be optimized simultaneously and present several pieces of evidence suggesting that MSs redirect the visual gaze and that the visual details are sampled and encoded by ocular drifts. We show that drift movements are indeed suitable for visual encoding. Yet, it is not clear to what extent drift movements are controlled by the visual system, and to what extent they interact with saccadic movements. We analyze several possible control schemes for saccadic and drift movements and propose experiments that can discriminate between them. We present the results of preliminary analyses of existing data as a sanity check to the testability of our predictions.

  17. Eye movement dysfunction in first-degree relatives of patients with schizophrenia: a meta-analytic evaluation of candidate endophenotypes.

    PubMed

    Calkins, Monica E; Iacono, William G; Ones, Deniz S

    2008-12-01

    Several forms of eye movement dysfunction (EMD) are regarded as promising candidate endophenotypes of schizophrenia. Discrepancies in individual study results have led to inconsistent conclusions regarding particular aspects of EMD in relatives of schizophrenia patients. To quantitatively evaluate and compare the candidacy of smooth pursuit, saccade and fixation deficits in first-degree biological relatives, we conducted a set of meta-analytic investigations. Among 18 measures of EMD, memory-guided saccade accuracy and error rate, global smooth pursuit dysfunction, intrusive saccades during fixation, antisaccade error rate and smooth pursuit closed-loop gain emerged as best differentiating relatives from controls (standardized mean differences ranged from .46 to .66), with no significant differences among these measures. Anticipatory saccades, but no other smooth pursuit component measures, were also increased in relatives. Visually guided reflexive saccades were largely normal. Moderator analyses examining design characteristics revealed few variables affecting the magnitude of the meta-analytically observed effects. The moderate effect sizes of relatives vs. controls in selective aspects of EMD support their endophenotype potential. Future work should focus on facilitating endophenotype utility through attention to heterogeneity of EMD performance, relationships among forms of EMD, and application in molecular genetics studies.
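
    The standardized mean differences reported above (.46 to .66) are ratios of group mean differences to a pooled standard deviation. A minimal sketch of the pooled-variance Cohen's d, assuming that is the SMD variant used (meta-analyses sometimes use Hedges' g instead):

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference between two groups using the
    pooled standard deviation (Cohen's d)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased sample variances.
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Means 4 vs. 3 with pooled SD 2 give d = 0.5:
d = cohens_d([2.0, 4.0, 6.0], [1.0, 3.0, 5.0])
```

By the usual convention, values around 0.5 such as those reported are "moderate" effects.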

  18. Ultrasound-guided platelet-rich plasma injection for distal biceps tendinopathy.

    PubMed

    Barker, Scott L; Bell, Simon N; Connell, David; Coghlan, Jennifer A

    2015-04-01

    Distal biceps tendinopathy is an uncommon cause of elbow pain. The optimum treatment for cases refractory to conservative treatment is unclear. Platelet-rich plasma has been used successfully for other tendinopathies around the elbow. Six patients with clinical and radiological evidence of distal biceps tendinopathy underwent ultrasound-guided platelet-rich plasma (PRP) injection. Clinical examination findings, visual analogue score (VAS) for pain and Mayo Elbow Performance scores were recorded. The Mayo Elbow Performance Score improved from 68.3 (range 65 to 85) (fair function) to 95 (range 85 to 100) (excellent function). The VAS at rest improved from a mean of 2.25 (range 2 to 5) pre-injection to 0. The VAS with movement improved from a mean of 7.25 (range 5 to 8) pre-injection to 1.3 (range 0 to 2). No complications were noted. Ultrasound-guided PRP injection appears to be a safe and effective treatment for recalcitrant cases of distal biceps tendinopathy. Further investigation with a randomized controlled trial is needed to fully assess its efficacy.

  19. Lateral eye-movement responses to visual stimuli.

    PubMed

    Wilbur, M P; Roberts-Wilbur, J

    1985-08-01

    The association of left lateral eye-movement with emotionality or arousal of affect and of right lateral eye-movement with cognitive/interpretive operations and functions was investigated. Participants were junior and senior students enrolled in an undergraduate course in developmental psychology. There were 37 women and 13 men, ranging from 19 to 45 yr. of age. Using videotaped lateral eye-movements of 50 participants' responses to 15 visually presented stimuli (precategorized as neutral, emotional, or intellectual), content and statistical analyses supported the association between left lateral eye-movement and emotional arousal and between right lateral eye-movement and cognitive functions. Precategorized visual stimuli included items such as a ball (neutral), gun (emotional), and calculator (intellectual). The findings are congruent with existing lateral eye-movement literature and also are additive by using visual stimuli that do not require the explicit response or implicit processing of verbal questioning.

  20. Asymmetries in the Control of Saccadic Eye Movements to Bifurcating Targets.

    ERIC Educational Resources Information Center

    Zeevi, Yehoshua Y.; And Others

    The examination of saccadic eye movements--rapid shifts in gaze from one visual area of interest to another--is useful in studying pilot's visual learning in flight simulator training. Saccadic eye movements are the basic oculomotor response associated with the acquisition of visual information and provide an objective measure of higher perceptual…

  1. Hawk Eyes II: Diurnal Raptors Differ in Head Movement Strategies When Scanning from Perches

    PubMed Central

    O'Rourke, Colleen T.; Pitlik, Todd; Hoover, Melissa; Fernández-Juricic, Esteban

    2010-01-01

    Background Relatively little is known about the degree of inter-specific variability in visual scanning strategies in species with laterally placed eyes (e.g., birds). This is relevant because many species detect prey while perching; therefore, head movement behavior may be an indicator of prey detection rate, a central parameter in foraging models. We studied head movement strategies in three diurnal raptors belonging to the Accipitridae and Falconidae families. Methodology/Principal Findings We used behavioral recording of individuals under field and captive conditions to calculate the rate of two types of head movements and the interval between consecutive head movements. Cooper's Hawks had the highest rate of regular head movements, which can facilitate tracking prey items in the visually cluttered environment they inhabit (e.g., forested habitats). On the other hand, Red-tailed Hawks showed long intervals between consecutive head movements, which is consistent with prey searching in less visually obstructed environments (e.g., open habitats) and with detecting prey movement from a distance with their central foveae. Finally, American Kestrels had the highest rates of translational head movements (vertical or frontal displacements of the head keeping the bill in the same direction), which have been associated with depth perception through motion parallax. Higher translational head movement rates may be a strategy to compensate for the reduced degree of eye movement of this species. Conclusions Cooper's Hawks, Red-tailed Hawks, and American Kestrels use both regular and translational head movements, but to different extents. We conclude that these diurnal raptors have species-specific strategies to gather visual information while perching. These strategies may optimize prey search and detection with different visual systems in habitat types with different degrees of visual obstruction. PMID:20877650

  2. Hawk eyes II: diurnal raptors differ in head movement strategies when scanning from perches.

    PubMed

    O'Rourke, Colleen T; Pitlik, Todd; Hoover, Melissa; Fernández-Juricic, Esteban

    2010-09-22

    Relatively little is known about the degree of inter-specific variability in visual scanning strategies in species with laterally placed eyes (e.g., birds). This is relevant because many species detect prey while perching; therefore, head movement behavior may be an indicator of prey detection rate, a central parameter in foraging models. We studied head movement strategies in three diurnal raptors belonging to the Accipitridae and Falconidae families. We used behavioral recording of individuals under field and captive conditions to calculate the rate of two types of head movements and the interval between consecutive head movements. Cooper's Hawks had the highest rate of regular head movements, which can facilitate tracking prey items in the visually cluttered environment they inhabit (e.g., forested habitats). On the other hand, Red-tailed Hawks showed long intervals between consecutive head movements, which is consistent with prey searching in less visually obstructed environments (e.g., open habitats) and with detecting prey movement from a distance with their central foveae. Finally, American Kestrels had the highest rates of translational head movements (vertical or frontal displacements of the head keeping the bill in the same direction), which have been associated with depth perception through motion parallax. Higher translational head movement rates may be a strategy to compensate for the reduced degree of eye movement of this species. Cooper's Hawks, Red-tailed Hawks, and American Kestrels use both regular and translational head movements, but to different extents. We conclude that these diurnal raptors have species-specific strategies to gather visual information while perching. These strategies may optimize prey search and detection with different visual systems in habitat types with different degrees of visual obstruction.

  3. Fixation Biases towards the Index Finger in Almost-Natural Grasping

    PubMed Central

    Voudouris, Dimitris; Smeets, Jeroen B. J.; Brenner, Eli

    2016-01-01

    We use visual information to guide our grasping movements. When grasping an object with a precision grip, the two digits need to reach two different positions more or less simultaneously, but the eyes can only be directed to one position at a time. Several studies that have examined eye movements in grasping have found that people tend to direct their gaze near where their index finger will contact the object. Here we aimed at better understanding why people do so by asking participants to lift an object off a horizontal surface. They were to grasp the object with a precision grip while movements of their hand, eye and head were recorded. We confirmed that people tend to look closer to positions that a digit needs to reach more accurately. Moreover, we show that where they look as they reach for the object depends on where they were looking before, presumably because they try to minimize the time during which the eyes are moving so fast that no new visual information is acquired. Most importantly, we confirmed that people have a bias to direct gaze towards the index finger’s contact point rather than towards that of the thumb. In our study, this cannot be explained by the index finger contacting the object before the thumb. Instead, it appears to be because the index finger moves to a position that is hidden behind the object that is grasped, probably making this the place at which one is most likely to encounter unexpected problems that would benefit from visual guidance. However, this cannot explain the bias that was found in previous studies, where neither contact point was hidden, so it cannot be the only explanation for the bias. PMID:26766551

  4. Hawk Eyes I: Diurnal Raptors Differ in Visual Fields and Degree of Eye Movement

    PubMed Central

    O'Rourke, Colleen T.; Hall, Margaret I.; Pitlik, Todd; Fernández-Juricic, Esteban

    2010-01-01

    Background Different strategies to search for and detect prey may place specific demands on sensory modalities. We studied visual field configuration, degree of eye movement, and orbit orientation in three diurnal raptors belonging to the Accipitridae and Falconidae families. Methodology/Principal Findings We used an ophthalmoscopic reflex technique and an integrated 3D digitizer system. We found inter-specific variation in visual field configuration and degree of eye movement, but not in orbit orientation. Red-tailed Hawks have relatively small binocular areas (∼33°) and wide blind areas (∼82°), but an intermediate degree of eye movement (∼5°), which underscores the importance of lateral vision rather than binocular vision to scan for distant prey in open areas. Cooper's Hawks have relatively wide binocular fields (∼36°), small blind areas (∼60°), and a high degree of eye movement (∼8°), which may increase visual coverage and enhance prey detection in closed habitats. Additionally, we found that Cooper's Hawks can visually inspect the items held in the tip of the bill, which may facilitate food handling. American Kestrels have intermediate-sized binocular and lateral areas that may be used in prey detection at different distances through stereopsis and motion parallax, whereas their low degree of eye movement (∼1°) may help stabilize the image when hovering above prey before an attack. Conclusions We conclude that: (a) there are between-species differences in visual field configuration in these diurnal raptors; (b) these differences are consistent with prey searching strategies and the degree of visual obstruction in the environment (e.g., open and closed habitats); (c) variations in the degree of eye movement between species appear associated with foraging strategies; and (d) the size of the binocular and blind areas in hawks can vary substantially due to eye movements. Inter-specific variation in visual fields and eye movements can influence behavioral strategies to visually search for and track prey while perching. PMID:20877645

  5. Hawk eyes I: diurnal raptors differ in visual fields and degree of eye movement.

    PubMed

    O'Rourke, Colleen T; Hall, Margaret I; Pitlik, Todd; Fernández-Juricic, Esteban

    2010-09-22

    Different strategies to search for and detect prey may place specific demands on sensory modalities. We studied visual field configuration, degree of eye movement, and orbit orientation in three diurnal raptors belonging to the Accipitridae and Falconidae families. We used an ophthalmoscopic reflex technique and an integrated 3D digitizer system. We found inter-specific variation in visual field configuration and degree of eye movement, but not in orbit orientation. Red-tailed Hawks have relatively small binocular areas (∼33°) and wide blind areas (∼82°), but an intermediate degree of eye movement (∼5°), which underscores the importance of lateral vision rather than binocular vision for scanning for distant prey in open areas. Cooper's Hawks have relatively wide binocular fields (∼36°), small blind areas (∼60°), and a high degree of eye movement (∼8°), which may increase visual coverage and enhance prey detection in closed habitats. Additionally, we found that Cooper's Hawks can visually inspect items held in the tip of the bill, which may facilitate food handling. American Kestrels have intermediate-sized binocular and lateral areas that may be used for prey detection at different distances through stereopsis and motion parallax, whereas their low degree of eye movement (∼1°) may help stabilize the image when hovering above prey before an attack. We conclude that: (a) there are between-species differences in visual field configuration in these diurnal raptors; (b) these differences are consistent with prey searching strategies and the degree of visual obstruction in the environment (e.g., open and closed habitats); (c) variations in the degree of eye movement between species appear associated with foraging strategies; and (d) the size of the binocular and blind areas in hawks can vary substantially due to eye movements. Inter-specific variation in visual fields and eye movements can influence behavioral strategies to visually search for and track prey while perching.

  6. Spatiotemporal dynamics of brain activity during the transition from visually guided to memory-guided force control

    PubMed Central

    Poon, Cynthia; Chin-Cottongim, Lisa G.; Coombes, Stephen A.; Corcos, Daniel M.

    2012-01-01

    It is well established that the prefrontal cortex is involved during memory-guided tasks whereas visually guided tasks are controlled in part by a frontal-parietal network. However, the nature of the transition from visually guided to memory-guided force control is not as well established. As such, this study examines the spatiotemporal pattern of brain activity that occurs during the transition from visually guided to memory-guided force control. We measured 128-channel scalp electroencephalography (EEG) in healthy individuals while they performed a grip force task. After visual feedback was removed, the first significant change in event-related activity occurred in the left central region by 300 ms, followed by changes in prefrontal cortex by 400 ms. Low-resolution electromagnetic tomography (LORETA) was used to localize the strongest activity to the left ventral premotor cortex and ventral prefrontal cortex. A second experiment altered visual feedback gain but did not require memory. In contrast to memory-guided force control, altering visual feedback gain did not lead to early changes in the left central and midline prefrontal regions. Decreasing the spatial amplitude of visual feedback did lead to changes in the midline central region by 300 ms, followed by changes in occipital activity by 400 ms. The findings show that subjects rely on sensorimotor memory processes involving left ventral premotor cortex and ventral prefrontal cortex after the immediate transition from visually guided to memory-guided force control. PMID:22696535

  7. The Initiation of Smooth Pursuit is Delayed in Anisometropic Amblyopia

    PubMed Central

    Raashid, Rana Arham; Liu, Ivy Ziqian; Blakeman, Alan; Goltz, Herbert C.; Wong, Agnes M. F.

    2016-01-01

    Purpose Several behavioral studies have shown that the reaction times of visually guided movements are slower in people with amblyopia, particularly during amblyopic eye viewing. Here, we tested the hypothesis that the initiation of smooth pursuit eye movements, which are responsible for accurately keeping moving objects on the fovea, is delayed in people with anisometropic amblyopia. Methods Eleven participants with anisometropic amblyopia and 14 visually normal observers were asked to track a step-ramp target moving at ±15°/s horizontally as quickly and as accurately as possible. The experiment was conducted under three viewing conditions: amblyopic/nondominant eye, binocular, and fellow/dominant eye viewing. Outcome measures were smooth pursuit latency, open-loop gain, steady state gain, and catch-up saccade frequency. Results Participants with anisometropic amblyopia initiated smooth pursuit significantly more slowly during amblyopic eye viewing (206 ± 20 ms) than visually normal observers viewing with their nondominant eye (183 ± 17 ms, P = 0.002). However, mean pursuit latency in the anisometropic amblyopia group during binocular and monocular fellow eye viewing was comparable to that of the visually normal group. Mean open-loop gain, steady state gain, and catch-up saccade frequency were similar between the two groups, but participants with anisometropic amblyopia exhibited more variable steady state gain (P = 0.045). Conclusions This study provides evidence of temporally delayed smooth pursuit initiation in anisometropic amblyopia. After initiation, the smooth pursuit velocity profile of anisometropic amblyopia participants is similar to that of visually normal controls. This finding differs from what has been observed previously in participants with strabismic amblyopia, who exhibit reduced smooth pursuit velocity gains with more catch-up saccades. PMID:27070109
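    Pursuit latency, the key outcome measure above, is typically read off the eye velocity trace relative to target motion onset. The abstract does not state the detection algorithm used, so the threshold-crossing rule and all numbers below are assumptions for illustration only:

```python
# Hypothetical sketch: estimating smooth pursuit onset latency from an eye
# velocity trace. The 2 deg/s threshold and the synthetic trace are assumed,
# not taken from the study.
import numpy as np

def pursuit_latency_ms(eye_velocity, sample_rate_hz, threshold=2.0):
    """Time (ms) at which |eye velocity| first exceeds `threshold` deg/s
    after target motion onset, or None if it never does."""
    above = np.nonzero(np.abs(eye_velocity) > threshold)[0]
    if above.size == 0:
        return None
    return 1000.0 * above[0] / sample_rate_hz

# Synthetic 1 kHz trace: the eye is still for 200 ms, then accelerates
# toward the 15 deg/s ramp target.
t = np.arange(0, 400) / 1000.0
velocity = np.where(t < 0.2, 0.0, np.minimum(15.0, (t - 0.2) * 150.0))
print(pursuit_latency_ms(velocity, 1000))
```

    A real analysis would also need saccade removal and baseline drift correction before applying any onset criterion.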

  8. Planning Ahead: Object-Directed Sequential Actions Decoded from Human Frontoparietal and Occipitotemporal Networks

    PubMed Central

    Gallivan, Jason P.; Johnsrude, Ingrid S.; Randall Flanagan, J.

    2016-01-01

    Object-manipulation tasks (e.g., drinking from a cup) typically involve sequencing together a series of distinct motor acts (e.g., reaching toward, grasping, lifting, and transporting the cup) in order to accomplish some overarching goal (e.g., quenching thirst). Although several studies in humans have investigated the neural mechanisms supporting the planning of visually guided movements directed toward objects (such as reaching or pointing), only a handful have examined how manipulatory sequences of actions—those that occur after an object has been grasped—are planned and represented in the brain. Here, using event-related functional MRI and pattern decoding methods, we investigated the neural basis of real-object manipulation using a delayed-movement task in which participants first prepared and then executed different object-directed action sequences that varied either in their complexity or final spatial goals. Consistent with previous reports of preparatory brain activity in non-human primates, we found that activity patterns in several frontoparietal areas reliably predicted entire action sequences in advance of movement. Notably, we found that similar sequence-related information could also be decoded from pre-movement signals in object- and body-selective occipitotemporal cortex (OTC). These findings suggest that both frontoparietal and occipitotemporal circuits are engaged in transforming object-related information into complex, goal-directed movements. PMID:25576538

  9. CUE: counterfeit-resistant usable eye movement-based authentication via oculomotor plant characteristics and complex eye movement patterns

    NASA Astrophysics Data System (ADS)

    Komogortsev, Oleg V.; Karpov, Alexey; Holland, Corey D.

    2012-06-01

    The widespread use of computers throughout modern society introduces the necessity for usable and counterfeit-resistant authentication methods to ensure secure access to personal resources such as bank accounts, e-mail, and social media. Current authentication methods require tedious memorization of lengthy pass phrases, are often prone to shoulder-surfing, and may be easily replicated (either by counterfeiting parts of the human body or by guessing an authentication token based on readily available information). This paper describes preliminary work toward a counterfeit-resistant usable eye movement-based (CUE) authentication method. CUE does not require any passwords (improving the memorability aspect of the authentication system), and aims to provide high resistance to spoofing and shoulder-surfing by employing the combined biometric capabilities of two behavioral biometric traits: 1) oculomotor plant characteristics (OPC), which represent the internal, non-visible, anatomical structure of the eye; 2) complex eye movement patterns (CEM), which represent the strategies employed by the brain to guide visual attention. Both OPC and CEM are extracted from the eye movement signal provided by an eye tracking system. Preliminary results indicate that the fusion of OPC and CEM traits is capable of providing a 30% reduction in authentication error when compared to the authentication accuracy of individual traits.
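    The reported gain comes from fusing the two traits' match scores. The abstract does not give the fusion rule, so the min-max normalization and weighted sum below are assumptions, sketched only to show what score-level fusion of OPC and CEM scores could look like:

```python
# Hypothetical score-level fusion of two biometric traits (OPC and CEM).
# Normalization scheme, weights, and scores are all made up for illustration.
def min_max_normalize(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(opc_scores, cem_scores, w_opc=0.5):
    """Weighted-sum fusion of two match-score lists after normalization."""
    opc = min_max_normalize(opc_scores)
    cem = min_max_normalize(cem_scores)
    return [w_opc * a + (1 - w_opc) * b for a, b in zip(opc, cem)]

# Three enrolled candidates scored independently by each trait:
fused = fuse([0.2, 0.9, 0.4], [0.1, 0.8, 0.7])
best = max(range(len(fused)), key=fused.__getitem__)
print(best)  # the candidate with the highest fused score
```

    Fusing at the score level (rather than at the decision level) lets one trait compensate when the other is noisy, which is one plausible route to the error reduction the abstract reports.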

  10. The analysis of the influence of fractal structure of stimuli on fractal dynamics in fixational eye movements and EEG signal

    NASA Astrophysics Data System (ADS)

    Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin

    2016-05-01

    One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of a visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to the ‘complex’ visual stimulus. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG, and the fixational eye movements. The relationship observed in this research can be further investigated and applied to the treatment of different vision disorders.
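    "Fractality" of a time series such as a fixational eye movement trace is often summarized by a scaling exponent. The abstract does not specify the estimator used, so the rescaled-range (R/S) Hurst exponent sketch below is an assumed, illustrative choice, not the authors' method:

```python
# Illustrative rescaled-range (R/S) estimate of the Hurst exponent of a
# 1-D time series. Window sizes are arbitrary choices.
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent as the slope of log(R/S) vs log(n)."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())       # cumulative deviation profile
            r = dev.max() - dev.min()           # range of the profile
            s = w.std()
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(0)
h = hurst_rs(rng.standard_normal(4096))  # uncorrelated noise: H near 0.5
print(round(h, 2))
```

    Under this convention, persistent (smoother, more trending) series give H above 0.5 and anti-persistent ones give H below 0.5, which is the sense in which "higher fractality" and "lower fractality" series can be compared.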

  11. Impact of post-stroke unilateral spatial neglect on goal-directed arm movements: systematic literature review.

    PubMed

    Ogourtsova, Tatiana; Archambault, Philippe; Lamontagne, Anouk

    2015-12-01

    Unilateral spatial neglect (USN), a highly prevalent post-stroke impairment, refers to an inability to orient or respond to stimuli located in the contralesional visual hemispace. Unilateral spatial neglect has been shown to strongly affect motor performance in functional activities, including movements of the non-affected upper extremity (UE). To date, our understanding of the effects of USN on goal-directed UE movements is limited, and comparing the performance of individuals post-stroke with and without USN is required. The objective was to determine how, in individuals with stroke, the presence of USN, in comparison to its absence, impacts different types of goal-directed movements of the non-affected UE. The present review consisted of a comprehensive literature search, an assessment of the quality of the selected studies, and qualitative data analysis. A total of 20 studies of moderate to high quality were selected. USN-specific impairments were found in tasks that required perceptual, memory-guided, or delayed actions, and fewer impairments were found in tasks that required an immediate action to a predefined target. The results indicate that USN contributes to deficits observed in action execution with the non-affected UE when tasks place greater perceptual demands.

  12. Online control of reaching and pointing to visual, auditory, and multimodal targets: Effects of target modality and method of determining correction latency.

    PubMed

    Holmes, Nicholas P; Dakwar, Azar R

    2015-12-01

    Movements aimed towards objects occasionally have to be adjusted when the object moves. These online adjustments can be very rapid, occurring in as little as 100 ms. More is known about the latency and neural basis of online control of movements to visual than to auditory target objects. We examined the latency of online corrections in reaching-to-point movements to visual and auditory targets that could change side and/or modality at movement onset. Visual or auditory targets were presented on the left or right sides, and participants were instructed to reach and point to them as quickly and as accurately as possible. On half of the trials, the targets changed side at movement onset, and participants had to correct their movements to point to the new target location as quickly as possible. Given different published approaches to measuring the latency for initiating movement corrections, we examined several different methods systematically. What we describe here as the optimal methods involved fitting a straight-line model to the velocity of the correction movement, rather than using a statistical criterion to determine correction onset. In the multimodal experiment, these model-fitting methods produced significantly lower latencies for correcting movements away from the auditory targets than away from the visual targets. Our results confirm that rapid online correction is possible for auditory targets, but further work is required to determine whether the underlying control system for reaching and pointing movements is the same for auditory and visual targets.
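    The straight-line idea can be illustrated with a toy computation: fit a line to the rising phase of the lateral correction velocity and take its zero crossing as the onset. The fitting window (50% to 100% of peak velocity) and the synthetic trace are assumptions, not the authors' exact procedure:

```python
# Toy line-extrapolation estimate of movement-correction onset from a
# lateral velocity trace. Window choice and data are illustrative only.
import numpy as np

def correction_onset_ms(t_ms, lateral_velocity):
    peak = np.argmax(np.abs(lateral_velocity))
    rising = np.nonzero(
        np.abs(lateral_velocity) >= 0.5 * abs(lateral_velocity[peak]))[0][0]
    # Fit the segment between 50% and 100% of peak, extrapolate back to zero.
    seg = slice(rising, peak + 1)
    slope, intercept = np.polyfit(t_ms[seg], lateral_velocity[seg], 1)
    return -intercept / slope

t = np.arange(0, 300.0)                        # ms, 1 kHz sampling
v = np.where(t < 120, 0.0, (t - 120) * 0.8)    # correction ramps up at 120 ms
print(correction_onset_ms(t, v))
```

    Compared with a fixed statistical threshold, this extrapolation is less sensitive to where exactly the velocity first clears the noise floor, which is one reason a model-fitting approach can yield more stable latency estimates.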

  13. Impulse processing: A dynamical systems model of incremental eye movements in the visual world paradigm

    PubMed Central

    Kukona, Anuenue; Tabor, Whitney

    2011-01-01

    The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355

  14. Development of a novel visuomotor integration paradigm by integrating a virtual environment with mobile eye-tracking and motion-capture systems

    PubMed Central

    Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.

    2018-01-01

    Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To assess VMI comprehensively and quantitatively, we developed a paradigm integrating virtual environments, motion capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370

  15. An Indoor Navigation System for the Visually Impaired

    PubMed Central

    Guerrero, Luis A.; Vasquez, Francisco; Ochoa, Sergio F.

    2012-01-01

    Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they involve an important deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed with usability as the quality requirement to be maximized. The solution identifies the position of a person and calculates the velocity and direction of his or her movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated in two experimental scenarios. Although the results are not yet sufficient to support strong conclusions, they indicate that the system is suitable for guiding visually impaired people through an unknown built environment. PMID:22969398

  16. Detecting delay in visual feedback of an action as a monitor of self recognition.

    PubMed

    Hoover, Adria E N; Harris, Laurence R

    2012-10-01

    How do we distinguish "self" from "other"? The correlation between willing an action and seeing it occur is an important cue. We exploited the fact that this correlation needs to occur within a restricted temporal window in order to obtain a quantitative assessment of when a body part is identified as "self". We measured the threshold and sensitivity (d') for detecting a delay between movements of the finger (of both the dominant and non-dominant hands) and visual feedback as seen from four visual perspectives (the natural view, and mirror-reversed and/or inverted views). Each trial consisted of one presentation with minimum delay and another with a delay of between 33 and 150 ms. Participants indicated which presentation contained the delayed view. We varied the amount of efference copy available for this task by comparing performance for discrete movements, which are associated with a stronger efference copy, and continuous movements. Sensitivity to detect asynchrony between visual and proprioceptive information was significantly higher when movements were viewed from a "plausible" self perspective than when the view was reversed or inverted. Further, we found differences in performance between dominant and non-dominant hand finger movements across the continuous and discrete movements. Performance varied with the viewpoint from which the visual feedback was presented and with the efferent component, such that optimal performance was obtained when the presentation was in the normal natural orientation and clear efferent information was available. Variations in sensitivity to visual/non-visual temporal incongruence with the viewpoint in which a movement is seen may help determine the arrangement of the underlying visual representation of the body.
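    The sensitivity measure d' reported above is conventionally computed from the z-transformed hit and false-alarm rates. The example rates below are made up for illustration; they are not the study's data:

```python
# Standard signal-detection sensitivity: d' = Z(hit rate) - Z(false-alarm rate).
# The 0.85 / 0.20 rates are hypothetical, not from the study.
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# e.g. an observer picks the delayed interval on 85% of delay trials but
# calls the minimum-delay interval "delayed" on 20% of trials:
print(round(d_prime(0.85, 0.20), 2))
```

    A d' near zero means delayed and minimum-delay presentations are indistinguishable; higher values indicate the asynchrony is reliably detected, which is how the perspective and efference-copy effects above are quantified.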

  17. Audio-Visual Stimulation in Conjunction with Functional Electrical Stimulation to Address Upper Limb and Lower Limb Movement Disorder.

    PubMed

    Kumar, Deepesh; Verma, Sunny; Bhattacharya, Sutapa; Lahiri, Uttama

    2016-06-13

    Neurological disorders often manifest in the form of movement deficits on the part of the patient. The conventional rehabilitation exercises used to address these deficits, though powerful, are often monotonous in nature. Adequate audio-visual stimulation can prove to be motivational. In the research presented here, we show the applicability of audio-visual stimulation to rehabilitation exercises addressing at least some of the movement deficits of the upper and lower limbs. In addition to the audio-visual stimulation, we also use Functional Electrical Stimulation (FES). We further show the applicability of FES in conjunction with audio-visual stimulation, delivered through a VR-based platform, to the grasping skills of patients with movement disorders.

  18. Behavioral and neural effects of congruency of visual feedback during short-term motor learning.

    PubMed

    Ossmy, Ori; Mukamel, Roy

    2018-05-15

    Visual feedback can facilitate or interfere with movement execution. Here, we describe behavioral and neural mechanisms by which the congruency of visual feedback during physical practice of a motor skill modulates subsequent performance gains. Eighteen healthy subjects learned to execute rapid sequences of right hand finger movements during fMRI scans either with or without visual feedback. Feedback consisted of a real-time, movement-based display of virtual hands that was either congruent (right virtual hand movement) or incongruent (left virtual hand movement yoked to the executing right hand). At the group level, right hand performance gains following training with congruent visual feedback were significantly higher relative to training without visual feedback. Conversely, performance gains following training with incongruent visual feedback were significantly lower. Interestingly, across individual subjects these opposite effects correlated. Activation in the Supplementary Motor Area (SMA) during training corresponded to individual differences in subsequent performance gains. Furthermore, functional coupling of SMA with visual cortices predicted individual differences in behavior. Our results demonstrate that some individuals are more sensitive than others to congruency of visual feedback during short-term motor learning and that neural activation in SMA correlates with such inter-individual differences.

  19. Sensory Agreement Guides Kinetic Energy Optimization of Arm Movements during Object Manipulation.

    PubMed

    Farshchiansadegh, Ali; Melendez-Calderon, Alejandro; Ranganathan, Rajiv; Murphey, Todd D; Mussa-Ivaldi, Ferdinando A

    2016-04-01

    The laws of physics establish the energetic efficiency of our movements. In some cases, like locomotion, the mechanics of the body dominate in determining the energetically optimal course of action. In other tasks, such as manipulation, energetic costs depend critically upon the variable properties of objects in the environment. Can the brain identify and follow energy-optimal motions when these motions require moving along unfamiliar trajectories? What feedback information is required for such optimal behavior to occur? To answer these questions, we asked participants to move their dominant hand between different positions while holding a virtual mechanical system with complex dynamics (a planar double pendulum). In this task, trajectories of minimum kinetic energy were along curvilinear paths. Our findings demonstrate that participants were capable of finding the energy-optimal paths, but only when provided with veridical visual and haptic information pertaining to the object; without such information, the trajectories were executed along rectilinear paths.
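    Why the optimal paths are curvilinear follows from the configuration-dependent kinetic energy of the simulated object. For a planar double pendulum with point masses, the standard expression couples the two joint velocities through the relative link angle; the parameter values below are arbitrary, since the abstract does not give the simulated object's parameters:

```python
# Kinetic energy of a planar double pendulum with point masses m1, m2 on
# links l1, l2 (standard textbook form; parameter values are made up):
#   KE = 1/2 m1 l1^2 w1^2
#      + 1/2 m2 (l1^2 w1^2 + l2^2 w2^2 + 2 l1 l2 w1 w2 cos(t1 - t2))
import math

def double_pendulum_ke(theta1, theta2, w1, w2, m1=1.0, m2=1.0, l1=0.3, l2=0.3):
    ke1 = 0.5 * m1 * (l1 * w1) ** 2
    ke2 = 0.5 * m2 * ((l1 * w1) ** 2 + (l2 * w2) ** 2
                      + 2 * l1 * l2 * w1 * w2 * math.cos(theta1 - theta2))
    return ke1 + ke2

# With the links aligned, co-rotating joints cost far more than
# counter-rotating ones, because the second mass's velocity contributions
# add in one case and cancel in the other:
print(double_pendulum_ke(0.0, 0.0, 1.0, 1.0))
print(double_pendulum_ke(0.0, 0.0, 1.0, -1.0))
```

    Because the cross term depends on the joint angles, the hand path that minimizes total kinetic energy generally bends away from a straight line, consistent with the curvilinear optima described above.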

  20. Attention maintains mental extrapolation of target position: irrelevant distractors eliminate forward displacement after implied motion.

    PubMed

    Kerzel, Dirk

    2003-05-01

    Observers' judgments of the final position of a moving target are typically shifted in the direction of implied motion ("representational momentum"). The role of attention is unclear: visual attention may be necessary to maintain or halt target displacement. When attention was captured by irrelevant distractors presented during the retention interval, forward displacement after implied target motion disappeared, suggesting that attention may be necessary to maintain mental extrapolation of target motion. In a further corroborative experiment, the deployment of attention was measured after a sequence of implied motion, and faster responses were observed to stimuli appearing in the direction of motion. Thus, attention may guide the mental extrapolation of target motion. Additionally, eye movements were measured during stimulus presentation and retention interval. The results showed that forward displacement with implied motion does not depend on eye movements. Differences between implied and smooth motion are discussed with respect to recent neurophysiological findings.

  1. Rhesus monkeys (Macaca mulatta), video tasks, and implications for stimulus-response spatial contiguity

    NASA Technical Reports Server (NTRS)

    Rumbaugh, Duane M.; Richardson, W. Kirk; Washburn, David A.; Hopkins, William D.; Savage-Rumbaugh, E. Sue

    1989-01-01

    Recent reports support the argument that the efficiency of primate learning is compromised to the degree that there is spatial discontiguity between discriminands and the locus of response. Experiments are reported here in which two rhesus monkeys easily mastered precise control of a joystick to respond to a variety of computer-generated targets despite the fact that the joystick was located 9 to 18 cm from the video screen. It is argued that stimulus-response contiguity is a significant parameter of learning only to the degree that the monkey visually attends to the directional movements of its hand in order to displace discriminands. If attention is focused on the effects of the hand's movement rather than on the hand itself, stimulus-response contiguity is no longer a primary parameter of learning. The implications of these results for mirror-guided studies are discussed.

  2. The innate responses of bumble bees to flower patterns: separating the nectar guide from the nectary changes bee movements and search time

    NASA Astrophysics Data System (ADS)

    Goodale, Eben; Kim, Edward; Nabors, Annika; Henrichon, Sara; Nieh, James C.

    2014-06-01

    Nectar guides can enhance pollinator efficiency and plant fitness by allowing pollinators to more rapidly find and remember the location of floral nectar. We tested whether a radiating nectar guide around a nectary would enhance the ability of naïve bumble bee foragers to find nectar. Most experiments that test nectar guide efficacy, specifically radiating linear guides, have used guides positioned around the center of a radially symmetric flower, where nectaries are often found. However, the flower center may be intrinsically attractive. We therefore used an off-center guide and nectary and compared "conjunct" feeders with a nectar guide surrounding the nectary to "disjunct" feeders with a nectar guide separated from the nectary. We focused on the innate response of novice bee foragers that had never previously visited such feeders. We hypothesized that a disjunct nectar guide would conflict with the visual information provided by the nectary and negatively affect foraging. Approximately equal numbers of bumble bees (Bombus impatiens) found nectar on both feeder types. On disjunct feeders, however, unsuccessful foragers spent significantly more time (on average 1.6-fold longer) searching for nectar than any other forager group. Successful foragers on disjunct feeders approached these feeders from random directions, unlike successful foragers on conjunct feeders, which preferentially approached the combined nectary and nectar guide. Thus, the nectary and a surrounding nectar guide can be considered a combination of two signals that attract naïve foragers even when not in the floral center.

  3. The innate responses of bumble bees to flower patterns: separating the nectar guide from the nectary changes bee movements and search time.

    PubMed

    Goodale, Eben; Kim, Edward; Nabors, Annika; Henrichon, Sara; Nieh, James C

    2014-06-01

    Nectar guides can enhance pollinator efficiency and plant fitness by allowing pollinators to more rapidly find and remember the location of floral nectar. We tested whether a radiating nectar guide around a nectary would enhance the ability of naïve bumble bee foragers to find nectar. Most experiments that test nectar guide efficacy, specifically radiating linear guides, have used guides positioned around the center of a radially symmetric flower, where nectaries are often found. However, the flower center may be intrinsically attractive. We therefore used an off-center guide and nectary and compared "conjunct" feeders with a nectar guide surrounding the nectary to "disjunct" feeders with a nectar guide separated from the nectary. We focused on the innate response of novice bee foragers that had never previously visited such feeders. We hypothesized that a disjunct nectar guide would conflict with the visual information provided by the nectary and negatively affect foraging. Approximately equal numbers of bumble bees (Bombus impatiens) found nectar on both feeder types. On disjunct feeders, however, unsuccessful foragers spent significantly more time (on average 1.6-fold longer) searching for nectar than any other forager group. Successful foragers on disjunct feeders approached these feeders from random directions, unlike successful foragers on conjunct feeders, which preferentially approached the combined nectary and nectar guide. Thus, the nectary and a surrounding nectar guide can be considered a combination of two signals that attract naïve foragers even when not in the floral center.

  4. The Effects of Mirror Feedback during Target Directed Movements on Ipsilateral Corticospinal Excitability

    PubMed Central

    Yarossi, Mathew; Manuweera, Thushini; Adamovich, Sergei V.; Tunik, Eugene

    2017-01-01

    Mirror visual feedback (MVF) training is a promising technique to promote activation in the lesioned hemisphere following stroke, and aid recovery. However, current outcomes of MVF training are mixed, in part, due to variability in the task undertaken during MVF. The present study investigated the hypothesis that movements directed toward visual targets may enhance MVF modulation of motor cortex (M1) excitability ipsilateral to the trained hand compared to movements without visual targets. Ten healthy subjects participated in a 2 × 2 factorial design in which feedback (veridical, mirror) and presence of a visual target (target present, target absent) for a right index-finger flexion task were systematically manipulated in a virtual environment. To measure M1 excitability, transcranial magnetic stimulation (TMS) was applied to the hemisphere ipsilateral to the trained hand to elicit motor evoked potentials (MEPs) in the untrained first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles at rest prior to and following each of four 2-min blocks of 30 movements (B1–B4). Targeted movement kinematics without visual feedback was measured before and after training to assess learning and transfer. FDI MEPs were decreased in B1 and B2 when movements were made with veridical feedback and visual targets were absent. FDI MEPs were decreased in B2 and B3 when movements were made with mirror feedback and visual targets were absent. FDI MEPs were increased in B3 when movements were made with mirror feedback and visual targets were present. Significant MEP changes were not present for the uninvolved ADM, suggesting a task-specific effect. Analysis of kinematics revealed learning occurred in visual target-directed conditions, but transfer was not sensitive to mirror feedback. Results are discussed with respect to current theoretical mechanisms underlying MVF-induced changes in ipsilateral excitability. PMID:28553218

  5. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    PubMed

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods, and feature recognition methods have been proposed to improve the performance of eye movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and of eye trackers' temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the benefit of using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
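    The EER used to compare the methods above is the operating point where the false-accept and false-reject rates coincide as the decision threshold is swept. A minimal sketch, with made-up genuine and impostor similarity scores:

```python
# Computing the equal error rate (EER) by sweeping the decision threshold
# over genuine and impostor similarity scores. Score values are invented.
import numpy as np

def equal_error_rate(genuine, impostor):
    """Return (eer, threshold) where FAR and FRR are closest to equal."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best = 1.0, None
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, best = gap, ((far + frr) / 2, t)
    return best

genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95, 0.6])
impostor = np.array([0.3, 0.5, 0.65, 0.2, 0.4, 0.1])
eer, thr = equal_error_rate(genuine, impostor)
print(eer)
```

    Rank-1 IR, the other metric mentioned, is computed differently: it is the fraction of probes whose highest-scoring gallery match is the correct identity.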

  6. Biometric recognition via texture features of eye movement trajectories in a visual searching task

    PubMed Central

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods and feature recognition methods have been proposed to improve the performance of eye-movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and of eye trackers' temporal and spatial resolution, remain the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement gained by using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer advantages in long-term stability and in robustness to temporal and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric approaches perform better in most cases. PMID:29617383

  7. Area 18 of the cat: the first step in processing visual movement information.

    PubMed

    Orban, G A

    1977-01-01

    In cats, responses of area 18 neurons to different moving patterns were measured. The influence of three movement parameters (direction, angular velocity, and amplitude of movement) was tested. The results indicate that area 18 contains no ideal movement detector; rather, simple and complex cells perform the complementary operations of primary visual areas, i.e. the analysis and the detection of movement.

  8. Neural learning rules for the vestibulo-ocular reflex

    NASA Technical Reports Server (NTRS)

    Raymond, J. L.; Lisberger, S. G.

    1998-01-01

    Mechanisms for the induction of motor learning in the vestibulo-ocular reflex (VOR) were evaluated by recording the patterns of neural activity elicited in the cerebellum by a range of stimuli that induce learning. Patterns of climbing-fiber, vestibular, and Purkinje cell simple-spike signals were examined during sinusoidal head movement paired with visual image movement at stimulus frequencies from 0.5 to 10 Hz. A comparison of simple-spike and vestibular signals contained the information required to guide learning only at low stimulus frequencies, and a comparison of climbing-fiber and simple-spike signals contained the information required to guide learning only at high stimulus frequencies. Learning could be guided by comparison of climbing-fiber and vestibular signals at all stimulus frequencies tested, but only if climbing fiber responses were compared with the vestibular signals present 100 msec earlier. Computational analysis demonstrated that this conclusion is valid even if there is a broad range of vestibular signals at the site of plasticity. Simulations also indicated that the comparison of vestibular and climbing-fiber signals across the 100 msec delay must be implemented by a subcellular "eligibility" trace rather than by neural circuits that delay the vestibular inputs to the site of plasticity. The results suggest two alternative accounts of learning in the VOR. Either there are multiple mechanisms of learning that use different combinations of neural signals to drive plasticity, or there is a single mechanism tuned to climbing-fiber activity that follows activity in vestibular pathways by approximately 100 msec.
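
    The delayed comparison suggested by the simulations can be caricatured as a plasticity rule in which each climbing-fiber event is paired with the vestibular signal that was present roughly 100 ms earlier, i.e. a subcellular eligibility trace rather than a delay line. The following is a toy sketch of that idea only; all names and parameter values are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def vor_weight_update(cf, vest, delay_ms=100, dt_ms=1.0, lr=1e-3):
        """Toy plasticity rule: accumulate a weight change by pairing each
        climbing-fiber event cf[t] with the vestibular signal vest[t - delay],
        mimicking an eligibility trace spanning the delay."""
        d = int(delay_ms / dt_ms)
        dw = 0.0
        for t in range(d, len(cf)):
            dw += lr * cf[t] * vest[t - d]
        return dw
    ```

    A single climbing-fiber spike thus credits the vestibular input from 100 ms before it, which is the timing relationship the recordings above identify as sufficient to guide learning at all tested frequencies.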

  9. Mapping multisensory parietal face and body areas in humans.

    PubMed

    Huang, Ruey-Song; Chen, Ching-fu; Tran, Alyssa T; Holstein, Katie L; Sereno, Martin I

    2012-10-30

    Detection and avoidance of impending obstacles is crucial to preventing head and body injuries in daily life. To safely avoid obstacles, locations of objects approaching the body surface are usually detected via the visual system and then used by the motor system to guide defensive movements. Mediating between visual input and motor output, the posterior parietal cortex plays an important role in integrating multisensory information in peripersonal space. We used functional MRI to map parietal areas that see and feel multisensory stimuli near or on the face and body. Tactile experiments using full-body air-puff stimulation suits revealed somatotopic areas of the face and multiple body parts forming a higher-level homunculus in the superior posterior parietal cortex. Visual experiments using wide-field looming stimuli revealed retinotopic maps that overlap with the parietal face and body areas in the postcentral sulcus at the most anterior border of the dorsal visual pathway. Starting at the parietal face area and moving medially and posteriorly into the lower-body areas, the median of visual polar-angle representations in these somatotopic areas gradually shifts from near the horizontal meridian into the lower visual field. These results suggest the parietal face and body areas fuse multisensory information in peripersonal space to guard an individual from head to toe.

  10. Neural correlates of tactile perception during pre-, peri-, and post-movement.

    PubMed

    Juravle, Georgiana; Heed, Tobias; Spence, Charles; Röder, Brigitte

    2016-05-01

    Tactile information is differentially processed over the various phases of goal-directed movements. Here, event-related potentials (ERPs) were used to investigate the neural correlates of tactile and visual information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimulation (100 ms) was presented in separate trials during the different phases of the movement (i.e. preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or resting hand. In a control condition, the participants only performed the movement, while omission (i.e. movement-only) ERPs were recorded. Participants were instructed to ignore the presence or absence of any sensory events and to concentrate solely on the execution of the movement. Enhanced ERPs were observed 80-200 ms after tactile stimulation, as well as 100-250 ms after visual stimulation: These modulations were greatest during the execution of the goal-directed movement, and they were effector based (i.e. significantly more negative for stimuli presented to the moving hand). Furthermore, ERPs revealed enhanced sensory processing during goal-directed movements for visual stimuli as well. Such enhanced processing of both tactile and visual information during the execution phase suggests that incoming sensory information is continuously monitored for a potential adjustment of the current motor plan. Furthermore, the results reported here also highlight a tight coupling between spatial attention and the execution of motor actions.

  11. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study.

    PubMed

    Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico

    2012-07-24

    The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual perception, sensory integration, recognition of movement, re-mapping on the somatosensory and motor cortex, storage in memory, and response control. Results from the congruent vs. incongruent trials revealed greater activity for the former condition than the latter in a network including the cingulate cortex and the right inferior and middle frontal gyri, which are involved in the go-signal and in decision control. Results on healthy subjects suggest the appropriateness of abstract visual feedback provided during motor training. The task contributes to highlight the potential of fMRI in improving the understanding of visual motor processes and may also be useful in detecting brain reorganisation during training.

  12. Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion

    PubMed Central

    Smith, Ross T.; Hunter, Estin V.; Davis, Miles G.; Sterling, Michele; Moseley, G. Lorimer

    2017-01-01

    Background Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. Method In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%-200% (the Motor Offset Visual Illusion, MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties was also investigated. Results Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Discussion Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain. PMID:28243537

  13. Using visuo-kinetic virtual reality to induce illusory spinal movement: the MoOVi Illusion.

    PubMed

    Harvie, Daniel S; Smith, Ross T; Hunter, Estin V; Davis, Miles G; Sterling, Michele; Moseley, G Lorimer

    2017-01-01

    Illusions that alter perception of the body provide novel opportunities to target brain-based contributions to problems such as persistent pain. One example of this, mirror therapy, uses vision to augment perceived movement of a painful limb to treat pain. Since mirrors can't be used to induce augmented neck or other spinal movement, we aimed to test whether such an illusion could be achieved using virtual reality, in advance of testing its potential therapeutic benefit. We hypothesised that perceived head rotation would depend on visually suggested movement. In a within-subjects repeated measures experiment, 24 healthy volunteers performed neck movements to 50° of rotation, while a virtual reality system delivered corresponding visual feedback that was offset by a factor of 50%-200% (the Motor Offset Visual Illusion, MoOVi), thus simulating more or less movement than that actually occurring. At 50° of real-world head rotation, participants pointed in the direction that they perceived they were facing. The discrepancy between actual and perceived direction was measured and compared between conditions. The impact of including multisensory (auditory and visual) feedback, the presence of a virtual body reference, and the use of 360° immersive virtual reality with and without three-dimensional properties was also investigated. Perception of head movement was dependent on visual-kinaesthetic feedback (p = 0.001, partial eta squared = 0.17). That is, altered visual feedback caused a kinaesthetic drift in the direction of the visually suggested movement. The magnitude of the drift was not moderated by secondary variables such as the addition of illusory auditory feedback, the presence of a virtual body reference, or three-dimensionality of the scene. Virtual reality can be used to augment perceived movement and body position, such that one can perform a small movement, yet perceive a large one. The MoOVi technique tested here has clear potential for assessment and therapy of people with spinal pain.
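
    The offset manipulation described above is essentially a visual gain applied to the actual head rotation, with the kinaesthetic drift measured as the signed error between perceived and actual facing direction. A minimal sketch of those two quantities (helper names are hypothetical; the 50° amplitude and 50%-200% gains are the values reported in the abstract):

    ```python
    def displayed_rotation(actual_deg, gain):
        """Rotation shown in the VR scene for a given offset gain (0.5-2.0).

        gain > 1 visually suggests more movement than actually occurred;
        gain < 1 suggests less.
        """
        return actual_deg * gain

    def kinaesthetic_drift(perceived_deg, actual_deg=50.0):
        """Signed discrepancy between where the participant feels they are
        facing and their real-world head rotation."""
        return perceived_deg - actual_deg
    ```

    Under the reported result, drift takes the sign of (gain - 1): feedback scaled up pulls perceived rotation beyond the actual 50°, and feedback scaled down pulls it short.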

  14. Exploratory eye movements to pictures in childhood-onset schizophrenia and attention-deficit/hyperactivity disorder (ADHD).

    PubMed

    Karatekin, C; Asarnow, R F

    1999-02-01

    We investigated exploratory eye movements to thematic pictures in schizophrenic, attention-deficit/hyperactivity disorder (ADHD), and normal children. For each picture, children were asked three questions varying in amount of structure. We tested if schizophrenic children would stare or scan extensively and if their scan patterns were differentially affected by the question. Time spent viewing relevant and irrelevant regions, fixation duration (an estimate of processing rate), and distance between fixations (an estimate of breadth of attention) were measured. ADHD children showed a trend toward shorter fixations than normals on the question requiring the most detailed analysis. Schizophrenic children looked at fewer relevant, but not more irrelevant, regions than normals. They showed a tendency to stare more when asked to decide what was happening but not when asked to attend to specific regions. Thus, lower levels of visual attention (e.g., basic control of eye movements) were intact in schizophrenic children. In contrast, they had difficulty with top-down control of selective attention in the service of self-guided behavior.

  15. Neuromuscular strategies for lumbopelvic control during frontal and sagittal plane movement challenges differ between people with and without low back pain.

    PubMed

    Nelson-Wong, E; Poupore, K; Ingvalson, S; Dehmer, K; Piatte, A; Alexander, S; Gallant, P; McClenahan, B; Davis, A M

    2013-12-01

    Observation-based assessments of movement are a standard component in clinical assessment of patients with non-specific low back pain. While aberrant motion patterns can be detected visually, clinicians are unable to assess underlying neuromuscular strategies during these tests. The purpose of this study was to compare coordination of the trunk and hip muscles during 2 commonly used assessments for lumbopelvic control in people with low back pain (LBP) and matched control subjects. Electromyography was recorded from hip and trunk muscles of 34 participants (17 with LBP) during performance of the Active Hip Abduction (AHAbd) and Active Straight Leg Raise (ASLR) tests. Relative muscle timing was calculated using cross-correlation. Participants with LBP demonstrated a variable strategy, while control subjects used a consistent proximal to distal activation strategy during both frontal and sagittal plane movements. Findings from this study provide insight into underlying neuromuscular control during commonly used assessment tests for patients with LBP that may help to guide targeted intervention approaches.
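
    Relative muscle timing via cross-correlation, as used in this study, amounts to finding the lag that maximizes the normalized cross-correlation between two EMG envelopes. A minimal sketch of that computation (assuming numpy; names are illustrative and this is not the authors' code):

    ```python
    import numpy as np

    def relative_lag(emg_a, emg_b, fs=1000.0):
        """Lag (seconds) of signal b relative to a at the peak of the
        normalized cross-correlation. Positive return value means b is
        activated after a (a proximal-to-distal pattern if a is proximal)."""
        a = (emg_a - emg_a.mean()) / (emg_a.std() * len(emg_a))
        b = (emg_b - emg_b.mean()) / emg_b.std()
        xcorr = np.correlate(a, b, mode="full")
        lags = np.arange(-(len(b) - 1), len(a))  # numpy's 'full' lag axis
        return -lags[np.argmax(xcorr)] / fs
    ```

    In practice the raw EMG would first be rectified and low-pass filtered into an envelope; the sign convention here follows numpy's `correlate`, where index m in 'full' mode corresponds to lag m - (len(b) - 1).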

  16. The Role of Eye Movement Driven Attention in Functional Strabismic Amblyopia

    PubMed Central

    2015-01-01

    Strabismic amblyopia “blunt vision” is a developmental anomaly that affects binocular vision and results in lowered visual acuity. Strabismus is a term for a misalignment of the visual axes and is usually characterized by impaired ability of the strabismic eye to take up fixation. Such impaired fixation is usually a function of the temporally and spatially impaired binocular eye movements that normally underlie binocular shifts in visual attention. In this review, we discuss how abnormal eye movement function in children with misaligned eyes influences the development of normal binocular visual attention and results in deficits in visual function such as depth perception. We also discuss how eye movement function deficits in adult amblyopia patients can also lead to other abnormalities in visual perception. Finally, we examine how the nonamblyopic eye of an amblyope is also affected in strabismic amblyopia. PMID:25838941

  17. Accounting for direction and speed of eye motion in planning visually guided manual tracking.

    PubMed

    Leclercq, Guillaume; Blohm, Gunnar; Lefèvre, Philippe

    2013-10-01

    Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
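
    The quantity at stake above is simple to state: the target's motion in space equals its retinal motion plus the component induced by the eye's own rotation, and the 75%-102% figures index how much of that eye-velocity term is reflected in the initial arm movement direction. A schematic sketch of both quantities (hypothetical function names; directions in degrees, velocities as 2-D vectors):

    ```python
    import numpy as np

    def spatial_velocity(retinal_vel, eye_vel):
        """Reconstruct target motion in space: retinal slip plus the retinal
        component induced by the eye's own rotation during pursuit."""
        return np.asarray(retinal_vel) + np.asarray(eye_vel)

    def compensation_gain(planned_dir, retinal_dir, spatial_dir):
        """Fraction of the eye-velocity term incorporated into motor planning:
        0 = planning in retinal coordinates, 1 = spatially correct plan."""
        return (planned_dir - retinal_dir) / (spatial_dir - retinal_dir)
    ```

    A gain near 1 (as found here, 75%-102%) indicates that the visuomotor transformation nearly fully accounts for ongoing eye velocity; the degenerate case where retinal and spatial directions coincide is left unhandled in this sketch.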

  18. The Effect of Intentional, Preplanned Movement on Novice Conductors' Gesture

    ERIC Educational Resources Information Center

    Bodnar, Erin N.

    2017-01-01

    Preplanning movement may be one way to broaden novice conductors' vocabulary of gesture and promote motor awareness. To test the difference between guided score study and guided score study with preplanned, intentional movement on the conducting gestures of novice conductors, undergraduate music education students (N = 20) were assigned to one of…

  19. Magnifying visual target information and the role of eye movements in motor sequence learning.

    PubMed

    Massing, Matthias; Blandin, Yannick; Panzer, Stefan

    2016-01-01

    An experiment investigated the influence of eye movements on learning a simple motor sequence task when the visual display was magnified. The task was to reproduce a 1300 ms spatial-temporal pattern of elbow flexions and extensions. The spatial-temporal pattern was displayed in front of the participants. Participants were randomly assigned to four groups differing on eye movements (free to use their eyes/instructed to fixate) and the visual display (small/magnified). All participants had to perform a pre-test, an acquisition phase, a delayed retention test, and a transfer test. The results indicated that participants in each practice condition increased their performance during acquisition. The participants who were permitted to use their eyes in the magnified visual display outperformed those who were instructed to fixate on the magnified visual display. When a small visual display was used, the instruction to fixate induced no performance decrements compared to participants who were permitted to use their eyes during acquisition. The findings demonstrated that a spatial-temporal pattern can be learned without eye movements, but being permitted to use eye movements facilitates response production when the visual angle is increased.

  20. Modulation of visually evoked movement responses in moving virtual environments.

    PubMed

    Reed-Jones, Rebecca J; Vallis, Lori Ann

    2009-01-01

    Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.

  1. Contextual effects on motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  2. Combined influence of visual scene and body tilt on arm pointing movements: gravity matters!

    PubMed

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R; Bourdin, Christophe; Mestre, Daniel R; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., 'combined' tilts equal to the sum of 'single' tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues.

  3. Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!

    PubMed Central

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371

  4. Color-Change Detection Activity in the Primate Superior Colliculus.

    PubMed

    Herman, James P; Krauzlis, Richard J

    2017-01-01

    The primate superior colliculus (SC) is a midbrain structure that participates in the control of spatial attention. Previous studies examining the role of the SC in attention have mostly used luminance-based visual features (e.g., motion, contrast) as the stimuli and saccadic eye movements as the behavioral response, both of which are known to modulate the activity of SC neurons. To explore the limits of the SC's involvement in the control of spatial attention, we recorded SC neuronal activity during a task using color, a visual feature dimension not traditionally associated with the SC, and required monkeys to detect threshold-level changes in the saturation of a cued stimulus by releasing a joystick during maintained fixation. Using this color-based spatial attention task, we found substantial cue-related modulation in all categories of visually responsive neurons in the intermediate layers of the SC. Notably, near-threshold changes in color saturation, both increases and decreases, evoked phasic bursts of activity with magnitudes as large as those evoked by stimulus onset. This change-detection activity had two distinctive features: activity for hits was larger than for misses, and the timing of change-detection activity accounted for 67% of joystick release latency, even though it preceded the release by at least 200 ms. We conclude that during attention tasks, SC activity denotes the behavioral relevance of the stimulus regardless of feature dimension and that phasic event-related SC activity is suitable to guide the selection of manual responses as well as saccadic eye movements.

  5. Solar System Symphony: Combining astronomy with live classical music

    NASA Astrophysics Data System (ADS)

    Kremer, Kyle; WorldWide Telescope

    2017-01-01

    Solar System Symphony is an educational outreach show which combines astronomy visualizations and live classical music. As musicians perform excerpts from Holst’s “The Planets” and other orchestral works, visualizations developed using WorldWide Telescope and NASA images and animations are projected on-stage. Between each movement of music, a narrator guides the audience through scientific highlights of the solar system. The content of Solar System Symphony is geared toward a general audience, particularly targeting K-12 students. The hour-long show not only presents a new medium for exposing a broad audience to astronomy, but also provides universities an effective tool for facilitating interdisciplinary collaboration between two divergent fields. The show was premiered at Northwestern University in May 2016 in partnership with Northwestern’s Bienen School of Music and was recently performed at the Colburn Conservatory of Music in November 2016.

  6. Changing motor perception by sensorimotor conflicts and body ownership

    PubMed Central

    Salomon, R.; Fernandez, N. B.; van Elk, M.; Vachicouras, N.; Sabatier, F.; Tychinskaya, A.; Llobera, J.; Blanke, O.

    2016-01-01

    Experimentally induced sensorimotor conflicts can result in a loss of the feeling of control over a movement (sense of agency). These findings are typically interpreted in terms of a forward model in which the predicted sensory consequences of the movement are compared with the observed sensory consequences. In the present study we investigated whether a mismatch between movements and their observed sensory consequences does not only result in a reduced feeling of agency, but may affect motor perception as well. Visual feedback of participants’ finger movements was manipulated using virtual reality to be anatomically congruent or incongruent to the performed movement. Participants made a motor perception judgment (i.e. which finger did you move?) or a visual perceptual judgment (i.e. which finger did you see moving?). Subjective measures of agency and body ownership were also collected. Seeing movements that were visually incongruent to the performed movement resulted in a lower accuracy for motor perception judgments, but not visual perceptual judgments. This effect was modified by rotating the virtual hand (Exp.2), but not by passively induced movements (Exp.3). Hence, sensorimotor conflicts can modulate the perception of one’s motor actions, causing viewed “alien actions” to be felt as one’s own. PMID:27225834

  7. Visual abilities in two raptors with different ecology.

    PubMed

    Potier, Simon; Bonadonna, Francesco; Kelber, Almut; Martin, Graham R; Isard, Pierre-François; Dulaurent, Thomas; Duriez, Olivier

    2016-09-01

    Differences in visual capabilities are known to reflect differences in foraging behaviour even among closely related species. Among birds, the foraging of diurnal raptors is assumed to be guided mainly by vision but their foraging tactics include both scavenging upon immobile prey and the aerial pursuit of highly mobile prey. We studied how visual capabilities differ between two diurnal raptor species of similar size: Harris's hawks, Parabuteo unicinctus, which take mobile prey, and black kites, Milvus migrans, which are primarily carrion eaters. We measured visual acuity, foveal characteristics and visual fields in both species. Visual acuity was determined using a behavioural training technique; foveal characteristics were determined using ultra-high resolution spectral-domain optical coherence tomography (OCT); and visual field parameters were determined using an ophthalmoscopic reflex technique. We found that these two raptors differ in their visual capacities. Harris's hawks have a visual acuity slightly higher than that of black kites. Among the five Harris's hawks tested, individuals with higher estimated visual acuity made more horizontal head movements before making a decision. This may reflect an increase in the use of monocular vision. Harris's hawks have two foveas (one central and one temporal), while black kites have only one central fovea and a temporal area. Black kites have a wider visual field than Harris's hawks. This may facilitate the detection of conspecifics when they are scavenging. These differences in the visual capabilities of these two raptors may reflect differences in the perceptual demands of their foraging behaviours. © 2016. Published by The Company of Biologists Ltd.

  8. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  9. Kinesthesis can make an invisible hand visible

    PubMed Central

    Dieter, Kevin C.; Hu, Bo; Knill, David C.; Blake, Randolph; Tadin, Duje

    2014-01-01

    Self-generated body movements have reliable visual consequences. This predictive association between vision and action likely underlies modulatory effects of action on visual processing. However, it is unknown if our own actions can have generative effects on visual perception. We asked whether, in total darkness, self-generated body movements are sufficient to evoke normally concomitant visual perceptions. Using a deceptive experimental design, we discovered that waving one’s own hand in front of one’s covered eyes can cause visual sensations of motion. Conjecturing that these visual sensations arise from multisensory connectivity, we showed that individuals with synesthesia experience substantially stronger kinesthesis-induced visual sensations. Finally, we found that the perceived vividness of kinesthesis-induced visual sensations predicted participants’ ability to smoothly eye-track self-generated hand movements in darkness, indicating that these sensations function like typical retinally-driven visual sensations. Evidently, even in the complete absence of external visual input, our brains predict visual consequences of our actions. PMID:24171930

  10. Visual field recovery after vision restoration therapy (VRT) is independent of eye movements: an eye tracker study.

    PubMed

    Kasten, Erich; Bunzenthal, Ulrike; Sabel, Bernhard A

    2006-11-25

It has been argued that patients with visual field defects compensate for their deficit by making more frequent eye movements toward the hemianopic field and that visual field enlargements found after vision restoration therapy (VRT) may be an artefact of such eye movements. In order to determine if this was correct, we recorded eye movements in hemianopic subjects before and after VRT. Visual fields were measured in subjects with homonymous visual field defects (n=15) caused by trauma, cerebral ischemia or haemorrhage (lesion age >6 months). Visual field charts were plotted using both high-resolution perimetry (HRP) and conventional perimetry before and after a 3-month period of VRT, while eye movements were recorded with a 2D eye tracker. This permitted quantification of eye positions and measurements of deviation from fixation. VRT led to significant visual field enlargements, as indicated by an increase in stimulus detection of 3.8% when tested using HRP and about 2.2% (OD) and 3.5% (OS) fewer misses with conventional perimetry. Eye movements were expressed as the standard deviations (S.D.) of the eye position recordings from fixation. Before VRT, the S.D. was +/-0.82 degrees horizontally and +/-1.16 degrees vertically; after VRT, it was +/-0.68 degrees and +/-1.39 degrees, respectively. A cluster analysis of the horizontal eye movements before VRT showed three types of subjects with (i) small (n=7), (ii) medium (n=7) or (iii) large fixation instability (n=1). Saccades were directed equally to the right or the left side; i.e., with no preference toward the blind hemifield. After VRT, many subjects showed a smaller variability of horizontal eye movements. Before VRT, 81.6% of the recorded eye positions were found within 1 degree horizontally of fixation, whereas after VRT, 88.3% were within that range. Within 2 degrees, we found 94.8% before and 98.9% after VRT.
Subjects moved their eyes 5 degrees or more only 0.3% of the time before VRT versus 0.1% after VRT. Thus, in this study, subjects with homonymous visual field defects who were attempting to fixate a central target while their fields were being plotted typically showed brief horizontal shifts with no preference toward or away from the blind hemifield. These eye movements were usually less than 1 degree from fixation. Large saccades toward the blind field after VRT were very rare. VRT had no effect on either the direction or the amplitude of horizontal eye movements during visual field testing. These results argue against the theory that the visual field enlargements are artefacts induced by eye movements.
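
    The fixation metrics reported above (standard deviation of eye position from fixation and the fraction of samples within 1 and 2 degrees horizontally) can be computed directly from eye tracker samples. A minimal Python sketch, using synthetic gaze data rather than the study's recordings (the function name and jitter magnitudes are illustrative assumptions):

    ```python
    import numpy as np

    def fixation_stability(x_deg, y_deg):
        """Summarize fixation from eye-position samples (degrees from fixation).

        Returns the horizontal/vertical standard deviations and the fraction of
        samples whose horizontal offset lies within 1 and 2 degrees of fixation.
        """
        sd_h = np.std(x_deg)
        sd_v = np.std(y_deg)
        within_1 = np.mean(np.abs(x_deg) <= 1.0)
        within_2 = np.mean(np.abs(x_deg) <= 2.0)
        return sd_h, sd_v, within_1, within_2

    # Synthetic example: steady fixation with small Gaussian jitter,
    # roughly matching the pre-VRT magnitudes reported above.
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 0.8, size=10_000)   # horizontal offsets (deg)
    y = rng.normal(0.0, 1.2, size=10_000)   # vertical offsets (deg)
    sd_h, sd_v, p1, p2 = fixation_stability(x, y)
    ```

    With these jitter values, most samples fall within 1 degree of fixation and nearly all within 2 degrees, the same pattern the study reports.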

  11. VisualEyes: a modular software system for oculomotor experimentation.

    PubMed

Guo, Yi; Kim, Eun H; Alvarez, Tara L

    2011-03-25

Eye movement studies have provided a strong foundation for understanding how the brain acquires visual information in both the normal and dysfunctional brain.(1) However, developing a platform to present stimuli and store eye movement responses can require substantial programming, time and cost. Many systems do not offer the flexibility to program numerous stimuli for a variety of experimental needs. The VisualEyes System, in contrast, has a flexible architecture, allowing the operator to choose any background and foreground stimulus, program one or two screens for tandem or opposing eye movements, and stimulate the left and right eye independently. This system can significantly reduce the programming development time needed to conduct an oculomotor study. The VisualEyes System will be discussed in three parts: 1) the oculomotor recording device that acquires eye movement responses; 2) the VisualEyes software, written in LabVIEW, which generates an array of stimuli and stores responses as text files; and 3) offline data analysis. Eye movements can be recorded by several types of instrumentation, such as a limbus tracking system, a scleral search coil, or a video image system. Typical eye movement stimuli such as saccadic steps, vergence ramps and vergence steps, with the corresponding responses, will be shown. In this video report, we demonstrate the flexibility of a system to create numerous visual stimuli and record eye movements that can be utilized by basic scientists and clinicians to study healthy as well as clinical populations.
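
    As a rough illustration of the stimulus types named above: a saccadic step is a discrete jump in target position, and a vergence ramp is a steadily growing vergence demand. The VisualEyes software itself is written in LabVIEW; the sketch below merely generates comparable target time series in Python, with onset, amplitude, and rate values chosen arbitrarily:

    ```python
    import numpy as np

    def saccade_step(t, onset=0.5, amplitude=10.0):
        """Horizontal target position (deg): step of `amplitude` at `onset` (s)."""
        return np.where(t >= onset, amplitude, 0.0)

    def vergence_ramp(t, onset=0.5, rate=2.0):
        """Vergence demand (deg): ramp growing at `rate` deg/s from `onset` (s)."""
        return np.clip((t - onset) * rate, 0.0, None)

    t = np.linspace(0.0, 2.0, 2001)   # 2 s of target motion sampled at 1 kHz
    step = saccade_step(t)
    ramp = vergence_ramp(t)
    ```

    A vergence step would combine the two ideas: a discrete jump in vergence demand rather than a gradual ramp.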

  12. Effect of visuomotor-map uncertainty on visuomotor adaptation.

    PubMed

    Saijo, Naoki; Gomi, Hiroaki

    2012-03-01

Vision and proprioception contribute to generating hand movement. When a conflict between visual and proprioceptive feedback of hand position is introduced, reaching movements are initially disturbed but recover with training. Although previous studies have predominantly investigated the adaptive change in the motor output, it is unclear whether the contributions of visual and proprioceptive feedback controls to the reaching movement are modified by visuomotor adaptation. To investigate this, we focused on the change in proprioceptive feedback control associated with visuomotor adaptation. After adaptation to a gradually introduced visuomotor rotation, the hand reached the shifted position of the visual target to move the cursor to the visual target correctly. When the cursor feedback was occasionally eliminated (probe trial), the end point of the hand movement was biased in the visual-target direction, while the movement was initiated in the adapted direction, suggesting incomplete adaptation of proprioceptive feedback control. Moreover, after learning of an uncertain visuomotor rotation, in which the rotation angle fluctuated randomly from trial to trial, the end-point bias in the probe trial increased, but the initial movement direction was not affected, suggesting a reduction in the adaptation level of proprioceptive feedback control. These results suggest that a change in the relative contribution of visual and proprioceptive feedback controls to the reaching movement in response to visuomotor-map uncertainty is involved in visuomotor adaptation, whereas feedforward control might adapt in a manner different from that of feedback control.
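
    A visuomotor rotation of the kind described can be sketched as a rotation matrix applied to the hand position to produce the cursor position. In the sketch below, the gradual introduction schedule (0.5 deg per trial up to 30 deg) and the trial-by-trial noise of the uncertain condition are illustrative assumptions, not the study's parameters:

    ```python
    import numpy as np

    def rotate(hand_xy, angle_deg):
        """Rotate a 2-D hand position to produce the displayed cursor position."""
        a = np.radians(angle_deg)
        R = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])
        return R @ hand_xy

    n_trials = 80
    # Gradual introduction: rotation grows slowly so subjects adapt implicitly.
    schedule = np.minimum(np.arange(n_trials) * 0.5, 30.0)
    # Uncertain condition: the imposed angle fluctuates from trial to trial.
    rng = np.random.default_rng(1)
    uncertain = schedule + rng.normal(0.0, 5.0, n_trials)

    hand = np.array([0.0, 10.0])            # a straight-ahead 10 cm reach
    cursor_final = rotate(hand, schedule[-1])  # cursor under the full rotation
    ```

    Because the transformation is a pure rotation, the cursor stays at the same distance from the start as the hand; only its direction is remapped, which is what drives the directional adaptation measured in probe trials.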

  13. Neurosurgical Applications of High-Intensity Focused Ultrasound with Magnetic Resonance Thermometry.

    PubMed

    Colen, Rivka R; Sahnoune, Iman; Weinberg, Jeffrey S

    2017-10-01

Magnetic resonance-guided focused ultrasound surgery (MRgFUS) can produce noninvasive effects on targeted tissue. MRgFUS integrates MRI and focused ultrasound surgery (FUS) into a single platform: MRI enables visualization of the target tissue and monitors ultrasound-induced effects in near real time during FUS treatment. MRgFUS may serve as an adjunct to, or a replacement for, invasive surgery and radiotherapy in specific conditions. Its thermal effects ablate tumors as well as brain targets implicated in movement disorders and essential tremor. Its nonthermal effects increase blood-brain barrier permeability to enhance delivery of therapeutics and other molecules. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Role of the cerebellum in reaching movements in humans. II. A neural model of the intermediate cerebellum.

    PubMed

    Schweighofer, N; Spoelstra, J; Arbib, M A; Kawato, M

    1998-01-01

The cerebellum is essential for the control of multijoint movements; when the cerebellum is lesioned, the performance error is more than the summed errors produced by single joints. In the companion paper (Schweighofer et al., 1998), a functional anatomical model for visually guided arm movement was proposed. The model comprised a basic feedforward/feedback controller with realistic transmission delays and was connected to a two-link, six-muscle, planar arm. In the present study, we examined the role of the cerebellum in reaching movements by embedding a novel, detailed cerebellar neural network in this functional control model. This allowed us to derive realistic cerebellar inputs and to assess the role of the cerebellum in learning to control the arm. The cerebellar network learned the part of the inverse dynamics of the arm not provided by the basic feedforward/feedback controller. Despite realistically low inferior olive firing rates and noisy mossy fibre inputs, the model could reduce the error between intended and planned movements. The responses of the different cell groups were comparable to those of biological cell groups. In particular, the modelled Purkinje cells exhibited directional tuning after learning, and the parallel fibres, due to their length, provided Purkinje cells with the input required for this coordination task. The inferior olive responses contained two different components; the earlier response, locked to movement onset, was always present, and the later response disappeared after learning. These results support the theory that the cerebellum is involved in motor learning.
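
    The idea of a learned module absorbing the inverse dynamics that a basic feedforward/feedback controller does not provide can be illustrated with a toy scalar example of feedback-error learning, in which the feedback command acts as the teaching signal for the learned feedforward term. The plant, gains, and learning rate below are invented for illustration and bear no relation to the paper's detailed cerebellar network:

    ```python
    def simulate(n_trials=50, eta=0.02):
        """Feedback-error learning on a 1-D point mass of unknown mass.

        The feedforward command is u_ff = w * a_des; the proportional feedback
        correction u_fb also serves as the teaching signal that updates w, so
        the feedforward term gradually absorbs the inverse dynamics (u = m * a_des).
        Returns the per-trial acceleration error.
        """
        m = 2.0                       # true mass, unknown to the controller
        w = 0.0                       # learned inverse-dynamics weight
        a_des = 1.0                   # desired acceleration each trial
        errors = []
        for _ in range(n_trials):
            u_ff = w * a_des
            a = u_ff / m              # acceleration actually produced
            u_fb = 5.0 * (a_des - a)  # feedback correction = error signal
            w += eta * u_fb * a_des   # learn from the feedback command
            errors.append(abs(a_des - a))
        return errors

    errors = simulate()
    ```

    As learning proceeds, the feedback contribution shrinks toward zero and the weight converges to the true mass, mirroring the shift of control from feedback to the learned module.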

  15. The effect of contextual cues on the encoding of motor memories.

    PubMed

    Howard, Ian S; Wolpert, Daniel M; Franklin, David W

    2013-05-01

    Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
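
    The velocity-dependent curl field used in such interference paradigms maps hand velocity to a perpendicular force through an antisymmetric gain matrix, with the sign of the off-diagonal terms setting the field direction. A sketch (the gain value of 13 N·s/m is a common choice in curl-field studies, assumed here rather than taken from this paper):

    ```python
    import numpy as np

    def curl_field_force(velocity, b=13.0, clockwise=True):
        """Velocity-dependent curl field: force perpendicular to hand velocity.

        `b` is the field gain in N.s/m; flipping `clockwise` produces the
        opposing field associated with the other contextual cue.
        """
        s = 1.0 if clockwise else -1.0
        B = s * np.array([[0.0, b],
                          [-b, 0.0]])
        return B @ velocity

    v = np.array([0.0, 0.3])                        # 0.3 m/s forward reach
    f_cw = curl_field_force(v)                      # pushes to one side
    f_ccw = curl_field_force(v, clockwise=False)    # pushes to the other
    ```

    Because the force is always perpendicular to the velocity, the field does no work along the movement; it purely deflects the reach, which is why the two opposing fields interfere so strongly unless a contextual cue separates their memories.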

  16. On the Adaptation of Pelvic Motion by Applying 3-dimensional Guidance Forces Using TPAD.

    PubMed

    Kang, Jiyeon; Vashista, Vineet; Agrawal, Sunil K

    2017-09-01

Pelvic movement is important to human locomotion, as the center of mass is located near the center of the pelvis. Lateral pelvic motion plays a crucial role in shifting the center of mass over the stance leg while swinging the other leg and keeping the body balanced. In addition, vertical pelvic movement helps to reduce metabolic energy expenditure by exchanging potential and kinetic energy during the gait cycle. However, patient groups with cerebral palsy or stroke have excessive pelvic motion that leads to high energy expenditure. In addition, they have higher chances of falls, as the center of mass could deviate outside the base of support. In this paper, a novel control method is suggested using the tethered pelvic assist device (TPAD) to teach subjects to walk with a specified target pelvic trajectory while walking on a treadmill. In this method, a force field is applied to the pelvis to guide it to move on a target trajectory, and correctional forces are applied if the pelvic motion deviates excessively from the target trajectory. Three different experiments with healthy subjects were conducted to teach them to walk on a new target pelvic trajectory with the presented control method. For all three experiments, the baseline trajectory of the pelvis was experimentally determined for each participating subject. To design a target pelvic trajectory different from the baseline, Experiment I scaled up the lateral component of the baseline pelvic trajectory, while Experiment II scaled down the lateral component of the baseline trajectory. For both Experiments I and II, the controller generated a 2-D force field in the transverse plane to provide the guidance force. Seven subjects were recruited for each experiment; they walked on the treadmill with the suggested control method and visual feedback of their pelvic trajectory.
The results show that the subjects were able to learn the target pelvic trajectory in each experiment and also retained the training effects after the completion of the experiment. In Experiment III, both the lateral and vertical components of the pelvic trajectory were scaled down from the baseline trajectory, and the force field was extended to three dimensions in order to correct the vertical pelvic movement as well. Three subgroups (force feedback alone, visual feedback alone, and both force and visual feedback) were recruited to separate the effects of force and visual feedback and to distinguish the results from those of Experiments I and II. The results show that a training method combining visual and force feedback is superior to training with visual or force feedback alone. We believe that the present control strategy holds potential for training and correcting abnormal pelvic movements in different patient populations.
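
    The corrective force field described above can be sketched as a spring-like pull toward the nearest target-trajectory point that stays inactive inside a small deadband; the stiffness and deadband values below are illustrative assumptions, not TPAD's actual controller parameters:

    ```python
    import numpy as np

    def guidance_force(pelvis, target, k=200.0, deadband=0.01):
        """Spring-like corrective force pulling the pelvis toward the target
        trajectory point; no force is applied inside a small deadband.

        `k` (N/m) and `deadband` (m) are illustrative parameters.
        """
        error = target - pelvis
        dist = np.linalg.norm(error)
        if dist <= deadband:
            return np.zeros_like(error)
        # Force proportional to the deviation beyond the deadband,
        # directed along the error vector.
        return k * (dist - deadband) * (error / dist)

    pelvis = np.array([0.05, 0.00, 0.00])   # 5 cm lateral deviation (x, y, z)
    target = np.zeros(3)
    f = guidance_force(pelvis, target)      # pulls the pelvis back laterally
    ```

    Restricting the first two components corresponds to the 2-D transverse-plane field of Experiments I and II; using all three corresponds to the 3-D field of Experiment III.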

  17. Technical Report of Successful Deployment of Tandem Visual Tracking During Live Laparoscopic Cholecystectomy Between Novice and Expert Surgeon.

    PubMed

    Puckett, Yana; Baronia, Benedicto C

    2016-09-20

With recent advances in eye tracking technology, it is now possible to track surgeons' eye movements while they are engaged in a surgical task or while surgical residents practice their surgical skills. Several studies have compared the eye movements of surgical experts and novices and developed techniques to assess surgical skill on the basis of eye movements, using simulators and live surgery. None, however, have evaluated simultaneous visual tracking of an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings. The recordings were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between the expert surgeon and the chief surgical resident. The surgery was carried out uneventfully, and the data were abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is possible. More studies and subjects are needed to verify the success of our results and to analyze the data further.

  18. Lateralization of visually guided detour behaviour in the common chameleon, Chamaeleo chameleon, a reptile with highly independent eye movements.

    PubMed

    Lustig, Avichai; Ketter-Katz, Hadas; Katzir, Gadi

    2013-11-01

    Chameleons (Chamaeleonidae, reptilia), in common with most ectotherms, show full optic nerve decussation and sparse inter-hemispheric commissures. Chameleons are unique in their capacity for highly independent, large-amplitude eye movements. We address the question: Do common chameleons, Chamaeleo chameleon, during detour, show patterns of lateralization of motion and of eye use that differ from those shown by other ectotherms? To reach a target (prey) in passing an obstacle in a Y-maze, chameleons were required to make a left or a right detour. We analyzed the direction of detours and eye use and found that: (i) individuals differed in their preferred detour direction, (ii) eye use was lateralized at the group level, with significantly longer durations of viewing the target with the right eye, compared with the left eye, (iii) during left side, but not during right side, detours the durations of viewing the target with the right eye were significantly longer than the durations with the left eye. Thus, despite the uniqueness of chameleons' visual system, they display patterns of lateralization of motion and of eye use, typical of other ectotherms. These findings are discussed in relation to hemispheric functions. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Quantifying patterns of dynamics in eye movement to measure goodness in organization of design elements in interior architecture

    NASA Astrophysics Data System (ADS)

    Mirkia, Hasti; Sangari, Arash; Nelson, Mark; Assadi, Amir H.

    2013-03-01

Architecture brings together diverse elements to enhance the observer's measure of esthetics and the convenience of functionality. Architects often conceptualize synthesis of design elements to invoke the observer's sense of harmony and positive affect. How does an observer's brain respond to harmony of design in interior spaces? One implicit consideration by architects is the role of guided visual attention by observers while navigating indoors. Prior visual experience of natural scenes provides the perceptual basis for the Gestalt of design elements. In contrast, the Gestalt of organization in design varies according to the architect's decision. We outline a quantitative theory to measure the success in utilizing the observer's psychological factors to achieve the desired positive affect, together with a unified framework for perception of geometry and motion in interior spaces that integrates affective and cognitive aspects of human vision in the context of anthropocentric interior design. The affective criteria are derived from contemporary theories of interior design. Our contribution is to demonstrate that the neural computations underlying an observer's eye movements could be used to elucidate harmony in the perception of form, space and motion, and thus provide a measure of the goodness of interior design. Through mathematical modeling, we argue for the plausibility of the relevant hypotheses.

  20. Ultrasound-guided platelet-rich plasma injection for distal biceps tendinopathy

    PubMed Central

    Bell, Simon N; Connell, David; Coghlan, Jennifer A

    2015-01-01

    Background Distal biceps tendinopathy is an uncommon cause of elbow pain. The optimum treatment for cases refractory to conservative treatment is unclear. Platelet-rich plasma has been used successfully for other tendinopathies around the elbow. Methods Six patients with clinical and radiological evidence of distal biceps tendinopathy underwent ultrasound-guided platelet-rich plasma (PRP) injection. Clinical examination findings, visual analogue score (VAS) for pain and Mayo Elbow Performance scores were recorded. Results The Mayo Elbow Performance Score improved from 68.3 (range 65 to 85) (fair function) to 95 (range 85 to 100) (excellent function). The VAS at rest improved from a mean of 2.25 (range 2 to 5) pre-injection to 0. The VAS with movement improved from a mean of 7.25 (range 5 to 8) pre-injection to 1.3 (range 0 to 2). No complications were noted. Discussion Ultrasound-guided PRP injection appears to be a safe and effective treatment for recalcitrant cases of distal biceps tendinopathy. Further investigation with a randomized controlled trial is needed to fully assess its efficacy. PMID:27582965

  1. Shade determination using camouflaged visual shade guides and an electronic spectrophotometer.

    PubMed

    Kvalheim, S F; Øilo, M

    2014-03-01

The aim of the present study was to compare a camouflaged visual shade guide to a spectrophotometer designed for restorative dentistry. Two operators performed analyses of 66 subjects. One central upper incisor was measured four times by each operator: twice with a camouflaged visual shade guide and twice with a spectrophotometer. Both methods had acceptable repeatability rates, but the electronic shade determination showed higher repeatability. In general, the electronically determined shades were darker than the visually determined shades. The use of a camouflaged visual shade guide seems to be an adequate method to reduce operator bias.

  2. Accuracy of visual estimates of joint angle and angular velocity using criterion movements.

    PubMed

    Morrison, Craig S; Knudson, Duane; Clayburn, Colby; Haywood, Philip

    2005-06-01

This descriptive study documented undergraduate physical education majors' (22.8 +/- 2.4 yr. old) visual estimates of sagittal plane elbow angle and angular velocity of elbow flexion. 42 subjects rated videotape replays of 30 movements organized into three speeds of movement and two criterion elbow angles. Video images of the movements were analyzed with Peak Motus to measure actual elbow angles and peak angular velocities. 85.7% of the subjects had speed ratings significantly correlated with true peak elbow angular velocity in all three angular velocity conditions. Few (16.7%) subjects' ratings of elbow angle correlated significantly with actual angles. Analysis of the subjects with good ratings showed that the accuracy of visual ratings was significantly related to speed, with decreasing accuracy for slower movements. The use of criterion movements did not increase the small percentage of novice observers who could accurately estimate body angles during movement.

  3. Lip movements entrain the observers’ low-frequency brain oscillations to facilitate speech intelligibility

    PubMed Central

    Park, Hyojin; Kayser, Christoph; Thut, Gregor; Gross, Joachim

    2016-01-01

    During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using MEG we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we investigated coherence between oscillatory brain activity and speaker’s lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove contributions of the coherent auditory speech signal from the lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending visual speech enhances the coherence between activity in visual cortex and the speaker’s lips. Further, we identified a significant partial coherence between left motor cortex and lip movements and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing. DOI: http://dx.doi.org/10.7554/eLife.14521.001 PMID:27146891
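
    Coherence between a slow movement signal and a neural signal, as used here, can be estimated with Welch-style cross-spectra. A sketch on synthetic signals sharing a 4 Hz rhythm (sampling rate, duration, phase lag and noise level are arbitrary; the study's partial-coherence step, which additionally removes the contribution of the auditory speech signal, is not shown):

    ```python
    import numpy as np
    from scipy.signal import coherence

    # Two signals that share a 4 Hz rhythm (a stand-in for the lip-movement
    # rate) buried in independent noise.
    fs = 128.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(2)
    lips = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)
    brain = np.sin(2 * np.pi * 4 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

    # Magnitude-squared coherence per frequency bin (values in [0, 1]).
    f, Cxy = coherence(lips, brain, fs=fs, nperseg=256)
    peak = Cxy[np.argmin(np.abs(f - 4.0))]   # coherence at the shared rhythm
    ```

    The phase lag between the two signals does not reduce magnitude-squared coherence, which is why this measure suits entrainment analyses where the neural response lags the lip movement.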

  4. The contribution of LM to the neuroscience of movement vision

    PubMed Central

    Zihl, Josef; Heywood, Charles A.

    2015-01-01

The significance of early and sporadic reports in the 19th century of impairments of motion vision following brain damage was largely unrecognized. In the absence of satisfactory post-mortem evidence, impairments were interpreted as the consequence of a more general disturbance resulting from brain damage, the location and extent of which were unknown. Moreover, evidence that movement constituted a special visual perception and might be selectively spared was similarly dismissed. Such skepticism derived from a reluctance to acknowledge that the neural substrates of visual perception may not be confined to primary visual cortex. This view did not persist. First, it was realized that visual movement perception does not depend simply on the analysis of spatial displacements and temporal intervals, but represents a specific visual movement sensation. Second, persuasive evidence for functional specialization in extrastriate cortex, and notably the discovery of cortical area V5/MT, suggested a separate region specialized for motion processing. Shortly thereafter the remarkable case of patient LM was published, providing compelling evidence for a selective and specific loss of movement vision. The case is reviewed here, along with an assessment of its contribution to visual neuroscience. PMID:25741251

  5. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity.

    PubMed

    Pouw, Wim T J L; Mavilidi, Myrto-Foteini; van Gog, Tamara; Paas, Fred

    2016-08-01

Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.

  6. Comparison of visual sensitivity to human and object motion in autism spectrum disorder.

    PubMed

    Kaiser, Martha D; Delmolino, Lara; Tanaka, James W; Shiffrar, Maggie

    2010-08-01

    Successful social behavior requires the accurate detection of other people's movements. Consistent with this, typical observers demonstrate enhanced visual sensitivity to human movement relative to equally complex, nonhuman movement [e.g., Pinto & Shiffrar, 2009]. A psychophysical study investigated visual sensitivity to human motion relative to object motion in observers with autism spectrum disorder (ASD). Participants viewed point-light depictions of a moving person and, for comparison, a moving tractor and discriminated between coherent and scrambled versions of these stimuli in unmasked and masked displays. There were three groups of participants: young adults with ASD, typically developing young adults, and typically developing children. Across masking conditions, typical observers showed enhanced visual sensitivity to human movement while observers in the ASD group did not. Because the human body is an inherently social stimulus, this result is consistent with social brain theories [e.g., Pelphrey & Carter, 2008; Schultz, 2005] and suggests that the visual systems of individuals with ASD may not be tuned for the detection of socially relevant information such as the presence of another person. Reduced visual sensitivity to human movements could compromise important social behaviors including, for example, gesture comprehension.

  7. The Neural Basis of Mark Making: A Functional MRI Study of Drawing

    PubMed Central

    Yuan, Ye; Brown, Steven

    2014-01-01

    Compared to most other forms of visually-guided motor activity, drawing is unique in that it “leaves a trail behind” in the form of the emanating image. We took advantage of an MRI-compatible drawing tablet in order to examine both the motor production and perceptual emanation of images. Subjects participated in a series of mark making tasks in which they were cued to draw geometric patterns on the tablet's surface. The critical comparison was between when visual feedback was displayed (image generation) versus when it was not (no image generation). This contrast revealed an occipito-parietal stream involved in motion-based perception of the emerging image, including areas V5/MT+, LO, V3A, and the posterior part of the intraparietal sulcus. Interestingly, when subjects passively viewed animations of visual patterns emerging on the projected surface, all of the sensorimotor network involved in drawing was strongly activated, with the exception of the primary motor cortex. These results argue that the origin of the human capacity to draw and write involves not only motor skills for tool use but also motor-sensory links between drawing movements and the visual images that emanate from them in real time. PMID:25271440

  8. Relationship of ocular accommodation and motor skills performance in developmental coordination disorder.

    PubMed

    Rafique, Sara A; Northway, Nadia

    2015-08-01

    Ocular accommodation provides a well-focussed image, feedback for accurate eye movement control, and cues for depth perception. To accurately perform visually guided motor tasks, integration of ocular motor systems is essential. Children with motor coordination impairment are known to be at higher risk of accommodation anomalies. The aim of the present study was to examine the relationship between ocular accommodation and motor tasks, which is often overlooked, in order to better understand the problems experienced by children with motor coordination impairment. Visual function and gross and fine motor skills were assessed in children with developmental coordination disorder (DCD) and typically developing control children. Children with DCD had significantly poorer accommodation facility and amplitude dynamics compared to controls. Results indicate a relationship between impaired accommodation and motor skills. Specifically, accommodation anomalies correlated with visual motor, upper limb and fine dexterity task performance. Consequently, we argue accommodation anomalies influence the ineffective coordination of action and perception in DCD. Furthermore, reading disabilities were related to poorer motor performance. We postulate a role for the fastigial nucleus as a common pathway for accommodation and motor deficits. Implications of the findings and recommended visual screening protocols are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Understanding the function of visual short-term memory: transsaccadic memory, object correspondence, and gaze correction.

    PubMed

    Hollingworth, Andrew; Richard, Ashleigh M; Luck, Steven J

    2008-02-01

    Visual short-term memory (VSTM) has received intensive study over the past decade, with research focused on VSTM capacity and representational format. Yet, the function of VSTM in human cognition is not well understood. Here, the authors demonstrate that VSTM plays an important role in the control of saccadic eye movements. Intelligent human behavior depends on directing the eyes to goal-relevant objects in the world, yet saccades are very often inaccurate and require correction. The authors hypothesized that VSTM is used to remember the features of the current saccade target so that it can be rapidly reacquired after an errant saccade, a task faced by the visual system thousands of times each day. In 4 experiments, memory-based gaze correction was accurate, fast, automatic, and largely unconscious. In addition, a concurrent VSTM load interfered with memory-based gaze correction, but a verbal short-term memory load did not. These findings demonstrate that VSTM plays a direct role in a fundamentally important aspect of visually guided behavior, and they suggest the existence of previously unknown links between VSTM representations and the oculomotor system. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  10. Eye movement accuracy determines natural interception strategies.

    PubMed

    Fooken, Jolande; Yeo, Sang-Hoon; Pai, Dinesh K; Spering, Miriam

    2016-11-01

    Eye movements aid visual perception and guide actions such as reaching or grasping. Most previous work on eye-hand coordination has focused on saccadic eye movements. Here we show that smooth pursuit eye movement accuracy strongly predicts both interception accuracy and the strategy used to intercept a moving object. We developed a naturalistic task in which participants (n = 42 varsity baseball players) intercepted a moving dot (a "2D fly ball") with their index finger in a designated "hit zone." Participants were instructed to track the ball with their eyes, but were only shown its initial launch (100-300 ms). Better smooth pursuit resulted in more accurate interceptions and determined the strategy used for interception, i.e., whether interception was early or late in the hit zone. Even though early and late interceptors showed equally accurate interceptions, they may have relied on distinct tactics: early interceptors used cognitive heuristics, whereas late interceptors' performance was best predicted by pursuit accuracy. Late interception may be beneficial in real-world tasks as it provides more time for decision and adjustment. Supporting this view, baseball players who were more senior were more likely to be late interceptors. Our findings suggest that interception strategies are optimally adapted to the proficiency of the pursuit system.

  11. A New Control System Software for SANS BATAN Spectrometer in Serpong, Indonesia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bharoto; Putra, Edy Giri Rachman

    2010-06-22

    The original main control system of the 36 meter small-angle neutron scattering (SANS) BATAN Spectrometer (SMARTer) has been replaced with a new one because the main computer malfunctioned. For that reason, new control system software handling all of the subsystems was also developed in order to put the spectrometer back in operation. The developed software controls the rotational movement of the six-pinhole system, the vertical movement of the four neutron guides with a total length of 16.5 m, the two-directional movement of the neutron beam stopper, the forward-backward movement of a 2D position sensitive detector (2D-PSD) along 16.7 m, etc. A Visual Basic program running on the Windows operating system was employed to develop the software, and the system can be operated from other remote computers on the local area network. All device positions and the command menu are displayed graphically in the main window, and each device can be driven by clicking its control button. These features are required for user-friendly control system software. Finally, the new software has been tested with a complete SANS experiment and works properly.
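    The core operation such software performs for each device is a bounded absolute move. The sketch below is hypothetical: the 16.7 m detector travel comes from the abstract, but the class and method names are illustrative, and the real SMARTer software was written in Visual Basic rather than Python.

```python
# Hypothetical sketch of a clamped motorized-axis move command, the kind
# of per-device operation the control software above exposes. Names are
# illustrative; only the 16.7 m travel figure comes from the abstract.

class Axis:
    def __init__(self, name, travel_m):
        self.name = name
        self.travel_m = travel_m   # usable travel length in metres
        self.position = 0.0        # current position along the rail

    def move_to(self, target_m):
        """Move to an absolute position, clamped to the rail limits."""
        self.position = max(0.0, min(self.travel_m, target_m))
        return self.position

# e.g. the 2D-PSD travels along a 16.7 m flight path
detector = Axis("2D-PSD", travel_m=16.7)
print(detector.move_to(18.0))  # a request beyond the rail is clamped to 16.7
```

    Clamping requests in software is one layer of protection; a real instrument would also rely on hardware limit switches.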

  13. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  14. Context effects on smooth pursuit and manual interception of a disappearing target.

    PubMed

    Kreyenmeier, Philipp; Fooken, Jolande; Spering, Miriam

    2017-07-01

    In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. 
Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points. Copyright © 2017 the American Physiological Society.

  15. Effects of Reduced Acuity and Stereo Acuity on Saccades and Reaching Movements in Adults With Amblyopia and Strabismus.

    PubMed

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C; Colpa, Linda; Chandrakumar, Manokaraananthan; Wong, Agnes M F

    2017-02-01

    Our previous work has shown that amblyopia disrupts the planning and execution of visually-guided saccadic and reaching movements. We investigated the association between the clinical features of amblyopia and aspects of visuomotor behavior that are disrupted by amblyopia. A total of 55 adults with amblyopia (22 anisometropic, 18 strabismic, 15 mixed mechanism), 14 adults with strabismus without amblyopia, and 22 visually-normal control participants completed a visuomotor task while their eye and hand movements were recorded. Univariate and multivariate analyses were performed to assess the association between three clinical predictors of amblyopia (amblyopic eye [AE] acuity, stereo sensitivity, and eye deviation) and seven kinematic outcomes, including saccadic and reach latency, interocular saccadic and reach latency difference, saccadic and reach precision, and PA/We ratio (an index of reach control strategy efficacy using online feedback correction). Amblyopic eye acuity explained 28% of the variance in saccadic latency, and 48% of the variance in mean saccadic latency difference between the amblyopic and fellow eyes (i.e., interocular latency difference). In contrast, for reach latency, AE acuity explained only 10% of the variance. Amblyopic eye acuity was associated with reduced endpoint saccadic (23% of variance) and reach (22% of variance) precision in the amblyopic group. In the strabismus without amblyopia group, stereo sensitivity and eye deviation did not explain any significant variance in saccadic and reach latency or precision. Stereo sensitivity was the best clinical predictor of deficits in reach control strategy, explaining 23% of total variance of PA/We ratio in the amblyopic group and 12% of variance in the strabismus without amblyopia group when viewing with the amblyopic/nondominant eye. 
Deficits in eye and limb movement initiation (latency) and target localization (precision) were associated with amblyopic acuity deficit, whereas changes in the sensorimotor reach strategy were associated with deficits in stereopsis. Importantly, more than 50% of variance was not explained by the measured clinical features. Our findings suggest that other factors, including higher order visual processing and attention, may have an important role in explaining the kinematic deficits observed in amblyopia.

  16. Movement Perception and Movement Production in Asperger's Syndrome

    ERIC Educational Resources Information Center

    Price, Kelly J.; Shiffrar, Maggie; Kerns, Kimberly A.

    2012-01-01

    To determine whether motor difficulties documented in Asperger's Syndrome (AS) are related to compromised visual abilities, this study examined perception and movement in response to dynamic visual environments. Fourteen males with AS and 16 controls aged 7-23 completed measures of motor skills, postural response to optic flow, and visual…

  17. [Comprehensive testing system for cardiorespiratory interaction research].

    PubMed

    Zhang, Zhengbo; Wang, Buqing; Wang, Weidong; Zheng, Jiewen; Liu, Hongyun; Li, Kaiyuan; Sun, Congcong; Wang, Guojing

    2013-04-01

    To investigate the modulatory effects of breathing movement on the cardiovascular system and to study the physiological coupling between respiration and the cardiovascular system, we designed a comprehensive testing system for cardiorespiratory interaction research. This system comprises three parts, i.e., a physiological signal conditioning unit; a data acquisition and USB medical isolation unit; and a PC-based program. It can simultaneously acquire multiple physiological signals, such as respiratory flow, rib cage and abdomen movement, electrocardiograph, arterial pulse wave, cardiac sounds, skin temperature, and electromyography, under defined experimental protocols. Furthermore, the system can be used in research on short-term cardiovascular variability during paced breathing. Preliminary experiments showed that this system could accurately record rib cage and abdomen movement at very low breathing rates, using respiratory inductive plethysmography in direct-current coupling mode to acquire the respiration signal. After calibration, the system can estimate ventilation non-intrusively and accurately. The PC-based program generates audio and visual biofeedback signals that guide volunteers to perform slow, regular breathing. An experiment on healthy volunteers showed that the system effectively guided the volunteers through slow breathing while simultaneously recording multiple physiological signals. Signal processing techniques were used for off-line data analysis, including non-invasive ventilation calibration, QRS complex detection, and calculation of respiratory sinus arrhythmia and pulse wave transit time. The results showed that the respiratory modulation of the RR interval, respiratory sinus arrhythmia (RSA), and pulse wave transit time (PWTT) strengthened as the slow, regular breathing proceeded.
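    Two of the offline measures named above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: RR intervals are successive differences of R-peak times, and PWTT is approximated here as the delay from each R peak to the next pulse-wave arrival (real analyses detect both fiducial points from raw signals first).

```python
# Illustrative reconstruction (not the authors' code) of two offline
# measures: RR intervals from successive R-peak times, and pulse wave
# transit time (PWTT) as the delay from each R peak to the next
# pulse-wave arrival. Input timestamps are assumed already detected.

def rr_intervals(r_peaks):
    """Successive differences of R-peak times, in seconds."""
    return [b - a for a, b in zip(r_peaks, r_peaks[1:])]

def pwtt(r_peaks, pulse_arrivals):
    """Pair each R peak with the first pulse arrival that follows it."""
    delays = []
    for r in r_peaks:
        later = [p for p in pulse_arrivals if p > r]
        if later:
            delays.append(later[0] - r)
    return delays

r = [0.00, 0.82, 1.65]   # R-peak times (s)
p = [0.21, 1.04, 1.88]   # pulse-wave arrival times (s)
print(rr_intervals(r), pwtt(r, p))
```

    Respiratory modulation would then appear as a breathing-frequency oscillation in these interval series.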

  18. Moving to Learn: How Guiding the Hands Can Set the Stage for Learning

    ERIC Educational Resources Information Center

    Brooks, Neon; Goldin-Meadow, Susan

    2016-01-01

    Previous work has found that guiding problem-solvers' movements can have an immediate effect on their ability to solve a problem. Here we explore these processes in a learning paradigm. We ask whether guiding a learner's movements can have a delayed effect on learning, setting the stage for change that comes about only after instruction. Children…

  19. Comparison of accuracies of an intraoral spectrophotometer and conventional visual method for shade matching using two shade guide systems.

    PubMed

    Parameswaran, Vidhya; Anilkumar, S; Lylajam, S; Rajesh, C; Narayan, Vivek

    2016-01-01

    This in vitro study compared the shade matching abilities of an intraoral spectrophotometer and the conventional visual method using two shade guides. The results of previous investigations comparing color perceived by human observers and color assessed by instruments have been inconclusive. The objectives were to determine the accuracy and interrater agreement of both methods and the effectiveness of two shade guides with either method. In the visual method, 10 examiners with normal color vision matched target control shade tabs taken from the two shade guides (VITAPAN Classical™ and VITAPAN 3D Master™) with other full sets of the respective shade guides. Each tab was matched 3 times to determine the repeatability of visual examiners. The spectrophotometric shade matching was performed by two independent examiners using an intraoral spectrophotometer (VITA Easyshade™) with five repetitions for each tab. Results revealed that the visual method had greater accuracy than the spectrophotometer. The spectrophotometer, however, exhibited significantly better interrater agreement than the visual method. While the VITAPAN Classical shade guide was more accurate with the spectrophotometer, the VITAPAN 3D Master shade guide proved better with the visual method. This in vitro study clearly delineates the advantages and limitations of both methods. There were significant differences between the methods, with the visual method producing more accurate results than the spectrophotometric method. The spectrophotometer showed far better interrater agreement scores irrespective of the shade guide used. Even though visual shade matching is subjective, it is not inferior and should not be underrated. A judicious combination of both techniques is imperative to attain a successful and esthetic outcome.

  20. Robotic guidance benefits the learning of dynamic, but not of spatial movement characteristics.

    PubMed

    Lüttgen, Jenna; Heuer, Herbert

    2012-10-01

    Robotic guidance is an engineered form of haptic-guidance training and intended to enhance motor learning in rehabilitation, surgery, and sports. However, its benefits (and pitfalls) are still debated. Here, we investigate the effects of different presentation modes on the reproduction of a spatiotemporal movement pattern. In three different groups of participants, the movement was demonstrated in three different modalities, namely visual, haptic, and visuo-haptic. After demonstration, participants had to reproduce the movement in two alternating recall conditions: haptic and visuo-haptic. Performance of the three groups during recall was compared with regard to spatial and dynamic movement characteristics. After haptic presentation, participants showed superior dynamic accuracy, whereas after visual presentation, participants performed better with regard to spatial accuracy. Added visual feedback during recall always led to enhanced performance, independent of the movement characteristic and the presentation modality. These findings substantiate the different benefits of different presentation modes for different movement characteristics. In particular, robotic guidance is beneficial for the learning of dynamic, but not of spatial movement characteristics.

  1. Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator

    NASA Astrophysics Data System (ADS)

    Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi

    Human motor control is achieved by appropriate motor commands generated by the central nervous system. A visual target tracking test is an effective method for analyzing human motor function. We have previously examined, in a simulation study, the possibility of improving hand movement on visual target tracking by an additional assistant force. In this study, a method for compensating human hand movement on visual target tracking by adding an assistant force was proposed. The effectiveness of the compensation method was investigated in an experiment with four healthy adults. The proposed compensator improved the reaction time, the position error, and the variability of the velocity of the human hand. The model-based compensator is constructed from measurement data on visual target tracking for each subject, so the properties of the hand movement of different subjects are reflected in the structure of the compensator. The proposed method can therefore be adapted to the individual characteristics of patients with movement disorders caused by brain dysfunction.

  2. TopoDrive and ParticleFlow--Two Computer Models for Simulation and Visualization of Ground-Water Flow and Transport of Fluid Particles in Two Dimensions

    USGS Publications Warehouse

    Hsieh, Paul A.

    2001-01-01

    This report serves as a user's guide for two computer models: TopoDrive and ParticleFlow. These two-dimensional models are designed to simulate two ground-water processes: topography-driven flow and advective transport of fluid particles. To simulate topography-driven flow, the user may specify the shape of the water table, which bounds the top of the vertical flow section. To simulate transport of fluid particles, the model domain is a rectangle with overall flow from left to right. In both cases, the flow is under steady state, and the distribution of hydraulic conductivity may be specified by the user. The models compute hydraulic head, ground-water flow paths, and the movement of fluid particles. An interactive visual interface enables the user to easily and quickly explore model behavior, and thereby better understand ground-water flow processes. In this regard, TopoDrive and ParticleFlow are not intended to be comprehensive modeling tools, but are designed for modeling at the exploratory or conceptual level, for visual demonstration, and for educational purposes.
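    The advective particle transport that ParticleFlow animates can be sketched as forward-Euler steps through a steady velocity field. The uniform left-to-right field below is an assumption standing in for the field the model derives from the computed hydraulic head.

```python
# Minimal sketch of advective particle tracking: a fluid particle is
# carried through a steady 2D velocity field by forward-Euler steps.
# The uniform rightward field is an illustrative assumption; the actual
# model derives velocities from hydraulic head and conductivity.

def velocity(x, y):
    """Steady velocity field; here, uniform flow to the right."""
    return 1.0, 0.0

def advect(x, y, dt, n_steps):
    """Track a particle through the field with forward-Euler steps."""
    for _ in range(n_steps):
        vx, vy = velocity(x, y)
        x += vx * dt
        y += vy * dt
    return x, y

print(advect(0.0, 0.5, dt=0.1, n_steps=10))  # drifts ~1 length unit right
```

    With a spatially varying field, the same loop traces curved flow paths; smaller `dt` values reduce the integration error.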

  3. Invertebrate neurobiology: visual direction of arm movements in an octopus.

    PubMed

    Niven, Jeremy E

    2011-03-22

    An operant task in which octopuses learn to locate food by a visual cue in a three-choice maze shows that they are capable of integrating visual and mechanosensory information to direct their arm movements to a goal. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. The neural circuits recruited for the production of signs and fingerspelled words

    PubMed Central

    Emmorey, Karen; Mehta, Sonya; McCullough, Stephen; Grabowski, Thomas J.

    2016-01-01

    Signing differs from typical non-linguistic hand actions because movements are not visually guided, finger movements are complex (particularly for fingerspelling), and signs are not produced as holistic gestures. We used positron emission tomography to investigate the neural circuits involved in the production of American Sign Language (ASL). Different types of signs (one-handed (articulated in neutral space), two-handed (neutral space), and one-handed body-anchored signs) were elicited by asking deaf native signers to produce sign translations of English words. Participants also fingerspelled (one-handed) printed English words. For the baseline task, participants indicated whether a word contained a descending letter. Fingerspelling engaged ipsilateral motor cortex and cerebellar cortex in contrast to both one-handed signs and the descender baseline task, which may reflect greater timing demands and complexity of handshape sequences required for fingerspelling. Greater activation in the visual word form area was also observed for fingerspelled words compared to one-handed signs. Body-anchored signs engaged bilateral superior parietal cortex to a greater extent than the descender baseline task and neutral space signs, reflecting the motor control and proprioceptive monitoring required to direct the hand toward a specific location on the body. Less activation in parts of the motor circuit was observed for two-handed signs compared to one-handed signs, possibly because, for half of the signs, handshape and movement goals were spread across the two limbs. Finally, the conjunction analysis comparing each sign type with the descender baseline task revealed common activation in the supramarginal gyrus bilaterally, which we interpret as reflecting phonological retrieval and encoding processes. PMID:27459390

  5. Visual stimuli induced by self-motion and object-motion modify odour-guided flight of male moths (Manduca sexta L.).

    PubMed

    Verspui, Remko; Gray, John R

    2009-10-01

    Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.

  6. Classification of visual and linguistic tasks using eye-movement features.

    PubMed

    Coco, Moreno I; Keller, Frank

    2014-03-07

    The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
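    As a hedged illustration of the approach (synthetic numbers, not the authors' data or classifiers), even a simple nearest-centroid rule over per-trial feature vectors such as initiation time and fixation count captures the idea of classifying the task from eye-movement features.

```python
# Hedged sketch of task classification from eye-movement features using
# a nearest-centroid rule. Feature values are synthetic and illustrative,
# not drawn from the study above.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(data):
    """data maps task label -> list of per-trial feature vectors."""
    return {label: centroid(rows) for label, rows in data.items()}

def classify(model, x):
    """Assign x to the task whose centroid is nearest (squared distance)."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda label: sq_dist(model[label]))

data = {
    "search": [[0.18, 14], [0.22, 12]],  # [initiation time (s), fixations]
    "naming": [[0.35, 6], [0.41, 8]],
}
model = train(data)
print(classify(model, [0.20, 13]))  # → search
```

    A real analysis would normalize features to a common scale and cross-validate; here the fixation count dominates the distance, which is acceptable only for a toy example.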

  7. Plastic Bags and Environmental Pollution

    ERIC Educational Resources Information Center

    Sang, Anita Ng Heung

    2010-01-01

    The "Hong Kong Visual Arts Curriculum Guide," covering Primary 1 to Secondary 3 grades (Curriculum Development Committee, 2003), points to three domains of learning in visual arts: (1) visual arts knowledge; (2) visual arts appreciation and criticism; and (3) visual arts making. The "Guide" suggests learning should develop…

  8. A Vision-Based Wayfinding System for Visually Impaired People Using Situation Awareness and Activity-Based Instructions

    PubMed Central

    Kim, Eun Yi

    2017-01-01

    A significant challenge faced by visually impaired people is ‘wayfinding’, which is the ability to find one’s way to a destination in an unfamiliar environment. This study develops a novel wayfinding system for smartphones that can automatically recognize the situation and scene objects in real time. Through analyzing streaming images, the proposed system first classifies the current situation of a user in terms of their location. Next, based on the current situation, only the necessary context objects are found and interpreted using computer vision techniques. It estimates the motions of the user with two inertial sensors and records the trajectories of the user toward the destination, which are also used as a guide for the return route after reaching the destination. To efficiently convey the recognized results using an auditory interface, activity-based instructions are generated that guide the user in a series of movements along a route. To assess the effectiveness of the proposed system, experiments were conducted in several indoor environments, in which the situation awareness accuracy was 90% and the object detection false alarm rate was 0.016. In addition, our field test results demonstrate that users can locate their paths with an accuracy of 97%. PMID:28813033

  9. The influence of visual motion on interceptive actions and perception.

    PubMed

    Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H

    2012-05-01

    Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Head Rotation Movement Times.

    PubMed

    Hoffmann, Errol R; Chan, Alan H S; Heung, P T

    2017-09-01

    The aim of this study was to measure head rotation movement times in a Fitts' paradigm and to investigate the transition region from ballistic movements to visually controlled movements as the task index of difficulty (ID) increases. For head rotation, there are gaps in the knowledge of the effects of movement amplitude and task difficulty around the critical transition region from ballistic movements to visually controlled movements. Under the conditions of 11 ID values (from 1.0 to 6.0) and five movement amplitudes (20° to 60°), participants performed a head rotation task, and movement times were measured. Both the movement amplitude and task difficulty have effects on movement times at low IDs, but movement times are dependent only on ID at higher ID values. Movement times of participants are higher than for arm/hand movements, for both ballistic and visually controlled movements. The information-processing rate of head rotational movements, at high ID values, is about half that of arm movements. As an input mode, head rotations are not as efficient as the arm system either in ability to use rapid ballistic movements or in the rate at which information may be processed. The data of this study add to those in the review of Hoffmann for the critical IDs of different body motions. The data also allow design for the best arrangement of display that is under the design constraints of limited display area and difficulty of head-controlled movements in a data-inputting task.
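    The Fitts' paradigm described above ties movement time to an index of difficulty (ID) defined by movement amplitude and target width. The abstract does not state which ID formulation was used; a minimal sketch assuming the common form ID = log2(2A / W) shows how the reported ID values (1.0 to 6.0) and amplitudes (20° to 60°) jointly imply the target widths of the task.

```python
# Sketch of the Fitts' law index of difficulty, assuming the classic
# formulation ID = log2(2A / W); the Shannon form log2(A/W + 1) is
# another common choice and would give slightly different widths.
import math

def index_of_difficulty(amplitude, width):
    """Fitts' ID in bits for amplitude A and target width W."""
    return math.log2(2 * amplitude / width)

def target_width(amplitude, ID):
    """Invert the ID formula to recover the implied target width W."""
    return 2 * amplitude / 2 ** ID

# Example: a 60-degree head rotation at the hardest reported ID (6.0)
# implies a target just under 2 degrees wide.
w = target_width(60, 6.0)
```

    Under this assumed formulation, the easiest condition (ID = 1.0) has a target as wide as the movement amplitude itself, which is consistent with ballistic rather than visually controlled movement at low IDs.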

  11. Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.

    ERIC Educational Resources Information Center

    Chun, Marvin M.; Jiang, Yuhong

    1998-01-01

    Six experiments involving a total of 112 college students demonstrate that a robust memory for visual context exists to guide spatial attention. Results show how implicit learning and memory of visual context can guide spatial attention toward task-relevant aspects of a scene. (SLD)

  12. Influence of Ankle Active Dorsiflexion Movement Guided by Inspiration on the Venous Return From the Lower Limbs: A Prospective Study.

    PubMed

    Pi, Hongying; Ku, Hong'an; Zhao, Ting; Wang, Jie; Fu, Yicheng

    2018-04-01

    Active ankle movement is a recommended intervention for preventing deep vein thrombosis, effectively and easily promoting venous return from the lower limbs. The active ankle dorsiflexion and plantar flexion movement guided by deep breathing is considered the most effective method, although outstanding problems remain, including low patient compliance and difficult motion essentials. The aims of this study were to compare the influence of different ankle active movements on venous return from the lower limbs and to suggest the optimal movement for preventing deep venous thrombosis in the lower limbs. A self-controlled study on 130 subjects was undertaken. The femoral venous hemodynamics of the left femoral vein and changes in pulse oxygen saturation and heart rate were compared among the three states of quiescent, active ankle 30° dorsiflexion movement, and active ankle 30° dorsiflexion with active plantar 45° flexion movement. The immediate master rates of the two ankle movements were examined before the study. The femoral venous hemodynamics of the left femoral vein were significantly higher in both movement states compared with the quiescent state. Moreover, no significant difference was found among the three states in terms of pulse oxygen saturation and heart rate. The immediate master rate was significantly higher in the active ankle 30° dorsiflexion movement than in the active ankle 30° dorsiflexion and active plantar 45° flexion movement. Therefore, active ankle 30° dorsiflexion movement guided by inspiration was found in this study to increase femoral venous hemodynamics and to have a higher immediate master rate, with no obvious influence on pulse oxygen saturation and heart rate. Active ankle 30° dorsiflexion movement guided by inspiration effectively promotes venous return from the lower limbs and is a better method to prevent deep vein thrombosis of the lower limbs.

  13. Threat captures attention but does not affect learning of contextual regularities.

    PubMed

    Yamaguchi, Motonori; Harwood, Sarah L

    2017-04-01

    Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.

  14. Difference in Visual Processing Assessed by Eye Vergence Movements

    PubMed Central

    Solé Puig, Maria; Puigcerver, Laura; Aznar-Casanova, J. Antonio; Supèr, Hans

    2013-01-01

    Orienting visual attention is closely linked to the oculomotor system. For example, a shift of attention is usually followed by a saccadic eye movement and can be revealed by micro saccades. Recently we reported a novel role of another type of eye movement, namely eye vergence, in orienting visual attention. Shifts in visuospatial attention are characterized by the response modulation to a selected target. However, unlike (micro-) saccades, eye vergence movements do not carry spatial information (except for depth) and are thus not specific to a particular visual location. To further understand the role of eye vergence in visual attention, we tested subjects with different perceptual styles. Perceptual style refers to the characteristic way individuals perceive environmental stimuli, and is characterized by a spatial difference (local vs. global) in perceptual processing. We tested field independent (local; FI) and field dependent (global; FD) observers in a cue/no-cue task and a matching task. We found that FI observers responded faster and had stronger modulation in eye vergence in both tasks than FD subjects. The results may suggest that eye vergence modulation may relate to the trade-off between the size of spatial region covered by attention and the processing efficiency of sensory information. Alternatively, vergence modulation may have a role in the switch in cortical state to prepare the visual system for new incoming sensory information. In conclusion, vergence eye movements may be added to the growing list of functions of fixational eye movements in visual perception. However, further studies are needed to elucidate its role. PMID:24069140

  15. A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements.

    PubMed

    Hauswald, Anne; Lithari, Chrysa; Collignon, Olivier; Leonardelli, Elisa; Weisz, Nathan

    2018-05-07

    Successful lip-reading requires a mapping from visual to phonological information [1]. Recently, visual and motor cortices have been implicated in tracking lip movements (e.g., [2]). It remains unclear, however, whether visuo-phonological mapping occurs already at the level of the visual cortex-that is, whether this structure tracks the acoustic signal in a functionally relevant manner. To elucidate this, we investigated how the cortex tracks (i.e., entrains to) absent acoustic speech signals carried by silent lip movements. Crucially, we contrasted the entrainment to unheard forward (intelligible) and backward (unintelligible) acoustic speech. We observed that the visual cortex exhibited stronger entrainment to the unheard forward acoustic speech envelope compared to the unheard backward acoustic speech envelope. Supporting the notion of a visuo-phonological mapping process, this forward-backward difference of occipital entrainment was not present for actually observed lip movements. Importantly, the respective occipital region received more top-down input, especially from left premotor, primary motor, and somatosensory regions and, to a lesser extent, also from posterior temporal cortex. Strikingly, across participants, the extent of top-down modulation of the visual cortex stemming from these regions partially correlated with the strength of entrainment to absent acoustic forward speech envelope, but not to present forward lip movements. Our findings demonstrate that a distributed cortical network, including key dorsal stream auditory regions [3-5], influences how the visual cortex shows sensitivity to the intelligibility of speech while tracking silent lip movements. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Seeing Your Error Alters My Pointing: Observing Systematic Pointing Errors Induces Sensori-Motor After-Effects

    PubMed Central

    Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro

    2011-01-01

    During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: As consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result into sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot visual targets location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person's perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points on the right-side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion “to feel” the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649

  17. Technical Report of Successful Deployment of Tandem Visual Tracking During Live Laparoscopic Cholecystectomy Between Novice and Expert Surgeon

    PubMed Central

    Baronia, Benedicto C

    2016-01-01

    With the recent advances in eye tracking technology, it is now possible to track surgeons’ eye movements while engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared eye movements of surgical experts and novices and developed techniques to assess surgical skill on the basis of eye movement utilizing simulators and live surgery. None have evaluated simultaneous visual tracking between an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings. The recordings were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between an expert surgeon and a chief surgical resident. The surgery was carried out uneventfully, and the data was abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is a possibility. More studies and subjects are needed to verify the success of our results and obtain data analysis. PMID:27774359

  18. Deciding Which Way to Go: How Do Insects Alter Movements to Negotiate Barriers?

    PubMed Central

    Ritzmann, Roy E.; Harley, Cynthia M.; Daltorio, Kathryn A.; Tietz, Brian R.; Pollack, Alan J.; Bender, John A.; Guo, Peiyuan; Horomanski, Audra L.; Kathman, Nicholas D.; Nieuwoudt, Claudia; Brown, Amy E.; Quinn, Roger D.

    2012-01-01

    Animals must routinely deal with barriers as they move through their natural environment. These challenges require directed changes in leg movements and posture performed in the context of ever changing internal and external conditions. In particular, cockroaches use a combination of tactile and visual information to evaluate objects in their path in order to effectively guide their movements in complex terrain. When encountering a large block, the insect uses its antennae to evaluate the object’s height then rears upward accordingly before climbing. A shelf presents a choice between climbing and tunneling that depends on how the antennae strike the shelf; tapping from above yields climbing, while tapping from below causes tunneling. However, ambient light conditions detected by the ocelli can bias that decision. Similarly, in a T-maze turning is determined by antennal contact but influenced by visual cues. These multi-sensory behaviors led us to look at the central complex as a center for sensori-motor integration within the insect brain. Visual and antennal tactile cues are processed within the central complex and, in tethered preparations, several central complex units changed firing rates in tandem with or prior to altered step frequency or turning, while stimulation through the implanted electrodes evoked these same behavioral changes. To further test for a central complex role in these decisions, we examined behavioral effects of brain lesions. Electrolytic lesions in restricted regions of the central complex generated site specific behavioral deficits. Similar changes were also found in reversible effects of procaine injections in the brain. Finally, we are examining these kinds of decisions made in a large arena that more closely matches the conditions under which cockroaches forage. Overall, our studies suggest that CC circuits may indeed influence the descending commands associated with navigational decisions, thereby making them more context dependent. PMID:22783160

  19. Electrophysiological evidence for Audio-visuo-lingual speech integration.

    PubMed

    Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc

    2018-01-31

    Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual, lipread, speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visual-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Objective Analysis of Performance of Activities of Daily Living in People With Central Field Loss.

    PubMed

    Pardhan, Shahina; Latham, Keziah; Tabrett, Daryl; Timmis, Matthew A

    2015-11-01

    People with central visual field loss (CFL) adopt various strategies to complete activities of daily living (ADL). Using objective movement analysis, we compared how three ADLs were completed by people with CFL compared with age-matched, visually healthy individuals. Fourteen participants with CFL (age 81 ± 10 years) and 10 age-matched, visually healthy (age 75 ± 5 years) participated. Three ADLs were assessed: pick up food from a plate, pour liquid from a bottle, and insert a key in a lock. Participants with CFL completed each ADL habitually (as they would in their home). Data were compared with visually healthy participants who were asked to complete the tasks as they would normally, but under specified experimental conditions. Movement kinematics were compared using three-dimension motion analysis (Vicon). Visual functions (distance and near acuities, contrast sensitivity, visual fields) were recorded. All CFL participants were able to complete each ADL. However, participants with CFL demonstrated significantly (P < 0.05) longer overall movement times, shorter minimum viewing distance, and, for two of the three ADL tasks, needed more online corrections in the latter part of the movement. Results indicate that, despite the adoption of various habitual strategies, participants with CFL still do not perform common daily living tasks as efficiently as healthy subjects. Although indices suggesting feed-forward planning are similar, they made more movement corrections and increased time for the latter portion of the action, indicating a more cautious/uncertain approach. Various kinematic indices correlated significantly to visual function parameters including visual acuity and midperipheral visual field loss.

  1. Similar brain networks for detecting visuo-motor and visuo-proprioceptive synchrony.

    PubMed

    Balslev, Daniela; Nielsen, Finn A; Lund, Torben E; Law, Ian; Paulson, Olaf B

    2006-05-15

    The ability to recognize feedback from own movement as opposed to the movement of someone else is important for motor control and social interaction. The neural processes involved in feedback recognition are incompletely understood. Two competing hypotheses have been proposed: the stimulus is compared with either (a) the proprioceptive feedback or with (b) the motor command and if they match, then the external stimulus is identified as feedback. Hypothesis (a) predicts that the neural mechanisms or brain areas involved in distinguishing self from other during passive and active movement are similar, whereas hypothesis (b) predicts that they are different. In this fMRI study, healthy subjects saw visual cursor movement that was either synchronous or asynchronous with their active or passive finger movements. The aim was to identify the brain areas where the neural activity depended on whether the visual stimulus was feedback from own movement and to contrast the functional activation maps for active and passive movement. We found activity increases in the right temporoparietal cortex in the condition with asynchronous relative to synchronous visual feedback from both active and passive movements. However, no statistically significant difference was found between these sets of activated areas when the active and passive movement conditions were compared. With a posterior probability of 0.95, no brain voxel had a contrast effect above 0.11% of the whole-brain mean signal. These results do not support the hypothesis that recognition of visual feedback during active and passive movement relies on different brain areas.

  2. Comparison of accuracies of an intraoral spectrophotometer and conventional visual method for shade matching using two shade guide systems

    PubMed Central

    Parameswaran, Vidhya; Anilkumar, S.; Lylajam, S.; Rajesh, C.; Narayan, Vivek

    2016-01-01

    Background and Objectives: This in vitro study compared the shade matching abilities of an intraoral spectrophotometer and the conventional visual method using two shade guides. The results of previous investigations between color perceived by human observers and color assessed by instruments have been inconclusive. The objectives were to determine accuracies and interrater agreement of both methods and effectiveness of two shade guides with either method. Methods: In the visual method, 10 examiners with normal color vision matched target control shade tabs taken from the two shade guides (VITAPAN Classical™ and VITAPAN 3D Master™) with other full sets of the respective shade guides. Each tab was matched 3 times to determine repeatability of visual examiners. The spectrophotometric shade matching was performed by two independent examiners using an intraoral spectrophotometer (VITA Easyshade™) with five repetitions for each tab. Results: Results revealed that the visual method had greater accuracy than the spectrophotometer. The spectrophotometer, however, exhibited significantly better interrater agreement as compared to the visual method. While the VITAPAN Classical shade guide was more accurate with the spectrophotometer, the VITAPAN 3D Master shade guide proved better with the visual method. Conclusion: This in vitro study clearly delineates the advantages and limitations of both methods. There were significant differences between the methods, with the visual method producing more accurate results than the spectrophotometric method. The spectrophotometer showed far better interrater agreement scores irrespective of the shade guide used. Even though visual shade matching is subjective, it is not inferior and should not be underrated. Judicious combination of both techniques is imperative to attain a successful and esthetic outcome. PMID:27746599

  3. Visualizing the movement of the contact between vocal folds during vibration by using array-based transmission ultrasonic glottography

    PubMed Central

    Jing, Bowen; Chigan, Pengju; Ge, Zhengtong; Wu, Liang; Wang, Supin; Wan, Mingxi

    2017-01-01

    For the purpose of noninvasively visualizing the dynamics of the contact between vibrating vocal fold medial surfaces, an ultrasonic imaging method which is referred to as array-based transmission ultrasonic glottography is proposed. An array of ultrasound transducers is used to detect the ultrasound wave transmitted from one side of the vocal folds to the other side through the small-sized contact between the vocal folds. A passive acoustic mapping method is employed to visualize and locate the contact. The results of the investigation using tissue-mimicking phantoms indicate that it is feasible to use the proposed method to visualize and locate the contact between soft tissues. Furthermore, the proposed method was used for investigating the movement of the contact between the vibrating vocal folds of excised canine larynges. The results indicate that the vertical movement of the contact can be visualized as a vertical movement of a high-intensity stripe in a series of images obtained by using the proposed method. Moreover, a visualization and analysis method, which is referred to as array-based ultrasonic kymography, is presented. The velocity of the vertical movement of the contact, which is estimated from the array-based ultrasonic kymogram, could reach 0.8 m/s during the vocal fold vibration. PMID:28599522

  4. Developmental visual perception deficits with no indications of prosopagnosia in a child with abnormal eye movements.

    PubMed

    Gilaie-Dotan, Sharon; Doron, Ravid

    2017-06-01

    Visual categories are associated with eccentricity biases in high-order visual cortex: Faces and reading with foveally-biased regions, while common objects and space with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common objects perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with mid-peripheral rather than with a foveal bias. Here, we studied BN, a 9 y.o. boy who has normal basic-level vision, abnormal (limited) oculomotor pursuit and saccades, and shows developmental object and contour integration deficits but with no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces and perhaps reading, when fixated upon, take up a small portion of central visual field and require only small eye movements to be properly processed, common objects typically prevail in mid-peripheral visual field and rely on longer-distance voluntary eye movements as saccades to be brought to fixation. While retinal information feeds into early visual cortex in an eccentricity orderly manner, we hypothesize that propagation of non-foveal information to mid and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Op art and visual perception.

    PubMed

    Wade, N J

    1978-01-01

    An attempt is made to list the visual phenomena exploited in op art. These include moiré fringes, afterimages, Hermann grid effects, Gestalt grouping principles, blurring and movement due to astigmatic fluctuations in accommodation, scintillation and streaming possibly due to eye movements, and visual persistence. The historical origins of these phenomena are also noted.

  6. Visual Discrimination and Motor Reproduction of Movement by Individuals with Mental Retardation.

    ERIC Educational Resources Information Center

    Shinkfield, Alison J.; Sparrow, W. A.; Day, R. H.

    1997-01-01

    Visual discrimination and motor reproduction tasks involving computer-simulated arm movements were administered to 12 adults with mental retardation and a gender-matched control group. The purpose was to examine whether inadequacies in visual perception account for the poorer motor performance of this population. Results indicate both perceptual…

  7. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    ERIC Educational Resources Information Center

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  8. Rhythmic Oscillations of Visual Contrast Sensitivity Synchronized with Action

    PubMed Central

    Tomassini, Alice; Spinelli, Donatella; Jacono, Marco; Sandini, Giulio; Morrone, Maria Concetta

    2016-01-01

    It is well known that the motor and the sensory systems structure sensory data collection and cooperate to achieve an efficient integration and exchange of information. Increasing evidence suggests that both motor and sensory functions are regulated by rhythmic processes reflecting alternating states of neuronal excitability, and these may be involved in mediating sensory-motor interactions. Here we show an oscillatory fluctuation in early visual processing time locked with the execution of voluntary action, and, crucially, even for visual stimuli irrelevant to the motor task. Human participants were asked to perform a reaching movement toward a display and judge the orientation of a Gabor patch, near contrast threshold, briefly presented at random times before and during the reaching movement. When the data are temporally aligned to the onset of movement, visual contrast sensitivity oscillates with periodicity within the theta band. Importantly, the oscillations emerge during the motor planning stage, ~500 ms before movement onset. We suggest that brain oscillatory dynamics may mediate an automatic coupling between early motor planning and early visual processing, possibly instrumental in linking and closing up the visual-motor control loop. PMID:25948254

  9. Development of interactions between sensorimotor representations in school-aged children

    PubMed Central

    KAGERER, Florian A.; CLARK, Jane E.

    2014-01-01

    Reliable sensory-motor integration is a pre-requisite for optimal movement control; the functionality of this integration changes during development. Previous research has shown that motor performance of school-age children is characterized by higher variability, particularly under conditions where vision is not available, and movement planning and control is largely based on kinesthetic input. The purpose of the current study was to determine the characteristics of how kinesthetic-motor internal representations interact with visuo-motor representations during development. To this end, we induced a visuo-motor adaptation in 59 children, ranging from 5 to 12 years of age, as well as in a group of adults, and measured initial directional error (IDE) and endpoint error (EPE) during a subsequent condition where visual feedback was not available, and participants had to rely on kinesthetic input. Our results show that older children (age range 9–12 years) de-adapted significantly more than younger children (age range 5–8 years) over the course of 36 trials in the absence of vision, suggesting that the kinesthetic-motor internal representation in the older children was utilized more efficiently to guide hand movements, and was comparable to the performance of the adults. PMID:24636697

  10. The utility of modeling word identification from visual input within models of eye movements in reading

    PubMed Central

    Bicknell, Klinton; Levy, Roger

    2012-01-01

    Decades of empirical work have shown that a range of eye movement phenomena in reading are sensitive to the details of the process of word identification. Despite this, major models of eye movement control in reading do not explicitly model word identification from visual input. This paper presents an argument for developing models of eye movements that do include detailed models of word identification. Specifically, we argue that insights into eye movement behavior can be gained by understanding which phenomena naturally arise from an account in which the eyes move for efficient word identification, and that one important use of such models is to test which eye movement phenomena can be understood this way. As an extended case study, we present evidence from an extension of a previous model of eye movement control in reading that does explicitly model word identification from visual input, Mr. Chips (Legge, Klitz, & Tjan, 1997), to test two proposals for the effect of using linguistic context on reading efficiency. PMID:23074362

  11. Cognitive Control Network Contributions to Memory-Guided Visual Attention

    PubMed Central

    Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.

    2016-01-01

    Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253

  12. Alteration of the microsaccadic velocity-amplitude main sequence relationship after visual transients: implications for models of saccade control

    PubMed Central

    Chen, Chih-Yang; Tian, Xiaoguang; Idrees, Saad; Münch, Thomas A.

    2017-01-01

    Microsaccades occur during gaze fixation to correct for minuscule foveal motor errors. The mechanisms governing such fine oculomotor control are still not fully understood. In this study, we explored microsaccade control by analyzing the impacts of transient visual stimuli on these movements’ kinematics. We found that such kinematics can be altered in systematic ways depending on the timing and spatial geometry of visual transients relative to the movement goals. In two male rhesus macaques, we presented peripheral or foveal visual transients during an otherwise stable period of fixation. Such transients resulted in well-known reductions in microsaccade frequency, and our goal was to investigate whether microsaccade kinematics would additionally be altered. We found that both microsaccade timing and amplitude were modulated by the visual transients, in ways predictable from the transients’ timing and geometry. Interestingly, modulations in the peak velocity of the same movements were not proportional to the observed amplitude modulations, suggesting a violation of the well-known “main sequence” relationship between microsaccade amplitude and peak velocity. We hypothesize that visual stimulation during movement preparation affects not only the saccadic “Go” system driving eye movements but also a “Pause” system inhibiting them. If the Pause system happens to be already turned off despite the new visual input, movement kinematics can be altered by the readout of additional visually evoked spikes in the Go system coding for the flash location. Our results demonstrate precise control over individual microscopic saccades and provide testable hypotheses for mechanisms of saccade control in general. NEW & NOTEWORTHY Microsaccadic eye movements play an important role in several aspects of visual perception and cognition. However, the mechanisms for microsaccade control are still not fully understood. We found that microsaccade kinematics can be altered in a systematic manner by visual transients, revealing a previously unappreciated and exquisite level of control by the oculomotor system of even the smallest saccades. Our results suggest precise temporal interaction between visual, motor, and inhibitory signals in microsaccade control. PMID:28202573
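
    The "main sequence" referred to here is the stereotyped relationship between saccade amplitude and peak velocity, which is approximately linear over the microsaccadic range. A hedged sketch of how deviations from it could be quantified (synthetic data; the slope, noise level, and size of the perturbation are illustrative assumptions, not values from the study):

```python
import numpy as np

def main_sequence_residuals(amplitudes, peak_velocities):
    """Fit a linear main sequence (a reasonable approximation for
    microsaccade-range amplitudes) and return each movement's deviation
    from the predicted peak velocity, plus the fitted slope."""
    A = np.column_stack([amplitudes, np.ones_like(amplitudes)])
    (slope, intercept), *_ = np.linalg.lstsq(A, peak_velocities, rcond=None)
    predicted = slope * amplitudes + intercept
    return peak_velocities - predicted, slope

# Synthetic microsaccades: baseline slope ~60 (deg/s per deg), plus a
# subset whose peak velocity is reduced relative to amplitude, i.e. a
# hypothetical main-sequence "violation" like the one reported here
rng = np.random.default_rng(1)
amp = rng.uniform(0.1, 1.0, 500)                 # amplitude (deg)
vel = 60 * amp + 5 + rng.normal(0, 2, 500)       # baseline main sequence
vel[:50] -= 10                                   # perturbed movements: slower than predicted
resid, slope = main_sequence_residuals(amp, vel)
print(resid[:50].mean() < resid[50:].mean())     # perturbed group falls below the fit
```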

  13. Seeing the hand while reaching speeds up on-line responses to a sudden change in target position

    PubMed Central

    Reichenbach, Alexandra; Thielscher, Axel; Peer, Angelika; Bülthoff, Heinrich H; Bresciani, Jean-Pierre

    2009-01-01

    Goal-directed movements are executed under the permanent supervision of the central nervous system, which continuously processes sensory afferents and triggers on-line corrections if movement accuracy seems to be compromised. For arm reaching movements, visual information about the hand plays an important role in this supervision, notably improving reaching accuracy. Here, we tested whether visual feedback of the hand affects the latency of on-line responses to an external perturbation when reaching for a visual target. Two types of perturbation were used: visual perturbation consisted in changing the spatial location of the target and kinesthetic perturbation in applying a force step to the reaching arm. For both types of perturbation, the hand trajectory and the electromyographic (EMG) activity of shoulder muscles were analysed to assess whether visual feedback of the hand speeds up on-line corrections. Without visual feedback of the hand, on-line responses to visual perturbation exhibited the longest latency. This latency was reduced by about 10% when visual feedback of the hand was provided. On the other hand, the latency of on-line responses to kinesthetic perturbation was independent of the availability of visual feedback of the hand. In a control experiment, we tested the effect of visual feedback of the hand on visual and kinesthetic two-choice reaction times – for which coordinate transformation is not critical. Two-choice reaction times were never facilitated by visual feedback of the hand. Taken together, our results suggest that visual feedback of the hand speeds up on-line corrections when the position of the visual target with respect to the body must be re-computed during movement execution. This facilitation probably results from the possibility to map hand- and target-related information in a common visual reference frame. PMID:19675067

  14. Segregation of Form, Color, Movement, and Depth: Anatomy, Physiology, and Perception

    NASA Astrophysics Data System (ADS)

    Livingstone, Margaret; Hubel, David

    1988-05-01

    Anatomical and physiological observations in monkeys indicate that the primate visual system consists of several separate and independent subdivisions that analyze different aspects of the same retinal image: cells in cortical visual areas 1 and 2 and higher visual areas are segregated into three interdigitating subdivisions that differ in their selectivity for color, stereopsis, movement, and orientation. The pathways selective for form and color seem to be derived mainly from the parvocellular geniculate subdivisions, the depth- and movement-selective components from the magnocellular. At lower levels, in the retina and in the geniculate, cells in these two subdivisions differ in their color selectivity, contrast sensitivity, temporal properties, and spatial resolution. These major differences in the properties of cells at lower levels in each of the subdivisions led to the prediction that different visual functions, such as color, depth, movement, and form perception, should exhibit corresponding differences. Human perceptual experiments are remarkably consistent with these predictions. Moreover, perceptual experiments can be designed to ask which subdivisions of the system are responsible for particular visual abilities, such as figure/ground discrimination or perception of depth from perspective or relative movement--functions that might be difficult to deduce from single-cell response properties.

  15. Action preparation modulates sensory perception in unseen personal space: An electrophysiological investigation.

    PubMed

    Job, Xavier E; de Fockert, Jan W; van Velzen, José

    2016-08-01

    Behavioural and electrophysiological evidence has demonstrated that preparation of goal-directed actions modulates sensory perception at the goal location before the action is executed. However, previous studies have focused on sensory perception in areas of peripersonal space. The present study investigated visual and tactile sensory processing at the goal location of upcoming movements towards the body, much of which is not visible, as well as visible peripersonal space. A motor task cued participants to prepare a reaching movement towards goals either in peripersonal space in front of them or personal space on the upper chest. In order to assess modulations of sensory perception during movement preparation, event-related potentials (ERPs) were recorded in response to task-irrelevant visual and tactile probe stimuli delivered randomly at one of the goal locations of the movements. In line with previous neurophysiological findings, movement preparation modulated visual processing at the goal of a movement in peripersonal space. Movement preparation also modulated somatosensory processing at the movement goal in personal space. The findings demonstrate that tactile perception in personal space is subject to similar top-down sensory modulation by motor preparation as observed for visual stimuli presented in peripersonal space. These findings show for the first time that the principles and mechanisms underlying adaptive modulation of sensory processing in the context of action extend to tactile perception in unseen personal space. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Brain processing of visual information during fast eye movements maintains motor performance.

    PubMed

    Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis

    2013-01-01

    Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.

  17. Human Subthalamic Nucleus in Movement Error Detection and Its Evaluation during Visuomotor Adaptation

    PubMed Central

    Zavala, Baltazar; Pogosyan, Alek; Ashkan, Keyoumars; Zrinzo, Ludvic; Foltynie, Thomas; Limousin, Patricia; Brown, Peter

    2014-01-01

    Monitoring and evaluating movement errors to guide subsequent movements is a critical feature of normal motor control. Previously, we showed that the postmovement increase in electroencephalographic (EEG) beta power over the sensorimotor cortex reflects neural processes that evaluate motor errors consistent with Bayesian inference (Tan et al., 2014). Whether such neural processes are limited to this cortical region or involve the basal ganglia is unclear. Here, we recorded EEG over the cortex and local field potential (LFP) activity in the subthalamic nucleus (STN) from electrodes implanted in patients with Parkinson's disease, while they moved a joystick-controlled cursor to visual targets displayed on a computer screen. After movement offsets, we found increased beta activity in both local STN LFP and sensorimotor cortical EEG and in the coupling between the two, which was affected by both error magnitude and its contextual saliency. The postmovement increase in the coupling between STN and cortex was dominated by information flow from sensorimotor cortex to STN. However, an information drive appeared from STN to sensorimotor cortex in the first phase of the adaptation, when a constant rotation was applied between joystick inputs and cursor outputs. The strength of the STN to cortex drive correlated with the degree of adaptation achieved across subjects. These results suggest that oscillatory activity in the beta band may dynamically couple the sensorimotor cortex and basal ganglia after movements. In particular, beta activity driven from the STN to cortex indicates task-relevant movement errors, information that may be important in modifying subsequent motor responses. PMID:25505327

  18. Inertial torque during reaching directly impacts grip-force adaptation to weightless objects.

    PubMed

    Giard, T; Crevecoeur, F; McIntyre, J; Thonnard, J-L; Lefèvre, P

    2015-11-01

    A hallmark of movement control expressed by healthy humans is the ability to gradually improve motor performance through learning. In the context of object manipulation, previous work has shown that the presence of a torque load has a direct impact on grip-force control, characterized by a significantly slower grip-force adjustment across lifting movements. The origin of this slower adaptation rate remains unclear. On the one hand, information about tangential constraints during stationary holding may be difficult to extract in the presence of a torque. On the other hand, inertial torque experienced during movement may also potentially disrupt the grip-force adjustments, as the dynamical constraints clearly differ from the situation when no torque load is present. To address the influence of inertial torque loads, we instructed healthy adults to perform visually guided reaching movements in weightlessness while holding an unbalanced object relative to the grip axis. Weightlessness offered the possibility to remove gravitational constraints and isolate the effect of movement-related feedback on grip force adjustments. Grip-force adaptation rates were compared with a control group who manipulated a balanced object without any torque load and also in weightlessness. Our results clearly show that grip-force adaptation in the presence of a torque load is significantly slower, which suggests that the presence of torque loads experienced during movement may alter our internal estimates of how much force is required to hold an unbalanced object stable. This observation may explain why grasping objects around the expected location of the center of mass is such an important component of planning and control of manipulation tasks.

  19. Visuokinesthetic Perception of Hand Movement is Mediated by Cerebro–Cerebellar Interaction between the Left Cerebellum and Right Parietal Cortex

    PubMed Central

    Hagura, Nobuhiro; Oouchida, Yutaka; Aramaki, Yu; Okada, Tomohisa; Matsumura, Michikazu; Sadato, Norihiro

    2009-01-01

    Combination of visual and kinesthetic information is essential to perceive bodily movements. We conducted behavioral and functional magnetic resonance imaging experiments to investigate the neuronal correlates of visuokinesthetic combination in perception of hand movement. Participants experienced illusory flexion movement of their hand elicited by tendon vibration while they viewed video-recorded flexion (congruent: CONG) or extension (incongruent: INCONG) motions of their hand. The amount of illusory experience was graded by the visual velocities only when visual information regarding hand motion was concordant with kinesthetic information (CONG). The left posterolateral cerebellum was specifically recruited under the CONG, and this left cerebellar activation was consistent for both left and right hands. The left cerebellar activity reflected the participants' intensity of illusory hand movement under the CONG, and we further showed that coupling of activity between the left cerebellum and the “right” parietal cortex emerges during this visuokinesthetic combination/perception. The “left” cerebellum, working with the anatomically connected high-order bodily region of the “right” parietal cortex, participates in online combination of exteroceptive (vision) and interoceptive (kinesthesia) information to perceive hand movement. The cerebro–cerebellar interaction may underlie updating of one's “body image,” when perceiving bodily movement from visual and kinesthetic information. PMID:18453537

  20. Normalizing motor-related brain activity: subthalamic nucleus stimulation in Parkinson disease.

    PubMed

    Grafton, S T; Turner, R S; Desmurget, M; Bakay, R; Delong, M; Vitek, J; Crutcher, M

    2006-04-25

    To test whether therapeutic unilateral deep brain stimulation (DBS) of the subthalamic nucleus (STN) in patients with Parkinson disease (PD) leads to normalization in the pattern of brain activation during movement execution and control of movement extent. Six patients with PD were imaged off medication by PET during performance of a visually guided tracking task with the DBS voltage programmed for therapeutic (effective) or subtherapeutic (ineffective) stimulation. Data from patients with PD during ineffective stimulation were compared with a group of 13 age-matched control subjects to identify sites with abnormal patterns of activation. Conjunction analysis was used to identify those areas in patients with PD where activity normalized when they were treated with effective stimulation. For movement execution, effective DBS caused an increase of activation in the supplementary motor area (SMA), superior parietal cortex, and cerebellum toward a more normal pattern. At rest, effective stimulation reduced overactivity of SMA. Therapeutic stimulation also induced reductions of movement related "overactivity" compared with healthy subjects in prefrontal, temporal lobe, and basal ganglia circuits, consistent with the notion that many areas are recruited to compensate for ineffective motor initiation. Normalization of activity related to the control of movement extent was associated with reductions of activity in primary motor cortex, SMA, and basal ganglia. Effective subthalamic nucleus stimulation leads to task-specific modifications with appropriate recruitment of motor areas as well as widespread, nonspecific reductions of compensatory or competing cortical activity.

  1. Early visuomotor representations revealed from evoked local field potentials in motor and premotor cortical areas.

    PubMed

    O'Leary, John G; Hatsopoulos, Nicholas G

    2006-09-01

    Local field potentials (LFPs) recorded from primary motor cortex (MI) have been shown to be tuned to the direction of visually guided reaching movements, but MI LFPs have not been shown to be tuned to the direction of an upcoming movement during the delay period that precedes movement in an instructed-delay reaching task. Also, LFPs in dorsal premotor cortex (PMd) have not been investigated in this context. We therefore recorded LFPs from MI and PMd of monkeys (Macaca mulatta) and investigated whether these LFPs were tuned to the direction of the upcoming movement during the delay period. In three frequency bands we identified LFP activity that was phase-locked to the onset of the instruction stimulus that specified the direction of the upcoming reach. The amplitude of this activity was often tuned to target direction with tuning widths that varied across different electrodes and frequency bands. Single-trial decoding of LFPs demonstrated that prediction of target direction from this activity was possible well before the actual movement was initiated. Decoding performance was significantly better in the slowest-frequency band compared with that in the other two higher-frequency bands. Although these results demonstrate that task-related information is available in the local field potentials, correlations among these signals recorded from a densely packed array of electrodes suggest that adequate decoding performance for neural prosthesis applications may be limited as the number of simultaneous electrode recordings is increased.
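
    Single-trial decoding of target direction from band-limited LFP features can be illustrated with a simple nearest-class-mean classifier. Everything below is a synthetic sketch: the cosine tuning model, channel count, and noise level are assumptions, not the recorded data or the decoder used in the study.

```python
import numpy as np

def decode_direction(train_feats, train_dirs, test_feats):
    """Nearest-class-mean decoder: classify each test trial as the target
    direction whose average training feature vector is closest.

    train_feats : (n_trials, n_channels) band-limited LFP amplitudes
    train_dirs  : (n_trials,) integer direction labels
    """
    classes = np.unique(train_dirs)
    means = np.array([train_feats[train_dirs == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_feats[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Synthetic delay-period features: 8 target directions, cosine-tuned channels
rng = np.random.default_rng(2)
n_dirs, n_ch, n_per = 8, 16, 40
pref = rng.uniform(0, 2 * np.pi, n_ch)           # preferred direction per channel
dirs = np.repeat(np.arange(n_dirs), n_per)
angles = dirs * 2 * np.pi / n_dirs
feats = np.cos(angles[:, None] - pref[None, :]) + rng.normal(0, 0.5, (len(dirs), n_ch))
idx = rng.permutation(len(dirs))                 # split into train/test
tr, te = idx[:240], idx[240:]
pred = decode_direction(feats[tr], dirs[tr], feats[te])
accuracy = (pred == dirs[te]).mean()
print(accuracy > 1 / n_dirs)                     # well above the 12.5% chance level
```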

  2. Sensory signals during active versus passive movement.

    PubMed

    Cullen, Kathleen E

    2004-12-01

    Our sensory systems are simultaneously activated as the result of our own actions and changes in the external world. The ability to distinguish self-generated sensory events from those that arise externally is thus essential for perceptual stability and accurate motor control. Recently, progress has been made towards understanding how this distinction is made. It has been proposed that an internal prediction of the consequences of our actions is compared to the actual sensory input to cancel the resultant self-generated activation. Evidence in support of this hypothesis has been obtained for early stages of sensory processing in the vestibular, visual and somatosensory systems. These findings have implications for the sensory-motor transformations that are needed to guide behavior.

  3. Interaction between Visual- and Goal-Related Neuronal Signals on the Trajectories of Saccadic Eye Movements

    ERIC Educational Resources Information Center

    White, Brian J.; Theeuwes, Jan; Munoz, Douglas P.

    2012-01-01

    During natural viewing, the trajectories of saccadic eye movements often deviate dramatically from a straight-line path between objects. In human studies, saccades have been shown to deviate toward or away from salient visual distractors depending on visual- and goal-related parameters, but the neurophysiological basis for this is not well…

  4. Can Short Duration Visual Cues Influence Students' Reasoning and Eye Movements in Physics Problems?

    ERIC Educational Resources Information Center

    Madsen, Adrian; Rouinfar, Amy; Larson, Adam M.; Loschky, Lester C.; Rebello, N. Sanjay

    2013-01-01

    We investigate the effects of visual cueing on students' eye movements and reasoning on introductory physics problems with diagrams. Participants in our study were randomly assigned to either the cued or noncued conditions, which differed by whether the participants saw conceptual physics problems overlaid with dynamic visual cues. Students in the…

  5. Saccadic Eye Movements Impose a Natural Bottleneck on Visual Short-Term Memory

    ERIC Educational Resources Information Center

    Ohl, Sven; Rolfs, Martin

    2017-01-01

    Visual short-term memory (VSTM) is a crucial repository of information when events unfold rapidly before our eyes, yet it maintains only a fraction of the sensory information encoded by the visual system. Here, we tested the hypothesis that saccadic eye movements provide a natural bottleneck for the transition of fragile content in sensory memory…

  6. Neural control of visual search by frontal eye field: effects of unexpected target displacement on visual selection and saccade preparation.

    PubMed

    Murthy, Aditya; Ray, Supriya; Shorter, Stephanie M; Schall, Jeffrey D; Thompson, Kirk G

    2009-05-01

    The dynamics of visual selection and saccade preparation by the frontal eye field was investigated in macaque monkeys performing a search-step task combining the classic double-step saccade task with visual search. Reward was earned for producing a saccade to a color singleton. On random trials the target and one distractor swapped locations before the saccade and monkeys were rewarded for shifting gaze to the new singleton location. A race model accounts for the probabilities and latencies of saccades to the initial and final singleton locations and provides a measure of the duration of a covert compensation process: the target-step reaction time. When the target stepped out of a movement field, noncompensated saccades to the original location were produced when movement-related activity grew rapidly to a threshold. Compensated saccades to the final location were produced when the growth of the original movement-related activity was interrupted within the target-step reaction time and was replaced by activation of other neurons producing the compensated saccade. When the target stepped into a receptive field, visual neurons selected the new target location regardless of the monkeys' response. When the target stepped out of a receptive field, most visual neurons maintained the representation of the original target location, but a minority of visual neurons showed reduced activity. Chronometric analyses of the neural responses to the target step revealed that the modulation of visually responsive neurons and movement-related neurons occurred early enough to shift attention and saccade preparation from the old to the new target location. These findings indicate that visual activity in the frontal eye field signals the location of targets for orienting, whereas movement-related activity instantiates saccade preparation.
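
    The race-model account can be sketched as a competition between the original GO process and an interrupting process that completes one target-step reaction time (TSRT) after the target step; later steps leave the GO process more time to win the race, so compensation becomes less likely. The LATER-style rate distribution and all parameter values below are illustrative assumptions, not fits to the monkeys' data.

```python
import numpy as np

def race_model(step_delays, tsrt=90.0, mu=0.005, sigma=0.001, n=10000, seed=3):
    """Simulate a race between the original GO process and an interrupting
    process that starts when the target steps and finishes TSRT ms later.

    A noncompensated saccade (to the old location) is produced when GO
    reaches threshold before the interruption completes. GO finishing time
    is LATER-style: 1 / rate, with rate ~ N(mu, sigma) per trial.
    Returns the probability of a compensated saccade at each step delay.
    """
    rng = np.random.default_rng(seed)
    p_comp = []
    for d in step_delays:
        rates = rng.normal(mu, sigma, n)
        rates = rates[rates > 0]                 # discard nonsensical trials
        go_finish = 1.0 / rates                  # ms to reach threshold
        interrupted = go_finish > d + tsrt       # interruption wins the race
        p_comp.append(interrupted.mean())
    return np.array(p_comp)

delays = np.array([25, 75, 125, 175])            # ms between array onset and step
p = race_model(delays)
print(np.all(np.diff(p) < 0))                    # later steps -> less compensation
```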

  7. The association of visually-assessed quality of movement during jump-landing with ankle dorsiflexion range-of-motion and hip abductor muscle strength among healthy female athletes.

    PubMed

    Rabin, Alon; Einstein, Ofira; Kozol, Zvi

    2018-05-01

    To explore the association of ankle dorsiflexion (DF) range of motion (ROM) and hip abductor muscle strength with visually-assessed quality of movement during jump-landing. Cross-sectional. Gymnasium of participating teams. 37 female volleyball players. Quality of movement in the frontal-plane, sagittal-plane, and overall (both planes) was visually rated as "good/moderate" or "poor". Weight-bearing ankle DF ROM and hip abductor muscle strength were compared between participants with differing quality of movement. Weight-bearing DF ROM on both sides was decreased among participants with "poor" sagittal-plane quality of movement (dominant side: 50.8° versus 43.6°, P = .02; non-dominant side: 54.6° versus 45.9°, P = .01), as well as among participants with an overall "poor" quality of movement (dominant side: 51.8° versus 44.0°, P < .01; non-dominant side: 56.5° versus 45.1°, P < .01). Weight-bearing ankle DF on the non-dominant side was decreased among participants with a "poor" frontal-plane quality of movement (53.9° versus 46.0°, P = .02). No differences in hip abductor muscle strength were noted between participants with differing quality of movement. Visual assessment of jump-landing can detect differences in quality of movement that are associated with ankle DF ROM. Clinicians observing a poor quality of movement may wish to assess ankle DF ROM. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Psychogenic Tremor: A Video Guide to Its Distinguishing Features

    PubMed Central

    Thenganatt, Mary Ann; Jankovic, Joseph

    2014-01-01

    Background Psychogenic tremor is the most common psychogenic movement disorder. It has characteristic clinical features that can help distinguish it from other tremor disorders. There is no diagnostic gold standard and the diagnosis is based primarily on clinical history and examination. Despite proposed diagnostic criteria, the diagnosis of psychogenic tremor can be challenging. While there are numerous studies evaluating psychogenic tremor in the literature, there are no publications that provide a video/visual guide demonstrating the clinical characteristics of psychogenic tremor. Educating clinicians about psychogenic tremor will hopefully lead to earlier diagnosis and treatment. Methods We selected videos from the database at the Parkinson’s Disease Center and Movement Disorders Clinic at Baylor College of Medicine that illustrate classic findings supporting the diagnosis of psychogenic tremor. Results We include 10 clinical vignettes with accompanying videos that highlight characteristic clinical signs of psychogenic tremor including distractibility, variability, entrainability, suggestibility, and coherence. Discussion Psychogenic tremor should be considered in the differential diagnosis of patients presenting with tremor, particularly if it is of abrupt onset, intermittent, variable, and not congruous with organic tremor. The diagnosis of psychogenic tremor, however, should not be simply based on exclusion of organic tremor, such as essential, parkinsonian, or cerebellar tremor, but on positive criteria demonstrating characteristic features. Early recognition and management are critical for good long-term outcome. PMID:25243097

  9. fMRI evidence for sensorimotor transformations in human cortex during smooth pursuit eye movements.

    PubMed

    Kimmig, H; Ohlendorf, S; Speck, O; Sprenger, A; Rutschmann, R M; Haller, S; Greenlee, M W

    2008-01-01

    Smooth pursuit eye movements (SP) are driven by moving objects. The pursuit system processes the visual input signals and transforms this information into an oculomotor output signal. Despite the object's movement on the retina and the eyes' movement in the head, we are able to locate the object in space implying coordinate transformations from retinal to head and space coordinates. To test for the visual and oculomotor components of SP and the possible transformation sites, we investigated three experimental conditions: (I) fixation of a stationary target with a second target moving across the retina (visual), (II) pursuit of the moving target with the second target moving in phase (oculomotor), (III) pursuit of the moving target with the second target remaining stationary (visuo-oculomotor). Precise eye movement data were simultaneously measured with the fMRI data. Visual components of activation during SP were located in the motion-sensitive, temporo-parieto-occipital region MT+ and the right posterior parietal cortex (PPC). Motor components comprised more widespread activation in these regions and additional activations in the frontal and supplementary eye fields (FEF, SEF), the cingulate gyrus and precuneus. The combined visuo-oculomotor stimulus revealed additional activation in the putamen. Possible transformation sites were found in MT+ and PPC. The MT+ activation evoked by the motion of a single visual dot was very localized, while the activation of the same single dot motion driving the eye was rather extended across MT+. The eye movement information appeared to be dispersed across the visual map of MT+. This could be interpreted as a transfer of the one-dimensional eye movement information into the two-dimensional visual map. Potentially, the dispersed information could be used to remap MT+ to space coordinates rather than retinal coordinates and to provide the basis for a motor output control. A similar interpretation holds for our results in the PPC region.

  10. Non-Instrumental Movement Inhibition (NIMI) Differentially Suppresses Head and Thigh Movements during Screenic Engagement: Dependence on Interaction

    PubMed Central

    Witchel, Harry J.; Santos, Carlos P.; Ackah, James K.; Westling, Carina E. I.; Chockalingam, Nachiappan

    2016-01-01

    Background: Estimating engagement levels from postural micromovements has been summarized by some researchers as: increased proximity to the screen is a marker for engagement, while increased postural movement is a signal for disengagement or negative affect. However, these findings are inconclusive: the movement hypothesis challenges other findings of dyadic interaction in humans, and experiments on the positional hypothesis diverge from it. Hypotheses: (1) Under controlled conditions, adding a relevant visual stimulus to an auditory stimulus will preferentially result in Non-Instrumental Movement Inhibition (NIMI) of the head. (2) When instrumental movements are eliminated and computer-interaction rate is held constant, for two identically-structured stimuli, cognitive engagement (i.e., interest) will result in measurable NIMI of the body generally. Methods: Twenty-seven healthy participants were seated in front of a computer monitor and speakers. Discrete 3-min stimuli were presented with interactions mediated via a handheld trackball without any keyboard, to minimize instrumental movements of the participant's body. Music videos and audio-only music were used to test hypothesis (1). Time-sensitive, highly interactive stimuli were used to test hypothesis (2). Subjective responses were assessed via visual analog scales. The computer users' movements were quantified using video motion tracking from the lateral aspect. Repeated measures ANOVAs with Tukey post hoc comparisons were performed. Results: For two equivalently-engaging music videos, eliminating the visual content elicited significantly increased non-instrumental movements of the head (while also decreasing subjective engagement); a highly engaging user-selected piece of favorite music led to further increased non-instrumental movement. 
For two comparable reading tasks, the more engaging reading significantly inhibited (42%) movement of the head and thigh; however, when a highly engaging video game was compared to the boring reading, even though the reading task and the game had similar levels of interaction (trackball clicks), only thigh movement was significantly inhibited, not head movement. Conclusions: NIMI can be elicited by adding a relevant visual accompaniment to an audio-only stimulus or by making a stimulus cognitively engaging. However, these results presume that all other factors are held constant, because total movement rates can be affected by cognitive engagement, instrumental movements, visual requirements, and the time-sensitivity of the stimulus. PMID:26941666
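
    The lateral video motion tracking used here to quantify non-instrumental movement can be sketched as a simple frame-differencing index; the synthetic frames and the index definition below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def motion_index(frames):
    """Mean absolute inter-frame difference: a simple proxy for the
    amount of postural movement visible in a lateral video view."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))   # frame-to-frame change
    return diffs.mean(axis=(1, 2))            # one value per transition

# Synthetic example: 5 frames of an 8x8 "video"; one displaced frame.
rng = np.random.default_rng(0)
still = rng.random((8, 8))
frames = [still, still, still + 0.5, still, still]
m = motion_index(frames)
print(m.argmax())  # → 1: the transition into the displaced frame
```

    Summing or averaging such per-transition values over a stimulus period gives a per-condition movement rate of the kind compared across conditions with repeated measures ANOVAs.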

  12. [Cortical potentials evoked in response to a signal to make a memory-guided saccade].

    PubMed

    Slavutskaia, M V; Moiseeva, V V; Shul'govskiĭ, V V

    2010-01-01

    A difference in the parameters of visually guided and memory-guided saccades was shown. The increase in memory-guided saccade latency relative to that of visually guided saccades may reflect slower saccadic programming based on the retrieval of information from memory. Comparison of the parameters and topography of the N1 and P1 components of the evoked potential to the signal to make a memory- or visually guided saccade suggests that the early stage of saccade programming, associated with spatial information processing, is driven predominantly by a top-down attention mechanism before memory-guided saccades and by a bottom-up mechanism before visually guided saccades. The findings show that the increased latency of memory-guided saccades is connected with decision making at the central stage of saccade programming. We propose that wave N2, which develops in the middle of the latent period of memory-guided saccades, correlates with this process. The topography and spatial dynamics of components N1, P1, and N2 indicate that memory-guided saccade programming is controlled by the frontal mediothalamic system of selective attention and by left-hemisphere mechanisms of motor attention.

  13. Consolidation of visuomotor adaptation memory with consistent and noisy environments

    PubMed Central

    Maeda, Rodrigo S.; McGee, Steven E.

    2016-01-01

    Our understanding of how we learn and retain motor behaviors is still limited. For instance, there is conflicting evidence as to whether the memory of a learned visuomotor perturbation consolidates; i.e., the motor memory becomes resistant to interference from learning a competing perturbation over time. Here, we sought to determine the factors that influence consolidation during visually guided walking. Subjects learned a novel mapping relationship, created by prism lenses, between the perceived location of two targets and the motor commands necessary to direct the feet to their positions. Subjects relearned this mapping 1 wk later. Different groups experienced protocols with or without a competing mapping (and with and without washout trials), presented either on the same day as initial learning or before relearning on day 2. We tested identical protocols under constant and noisy mapping structures. In the latter, we varied, on a trial-by-trial basis, the strength of prism lenses around a non-zero mean. We found that a novel visuomotor mapping is retained at least 1 wk after initial learning. We also found reduced foot-placement error with relearning in constant and noisy mapping groups, despite learning a competing mapping beforehand, and with the exception of one protocol, with and without washout trials. Exposure to noisy mappings led to similar performance on relearning compared with the equivalent constant mapping groups for most protocols. Overall, our results support the idea of motor memory consolidation during visually guided walking and suggest that constant and noisy practices are effective for motor learning. NEW & NOTEWORTHY The adaptation of movement is essential for many daily activities. To interact with targets, this often requires learning the mapping to produce appropriate motor commands based on visual input. Here, we show that a novel visuomotor mapping is retained 1 wk after initial learning in a visually guided walking task. 
Furthermore, we find that this motor memory consolidates (i.e., becomes more resistant to interference from learning a competing mapping) when learning in constant and noisy mapping environments. PMID:27784800
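
    The constant versus noisy mapping structures (prism strength fixed, or varied trial-by-trial around a non-zero mean) can be sketched as perturbation schedules; the mean shift and noise level below are hypothetical values, not those used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, mean_shift_deg = 100, 10.0   # hypothetical prism strength

# Constant mapping: every trial applies the same visual shift.
constant = np.full(n_trials, mean_shift_deg)

# Noisy mapping: trial-by-trial strength drawn around a non-zero mean,
# as in the noisy-structure protocol (the spread is an assumption).
noisy = rng.normal(loc=mean_shift_deg, scale=2.0, size=n_trials)

print(constant.mean(), round(noisy.mean(), 1))
```

    Both schedules share the same mean perturbation, so any difference in relearning reflects the trial-to-trial variability rather than the average mapping.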

  14. Mental Imagery as Revealed by Eye Movements and Spoken Predicates: A Test of Neurolinguistic Programming.

    ERIC Educational Resources Information Center

    Elich, Matthew; And Others

    1985-01-01

    Tested Bandler and Grinder's proposal that eye movement direction and spoken predicates are indicative of sensory modality of imagery. Subjects reported images in the three modes, but no relation between imagery and eye movements or predicates was found. Visual images were most vivid and often reported. Most subjects rated themselves as visual,…

  15. Impaired Visual Motor Coordination in Obese Adults.

    PubMed

    Gaul, David; Mat, Arimin; O'Shea, Donal; Issartel, Johann

    2016-01-01

    Objective. To investigate whether obesity alters the sensory motor integration process and movement outcome during a visual rhythmic coordination task. Methods. 88 participants (44 obese and 44 matched control) sat on a chair equipped with a wrist pendulum oscillating in the sagittal plane. The task was to swing the pendulum in synchrony with a moving visual stimulus displayed on a screen. Results. Obese participants demonstrated significantly (p < 0.01) higher values for continuous relative phase (CRP), indicating a poorer level of coordination, increased movement variability (p < 0.05), and a larger amplitude (p < 0.05) than their healthy weight counterparts. Conclusion. These results highlight the existence of visual sensory integration deficiencies in obese participants. The obese group has greater difficulty in synchronizing their movement with a visual stimulus. Considering that visual motor coordination is an essential component of many activities of daily living, any impairment could significantly affect quality of life.
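
    Continuous relative phase between a limb movement and a visual driver is commonly computed from the analytic-signal phase of each series (a Hilbert-transform construction); a minimal NumPy sketch, with illustrative 1 Hz signals rather than the study's data:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert construction."""
    n = len(x)                # n assumed even here
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def continuous_relative_phase_deg(x, y):
    """CRP: unwrapped phase difference between two oscillatory signals."""
    px = np.unwrap(np.angle(analytic_signal(x - x.mean())))
    py = np.unwrap(np.angle(analytic_signal(y - y.mean())))
    return np.degrees(px - py)

t = np.linspace(0, 10, 2000, endpoint=False)
stimulus = np.sin(2 * np.pi * 1.0 * t)        # 1 Hz visual driver
pendulum = np.sin(2 * np.pi * 1.0 * t - 0.5)  # pendulum lags by 0.5 rad
crp = continuous_relative_phase_deg(pendulum, stimulus)
print(round(float(np.median(crp)), 1))        # near -28.6 deg
```

    A CRP that stays near a constant value indicates tight coordination; a larger spread of CRP over time corresponds to the poorer coordination reported for the obese group.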

  16. Effects of visual and verbal interaction on unintentional interpersonal coordination.

    PubMed

    Richardson, Michael J; Marsh, Kerry L; Schmidt, R C

    2005-02-01

    Previous research has demonstrated that people's movements can become unintentionally coordinated during interpersonal interaction. The current study sought to uncover the degree to which visual and verbal (conversation) interaction constrains and organizes the rhythmic limb movements of coactors. Two experiments were conducted in which pairs of participants completed an interpersonal puzzle task while swinging handheld pendulums with instructions that minimized intentional coordination but facilitated either visual or verbal interaction. Cross-spectral analysis revealed a higher degree of coordination for conditions in which the pairs were visually coupled. In contrast, verbal interaction alone was not found to provide a sufficient medium for unintentional coordination to occur, nor did it enhance the unintentional coordination that emerged during visual interaction. The results raise questions concerning differences between visual and verbal informational linkages during interaction and how these differences may affect interpersonal movement production and its coordination.
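
    The cross-spectral analysis used to quantify coordination between paired limb movements can be sketched as a segment-averaged coherence estimate; the sampling rate, shared rhythm, and noise levels below are illustrative assumptions, not the study's recordings:

```python
import numpy as np

def welch_coherence(a, b, fs, nperseg=256):
    """Magnitude-squared coherence via segment-averaged cross-spectra."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    segs = range(0, len(a) - nperseg + 1, step)
    A = np.array([np.fft.rfft(win * a[i:i + nperseg]) for i in segs])
    B = np.array([np.fft.rfft(win * b[i:i + nperseg]) for i in segs])
    Sxy = (A * np.conj(B)).mean(axis=0)
    Sxx = (np.abs(A) ** 2).mean(axis=0)
    Syy = (np.abs(B) ** 2).mean(axis=0)
    f = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return f, np.abs(Sxy) ** 2 / (Sxx * Syy)

fs = 60.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
common = np.sin(2 * np.pi * 1.0 * t)        # shared 1 Hz rhythm
a = common + 0.5 * rng.standard_normal(t.size)
b = common + 0.5 * rng.standard_normal(t.size)

f, cxy = welch_coherence(a, b, fs)
peak = cxy[np.argmin(np.abs(f - 1.0))]      # coherence near the shared rhythm
print(peak > 0.8)
```

    High coherence at the movement frequency marks a coordinated (here, visually coupled) pair; two series with no shared rhythm show coherence near zero at every frequency.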

  17. Role of the posterior parietal cortex in updating reaching movements to a visual target.

    PubMed

    Desmurget, M; Epstein, C M; Turner, R S; Prablanc, C; Alexander, G E; Grafton, S T

    1999-06-01

    The exact role of posterior parietal cortex (PPC) in visually directed reaching is unknown. We propose that, by building an internal representation of instantaneous hand location, PPC computes a dynamic motor error used by motor centers to correct the ongoing trajectory. With unseen right hands, five subjects pointed to visual targets that either remained stationary or moved during saccadic eye movements. Transcranial magnetic stimulation (TMS) was applied over the left PPC during target presentation. Stimulation disrupted path corrections that normally occur in response to target jumps, but had no effect on those directed at stationary targets. Furthermore, left-hand movement corrections were not blocked, ruling out visual or oculomotor effects of stimulation.

  18. The absence or temporal offset of visual feedback does not influence adaptation to novel movement dynamics.

    PubMed

    McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M

    2017-10-01

    Delays in transmitting and processing sensory information require correctly associating delayed feedback to issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. 
NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for altered movement dynamics are largely unknown. Here we examined the influence of 1) delayed and 2) removed visual feedback on the adaptation to novel movement dynamics. These results contribute to our understanding of the control strategies that compensate for movement errors when there is a temporal separation between motion state and sensory information. Copyright © 2017 the American Physiological Society.
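
    The delayed-cursor manipulation (a 0, 75, or 150 ms lag between hand motion and cursor motion) can be sketched as a fixed-length buffer; the 15 ms frame period is an assumed display rate, not a value from the study:

```python
from collections import deque

def make_delayed_feedback(delay_ms, frame_ms=15):
    """Return a function mapping the current hand position to the cursor
    position shown on screen, delayed by a fixed temporal offset."""
    n = max(1, round(delay_ms / frame_ms))   # delay in whole frames
    buf = deque(maxlen=n + 1)
    def step(hand_pos):
        buf.append(hand_pos)
        return buf[0]        # oldest stored sample: delayed by ~n frames
    return step

step = make_delayed_feedback(150, frame_ms=15)  # 10-frame lag
out = [step(x) for x in range(20)]
print(out[0], out[15])  # until the buffer fills, the oldest sample repeats
```

    The "continuous" condition corresponds to running such a buffer from the first trial; the "abrupt" condition introduces it together with the force-field perturbation.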

  19. Physical Education for Tomorrow.

    ERIC Educational Resources Information Center

    Ohio State Univ., Columbus. Center for Vocational and Technical Education.

    The learning experiences in the teacher's guide are built on the concept of movement exploration. Self-awareness is realized as students discover potentials for performing basic motor skills and explore creative movement. Intended for use at the preschool and primary levels, the guide suggests and describes ways for the teacher to introduce and…

  20. Shape of magnifiers affects controllability in children with visual impairment.

    PubMed

    Liebrand-Schurink, Joyce; Boonstra, F Nienke; van Rens, Ger H M B; Cillessen, Antonius H N; Meulenbroek, Ruud G J; Cox, Ralf F A

    2016-12-01

    This study aimed to examine the controllability of cylinder-shaped and dome-shaped magnifiers in young children with visual impairment. This study investigates goal-directed arm movements in low-vision aid use (stand and dome magnifier-like object) in a group of young children with visual impairment (n = 56) compared to a group of children with normal sight (n = 66). Children with visual impairment and children with normal sight aged 4-8 years executed two types of movements (cyclic and discrete) in two orientations (vertical or horizontal) over two distances (10 cm and 20 cm) with two objects resembling the size and shape of regularly prescribed stand and dome magnifiers. The visually impaired children performed slower movements than the normally sighted children. In both groups, the accuracy and speed of the reciprocal aiming movements improved significantly with age. Surprisingly, in both groups, the performance with the dome-shaped object was significantly faster (in the 10 cm condition and 20 cm condition with discrete movements) and more accurate (in the 20 cm condition) than with the stand-shaped object. From a controllability perspective, this study suggests that it is better to prescribe dome-shaped than cylinder-shaped magnifiers to young children with visual impairment. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  1. Cognitive-motor integration deficits in young adult athletes following concussion.

    PubMed

    Brown, Jeffrey A; Dalecki, Marc; Hughes, Cindy; Macpherson, Alison K; Sergio, Lauren E

    2015-01-01

    The ability to perform visually-guided motor tasks requires the transformation of visual information into programmed motor outputs. When the guiding visual information does not align spatially with the motor output, the brain applies rules to integrate the information for an appropriate motor response. Here, we look at how performance on such tasks is affected in young adult athletes with concussion history. Participants displaced a cursor from a central to peripheral targets on a vertical display by sliding their finger along a touch-sensitive screen in one of two spatial planes. The addition of a memory component, along with variations in cursor feedback, increased task complexity across conditions. Significant main effects between participants with concussion history and healthy controls were observed in timing and accuracy measures. Importantly, the deficits were distinctly more pronounced for participants with concussion history, especially when the brain had to control movements having two levels of decoupling between vision and action. A discriminant analysis correctly classified athletes with a history of concussion based on task performance with an accuracy of 94%, despite the majority of these athletes being rated asymptomatic by current standards. These findings correspond to our previous work with adults at risk of developing dementia, and support the use of cognitive-motor integration as an enhanced assessment tool for those who may have mild brain dysfunction. Such a task may provide a more sensitive metric of performance relevant to daily function than what is currently in use, to assist in return-to-play/work/learn decisions.
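
    A linear discriminant of the kind used to classify athletes from task performance can be sketched with Fisher's rule; the two features and their distributions below are entirely hypothetical, not the study's measures:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical per-athlete features: [movement time (s), path error (cm)]
controls = rng.normal([0.8, 1.0], 0.1, size=(20, 2))
concussed = rng.normal([1.1, 1.6], 0.1, size=(20, 2))

X = np.vstack([controls, concussed])
y = np.array([0] * 20 + [1] * 20)

# Fisher's linear discriminant: w = pooled-covariance^{-1} (mu1 - mu0),
# with the decision threshold at the projected midpoint of the means.
mu0, mu1 = controls.mean(0), concussed.mean(0)
pooled = (np.cov(controls.T) + np.cov(concussed.T)) / 2
w = np.linalg.solve(pooled, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2

pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(accuracy)
```

    In practice, a held-out or cross-validated accuracy, rather than this in-sample figure, is the appropriate estimate of classification performance.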

  2. HELIRADAR technology for helicopter all-weather operations

    NASA Astrophysics Data System (ADS)

    Kreitmair-Steck, Wolfgang; Braun, Guenter

    1997-06-01

    Currently available radar instruments are not capable of guiding a helicopter pilot safely during approach and landing under poor visibility conditions, owing to insufficient resolution and a lack of elevation information. The radar technology that promises to improve this situation is called ROSAR (Synthetic Aperture Radar based on ROtating antennas). In 1992 Eurocopter and Daimler-Benz Aerospace investigated the feasibility of an imaging radar based on ROSAR technology. The objective was to provide a video-like image with a resolution good enough to safely guide a helicopter pilot under poor visibility conditions. ROSAR proved to be especially well suited for this type of application since it allows for a stationary carrier platform: rotating arms with antennas integrated into their tips can be mounted on top of the rotor head. In this way the scanning region of the antennas can cover 360 degrees. While rotating, the antenna scans the environment from various visual angles without requiring movement of the carrier platform itself. The signal is then processed as a function of the rotation angle of the antenna's movement along a circular path. A radar system of this type, HeliRadar, is now under development at Eurocopter and Daimler-Benz Aerospace. HeliRadar is designed as a frequency-modulated continuous-wave radar operating in a frequency band around 35 GHz. The complete transmitter/receiver system is fixed-mounted on top of the rotating axis of the helicopter. The received signals are transferred through the center of the rotor axis down into the cabin of the helicopter, where they are processed in a high-performance digital signal processor (processing power: 10 GFLOPS). First encouraging results have been obtained from an experiment with 'slow motion' movement of the antenna arm.
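
    For a frequency-modulated continuous-wave (FMCW) radar such as HeliRadar, target range follows from the measured beat frequency via the standard relation R = c·f_b·T / (2B); the bandwidth and sweep time below are illustrative assumptions, as the abstract gives only the 35 GHz carrier band:

```python
C = 3.0e8          # speed of light (m/s)
B = 150e6          # sweep bandwidth (Hz) -- assumed, not from the text
T = 1e-3           # sweep duration (s)   -- assumed

def beat_to_range(f_beat_hz):
    """Range of a stationary target from the FMCW beat frequency."""
    return C * f_beat_hz * T / (2 * B)

print(beat_to_range(100e3))  # 100 kHz beat → 100.0 m
```

    Range resolution in this scheme is set by the bandwidth alone (c / 2B, here 1 m), which is why imaging radars push toward wide sweeps.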

  3. Infantile nystagmus adapts to visual demand.

    PubMed

    Wiggins, Debbie; Woodhouse, J Margaret; Margrain, Tom H; Harris, Christopher M; Erichsen, Jonathan T

    2007-05-01

    To determine the effect of visual demand on the nystagmus waveform. Individuals with infantile nystagmus syndrome (INS) commonly report that making an effort to see can intensify their nystagmus and adversely affect vision. However, such an effect has never been confirmed experimentally. The eye movement behavior of 11 subjects with INS was recorded at different gaze angles while the subjects viewed visual targets under two conditions: above and then at resolution threshold. Eye movements were recorded by infrared oculography, and visual acuity (VA) was measured using Landolt C targets and a two-alternative forced-choice (2AFC) staircase procedure. Eye movement data were analyzed at the null zone for changes in amplitude, frequency, intensity, and foveation characteristics. Waveform type was also noted under the two conditions. Data from the 11 subjects revealed a significant reduction in nystagmus amplitude (P < 0.05), frequency (P < 0.05), and intensity (P < 0.01) when target size was at visual threshold. The percentage of time the eye spent within the low-velocity window (i.e., foveation) significantly increased when target size was at visual threshold (P < 0.05). Furthermore, a change in waveform type with increased visual demand was exhibited by two subjects. The results indicate that increased visual demand modifies the nystagmus waveform favorably (and possibly adaptively), producing a significant reduction in nystagmus intensity and prolonged foveation. These findings contradict previous anecdotal reports that visual effort intensifies the nystagmus eye movement at the cost of visual performance. This discrepancy may be attributable to the lack of psychological stress involved in the visual task reported here. This is consistent with the suggestion that it is the visual importance of the task to the individual, rather than visual demand per se, which exacerbates INS.
Further studies are needed to investigate quantitatively the effects of stress and psychological factors on INS waveforms.

  4. Subthalamic nucleus detects unnatural android movement.

    PubMed

    Ikeda, Takashi; Hirata, Masayuki; Kasaki, Masashi; Alimardani, Maryam; Matsushita, Kojiro; Yamamoto, Tomoyuki; Nishio, Shuichi; Ishiguro, Hiroshi

    2017-12-19

    An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.

  5. Airway management of patients with traumatic brain injury/C-spine injury

    PubMed Central

    2015-01-01

    Traumatic brain injury (TBI) is often accompanied by cervical spine (C-spine) injury, so the possibility of C-spine injury must always be considered when performing endotracheal intubation in these patients. Rapid sequence intubation is recommended, with an adequate sedative or analgesic and a muscle relaxant, to prevent an increase in intracranial pressure during intubation in TBI patients. Normocapnia and mild hyperoxemia should be maintained to prevent secondary brain injury. The manual in-line stabilization (MILS) technique effectively lessens C-spine movement during intubation. However, the MILS technique can reduce mouth opening and lead to a poor laryngoscopic view. Newly introduced video laryngoscopes can manage these problems. The AirWay Scope® (AWS) and AirTraq laryngoscopes decreased extension movement of the C-spine at the occiput-C1 and C2-C4 levels, improving intubation conditions and shortening the time to complete tracheal intubation compared with a direct laryngoscope. The Glidescope® also decreased cervical movement at the C2-C5 levels during intubation and improved vocal cord visualization, but a longer duration was required to complete intubation compared with other devices. A lightwand also reduced cervical motion across all segments. Fiberoptic bronchoscope-guided nasal intubation is the best method to reduce cervical movement, but a skilled operator is required. In conclusion, a video laryngoscope assists airway management in TBI patients with C-spine injury. PMID:26045922

  6. Assisting Movement Training and Execution With Visual and Haptic Feedback.

    PubMed

    Ewerton, Marco; Rother, David; Weimar, Jakob; Kollegger, Gerrit; Wiemeyer, Josef; Peters, Jan; Maeda, Guilherme

    2018-01-01

    In the practice of motor skills in general, errors in the execution of movements may go unnoticed when a human instructor is not available. In this case, a computer system or robotic device able to detect movement errors and propose corrections would be of great help. This paper addresses the problem of how to detect such execution errors and how to provide feedback to the human to correct his/her motor skill using a general, principled methodology based on imitation learning. The core idea is to compare the observed skill with a probabilistic model learned from expert demonstrations. The intensity of the feedback is regulated by the likelihood of the model given the observed skill. Based on demonstrations, our system can, for example, detect errors in the writing of characters with multiple strokes. Moreover, by using a haptic device, the Haption Virtuose 6D, we demonstrate a method to generate haptic feedback based on a distribution over trajectories, which could be used as an auxiliary means of communication between an instructor and an apprentice. Additionally, given a performance measurement, the haptic device can help the human discover and perform better movements to solve a given task. In this case, the human first tries a few times to solve the task without assistance. Our framework, in turn, uses a reinforcement learning algorithm to compute haptic feedback, which guides the human toward better solutions.
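
    The core idea, comparing an observed skill with a probabilistic model learned from expert demonstrations and regulating feedback intensity by how unlikely the observation is, can be sketched per timestep; the independent-Gaussian trajectory model and the scaling rule are simplified assumptions, not the paper's exact formulation:

```python
import numpy as np

# Expert demonstrations: 20 noisy repetitions of the same movement.
rng = np.random.default_rng(7)
T = 50
demos = np.sin(np.linspace(0, np.pi, T)) + 0.05 * rng.standard_normal((20, T))

mu = demos.mean(axis=0)           # mean expert trajectory
sigma = demos.std(axis=0) + 1e-6  # per-timestep expert variability

def feedback_intensity(observed):
    """Scale corrective feedback by the standardized deviation of the
    observed movement from the demonstration model, per timestep."""
    z = (observed - mu) / sigma            # deviation in expert SDs
    return np.clip(np.abs(z) / 3.0, 0, 1)  # 0 = expert-like, 1 = large error

good = mu.copy()
bad = mu + 0.5                     # systematic error at every timestep
print(feedback_intensity(good).max(), feedback_intensity(bad).min())
```

    Tying feedback to the model's per-timestep variability means corrections stay gentle where experts themselves vary, and firm only where the demonstrations agree closely.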

  7. Are there right hemisphere contributions to visually-guided movement? Manipulating left hand reaction time advantages in dextrals.

    PubMed

    Carey, David P; Otto-de Haart, E Grace; Buckingham, Gavin; Dijkerman, H Chris; Hargreaves, Eric L; Goodale, Melvyn A

    2015-01-01

    Many studies have argued for distinct but complementary contributions from each hemisphere in the control of movements to visual targets. Investigators have attempted to extend observations from patients with unilateral left- and right-hemisphere damage, to those using neurologically-intact participants, by assuming that each hand has privileged access to the contralateral hemisphere. Previous attempts to illustrate right hemispheric contributions to the control of aiming have focussed on increasing the spatial demands of an aiming task, to attenuate the typical right hand advantages, to try to enhance a left hand reaction time advantage in right-handed participants. These early attempts have not been successful. The present study circumnavigates some of the theoretical and methodological difficulties of some of the earlier experiments, by using three different tasks linked directly to specialized functions of the right hemisphere: bisecting, the gap effect, and visuospatial localization. None of these tasks were effective in reducing the magnitude of left hand reaction time advantages in right handers. Results are discussed in terms of alternatives to right hemispheric functional explanations of the effect, the one-dimensional nature of our target arrays, power and precision given the size of the left hand RT effect, and the utility of examining the proportions of participants who show these effects, rather than exclusive reliance on measures of central tendency and their associated null hypothesis significance tests.

  9. Qualitative evaluation of water displacement in simulated analytical breaststroke movements.

    PubMed

    Martens, Jonas; Daly, Daniel

    2012-05-01

One purpose of evaluating a swimmer is to establish the individualized optimal technique. A swimmer's particular body structure and the resulting movement pattern will cause the surrounding water to react in differing ways. Consequently, an assessment method based on flow visualization was developed complementary to movement analysis and body structure quantification. A fluorescent dye was used to make the water displaced by the body visible on video. To examine the hypothesis on the propulsive mechanisms applied in breaststroke swimming, we analyzed the movements of the surrounding water during 4 analytical breaststroke movements using the flow visualization technique.

  10. Visual Arts: A Guide to Curriculum Development in the Arts.

    ERIC Educational Resources Information Center

    Iowa State Dept. of Public Instruction, Des Moines.

    This visual arts curriculum guide was developed as a subset of a model curriculum for the arts as mandated by the Iowa legislature. It is designed to be used in conjunction with the Visual Arts in Iowa Schools (VAIS). The guide is divided into six sections: Sections one and two contain the preface, acknowledgements, and a list of members of the…

  11. Parkinson’s disease patients show impaired corrective grasp control and eye-hand coupling when reaching to grasp virtual objects

    PubMed Central

    Lukos, Jamie R.; Snider, Joseph; Hernandez, Manuel E.; Tunik, Eugene; Hillyard, Steven; Poizner, Howard

    2013-01-01

The effect of Parkinson’s disease on hand-eye coordination and corrective response control during reach-to-grasp tasks remains unclear. Moderately impaired Parkinson’s disease patients (PD, n=9) and age-matched controls (n=12) reached to and grasped a virtual rectangular object, with haptic feedback provided to the thumb and index fingertip by two 3-degree-of-freedom manipulanda. The object rotated unexpectedly on a minority of trials, requiring subjects to adjust their grasp aperture. On half the trials, visual feedback of finger positions disappeared during the initial phase of the reach, when feedforward mechanisms are known to guide movement. PD patients were tested without (OFF) and with (ON) medication to investigate the effects of dopamine depletion and repletion on eye-hand coordination and online corrective response control. We quantified eye-hand coordination by monitoring hand kinematics and eye position during the reach. We hypothesized that if the basal ganglia are important for eye-hand coordination and online corrections to object perturbations, then PD patients tested OFF medication would show reduced eye-hand spans and impoverished arm-hand coordination responses to the perturbation, which would be further exacerbated when visual feedback of the hand was removed. Strikingly, PD patients tracked their hands with their gaze, and their movements became destabilized when they had to make online corrective responses to object perturbations, exhibiting pauses and changes in movement direction. These impairments largely remained even when tested in the ON state, despite significant improvement on the Unified Parkinson’s Disease Rating Scale. Our findings suggest that basal ganglia-cortical loops are essential for mediating eye-hand coordination and adaptive online responses for reach-to-grasp movements, and that restoration of tonic levels of dopamine may not be adequate to remediate this coordinative aspect of basal ganglia-modulated function. PMID:24056196

  12. Neural representations of contextual guidance in visual search of real-world scenes.

    PubMed

    Preston, Tim J; Guo, Fei; Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P

    2013-05-01

    Exploiting scene context and object-object co-occurrence is critical in guiding eye movements and facilitating visual search, yet the mediating neural mechanisms are unknown. We used functional magnetic resonance imaging while observers searched for target objects in scenes and used multivariate pattern analyses (MVPA) to show that the lateral occipital complex (LOC) can predict the coarse spatial location of observers' expectations about the likely location of 213 different targets absent from the scenes. In addition, we found weaker but significant representations of context location in an area related to the orienting of attention (intraparietal sulcus, IPS) as well as a region related to scene processing (retrosplenial cortex, RSC). Importantly, the degree of agreement among 100 independent raters about the likely location to contain a target object in a scene correlated with LOC's ability to predict the contextual location while weaker but significant effects were found in IPS, RSC, the human motion area, and early visual areas (V1, V3v). When contextual information was made irrelevant to observers' behavioral task, the MVPA analysis of LOC and the other areas' activity ceased to predict the location of context. Thus, our findings suggest that the likely locations of targets in scenes are represented in various visual areas with LOC playing a key role in contextual guidance during visual search of objects in real scenes.
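    As a toy illustration of the MVPA logic described above (decoding a coarse spatial label from distributed response patterns), a nearest-centroid classifier can be sketched in a few lines. This is not the authors' actual fMRI pipeline; the three-voxel patterns and "left"/"right" labels below are invented for illustration.

```python
# Toy sketch of multivariate pattern analysis (MVPA): decode a coarse
# contextual location ("left" vs "right") from simulated voxel patterns
# with a nearest-centroid classifier. Illustrative only; not the
# authors' actual fMRI pipeline.

def centroid(patterns):
    """Element-wise mean of a list of equal-length voxel vectors."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def distance(a, b):
    """Euclidean distance between two voxel vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train(labeled_patterns):
    """labeled_patterns: dict mapping label -> list of voxel vectors."""
    return {label: centroid(ps) for label, ps in labeled_patterns.items()}

def predict(model, pattern):
    """Assign the label whose centroid is closest to the test pattern."""
    return min(model, key=lambda label: distance(model[label], pattern))

# Simulated training data: 3-voxel patterns biased by context location.
training = {
    "left":  [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0], [1.1, 0.1, 0.2]],
    "right": [[0.1, 0.2, 1.0], [0.0, 0.3, 0.9], [0.2, 0.1, 1.1]],
}
model = train(training)
print(predict(model, [0.95, 0.2, 0.1]))  # a left-biased test pattern
```

    Real MVPA studies use cross-validated classifiers over hundreds of voxels, but the decoding principle is the same: above-chance prediction implies the region carries information about the decoded variable.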

  13. Residual attention guidance in blindsight monkeys watching complex natural scenes.

    PubMed

    Yoshida, Masatoshi; Itti, Laurent; Berg, David J; Ikeda, Takuro; Kato, Rikako; Takaura, Kana; White, Brian J; Munoz, Douglas P; Isa, Tadashi

    2012-08-07

    Patients with damage to primary visual cortex (V1) demonstrate residual performance on laboratory visual tasks despite denial of conscious seeing (blindsight) [1]. After a period of recovery, which suggests a role for plasticity [2], visual sensitivity higher than chance is observed in humans and monkeys for simple luminance-defined stimuli, grating stimuli, moving gratings, and other stimuli [3-7]. Some residual cognitive processes including bottom-up attention and spatial memory have also been demonstrated [8-10]. To date, little is known about blindsight with natural stimuli and spontaneous visual behavior. In particular, is orienting attention toward salient stimuli during free viewing still possible? We used a computational saliency map model to analyze spontaneous eye movements of monkeys with blindsight from unilateral ablation of V1. Despite general deficits in gaze allocation, monkeys were significantly attracted to salient stimuli. The contribution of orientation features to salience was nearly abolished, whereas contributions of motion, intensity, and color features were preserved. Control experiments employing laboratory stimuli confirmed the free-viewing finding that lesioned monkeys retained color sensitivity. Our results show that attention guidance over complex natural scenes is preserved in the absence of V1, thereby directly challenging theories and models that crucially depend on V1 to compute the low-level visual features that guide attention. Copyright © 2012 Elsevier Ltd. All rights reserved.
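    The saliency-map analysis referenced above rests on computing local feature contrast. A heavily simplified intensity-only sketch (a crude stand-in for the full Itti-style model, which also uses color, orientation, and motion channels across multiple scales; the scene values are invented):

```python
# Toy center-surround intensity saliency on a small grayscale grid:
# each cell's salience is |its intensity - mean of its 8 neighbors|.
# A crude stand-in for the full multi-channel, multi-scale model.

def saliency_map(img):
    h, w = len(img), len(img[0])
    sal = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))
                     if (j, i) != (y, x)]
            sal[y][x] = abs(img[y][x] - sum(neigh) / len(neigh))
    return sal

# A uniform scene with one bright "odd-one-out" item at row 2, column 3.
scene = [[0.1] * 5 for _ in range(5)]
scene[2][3] = 1.0
sal = saliency_map(scene)
peak = max((v, (y, x)) for y, row in enumerate(sal) for x, v in enumerate(row))
print(peak[1])  # location of the most salient cell
```

    Comparing where such a map peaks against where gaze actually lands is, in outline, how salience-guided orienting is quantified in free-viewing studies.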

  14. Multisensory guidance of orienting behavior.

    PubMed

    Maier, Joost X; Groh, Jennifer M

    2009-12-01

We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual system encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.

  15. "Making Learning Easy and Enjoyable:" Anna Verona Dorris and the Visual Instruction Movement, 1918-1928

    ERIC Educational Resources Information Center

    Johnson, Wendell G.

    2008-01-01

    The visual instruction movement was a constituent part of the field of visual education, which began in the early 1900s. With the further development of sound films and radio, it became audiovisual education; by the 1950s the field was known as instructional technology and today is often labeled educational technology (Butler, 1995). Anna Verona…

  16. Language-Mediated Eye Movements in the Absence of a Visual World: The "Blank Screen Paradigm"

    ERIC Educational Resources Information Center

    Altmann, Gerry T. M.

    2004-01-01

    The "visual world paradigm" typically involves presenting participants with a visual scene and recording eye movements as they either hear an instruction to manipulate objects in the scene or as they listen to a description of what may happen to those objects. In this study, participants heard each target sentence only after the corresponding…

  17. Visual search for verbal material in patients with obsessive-compulsive disorder.

    PubMed

    Botta, Fabiano; Vibert, Nicolas; Harika-Germaneau, Ghina; Frasca, Mickaël; Rigalleau, François; Fakra, Eric; Ros, Christine; Rouet, Jean-François; Ferreri, Florian; Jaafari, Nematollah

    2018-06-01

    This study aimed at investigating attentional mechanisms in obsessive-compulsive disorder (OCD) by analysing how visual search processes are modulated by normal and obsession-related distracting information in OCD patients and whether these modulations differ from those observed in healthy people. OCD patients were asked to search for a target word within distractor words that could be orthographically similar to the target, semantically related to the target, semantically related to the most typical obsessions/compulsions observed in OCD patients, or unrelated to the target. Patients' performance and eye movements were compared with those of individually matched healthy controls. In controls, the distractors that were visually similar to the target mostly captured attention. Conversely, patients' attention was captured equally by all kinds of distractor words, whatever their similarity with the target, except obsession-related distractors that attracted patients' attention less than the other distractors. OCD had a major impact on the mostly subliminal mechanisms that guide attention within the search display, but had much less impact on the distractor rejection processes that take place when a distractor is fixated. Hence, visual search in OCD is characterized by abnormal subliminal, but not supraliminal, processing of obsession-related information and by an impaired ability to inhibit task-irrelevant inputs. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Fixational Eye Movements in the Earliest Stage of Metazoan Evolution

    PubMed Central

    Bielecki, Jan; Høeg, Jens T.; Garm, Anders

    2013-01-01

    All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems and their eyes are assumedly subject to the same adaptive problems as the vertebrate eye, but lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements have evolved at the earliest metazoan stage to compensate for the intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye it circumvents the need for post processing in the central nervous system to remove image blur. PMID:23776673

  19. Fixational eye movements in the earliest stage of metazoan evolution.

    PubMed

    Bielecki, Jan; Høeg, Jens T; Garm, Anders

    2013-01-01

    All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems and their eyes are assumedly subject to the same adaptive problems as the vertebrate eye, but lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements have evolved at the earliest metazoan stage to compensate for the intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye it circumvents the need for post processing in the central nervous system to remove image blur.

  20. The Association Between Visual Assessment of Quality of Movement and Three-Dimensional Analysis of Pelvis, Hip, and Knee Kinematics During a Lateral Step Down Test.

    PubMed

    Rabin, Alon; Portnoy, Sigal; Kozol, Zvi

    2016-11-01

    Rabin, A, Portnoy, S, and Kozol, Z. The association between visual assessment of quality of movement and three-dimensional analysis of pelvis, hip, and knee kinematics during a lateral step down test. J Strength Cond Res 30(11): 3204-3211, 2016. Altered movement patterns including contralateral pelvic drop, increased hip adduction, knee abduction, and external rotation have been previously implicated in several lower extremity pathologies. Although various methods exist for assessing movement patterns, real-time visual observation is the most readily available method. The purpose of this study was to determine whether differing visual ratings of trunk, pelvis, and knee alignment, as well as overall quality of movement, are associated with differences in 3-dimensional trunk, pelvis, hip, or knee kinematics during a lateral step down test. Trunk, pelvis, and knee alignment of 30 healthy participants performing the lateral step down were visually rated as "good" or "faulty" based on previously established criteria. An additional categorization of overall quality of movement as either good or moderate was performed based on the aggregate score of each individual rating criterion. Three-dimensional motion analysis of trunk, pelvis, hip, and knee kinematics was simultaneously performed. A faulty pelvis alignment displayed a greater peak contralateral pelvic drop (effect size [ES] = 1.65, p < 0.01) and a greater peak hip adduction (ES = 1.04, p = 0.01) compared with participants with a good pelvis alignment. Participants with a faulty knee alignment displayed greater peak knee external rotation compared with participants with a good knee alignment (ES = 0.78, p = 0.02). Participants with an overall moderate quality of movement displayed increased peak contralateral pelvic drop (ES = 1.07, p = 0.01) and peak knee external rotation (ES = 0.72, p = 0.04) compared with those with an overall good quality of movement. Visual rating of quality of movement during a lateral step down test, as performed by an experienced physical therapist, is associated with differences in several kinematics previously implicated in various pathologies.
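    The ES values above are standardized effect sizes. A common form, Cohen's d with a pooled standard deviation, can be sketched as follows; the sample angle data are invented for illustration and are not the study's measurements.

```python
# Cohen's d with a pooled standard deviation: a common standardized
# effect size of the kind reported above (ES). Sample data invented.
import math

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    # Unbiased sample variances (n - 1 denominator).
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# E.g., hypothetical peak pelvic-drop angles (degrees) for participants
# rated "faulty" vs "good":
faulty = [8.1, 9.4, 7.6, 10.2, 8.8]
good = [4.9, 5.6, 6.1, 4.4, 5.2]
print(round(cohens_d(faulty, good), 2))
```

    By convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 or more large, which is why the ES values above (0.72 to 1.65) are treated as meaningful group differences.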

  1. Both hand position and movement direction modulate visual attention

    PubMed Central

    Festman, Yariv; Adam, Jos J.; Pratt, Jay; Fischer, Martin H.

    2013-01-01

    The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention. PMID:24098288

  2. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness, and restrictions of each of the above methods. Our performance tests are conducted in two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame skipping.

  3. Eye-Hand Synergy and Intermittent Behaviors during Target-Directed Tracking with Visual and Non-visual Information

    PubMed Central

    Huang, Chien-Ting; Hwang, Ing-Shiou

    2012-01-01

    Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498
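    One way to operationalize the movement intermittency mentioned above is to count "speed pulses" in the hand-speed profile; smoother tracking yields fewer pulses. This is an illustrative definition only (the study's exact metric may differ), and the sample speed profiles are invented.

```python
# Counting "speed pulses" (strict local maxima in a sampled hand-speed
# profile) as a simple movement-intermittency index. Illustrative
# definition; real analyses typically low-pass filter the signal first.

def speed_pulses(speeds):
    """Number of strict local maxima in a sampled speed profile."""
    return sum(1 for i in range(1, len(speeds) - 1)
               if speeds[i - 1] < speeds[i] > speeds[i + 1])

smooth = [0.0, 0.5, 1.0, 1.2, 1.0, 0.5, 0.0]   # one broad speed peak
jerky = [0.0, 0.8, 0.3, 0.9, 0.2, 1.0, 0.1]    # repeated corrective pulses
print(speed_pulses(smooth), speed_pulses(jerky))
```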

  4. Effect of light on the activity of motor cortex neurons during locomotion

    PubMed Central

    Armer, Madison C.; Nilaweera, Wijitha U.; Rivers, Trevor J.; Dasgupta, Namrata M.; Beloozerova, Irina N.

    2013-01-01

    The motor cortex plays a critical role in accurate visually guided movements such as reaching and target stepping. However, the manner in which vision influences the movement-related activity of neurons in the motor cortex is not well understood. In this study we have investigated how the locomotion-related activity of neurons in the motor cortex is modified when subjects switch between walking in the darkness and in light. Three adult cats were trained to walk through corridors of an experimental chamber for a food reward. On randomly selected trials, lights were extinguished for approximately four seconds when the cat was in a straight portion of the chamber's corridor. Discharges of 146 neurons from layer V of the motor cortex, including 51 pyramidal tract cells (PTNs), were recorded and compared between light and dark conditions. It was found that while cats’ movements during locomotion in light and darkness were similar (as judged from the analysis of three-dimensional limb kinematics and the activity of limb muscles), the firing behavior of 49% (71/146) of neurons was different between the two walking conditions. This included differences in the mean discharge rate (19%, 28/146 of neurons), depth of stride-related frequency modulation (24%, 32/131), duration of the period of elevated firing ([PEF], 19%, 25/131), and number of PEFs among stride-related neurons (26%, 34/131). 20% of responding neurons exhibited more than one type of change. We conclude that visual input plays a very significant role in determining neuronal activity in the motor cortex during locomotion by altering one, or occasionally multiple, parameters of locomotion-related discharges of its neurons. PMID:23680161

  5. Effect of light on the activity of motor cortex neurons during locomotion.

    PubMed

    Armer, Madison C; Nilaweera, Wijitha U; Rivers, Trevor J; Dasgupta, Namrata M; Beloozerova, Irina N

    2013-08-01

    The motor cortex plays a critical role in accurate visually guided movements such as reaching and target stepping. However, the manner in which vision influences the movement-related activity of neurons in the motor cortex is not well understood. In this study we have investigated how the locomotion-related activity of neurons in the motor cortex is modified when subjects switch between walking in the darkness and in light. Three adult cats were trained to walk through corridors of an experimental chamber for a food reward. On randomly selected trials, lights were extinguished for approximately 4s when the cat was in a straight portion of the chamber's corridor. Discharges of 146 neurons from layer V of the motor cortex, including 51 pyramidal tract cells (PTNs), were recorded and compared between light and dark conditions. It was found that while cats' movements during locomotion in light and darkness were similar (as judged from the analysis of three-dimensional limb kinematics and the activity of limb muscles), the firing behavior of 49% (71/146) of neurons was different between the two walking conditions. This included differences in the mean discharge rate (19%, 28/146 of neurons), depth of stride-related frequency modulation (24%, 32/131), duration of the period of elevated firing ([PEF], 19%, 25/131), and number of PEFs among stride-related neurons (26%, 34/131). 20% of responding neurons exhibited more than one type of change. We conclude that visual input plays a very significant role in determining neuronal activity in the motor cortex during locomotion by altering one, or occasionally multiple, parameters of locomotion-related discharges of its neurons. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  6. The influence of clutter on real-world scene search: evidence from search efficiency and eye movements.

    PubMed

    Henderson, John M; Chanceaux, Myriam; Smith, Tim J

    2009-01-23

    We investigated the relationship between visual clutter and visual search in real-world scenes. Specifically, we investigated whether visual clutter, indexed by feature congestion, sub-band entropy, and edge density, correlates with search performance as assessed both by traditional behavioral measures (response time and error rate) and by eye movements. Our results demonstrate that clutter is related to search performance. These results hold for both traditional search measures and for eye movements. The results suggest that clutter may serve as an image-based proxy for search set size in real-world scenes.
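    Of the three clutter indices listed above, edge density is the simplest to sketch: the proportion of pixels whose local gradient magnitude exceeds a threshold. The toy images and the threshold value below are invented; production implementations typically use a Canny edge detector over photographs.

```python
# Edge density as a clutter proxy: fraction of pixel positions whose
# gradient magnitude (simple forward differences) exceeds a threshold.
# Illustrative sketch only.
import math

def edge_density(img, threshold=0.5):
    h, w = len(img), len(img[0])
    edges = 0
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]  # horizontal difference
            gy = img[y + 1][x] - img[y][x]  # vertical difference
            if math.hypot(gx, gy) > threshold:
                edges += 1
    return edges / ((h - 1) * (w - 1))

flat_scene = [[0.5] * 6 for _ in range(6)]                        # no edges
busy_scene = [[(x + y) % 2 for x in range(6)] for y in range(6)]  # checkerboard
print(edge_density(flat_scene), edge_density(busy_scene))
```

    Correlating such a per-image score with response times or fixation counts is, in outline, how clutter can serve as an image-based proxy for search set size.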

  7. Visual cues that are effective for contextual saccade adaptation

    PubMed Central

    Azadi, Reza

    2014-01-01

The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: movements of different amplitude can be produced in response to the same retinal displacement, depending on motor contexts such as orbital starting location. There is conflicting evidence as to whether purely visual cues also affect contextual saccade adaptation and, if so, what function this might serve. We tested which visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test whether the movement or context postsaccade was critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. PMID:24647429

  8. Distinct eye movement patterns enhance dynamic visual acuity.

    PubMed

    Palidis, Dimitrios J; Wyder-Hodge, Pearson A; Fooken, Jolande; Spering, Miriam

    2017-01-01

Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements, yet, it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics (eye latency, acceleration, velocity gain, position error) and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns (minimizing eye position error, tracking smoothly, and inhibiting reverse saccades) were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA.
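    Of the pursuit kinematics listed above, velocity gain is conventionally the ratio of eye velocity to target velocity during steady-state pursuit, with a gain near 1.0 indicating the eye keeps up with the target. A minimal sketch over sampled positions (the sampling rate and position traces are invented):

```python
# Smooth-pursuit velocity gain: mean eye velocity divided by mean
# target velocity over a steady-state window. Sample data invented.

def velocities(positions, dt):
    """Finite-difference velocities from positions sampled every dt seconds."""
    return [(b - a) / dt for a, b in zip(positions, positions[1:])]

def velocity_gain(eye_pos, target_pos, dt):
    eye_v = velocities(eye_pos, dt)
    tgt_v = velocities(target_pos, dt)
    return (sum(eye_v) / len(eye_v)) / (sum(tgt_v) / len(tgt_v))

dt = 0.01  # 100 Hz sampling
target = [10.0 * dt * i for i in range(11)]  # target moves at 10 deg/s
eye = [9.0 * dt * i for i in range(11)]      # eye lags slightly at 9 deg/s
print(round(velocity_gain(eye, target, dt), 2))
```

    Position error at each sample is simply the eye-minus-target difference; the study relates small position error and high, smooth gain to better DVA performance.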

  9. Distinct eye movement patterns enhance dynamic visual acuity

    PubMed Central

    Palidis, Dimitrios J.; Wyder-Hodge, Pearson A.; Fooken, Jolande; Spering, Miriam

    2017-01-01

    Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements, yet, it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics—eye latency, acceleration, velocity gain, position error—and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns—minimizing eye position error, tracking smoothly, and inhibiting reverse saccades—were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA. PMID:28187157

  10. Subjective Estimation of Task Time and Task Difficulty of Simple Movement Tasks.

    PubMed

    Chan, Alan H S; Hoffmann, Errol R

    2017-01-01

It has been demonstrated in previous work that the same neural structures are used for both imagined and real movements. To provide a strong test of the similarity of imagined and actual movement times, 4 simple movement tasks were used to determine the relationship between estimated task time and actual movement time. The tasks were single-component visually controlled movements, 2-component visually controlled, low index of difficulty (ID) moves and pin-to-hole transfer movements. For each task there was good correspondence between the mean estimated times and actual movement times. In all cases, the same factors determined the actual and estimated movement times: the amplitudes of movement and the IDs of the component movements; however, the contribution of each of these variables differed for the imagined and real tasks. Generally, the standard deviations of the estimated times were linearly related to the estimated time values. Overall, the data provide strong evidence for the same neural structures being used for both imagined and actual movements.
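    The index of difficulty referenced above comes from Fitts' law, which models movement time as a linear function of ID, where ID = log2(2A/W) for movement amplitude A and target width W. A short sketch; the intercept and slope coefficients below are invented for illustration, not fitted values from this study.

```python
# Fitts' law sketch: index of difficulty ID = log2(2A / W) and a linear
# movement-time model MT = a + b * ID. Coefficients a, b are invented.
import math

def index_of_difficulty(amplitude, width):
    """ID in bits for a move of given amplitude to a target of given width."""
    return math.log2(2 * amplitude / width)

def movement_time(amplitude, width, a=0.10, b=0.15):
    """Predicted movement time in seconds (hypothetical coefficients)."""
    return a + b * index_of_difficulty(amplitude, width)

# Doubling amplitude at fixed width adds exactly one bit of difficulty:
print(index_of_difficulty(160, 10))  # 5.0 bits
print(index_of_difficulty(320, 10))  # 6.0 bits
print(round(movement_time(160, 10), 2))
```

    Because both real and imagined movement times scale with ID, comparing the fitted a and b coefficients across the two conditions is a natural way to quantify the differing contributions the abstract describes.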

  11. Visual Outcomes After LASIK Using Topography-Guided vs Wavefront-Guided Customized Ablation Systems.

    PubMed

    Toda, Ikuko; Ide, Takeshi; Fukumoto, Teruki; Tsubota, Kazuo

    2016-11-01

    To evaluate the visual performance of two customized ablation systems (wavefront-guided ablation and topography-guided ablation) in LASIK. In this prospective, randomized clinical study, 68 eyes of 35 patients undergoing LASIK were enrolled. Patients were randomly assigned to wavefront-guided ablation using the iDesign aberrometer and STAR S4 IR Excimer Laser system (Abbott Medical Optics, Inc., Santa Ana, CA) (wavefront-guided group; 32 eyes of 16 patients; age: 29.0 ± 7.3 years) or topography-guided ablation using the OPD-Scan aberrometer and EC-5000 CXII excimer laser system (NIDEK, Tokyo, Japan) (topography-guided group; 36 eyes of 19 patients; age: 36.1 ± 9.6 years). Preoperative manifest refraction was -4.92 ± 1.95 diopters (D) in the wavefront-guided group and -4.44 ± 1.98 D in the topography-guided group. Visual function and subjective symptoms were compared between groups before and 1 and 3 months after LASIK. Of seven subjective symptoms evaluated, four were significantly milder in the wavefront-guided group at 3 months. Contrast sensitivity with glare off at low spatial frequencies (6.3° and 4°) was significantly higher in the wavefront-guided group. Uncorrected and corrected distance visual acuity, manifest refraction, and higher order aberrations measured by OPD-Scan and iDesign were not significantly different between the two groups at 1 and 3 months after LASIK. Both customized ablation systems used in LASIK achieved excellent results in predictability and visual function. The wavefront-guided ablation system may have some advantages in the quality of vision. It may be important to select the appropriate system depending on eye conditions such as the pattern of total and corneal higher order aberrations. [J Refract Surg. 2016;32(11):727-732.]. Copyright 2016, SLACK Incorporated.

  12. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue.

    PubMed

    Booth, Ashley J; Elliott, Mark T

    2015-01-01

    The ease of synchronizing movements to a rhythmic cue depends on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than with an equivalent visual stimulus presented through flashes. However, timing accuracy improves if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second, visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal rather than temporal-only information. The present study investigated individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli affected participants' ability to maintain synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found that participants' timing performance was affected only in the phase-lead conditions and only when large numbers of distractors were present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Consequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  13. Exploring the potential of analysing visual search behaviour data using FROC (free-response receiver operating characteristic) method: an initial study

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Dias, Sarah; Stone, William; Dias, Joseph; Rout, John; Gale, Alastair G.

    2017-03-01

    Visual search techniques and FROC analysis have been widely used in radiology to understand medical image perception and diagnostic performance. Exploiting the advantages of both methodologies is of great interest to medical researchers. In this study, eye tracking data from eight dental practitioners were investigated; the visual search measures and their analyses are considered here. Each participant interpreted 20 dental radiographs chosen by an expert dental radiologist. Various eye movement measurements were obtained based on image area of interest (AOI) information. FROC analysis was then carried out using these eye movement measurements as a direct input source, and the performance of FROC methods with different input parameters was tested. The results showed significant differences in eye-movement-based FROC measures between groups with different experience levels: the area under the curve (AUC) scores were higher for the experienced group for the fixation and dwell time measurements. Positive correlations were also found between the AUC scores of the eye-movement-based FROC and the rating-based FROC. FROC analysis using eye movement measurements as input variables can thus act as a performance indicator for assessing medical image interpretation and training procedures, combining eye movement data and FROC methods to provide an alternative dimension for assessing performance and visual search behaviour in medical imaging perceptual tasks.
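
    The AUC comparison described in this record can be illustrated with a small sketch. This is not the authors' pipeline; it is a minimal, hypothetical example assuming that dwell times on lesion-containing versus lesion-free AOIs serve as detection scores, with the AUC computed via the rank-sum (Wilcoxon) statistic.

```python
def auc_from_scores(pos, neg):
    """Rank-sum estimate of the area under the ROC curve:
    P(pos score > neg score) + 0.5 * P(tie)."""
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical dwell times (ms) on lesion-containing vs. lesion-free AOIs
lesion_dwell = [820, 640, 910, 700]
clean_dwell = [300, 450, 280, 510]
auc = auc_from_scores(lesion_dwell, clean_dwell)
```

    An AUC of 0.5 indicates that the eye movement measure does not separate the two AOI classes; values approaching 1 indicate strong separation, the direction reported here for the experienced group.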

  14. Evidence for multisensory spatial-to-motor transformations in aiming movements of children.

    PubMed

    King, Bradley R; Kagerer, Florian A; Contreras-Vidal, Jose L; Clark, Jane E

    2009-01-01

    The extant developmental literature investigating age-related differences in the execution of aiming movements has predominantly focused on visuomotor coordination, despite the fact that additional sensory modalities, such as audition and somatosensation, may contribute to motor planning, execution, and learning. The current study investigated the execution of aiming movements toward both visual and acoustic stimuli. In addition, we examined the interaction between visuomotor and auditory-motor coordination as 5- to 10-yr-old participants executed aiming movements to visual and acoustic stimuli before and after exposure to a visuomotor rotation. Children in all age groups demonstrated significant improvement in performance under the visuomotor perturbation, as indicated by decreased initial directional and root mean squared errors. Moreover, children in all age groups demonstrated significant visual aftereffects during the postexposure phase, suggesting a successful update of their spatial-to-motor transformations. Interestingly, these updated spatial-to-motor transformations also influenced auditory-motor performance, as indicated by distorted movement trajectories during the auditory postexposure phase. The distorted trajectories were present during auditory postexposure even though the auditory-motor relationship was not manipulated. Results suggest that by the age of 5 yr, children have developed a multisensory spatial-to-motor transformation for the execution of aiming movements toward both visual and acoustic targets.

  15. How vision and movement combine in the hippocampal place code.

    PubMed

    Chen, Guifen; King, John A; Burgess, Neil; O'Keefe, John

    2013-01-02

    How do external environmental and internal movement-related information combine to tell us where we are? We examined the neural representation of environmental location provided by hippocampal place cells while mice navigated a virtual reality environment in which both types of information could be manipulated. Extracellular recordings were made from region CA1 of head-fixed mice navigating a virtual linear track and running in a similar real environment. Despite the absence of vestibular motion signals, normal place cell firing and theta rhythmicity were found. Visual information alone was sufficient for localized firing in 25% of place cells and to maintain a local field potential theta rhythm (but with significantly reduced power). Additional movement-related information was required for normally localized firing by the remaining 75% of place cells. Trials in which movement and visual information were put into conflict showed that they combined nonlinearly to control firing location, and that the relative influence of movement versus visual information varied widely across place cells. However, within this heterogeneity, the behavior of fully half of the place cells conformed to a model of path integration in which the presence of visual cues at the start of each run together with subsequent movement-related updating of position was sufficient to maintain normal fields.

  16. Hand placement near the visual stimulus improves orientation selectivity in V2 neurons

    PubMed Central

    Sergio, Lauren E.; Crawford, J. Douglas; Fallah, Mazyar

    2015-01-01

    Often, the brain receives more sensory input than it can process simultaneously. Spatial attention helps overcome this limitation by preferentially processing input from a behaviorally-relevant location. Recent neuropsychological and psychophysical studies suggest that attention is deployed to near-hand space much like how the oculomotor system can deploy attention to an upcoming gaze position. Here we provide the first neuronal evidence that the presence of a nearby hand enhances orientation selectivity in early visual processing area V2. When the hand was placed outside the receptive field, responses to the preferred orientation were significantly enhanced without a corresponding significant increase at the orthogonal orientation. Consequently, there was also a significant sharpening of orientation tuning. In addition, the presence of the hand reduced neuronal response variability. These results indicate that attention is automatically deployed to the space around a hand, improving orientation selectivity. Importantly, this appears to be optimal for motor control of the hand, as opposed to oculomotor mechanisms which enhance responses without sharpening orientation selectivity. Effector-based mechanisms for visual enhancement thus support not only the spatiotemporal dissociation of gaze and reach, but also the optimization of vision for their separate requirements for guiding movements. PMID:25717165

  17. Vision in the natural world.

    PubMed

    Hayhoe, Mary M; Rothkopf, Constantin A

    2011-03-01

    Historically, the study of visual perception has followed a reductionist strategy, with the goal of understanding complex visually guided behavior by separate analysis of its elemental components. Recent developments in monitoring behavior, such as measurement of eye movements in unconstrained observers, have allowed investigation of the use of vision in the natural world. This has led to a variety of insights that would be difficult to achieve in more constrained experimental contexts. In general, it shifts the focus of vision away from the properties of the stimulus toward a consideration of the behavioral goals of the observer. It appears that behavioral goals are a critical factor in controlling the acquisition of visual information from the world. This insight has been accompanied by a growing understanding of the importance of reward in modulating the underlying neural mechanisms and by theoretical developments using reinforcement learning models of complex behavior. These developments provide us with the tools to understand how tasks are represented in the brain and how they control the acquisition of information through the use of gaze. WIREs Cogn Sci 2011 2 158-166. DOI: 10.1002/wcs.113. Copyright © 2010 John Wiley & Sons, Ltd.

  18. Honeybees as a model for the study of visually guided flight, navigation, and biologically inspired robotics.

    PubMed

    Srinivasan, Mandyam V

    2011-04-01

    Research over the past century has revealed the impressive capacities of the honeybee, Apis mellifera, in relation to visual perception, flight guidance, navigation, and learning and memory. These observations, coupled with the relative ease with which these creatures can be trained, and the relative simplicity of their nervous systems, have made honeybees an attractive model in which to pursue general principles of sensorimotor function in a variety of contexts, many of which pertain not just to honeybees, but several other animal species, including humans. This review begins by describing the principles of visual guidance that underlie perception of the world in three dimensions, obstacle avoidance, control of flight speed, and orchestrating smooth landings. We then consider how navigation over long distances is accomplished, with particular reference to how bees use information from the celestial compass to determine their flight bearing, and information from the movement of the environment in their eyes to gauge how far they have flown. Finally, we illustrate how some of the principles gleaned from these studies are now being used to design novel, biologically inspired algorithms for the guidance of unmanned aerial vehicles.

  19. Destabilizing effects of visual environment motions simulating eye movements or head movements

    NASA Technical Reports Server (NTRS)

    White, Keith D.; Shuman, D.; Krantz, J. H.; Woods, C. B.; Kuntz, L. A.

    1991-01-01

    In the present paper, we explore effects on the human of exposure to a visual virtual environment which has been enslaved to simulate the human user's head movements or eye movements. Specifically, we have studied the capacity of our experimental subjects to maintain stable spatial orientation in the context of moving their entire visible surroundings by using the parameters of the subjects' natural movements. Our index of the subjects' spatial orientation was the extent of involuntary sways of the body while attempting to stand still, as measured by translations and rotations of the head. We also observed, informally, their symptoms of motion sickness.

  20. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey.

    PubMed

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-10-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual-motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. © The Author 2014. Published by Oxford University Press.

  1. Changes in cortical, cerebellar and basal ganglia representation after comprehensive long term unilateral hand motor training.

    PubMed

    Walz, A D; Doppl, K; Kaza, E; Roschka, S; Platz, T; Lotze, M

    2015-02-01

    We were interested in the motor performance gain after unilateral hand motor training and the associated changes in cerebral and cerebellar movement representation, tested with functional magnetic resonance imaging (fMRI) before and after training. We therefore trained the left hand of strongly right-handed healthy participants with a comprehensive training program (arm ability training, AAT) over two weeks. Motor performance was tested for the trained and non-trained hand before and after the training period. Functional imaging was performed for the trained and the non-trained hand separately and comprised force modulation with the fist, sequential finger movements, and a fast writing task. After the training period, the performance gain in tapping movements was comparable for both hands, whereas motor performance in writing showed a larger training effect for the trained hand. fMRI showed a reduction of activation in supplementary motor, dorsolateral prefrontal, parietal cortical areas and lateral cerebellar areas during sequential finger movements over time. During left-hand writing, the lateral cerebellar hemisphere also showed reduced activation, while activation of the anterior cerebellar hemisphere was increased. An initially high anterior cerebellar activation magnitude was predictive of a high training outcome for finger tapping and visually guided movements. During the force modulation task we found increased activation in the striatum. Overall, comprehensive long-term training of the less skillful hand in healthy participants resulted in relevant motor performance improvements, as well as an intermanual learning transfer that differed depending on the type of movement tested. Whereas cortical motor area activation decreased over time, cerebellar anterior hemisphere and striatum activity seem to represent increasing resources after long-term motor training. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Does visual feedback during walking result in similar improvements in trunk control for young and older healthy adults?

    PubMed

    Anson, Eric; Rosenberg, Russell; Agada, Peter; Kiemel, Tim; Jeka, John

    2013-11-26

    Most current applications of visual feedback to improve postural control are limited to a fixed base of support and produce mixed results regarding improved postural control and transfer to functional tasks. There are currently few options available to provide visual feedback regarding trunk motion while walking, so we developed a low-cost platform that provides visual feedback of trunk motion during walking. Here we investigated whether augmented visual position feedback would reduce trunk movement variability in both young and older healthy adults. Ten young and ten older adults walked on a treadmill under conditions of visual position feedback and no feedback. The visual feedback consisted of the anterior-posterior (AP) and medial-lateral (ML) position of the subject's trunk during treadmill walking. Fourier transforms of the AP and ML trunk kinematics were used to calculate power spectral densities, which were integrated over frequency bins "below the gait cycle" and "gait cycle and above" for analysis. Visual feedback reduced movement power at very low frequencies for lumbar and neck translation, but not trunk angle, in both age groups. At very low frequencies of body movement, older adults with feedback had levels of movement variability equivalent to those of young adults without feedback. Lower variability was specific to translational (not angular) trunk movement. Visual feedback did not affect any of the measured lower-extremity gait pattern characteristics of either group, suggesting that the changes were not invoked by a different gait pattern. Reduced translational variability while walking on the treadmill reflects more precise control in maintaining a central position on the treadmill. Such feedback may provide an important technique to augment rehabilitation aimed at minimizing body translation while walking. Individuals with poor balance during walking may benefit from this type of training to enhance path consistency during over-ground locomotion.
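
    The spectral binning described in this record can be sketched as follows. This is a simplified, hypothetical reconstruction, not the authors' code: it assumes a 100 Hz sampling rate, a synthetic trunk trace, and a split of a one-sided periodogram at an assumed gait-cycle frequency.

```python
import numpy as np

def band_power(signal, fs, split_hz):
    """Integrate a one-sided periodogram below and at/above a split
    frequency (e.g., the gait-cycle frequency)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    df = freqs[1] - freqs[0]
    return psd[freqs < split_hz].sum() * df, psd[freqs >= split_hz].sum() * df

fs = 100.0  # Hz, assumed sampling rate
t = np.arange(0, 60, 1.0 / fs)
# Hypothetical ML trunk position: slow drift (0.1 Hz) plus gait-cycle sway (1 Hz)
trunk = 2.0 * np.sin(2 * np.pi * 0.1 * t) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
low, high = band_power(trunk, fs, split_hz=0.5)
```

    In this synthetic trace most of the power sits below the split frequency, mirroring the "below the gait cycle" band in which the feedback effect was found.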

  3. Rehearsal in serial memory for visual-spatial information: evidence from eye movements.

    PubMed

    Tremblay, Sébastien; Saint-Aubin, Jean; Jalbert, Annie

    2006-06-01

    It is well established that rote rehearsal plays a key role in serial memory for lists of verbal items. Although a great deal of research has informed us about the nature of verbal rehearsal, much less attention has been devoted to rehearsal in serial memory for visual-spatial information. By using the dot task--a visual-spatial analogue of the classical verbal serial recall task--with delayed recall, performance and eyetracking data were recorded in order to establish whether visual-spatial rehearsal could be evidenced by eye movement. The use of eye movement as a form of rehearsal is detectable (Experiment 1), and it seems to contribute to serial memory performance over and above rehearsal based on shifts of spatial attention (Experiments 1 and 2).

  4. USING THE SELECTIVE FUNCTIONAL MOVEMENT ASSESSMENT AND REGIONAL INTERDEPENDENCE THEORY TO GUIDE TREATMENT OF AN ATHLETE WITH BACK PAIN: A CASE REPORT.

    PubMed

    Goshtigian, Gabriella R; Swanson, Brian T

    2016-08-01

    Despite the multidirectional quality of human movement, common measurement procedures used in physical therapy examination are often uni-planar and lack the ability to assess functional complexities involved in daily activities. Currently, there is no widely accepted, validated standard to assess movement quality. The Selective Functional Movement Assessment (SFMA) is one possible system to objectively assess complex functional movements. The purpose of this case report is to illustrate the application of the SFMA as a guide to the examination, evaluation, and management of a patient with non-specific low back pain (LBP). An adolescent male athlete with LBP was evaluated using the SFMA. It was determined that the patient had mobility limitations remote to the site of pain (thoracic spine and hips) which therapists hypothesized were leading to compensatory hypermobility at the lumbar spine. Guided by the SFMA, initial interventions focused on local (lumbar) symptom management, progressing to remote mobility deficits, and then addressing the local stability deficit. All movement patterns became functional/non-painful except the right upper extremity medial rotation-extension pattern. At discharge, the patient demonstrated increased soft tissue extensibility of hip musculature and joint mobility of the thoracic spine along with normalization of lumbopelvic motor control. Improvements in pain exceeded minimal clinically important differences, from 2-7/10 on a verbal analog scale at initial exam to 0-2/10 at discharge. Developing and progressing a plan of care for an otherwise healthy and active adolescent with non-specific LBP can be challenging. Human movement is a collaborative effort of muscle groups that are interdependent; the use of a movement-based assessment model can help identify weak links affecting overall function. The SFMA helped guide therapists to dysfunctional movements not seen with more conventional examination procedures. Level 4.

  5. Congenitally blind individuals rapidly adapt to coriolis force perturbations of their reaching movements

    NASA Technical Reports Server (NTRS)

    DiZio, P.; Lackner, J. R.

    2000-01-01

    Reaching movements made to visual targets in a rotating room are initially deviated in path and endpoint in the direction of transient Coriolis forces generated by the motion of the arm relative to the rotating environment. With additional reaches, movements become progressively straighter and more accurate. Such adaptation can occur even in the absence of visual feedback about movement progression or terminus. Here we examined whether congenitally blind and sighted subjects without visual feedback would demonstrate adaptation to Coriolis forces when they pointed to a haptically specified target location. Subjects were tested pre-, per-, and postrotation at 10 rpm counterclockwise. Reaching to straight ahead targets prerotation, both groups exhibited slightly curved paths. Per-rotation, both groups showed large initial deviations of movement path and curvature but within 12 reaches on average had returned to prerotation curvature levels and endpoints. Postrotation, both groups showed mirror image patterns of curvature and endpoint to the per-rotation pattern. The groups did not differ significantly on any of the performance measures. These results provide compelling evidence that motor adaptation to Coriolis perturbations can be achieved on the basis of proprioceptive, somatosensory, and motor information in the complete absence of visual experience.

  6. A Statistical Physics Perspective to Understand Social Visual Attention in Autism Spectrum Disorder.

    PubMed

    Liberati, Alessio; Fadda, Roberta; Doneddu, Giuseppe; Congiu, Sara; Javarone, Marco A; Striano, Tricia; Chessa, Alessandro

    2017-08-01

    This study investigated social visual attention in children with Autism Spectrum Disorder (ASD) and with typical development (TD) in the light of Brockmann and Geisel's model of visual attention. The probability distribution of gaze movements and the clustering of gaze points, registered with eye-tracking technology, were studied during free visual exploration of a gaze stimulus. A data-driven analysis of the distribution of eye movements, complemented by a computational model to simulate group differences, was chosen to avoid methodological problems related to the experimenters' subjective expectations about the informative content of the image. Analysis of the eye-tracking data indicated that the scanpaths of children with TD and ASD were characterized by eye movements geometrically equivalent to Lévy flights. Children with ASD showed a higher frequency of long saccadic amplitudes compared with controls, and a clustering analysis revealed a greater dispersion of eye movements for these children. Modeling of the results indicated higher values, for children with ASD, of the model parameter modulating the dispersion of eye movements. Together, the experimental results and the model point to a greater dispersion of gaze points in ASD.
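
    A standard way to quantify the Lévy-flight character of saccade amplitudes is a maximum-likelihood fit of the tail exponent. The sketch below is illustrative only: it assumes a power-law step-length distribution and applies the continuous Hill estimator to synthetic amplitudes; none of the values come from the study.

```python
import math
import random

def hill_exponent(amplitudes, x_min):
    """ML (Hill) estimate of alpha for a power-law tail P(x) ~ x**-alpha,
    using only amplitudes at or above x_min."""
    tail = [a for a in amplitudes if a >= x_min]
    return 1.0 + len(tail) / sum(math.log(a / x_min) for a in tail)

random.seed(0)
alpha_true = 2.0
# Synthetic saccade amplitudes drawn from a Pareto tail via inverse-CDF sampling
samples = [(1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(20000)]
alpha_hat = hill_exponent(samples, x_min=1.0)
```

    A smaller fitted exponent corresponds to a heavier tail, i.e., relatively more long saccades, which is the direction of the group difference reported for ASD.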

  7. Ungulates rely less on visual cues, but more on adapting movement behaviour, when searching for forage.

    PubMed

    Venter, Jan A; Prins, Herbert H T; Mashanova, Alla; Slotow, Rob

    2017-01-01

    Finding suitable forage patches in a heterogeneous landscape, where patches change dynamically both spatially and temporally, can be challenging for large herbivores, especially if they have no a priori knowledge of patch locations. We tested whether three large grazing herbivores with a variety of different traits improve their foraging efficiency at a heterogeneous habitat patch scale by using visual cues to gain a priori knowledge about potentially higher-value foraging patches. For each species (zebra (Equus burchelli), red hartebeest (Alcelaphus buselaphus caama) and eland (Tragelaphus oryx)), we used step lengths and directionality of movement to infer whether they were using visual cues to find suitable forage patches at a habitat patch scale. Step lengths were significantly longer for all species when moving to non-visible patches than to visible patches, but all movements showed little directionality. Of the three species, zebra movements were the most directional; red hartebeest had the shortest step lengths and zebra the longest. We conclude that these large grazing herbivores may not exclusively use visual cues when foraging at a habitat patch scale, but rather adapt their movement behaviour, mainly step length, to the heterogeneity of the specific landscape.
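
    The two movement metrics used in this record, step length and directionality, can be computed from consecutive location fixes. The sketch below is a generic illustration with made-up coordinates, not the authors' analysis; directionality is summarized here as the mean resultant length of turning angles (1 = perfectly straight, 0 = uniformly random turns).

```python
import math

def step_metrics(track):
    """Step lengths and directionality from a sequence of (x, y) fixes."""
    steps, headings = [], []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        steps.append(math.hypot(x1 - x0, y1 - y0))
        headings.append(math.atan2(y1 - y0, x1 - x0))
    # Turning angles between successive steps; their mean resultant length
    # measures how consistently the animal keeps its heading.
    turns = [b - a for a, b in zip(headings, headings[1:])]
    r = math.hypot(sum(math.cos(t) for t in turns),
                   sum(math.sin(t) for t in turns)) / len(turns)
    return steps, r

# Hypothetical fixes along a nearly straight traverse toward a visible patch
steps, r = step_metrics([(0, 0), (10, 1), (20, 0), (30, 1), (40, 0)])
```

    Longer steps with low r would match the pattern reported for movements toward non-visible patches.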

  8. Gravity-dependent estimates of object mass underlie the generation of motor commands for horizontal limb movements.

    PubMed

    Crevecoeur, F; McIntyre, J; Thonnard, J-L; Lefèvre, P

    2014-07-15

    Moving requires handling the gravitational and inertial constraints pulling on our body and on the objects that we manipulate. Although previous work emphasized that the brain uses internal models of each type of mechanical load, little is known about their interaction during motor planning and execution. In this report, we examine visually guided reaching movements in the horizontal plane performed by naive participants exposed to changes in gravity during parabolic flight. This approach allowed us to isolate the effect of gravity, because the environmental dynamics along the horizontal axis remained unchanged. We show that gravity has a direct effect on movement kinematics, with faster movements observed after transitions from normal gravity to hypergravity (1.8g), followed by significant movement slowing after the transition from hypergravity to zero gravity. We recorded finger forces applied to an object held in precision grip and found that the coupling between grip force and inertial loads displayed a similar effect, with an increase in grip force modulation gain under hypergravity followed by a reduction in modulation gain after entering the zero-gravity environment. We present a computational model to illustrate that these effects are compatible with the hypothesis that participants partially attribute changes in weight to changes in mass and incorrectly scale their motor commands with changes in gravity. These results highlight a rather direct internal mapping between the force generated during stationary holding against gravity and the estimation of the inertial loads that limb and hand motor commands must overcome. Copyright © 2014 the American Physiological Society.
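
    The mis-attribution hypothesis can be expressed as a toy model. This is a deliberate simplification, not the authors' computational model: a hypothetical attribution weight beta controls how much of a weight change is (incorrectly) read as a mass change, and motor commands would then scale with the resulting mass estimate.

```python
G0 = 9.81  # baseline gravity, m/s^2

def estimated_mass(true_mass, g, beta=0.5):
    """Mass estimate when a fraction beta of the weight change is
    attributed to a change in mass (beta = 0: veridical estimate)."""
    return true_mass * (1.0 + beta * (g / G0 - 1.0))

m = 0.4  # kg, hypothetical hand-held object
for g in (G0, 1.8 * G0, 0.0):  # normal gravity, hypergravity, zero gravity
    m_est = estimated_mass(m, g)
```

    With beta > 0 the mass is overestimated in hypergravity and underestimated in zero gravity, qualitatively matching the reported changes in movement speed and grip force modulation gain.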

  9. Eye movements reveal sexually dimorphic deficits in children with fetal alcohol spectrum disorder

    PubMed Central

    Paolozza, Angelina; Munn, Rebecca; Munoz, Douglas P.; Reynolds, James N.

    2015-01-01

    Background: We examined the accuracy and characteristics of saccadic eye movements in children with fetal alcohol spectrum disorder (FASD) compared with typically developing control children. Previous studies have found that children with FASD produce saccades that are quantifiably different from controls. Additionally, animal studies have found sex-based differences for behavioral effects after prenatal alcohol exposure. Therefore, we hypothesized that eye movement measures will show sexually dimorphic results. Methods: Children (aged 5–18 years) with FASD (n = 71) and typically developing controls (n = 113) performed a visually-guided saccade task. Saccade metrics and behavior were analyzed for sex and group differences. Results: Female control participants had greater amplitude saccades than control males or females with FASD. Accuracy was significantly poorer in the FASD group, especially in males, which introduced significantly greater variability in the data. Therefore, we conducted additional analyses including only those trials in which the first saccade successfully reached the target within a ± 1° window. In this restricted amplitude dataset, the females with FASD made saccades with significantly lower velocity and longer duration, whereas the males with FASD did not differ from the control group. Additionally, the mean and peak deceleration were selectively decreased in the females with FASD. Conclusions: These data support the hypothesis that children with FASD exhibit specific deficits in eye movement control and sensory-motor integration associated with cerebellar and/or brain stem circuits. Moreover, prenatal alcohol exposure may have a sexually dimorphic impact on eye movement metrics, with males and females exhibiting differential patterns of deficit. PMID:25814922

  10. Cognitive Control Network Contributions to Memory-Guided Visual Attention.

    PubMed

    Rosen, Maya L; Stern, Chantal E; Michalka, Samantha W; Devaney, Kathryn J; Somers, David C

    2016-05-01

    Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, the posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus, exhibited the greatest specificity for memory-guided attention. These three regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  11. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
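The stimulus–response correlation approach described above can be illustrated with a reverse-correlation sketch: for a white-noise input, cross-correlating the response with the stimulus at each lag recovers the linear filter. The filter shape, noise level, and all parameter values below are illustrative assumptions for the technique, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 20000   # length of the simulated recording
lags = 50           # number of filter taps to estimate

# Assumed ground-truth temporal filter (exponential decay, for illustration).
true_filter = np.exp(-np.arange(lags) / 10.0)

# White-noise "stimulus" (e.g., direction perturbations) and the
# linearly filtered "response" (e.g., eye velocity), plus measurement noise.
stimulus = rng.standard_normal(n_samples)
response = np.convolve(stimulus, true_filter)[:n_samples]
response += 0.1 * rng.standard_normal(n_samples)

# For white noise, E[r(t) s(t - k)] is proportional to the filter at lag k,
# so the lag-k stimulus-response correlation recovers the filter directly.
est = np.array([np.dot(response[k:], stimulus[:n_samples - k])
                for k in range(lags)]) / n_samples
```

The same logic extends to space–time filters by correlating the response against stimulus values at each spatial location and lag jointly.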

  12. Facilitating Understanding of Movements in Dynamic Visualizations: An Embodied Perspective

    ERIC Educational Resources Information Center

    de Koning, Bjorn B.; Tabbers, Huib K.

    2011-01-01

    Learners studying mechanical or technical processes via dynamic visualizations often fail to build an accurate mental representation of the system's movements. Based on embodied theories of cognition assuming that action, perception, and cognition are closely intertwined, this paper proposes that the learning effectiveness of dynamic…

  13. Visual tuning and metrical perception of realistic point-light dance movements.

    PubMed

    Su, Yi-Huang

    2016-03-07

Humans move to music spontaneously, and this sensorimotor coupling underlies musical rhythm perception. The present research proposed that, based on common action representation, different metrical levels as in auditory rhythms could emerge visually when observing structured dance movements. Participants watched a point-light figure performing basic steps of Swing dance cyclically in different tempi, whereby the trunk bounced vertically at every beat and the limbs moved laterally at every second beat, yielding two possible metrical periodicities. In Experiment 1, participants freely identified a tempo of the movement and tapped along. While some observers only tuned to the bounce and some only to the limbs, the majority tuned to one level or the other depending on the movement tempo, which was also associated with individuals' preferred tempo. In Experiment 2, participants reproduced the tempo of the leg movements with four regular taps, and perceived the leg tempo as slower when the trunk bounced simultaneously in the stimuli than when it did not. This mirrors previous findings of an auditory 'subdivision effect', suggesting that the leg movements were perceived as the beat and the bounces as subdivisions. Together these results support visual metrical perception of dance movements, which may employ similar action-based mechanisms to those underpinning auditory rhythm perception.

  14. The selective disruption of spatial working memory by eye movements

    PubMed Central

    Postle, Bradley R.; Idzikowski, Christopher; Sala, Sergio Della; Logie, Robert H.; Baddeley, Alan D.

    2005-01-01

    In the late 1970s/early 1980s, Baddeley and colleagues conducted a series of experiments investigating the role of eye movements in visual working memory. Although only described briefly in a book (Baddeley, 1986), these studies have influenced a remarkable number of empirical and theoretical developments in fields ranging from experimental psychology to human neuropsychology to nonhuman primate electrophysiology. This paper presents, in full detail, three critical studies from this series, together with a recently performed study that includes a level of eye movement measurement and control that was not available for the older studies. Together, the results demonstrate several facts about the sensitivity of visuospatial working memory to eye movements. First, it is eye movement control, not movement per se, that produces the disruptive effects. Second, these effects are limited to working memory for locations, and do not generalize to visual working memory for shapes. Third, they can be isolated to the storage/maintenance components of working memory (e.g., to the delay period of the delayed-recognition task). These facts have important implications for models of visual working memory. PMID:16556561

  15. Visual short-term memory guides infants' visual attention.

    PubMed

    Mitsven, Samantha G; Cantrell, Lisa M; Luck, Steven J; Oakes, Lisa M

    2018-08-01

    Adults' visual attention is guided by the contents of visual short-term memory (VSTM). Here we asked whether 10-month-old infants' (N = 41) visual attention is also guided by the information stored in VSTM. In two experiments, we modified the one-shot change detection task (Oakes, Baumgartner, Barrett, Messenger, & Luck, 2013) to create a simplified cued visual search task to ask how information stored in VSTM influences where infants look. A single sample item (e.g., a colored circle) was presented at fixation for 500 ms, followed by a brief (300 ms) retention interval and then a test array consisting of two items, one on each side of fixation. One item in the test array matched the sample stimulus and the other did not. Infants were more likely to look at the non-matching item than at the matching item, demonstrating that the information stored rapidly in VSTM guided subsequent looking behavior. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. FUNdamental Movement in Early Childhood.

    ERIC Educational Resources Information Center

    Campbell, Linley

    2001-01-01

    Noting that the development of fundamental movement skills is basic to children's motor development, this booklet provides a guide for early childhood educators in planning movement experiences for children between 4 and 8 years. The booklet introduces a wide variety of appropriate practices to promote movement skill acquisition and increased…

  17. Visual cues that are effective for contextual saccade adaptation.

    PubMed

    Azadi, Reza; Harwood, Mark R

    2014-06-01

The accuracy of saccades, as maintained by saccade adaptation, has been shown to be context dependent: movements of different amplitude can be produced in response to the same retinal displacement depending on motor context, such as orbital starting location. There is conflicting evidence as to whether purely visual cues also affect contextual saccade adaptation and, if so, what function this might serve. We tested what visual cues might evoke contextual adaptation. Over 5 experiments, 78 naive subjects made saccades to circularly moving targets, which stepped outward or inward during the saccade depending on target movement direction, speed, or color and shape. To test whether the postsaccade movement or context was critical, we stopped the postsaccade target motion (experiment 4) or neutralized the contexts by equating postsaccade target speed to an intermediate value (experiment 5). We found contextual adaptation in all conditions except those defined by color and shape. We conclude that some, but not all, visual cues before the saccade are sufficient for contextual adaptation. We conjecture that this visual contextuality functions to allow for different motor states for different coordinated movement patterns, such as coordinated saccade and pursuit motor planning. Copyright © 2014 the American Physiological Society.

  18. Action Recognition and Movement Direction Discrimination Tasks Are Associated with Different Adaptation Patterns

    PubMed Central

    de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.

    2016-01-01

The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target visual processes specific to action and direction discrimination. In separate conditions, participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633

  19. Predictive and tempo-flexible synchronization to a visual metronome in monkeys.

    PubMed

    Takeya, Ryuji; Kameda, Masashi; Patel, Aniruddh D; Tanaka, Masaki

    2017-07-21

    Predictive and tempo-flexible synchronization to an auditory beat is a fundamental component of human music. To date, only certain vocal learning species show this behaviour spontaneously. Prior research training macaques (vocal non-learners) to tap to an auditory or visual metronome found their movements to be largely reactive, not predictive. Does this reflect the lack of capacity for predictive synchronization in monkeys, or lack of motivation to exhibit this behaviour? To discriminate these possibilities, we trained monkeys to make synchronized eye movements to a visual metronome. We found that monkeys could generate predictive saccades synchronized to periodic visual stimuli when an immediate reward was given for every predictive movement. This behaviour generalized to novel tempi, and the monkeys could maintain the tempo internally. Furthermore, monkeys could flexibly switch from predictive to reactive saccades when a reward was given for each reactive response. In contrast, when humans were asked to make a sequence of reactive saccades to a visual metronome, they often unintentionally generated predictive movements. These results suggest that even vocal non-learners may have the capacity for predictive and tempo-flexible synchronization to a beat, but that only certain vocal learning species are intrinsically motivated to do it.

  20. Bending it like Beckham: how to visually fool the goalkeeper.

    PubMed

    Dessing, Joost C; Craig, Cathy M

    2010-10-06

As bending free-kicks becomes the norm in modern day soccer, implications for goalkeepers have largely been ignored. Although it has been reported that poor sensitivity to visual acceleration makes it harder for expert goalkeepers to perceptually judge where the curved free-kicks will cross the goal line, it is unknown how this affects the goalkeeper's actual movements. Here, an in-depth analysis of goalkeepers' hand movements in immersive, interactive virtual reality shows that they do not fully account for spin-induced lateral ball acceleration. Hand movements were found to be biased in the direction of initial ball heading, and for curved free-kicks this resulted in biases in a direction opposite to those necessary to save the free-kick. These movement errors result in less time to cover a now greater distance to stop the ball entering the goal. These and other details of the interceptive behaviour are explained using a simple mathematical model which shows how the goalkeeper controls his movements online with respect to the ball's current heading direction. Furthermore, our results and model suggest how visual landmarks, such as the goalposts in this instance, may constrain the extent of the movement biases. While it has previously been shown that humans can internalize the effects of gravitational acceleration, these results show that it is much more difficult for goalkeepers to account for spin-induced visual acceleration, which varies from situation to situation. The limited sensitivity of the human visual system for detecting acceleration suggests that curved free-kicks are an important goal-scoring opportunity in the game of soccer.

  1. The effect of saccade metrics on the corollary discharge contribution to perceived eye location

    PubMed Central

    Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.

    2015-01-01

Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and an estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance: the saccade error (movement-to-movement variability in saccade amplitude) and the visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955

  2. Lateral occipitotemporal cortex (LOTC) activity is greatest while viewing dance compared to visualization and movement: learning and expertise effects.

    PubMed

    Di Nota, Paula M; Levkov, Gabriella; Bar, Rachel; DeSouza, Joseph F X

    2016-07-01

The lateral occipitotemporal cortex (LOTC) is comprised of subregions selectively activated by images of human bodies (extrastriate body area, EBA), objects (lateral occipital complex, LO), and motion (MT+). However, their role in motor imagery and movement processing is unclear, as are the influences of learning and expertise on their recruitment. The purpose of our study was to examine putative changes in LOTC activation during action processing following motor learning of novel choreography in professional ballet dancers. Subjects were scanned with functional magnetic resonance imaging up to four times over 34 weeks and performed four tasks: viewing and visualizing a newly learned ballet dance, visualizing a dance that was not being learned, and movement of the foot. EBA, LO, and MT+ were activated most while viewing dance compared to visualization and movement. Significant increases in activation were observed over time in left LO only during visualization of the unlearned dance, and all subregions were activated bilaterally during the viewing task after 34 weeks of performance, suggesting learning-induced plasticity. Finally, we provide novel evidence for modulation of EBA with dance experience during the motor task, with significant activation elicited in a comparison group of novice dancers only. These results provide a composite of LOTC activation during action processing of newly learned ballet choreography and movement of the foot. The role of these areas is confirmed as primarily subserving observation of complex sequences of whole-body movement, with new evidence for modification by experience and over the course of real world ballet learning.

  3. Bending It Like Beckham: How to Visually Fool the Goalkeeper

    PubMed Central

    2010-01-01

Background As bending free-kicks becomes the norm in modern day soccer, implications for goalkeepers have largely been ignored. Although it has been reported that poor sensitivity to visual acceleration makes it harder for expert goalkeepers to perceptually judge where the curved free-kicks will cross the goal line, it is unknown how this affects the goalkeeper's actual movements. Methodology/Principal Findings Here, an in-depth analysis of goalkeepers' hand movements in immersive, interactive virtual reality shows that they do not fully account for spin-induced lateral ball acceleration. Hand movements were found to be biased in the direction of initial ball heading, and for curved free-kicks this resulted in biases in a direction opposite to those necessary to save the free-kick. These movement errors result in less time to cover a now greater distance to stop the ball entering the goal. These and other details of the interceptive behaviour are explained using a simple mathematical model which shows how the goalkeeper controls his movements online with respect to the ball's current heading direction. Furthermore, our results and model suggest how visual landmarks, such as the goalposts in this instance, may constrain the extent of the movement biases. Conclusions While it has previously been shown that humans can internalize the effects of gravitational acceleration, these results show that it is much more difficult for goalkeepers to account for spin-induced visual acceleration, which varies from situation to situation. The limited sensitivity of the human visual system for detecting acceleration suggests that curved free-kicks are an important goal-scoring opportunity in the game of soccer. PMID:20949130

  4. The difference in visuomotor feedback velocity control during spiral drawing between Parkinson's disease and essential tremor.

    PubMed

    Chen, Kai-Hsiang; Lin, Po-Chieh; Yang, Bing-Shiang; Chen, Yu-Jung

    2018-06-01

In a spiral task, the accuracy of the spiral trajectory, which is affected by tracing or tracking ability, differs between patients with Parkinson's disease (PD) and essential tremor (ET). However, not many studies have analyzed velocity differences between the groups during this task. This study aimed to examine differences between the groups related to this characteristic using a tablet. Fourteen PD, 12 ET, and 12 control group participants performed two tasks: tracing a given spiral (T1) and following a guiding point (T2). A digitized tablet was used to record movements and trajectory. Effects of direct visual feedback on intergroup and intragroup velocity were measured. Although PD patients had a significantly lower T1 velocity than the control group (p < 0.05), they could match the velocity of the guiding point (3.0 cm/s) in T2. There was no significant difference in the average T1 velocity between ET and the control groups (p = 0.26); however, the T2 velocity of ET patients was significantly higher than that of the control group (p < 0.05). ET patients were also unable to adjust their velocity to match the guiding point, indicating a poorer ability to follow dynamic guidance. Even when both groups of patients had similar action tremor severity, their ability to follow dynamic guidance still differed significantly. Our study combined visual feedback with spiral drawing and demonstrated differences in the following-velocity distribution in PD and ET. This method may be used to distinguish the tremor presentation of both diseases, and thus provide a more accurate diagnosis.

  5. Spatial updating in human parietal cortex

    NASA Technical Reports Server (NTRS)

    Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.

    2003-01-01

    Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.

  6. Development of Four Dimensional Human Model that Enables Deformation of Skin, Organs and Blood Vessel System During Body Movement - Visualizing Movements of the Musculoskeletal System.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Hashizume, Makoto

    2016-01-01

    We constructed a four dimensional human model that is able to visualize the structure of a whole human body, including the inner structures, in real-time to allow us to analyze human dynamic changes in the temporal, spatial and quantitative domains. To verify whether our model was generating changes according to real human body dynamics, we measured a participant's skin expansion and compared it to that of the model conducted under the same body movement. We also made a contribution to the field of orthopedics, as we were able to devise a display method that enables the observer to more easily observe the changes made in the complex skeletal muscle system during body movements, which in the past were difficult to visualize.

  7. Effects of Visual and Verbal Interaction on Unintentional Interpersonal Coordination

    ERIC Educational Resources Information Center

    Richardson, Michael J.; Marsh, Kerry L.; Schmidt, R. C.

    2005-01-01

    Previous research has demonstrated that people's movements can become unintentionally coordinated during interpersonal interaction. The current study sought to uncover the degree to which visual and verbal (conversation) interaction constrains and organizes the rhythmic limb movements of coactors. Two experiments were conducted in which pairs of…

  8. Stereotyped Movements among Children Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Gal, Eynat; Dyck, Murray J.

    2009-01-01

    Does the severity of visual impairment affect the prevalence and severity of stereotyped movements? In this study, children who were blind or had low vision, half of whom had intellectual disabilities, were assessed. The results revealed that blindness and global delays were associated with more sensory processing dysfunction and more stereotyped…

  9. Rhesus Monkeys Behave As If They Perceive the Duncker Illusion

    PubMed Central

    Zivotofsky, A. Z.; Goldberg, M. E.; Powell, K. D.

    2008-01-01

    The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make. PMID:16102233

  10. Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.

    PubMed

    Souto, David; Kerzel, Dirk

    2013-02-06

Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects with rotational and translational motion that was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition without a rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually-driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.
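"Congruent" rotational and translational motion here amounts to the no-slip rolling constraint: a ball of radius r spinning at angular velocity ω translates at v = ωr. A trivial hypothetical check of that constraint (the function names and values are illustrative, not the study's stimuli):

```python
import math

def rolling_translation_speed(omega_rad_s: float, radius_m: float) -> float:
    """Translational speed of a ball rolling without slipping: v = omega * r."""
    return omega_rad_s * radius_m

def is_congruent(v_m_s: float, omega_rad_s: float, radius_m: float,
                 rel_tol: float = 1e-6) -> bool:
    """Does the translational speed match the rotation (no slip)?"""
    return math.isclose(v_m_s,
                        rolling_translation_speed(omega_rad_s, radius_m),
                        rel_tol=rel_tol)

# Example: a 0.11 m radius ball spinning at 20 rad/s rolls at 2.2 m/s;
# a stimulus translating at 3.0 m/s with the same spin would be incongruent.
```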

  11. Electrical Microstimulation of the Pulvinar Biases Saccade Choices and Reaction Times in a Time-Dependent Manner

    PubMed Central

    2017-01-01

    The pulvinar complex is interconnected extensively with brain regions involved in spatial processing and eye movement control. Recent inactivation studies have shown that the dorsal pulvinar (dPul) plays a role in saccade target selection; however, it remains unknown whether it exerts effects on visual processing or at planning/execution stages. We used electrical microstimulation of the dPul while monkeys performed saccade tasks toward instructed and freely chosen targets. Timing of stimulation was varied, starting before, at, or after onset of target(s). Stimulation affected saccade properties and target selection in a time-dependent manner. Stimulation starting before but overlapping with target onset shortened saccadic reaction times (RTs) for ipsiversive (to the stimulation site) target locations, whereas stimulation starting at and after target onset caused systematic delays for both ipsiversive and contraversive locations. Similarly, stimulation starting before the onset of bilateral targets increased ipsiversive target choices, whereas stimulation after target onset increased contraversive choices. Properties of dPul neurons and stimulation effects were consistent with an overall contraversive drive, with varying outcomes contingent upon behavioral demands. RT and choice effects were largely congruent in the visually-guided task, but stimulation during memory-guided saccades, while influencing RTs and errors, did not affect choice behavior. Together, these results show that the dPul plays a primary role in action planning as opposed to visual processing, that it exerts its strongest influence on spatial choices when decision and action are temporally close, and that this choice effect can be dissociated from motor effects on saccade initiation and execution. SIGNIFICANCE STATEMENT Despite a recent surge of interest, the core function of the pulvinar, the largest thalamic complex in primates, remains elusive. 
This understanding is crucial given the central role of the pulvinar in current theories of integrative brain functions supporting cognition and goal-directed behaviors, but electrophysiological and causal interference studies of dorsal pulvinar (dPul) are rare. Building on our previous studies that pharmacologically suppressed dPul activity for several hours, here we used transient electrical microstimulation at different periods while monkeys performed instructed and choice eye movement tasks, to determine time-specific contributions of pulvinar to saccade generation and decision making. We show that stimulation effects depend on timing and behavioral state and that effects on choices can be dissociated from motor effects. PMID:28119401

  12. Effect of verbal instructions and image size on visual search strategies in basketball free throw shooting.

    PubMed

    Al-Abood, Saleh A; Bennett, Simon J; Hernandez, Francisco Moreno; Ashford, Derek; Davids, Keith

    2002-03-01

We assessed the effects of two types of verbal instructions with an external attentional focus on basketball free throw performance. Novices (n = 16) were pre-tested on free throw performance and assigned to two groups of similar ability (n = 8 in each). Both groups received verbal instructions with an external focus on either movement dynamics (movement form) or movement effects (e.g. ball trajectory relative to basket). The participants also observed a skilled model performing the task on either a small or large screen monitor, to ascertain the effects of visual presentation mode on task performance. After observation of six videotaped trials, all participants were given a post-test. Visual search patterns were monitored during observation and cross-referenced with performance on the pre- and post-test. Group effects were noted for verbal instructions and image size on visual search strategies and free throw performance. The 'movement effects' group saw a significant improvement in outcome scores between the pre-test and post-test. This improvement was consistent with evidence that this group spent more viewing time on information outside the body than the 'movement dynamics' group. Image size affected both groups equally, with more fixations of shorter duration when viewing the small screen. The results support the benefits of instructions when observing a model with an external focus on movement effects, not dynamics.

  13. A unified dynamic neural field model of goal directed eye movements

    NASA Astrophysics Data System (ADS)

    Quinton, J. C.; Goffart, L.

    2018-01-01

Primates heavily rely on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. The interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target with low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). Such an estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary, (2) interceptive and catch-up saccades when peaks build and relax on the neural field, (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits the maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
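The abstract does not reproduce the underlying field equations. Models of this kind typically extend the canonical Amari dynamic neural field, which in generic notation (not necessarily the authors' own) reads:

```latex
% u(x,t): field activity at retinotopic position x and time t
% w: lateral interaction kernel (local excitation, surround inhibition)
% f: firing-rate nonlinearity; I: visual input; tau_m: time constant
\tau_m \frac{\partial u(x,t)}{\partial t} = -u(x,t)
    + \int_{\Omega} w(x - x')\, f\bigl(u(x',t)\bigr)\, dx' + I(x,t)
```

A peak of u stabilising over the target location corresponds to the target estimate that, in the model described above, triggers the eye movement.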

  14. Chromatic signals control proboscis movements during hovering flight in the hummingbird hawkmoth Macroglossum stellatarum.

    PubMed

    Goyret, Joaquín; Kelber, Almut

    2012-01-01

Most visual systems are more sensitive to luminance than to colour signals: animals resolve finer spatial detail and faster temporal changes through achromatic signals than through chromatic ones. This probably explains why detection of small, distant, or moving objects is typically mediated by achromatic signals. Macroglossum stellatarum are fast-flying nectarivorous hawkmoths that inspect flowers with their long proboscis while hovering. They can visually control this behaviour using floral markings known as nectar guides. Here, we investigate whether this control is mediated by chromatic or achromatic cues. We evaluated proboscis placement, foraging efficiency, and inspection learning of naïve moths foraging on flower models with coloured markings that offered chromatic, achromatic, or both contrasts. Hummingbird hawkmoths could use either achromatic or chromatic signals to inspect models while hovering. We identified three apparently independent components controlling proboscis placement: after initial contact, (1) moths directed their probing towards the yellow colour irrespective of luminance signals, suggesting a dominant role of chromatic signals; and (2) moths tended to probe mainly on the brighter areas of models that offered only achromatic signals; (3) during the establishment of the first contact, naïve moths showed a tendency to direct their proboscis towards the small floral marks independent of their colour or luminance. Moths learned to find nectar faster, but their foraging efficiency depended on the flower model they foraged on. Our results imply that M. stellatarum can perceive small patterns through colour vision. We discuss how the different informational contents of chromatic and luminance signals can be significant for the control of flower inspection, and for visually guided behaviours in general.

  15. Hamas between Violence and Pragmatism

    DTIC Science & Technology

    2009-06-01

Islamic Palestinian state. Nonetheless, as a movement, it has another far more existential objective. Once established, a movement needs to sustain..." (related by al-Bukhari, Moslem, Abu-Dawood and al-Tarmadhi). F. Followers of Other Religions: The Islamic Resistance Movement Is A Humanistic ... Movement: Article Thirty-One: The Islamic Resistance Movement is a humanistic movement. It takes care of human rights and is guided by Islamic

  16. Visually Guided Avoidance in the Chameleon (Chamaeleo chameleon): Response Patterns and Lateralization

    PubMed Central

    Lustig, Avichai; Ketter-Katz, Hadas; Katzir, Gadi

    2012-01-01

    The common chameleon, Chamaeleo chameleon, is an arboreal lizard with highly independent, large-amplitude eye movements. In response to a moving threat, a chameleon on a perch responds with distinct avoidance movements that are expressed in its continuous positioning on the side of the perch distal to the threat. We analyzed body-exposure patterns during threat avoidance for evidence of lateralization, that is, asymmetry at the functional/behavioral levels. Chameleons were exposed to a threat approaching horizontally from the left or right, as they held onto a vertical pole that was either wider or narrower than the width of their head, providing, respectively, monocular or binocular viewing of the threat. We found two equal-sized sub-groups, each displaying lateralization of motor responses to a given direction of stimulus approach. Such an anti-symmetrical distribution of lateralization in a population may be indicative of situations in which organisms are regularly exposed to crucial stimuli from all spatial directions. This is because a bimodal distribution of responses to threat in a natural population will reduce the spatial advantage of predators. PMID:22685546

  18. Eye movement control in reading unspaced text: the case of the Japanese script.

    PubMed

    Kajii, N; Nazir, T A; Osaka, N

    2001-09-01

The present study examines the landing-site distributions of the eyes during natural reading of Japanese script: a script that mixes three different writing systems (Kanji, Hiragana, and Katakana) and that lacks regular spacing between words. The results show a clear preference of the eyes to land at the beginning rather than the center of the word. In addition, it was found that the eyes land on Kanji characters more frequently than on Hiragana or Katakana characters. Further analysis of two- and three-character words indicated that the eye's landing-site distribution differs depending on the type of characters in the word: the eyes prefer to land at the word beginning only when the initial character of the word is a Kanji character. For pure Hiragana words, the proportion of initial fixations did not differ between character positions. Thus, as already indicated by Kambe (National Institute of Japanese Language Report 85 (1986) 29), the visual distinctiveness of the three Japanese scripts plays a role in guiding eye movements in reading Japanese.

  19. Prospective versus predictive control in timing of hitting a falling ball.

    PubMed

    Katsumata, Hiromu; Russell, Daniel M

    2012-02-01

    Debate exists as to whether humans use prospective or predictive control to intercept an object falling under gravity (Baurès et al. in Vis Res 47:2982-2991, 2007; Zago et al. in Vis Res 48:1532-1538, 2008). Prospective control involves using continuous information to regulate action. τ, the ratio of the size of the gap to the rate of gap closure, has been proposed as the information used in guiding interceptive actions prospectively (Lee in Ecol Psychol 10:221-250, 1998). This form of control is expected to generate movement modulation, where variability decreases over the course of an action based upon more accurate timing information. In contrast, predictive control assumes that a pre-programmed movement is triggered at an appropriate criterion timing variable. For a falling object it is commonly argued that an internal model of gravitational acceleration is used to predict the motion of the object and determine movement initiation. This form of control predicts fixed duration movements initiated at consistent time-to-contact (TTC), either across conditions (constant criterion operational timing) or within conditions (variable criterion operational timing). The current study sought to test predictive and prospective control hypotheses by disrupting continuous visual information of a falling ball and examining consistency in movement initiation and duration, and evidence for movement modulation. Participants (n = 12) batted a ball dropped from three different heights (1, 1.3 and 1.5 m), under both full-vision and partial occlusion conditions. In the occlusion condition, only the initial ball drop and the final 200 ms of ball flight to the interception point could be observed. The initiation of the swing did not occur at a consistent TTC, τ, or any other timing variable across drop heights, in contrast with previous research. However, movement onset was not impacted by occluding the ball flight for 280-380 ms. 
This finding indicates that humans did not need to be continuously coupled to vision of the ball to initiate the swing accurately, but instead could use predictive control based on acceleration timing information (TTC2). However, other results provide evidence for movement modulation, a characteristic of prospective control. Strong correlations between movement initiation and duration, together with reduced timing variability from swing onset to arrival at the interception point, support compensatory variability. An analysis of modulation within the swing revealed that the movement acceleration early in the swing was strongly correlated with the required mean velocity at swing onset, and that the movement acceleration later in the swing was again strongly correlated with the current required mean velocity. Rather than a consistent movement initiated at the same time, these findings show that the swing was variable but modulated to meet the demands of each trial. A prospective model of coupling τ (bat-ball) with τ (ball-target) was found to provide a very strong linear fit for an average of 69% of the movement duration. These findings provide evidence for predictive control based on TTC2 information in initiating the swing and prospective control based on τ in guiding the bat to intercept the ball.
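The timing variables at issue can be written explicitly. This is a sketch in generic notation, not the paper's own: x is the remaining ball-bat gap, v0 the closure rate at the moment of estimation, and g the gravitational acceleration.

```latex
% First-order time to contact (tau): gap size over rate of gap closure
\tau(t) = \frac{x(t)}{\dot{x}(t)}
% Second-order time to contact (TTC2), which also accounts for constant
% acceleration g: solving x_0 = v_0 T + \tfrac{1}{2} g T^2 for T gives
TTC_2 = \frac{-v_0 + \sqrt{v_0^{\,2} + 2 g x_0}}{g}
```

Prospective control keeps the action continuously coupled to τ as the gap closes, whereas predictive control uses a one-shot TTC2 estimate to trigger a pre-programmed swing.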

  20. Immediate Neural Plasticity Involving Reaction Time in a Saccadic Eye Movement Task is Intact in Children With Fetal Alcohol Spectrum Disorder.

    PubMed

    Paolozza, Angelina; Munoz, Douglas P; Brien, Donald; Reynolds, James N

    2016-11-01

    Saccades are rapid eye movements that bring an image of interest onto the retina. Previous research has found that in healthy individuals performing eye movement tasks, the location of a previous visual target can influence performance of the saccade on the next trial. This rapid behavioral adaptation represents a form of immediate neural plasticity within the saccadic circuitry. Our studies have shown that children with fetal alcohol spectrum disorder (FASD) are impaired on multiple saccade measures. We therefore investigated these previous trial effects in typically developing children and children with FASD to measure sensory neural plasticity and how these effects vary with age and pathology. Both typically developing control children (n = 102; mean age = 10.54 ± 3.25; 48 males) and children with FASD (n = 66; mean age = 11.85 ± 3.42; 35 males) were recruited from 5 sites across Canada. Each child performed a visually guided saccade task. Reaction time and saccade amplitude were analyzed and then assessed based on the previous trial. There was a robust previous trial effect for both reaction time and amplitude, with both the control and FASD groups displaying faster reaction times and smaller saccades during alternation trials (visual target presented on the opposite side to the previous trial). Children with FASD exhibited smaller overall mean amplitude and smaller amplitude selectively on alternation trials compared with controls. The effect of the previous trial on reaction time and amplitude did not differ across childhood and adolescent development. Children with FASD did not display any significant reaction time differences, despite exhibiting numerous deficits in motor and higher level cognitive control over saccades in other studies. These results suggest that this form of immediate neural plasticity in response to sensory information before saccade initiation remains intact in children with FASD. 
In contrast, the previous trial effect on amplitude suggests that the motor component of saccades may be affected, signifying differential vulnerability to prenatal alcohol exposure. Copyright © 2016 by the Research Society on Alcoholism.

  1. Sinusoidal visuomotor tracking: intermittent servo-control or coupled oscillations?

    PubMed

    Russell, D M; Sternad, D

    2001-12-01

    In visuomotor tasks that involve accuracy demands, small directional changes in the trajectories have been taken as evidence of feedback-based error corrections. In the present study variability, or intermittency, in visuomanual tracking of sinusoidal targets was investigated. Two lines of analyses were pursued: First, the hypothesis that humans fundamentally act as intermittent servo-controllers was re-examined, probing the question of whether discontinuities in the movement trajectory directly imply intermittent control. Second, an alternative hypothesis was evaluated: that rhythmic tracking movements are generated by entrainment between the oscillations of the target and the actor, such that intermittency expresses the degree of stability. In 2 experiments, participants (N = 6 in each experiment) swung 1 of 2 different hand-held pendulums, tracking a rhythmic target that oscillated at different frequencies with a constant amplitude. In 1 line of analyses, the authors tested the intermittency hypothesis by using the typical kinematic error measures and spectral analysis. In a 2nd line, they examined relative phase and its variability, following analyses of rhythmic interlimb coordination. The results showed that visually guided corrective processes play a role, especially for slow movements. Intermittency, assessed as frequency and power components of the movement trajectory, was found to change as a function of both target frequency and the manipulandum's inertia. Support for entrainment was found in conditions in which task frequency was identical to or higher than the effector's eigenfrequency. The results suggest that it is the symmetry between task and effector that determines which behavioral regime is dominant.

  2. Avian binocular vision: It's not just about what birds can see, it's also about what they can't.

    PubMed

    Tyrrell, Luke P; Fernández-Juricic, Esteban

    2017-01-01

With the exception of primates, most vertebrates have laterally placed eyes. Binocular vision in vertebrates has been implicated in several functions, including depth perception and contrast discrimination. However, the blind area in front of the head that is proximal to the binocular visual field is often neglected. This anterior blind area is important when discussing the evolution of binocular vision because its relative length is inversely correlated with the width of the binocular field. Therefore, species with wider binocular fields also have shorter anterior blind areas, and objects along the mid-sagittal plane can be imaged at closer distances. Additionally, the anterior blind area is of functional significance for birds because the beak falls within this blind area. We tested for the first time some specific predictions about the functional role of the anterior blind area in birds, controlling for phylogenetic effects. We used published data on visual field configuration in 40 species of birds and measured beak and skull parameters from museum specimens. We found that birds with proportionally longer beaks have longer anterior blind areas and thus narrower binocular fields. This result suggests that the anterior blind area and beak visibility do play a role in shaping binocular fields, and that binocular field width is not solely determined by the need for stereoscopic vision. In visually guided foragers, the ability to see the beak, and how much of the beak can be seen, varies predictably with foraging habits. For example, fish- and insect-eating specialists can see more of their own beak than birds eating immobile food can. But in non-visually guided foragers, there is no consistent relationship between the beak and the anterior blind area. We discuss different strategies (wide binocular fields, large eye movements, and long beaks) that minimize the potential negative effects of the anterior blind area.
Overall, we argue that there is more to avian binocularity than meets the eye.

  3. Visual preference for isochronic movement does not necessarily emerge from movement kinematics: a challenge for the motor simulation theory.

    PubMed

    Bidet-Ildei, Christel; Méary, David; Orliaguet, Jean-Pierre

    2008-01-17

The aim of this experiment was to show that the visual preference for isochronic movements does not necessarily imply a motor simulation and therefore does not depend on the kinematics of the perceived movement. To demonstrate this point, the participants' task was to adjust the velocity (the period) of a dot that depicted an elliptic motion with different perimeters (from 3 to 60 cm). The velocity profile of the movement either conformed ("natural motions") or did not conform ("unnatural motions") to the law of covariation between velocity and curvature (the two-thirds power law), which is usually observed in the production of elliptic movements. For each condition, we evaluated the isochrony principle, i.e., the tendency to prefer constant movement durations irrespective of changes in the trajectory perimeter. Our findings indicate that the isochrony principle was observed whatever the kinematics of the movement (natural or unnatural). They therefore suggest that the perceptive preference for isochronic movements does not systematically imply a motor simulation.
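The two-thirds power law invoked above has a standard form relating movement speed to the curvature of the traced path (K is an empirical velocity gain factor; the exponent 2/3 gives the law its name):

```latex
% Angular velocity A as a power function of curvature C
A(t) = K \, C(t)^{2/3}
% Equivalent statement for tangential velocity v and radius of curvature R
v(t) = K \, R(t)^{1/3}
```

The "unnatural" stimuli in the experiment violate this covariation while preserving the elliptic path itself.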

  4. The Functional Equivalence between Movement Imagery, Observation, and Execution Influences Imagery Ability

    ERIC Educational Resources Information Center

    Williams, Sarah E.; Cumming, Jennifer; Edwards, Martin G.

    2011-01-01

    Based on literature identifying movement imagery, observation, and execution to elicit similar areas of neural activity, research has demonstrated that movement imagery and observation successfully prime movement execution. To investigate whether movement and observation could prime ease of imaging from an external visual-imagery perspective, an…

  5. Prey Capture Behavior Evoked by Simple Visual Stimuli in Larval Zebrafish

    PubMed Central

    Bianco, Isaac H.; Kampff, Adam R.; Engert, Florian

    2011-01-01

    Understanding how the nervous system recognizes salient stimuli in the environment and selects and executes the appropriate behavioral responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually guided behavior in larval zebrafish, we developed “virtual reality” assays in which precisely controlled visual cues can be presented to larvae whilst their behavior is automatically monitored using machine vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low amplitude orienting turns (∼20°) toward small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analyzing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behavior in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey. PMID:22203793

  6. Body posture differentially impacts on visual attention towards tool, graspable, and non-graspable objects.

    PubMed

    Ambrosini, Ettore; Costantini, Marcello

    2017-02-01

Viewed objects have been shown to afford suitable actions, even in the absence of any intention to act. However, little is known as to whether gaze behavior (i.e., the way we simply look at objects) is sensitive to the actions afforded by the seen object and how our actual motor possibilities affect this behavior. We recorded participants' eye movements during the observation of tools, graspable and ungraspable objects, while their hands were either freely resting on the table or tied behind their back. The effects of the observed object and hand posture on gaze behavior were measured by comparing the actual fixation distribution with that predicted by 2 widely supported models of visual attention, namely the Graph-Based Visual Saliency and the Adaptive Whitening Salience models. Results showed that saliency models did not accurately predict participants' fixation distributions for tools. Indeed, participants mostly fixated the action-related, functional part of the tools, regardless of its visual saliency. Critically, restricting the participants' action possibilities led to a significant reduction of this effect and significantly improved the models' prediction of the participants' gaze behavior. We suggest, first, that action-relevant object information at least in part guides gaze behavior. Second, postural information interacts with visual information in generating the priority maps that guide fixation behavior. We support the view that the kind of information we access from the environment is constrained by our readiness to act. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Myosin II Motors and F-Actin Dynamics Drive the Coordinated Movement of the Centrosome and Soma during CNS Glial-Guided Neuronal Migration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solecki, Dr. David; Trivedi, Dr. Niraj; Govek, Eve-Ellen

    2009-01-01

Lamination of cortical regions of the vertebrate brain depends on glial-guided neuronal migration. The conserved polarity protein Par6α localizes to the centrosome and coordinates forward movement of the centrosome and soma in migrating neurons. The cytoskeletal components that produce this unique form of cell polarity and their relationship to polarity signaling cascades are unknown. We show that F-actin and Myosin II motors are enriched in the neuronal leading process and that Myosin II activity is necessary for leading process actin dynamics. Inhibition of Myosin II decreased the speed of centrosome and somal movement, whereas Myosin II activation increased coordinated movement. Ectopic expression or silencing of Par6α inhibited Myosin II motors by decreasing Myosin light-chain phosphorylation. These findings suggest leading-process Myosin II may function to 'pull' the centrosome and soma forward during glial-guided migration by a mechanism involving the conserved polarity protein Par6α.

  8. Event processing in the visual world: Projected motion paths during spoken sentence comprehension.

    PubMed

    Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue

    2016-05-01

    Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Adaptive Acceleration of Visually Evoked Smooth Eye Movements in Mice

    PubMed Central

    2016-01-01

    The optokinetic response (OKR) consists of smooth eye movements following global motion of the visual surround, which suppress image slip on the retina for visual acuity. The effective performance of the OKR is limited to rather slow and low-frequency visual stimuli, although it can be adaptably improved by cerebellum-dependent mechanisms. To better understand circuit mechanisms constraining OKR performance, we monitored how distinct kinematic features of the OKR change over the course of OKR adaptation, and found that eye acceleration at stimulus onset primarily limited OKR performance but could be dramatically potentiated by visual experience. Eye acceleration in the temporal-to-nasal direction depended more on the ipsilateral floccular complex of the cerebellum than did that in the nasal-to-temporal direction. Gaze-holding following the OKR was also modified in parallel with eye-acceleration potentiation. Optogenetic manipulation revealed that synchronous excitation and inhibition of floccular complex Purkinje cells could effectively accelerate eye movements in the nasotemporal and temporonasal directions, respectively. These results collectively delineate multiple motor pathways subserving distinct aspects of the OKR in mice and constrain hypotheses regarding cellular mechanisms of the cerebellum-dependent tuning of movement acceleration. SIGNIFICANCE STATEMENT Although visually evoked smooth eye movements, known as the optokinetic response (OKR), have been studied in various species for decades, circuit mechanisms of oculomotor control and adaptation remain elusive. In the present study, we assessed kinematics of the mouse OKR through the course of adaptation training. Our analyses revealed that eye acceleration at visual-stimulus onset primarily limited working velocity and frequency range of the OKR, yet could be dramatically potentiated during OKR adaptation. 
Potentiation of eye acceleration exhibited different properties between the nasotemporal and temporonasal OKRs, indicating distinct visuomotor circuits underlying the two. Lesions and optogenetic manipulation of the cerebellum provide constraints on neural circuits mediating visually driven eye acceleration and its adaptation. PMID:27335412

  10. Movement Integration and the One-Target Advantage.

    PubMed

    Hoffmann, Errol R

    2017-01-01

    The 1-target advantage (OTA) has been found to occur in many circumstances and the current best explanation for this phenomenon is that of the movement integration hypothesis. The author's purpose is twofold: (a) to model the conditions under which there is integration of the movement components in a 2-component movement and (b) to study the factors that determine the magnitude of the OTA for both the first and second component of a 2-component movement. Results indicate that integration of movement components, where times for one component are affected by the geometry of the other component, occurs when 1 of the movement components is made ballistically. Movement components that require ongoing visual control show only weak interaction with the second component, whereas components made ballistically always show movement time dependence on first and second component amplitude, independent of location within the sequence. The OTA is present on both the first and second components of the movement, with a magnitude that is dependent on whether the components are performed ballistically or with ongoing visual control and also on the amplitudes and indexes of difficulty of the component movements.
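The "indexes of difficulty" mentioned above presumably follow the standard Fitts formulation for a movement component of amplitude A to a target of width W; the exact variant used is not stated in the abstract:

```latex
% Fitts index of difficulty (in bits) for one movement component
ID = \log_2\!\left(\frac{2A}{W}\right)
% Fitts' law: movement time grows linearly with ID (a, b empirical constants)
MT = a + b \cdot ID
```

On this account, a ballistic first component has a time cost that depends on the geometry (A, W, hence ID) of the second component as well as its own, which is the signature of movement integration.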

  11. Driving with Binocular Visual Field Loss? A Study on a Supervised On-Road Parcours with Simultaneous Eye and Head Tracking

    PubMed Central

    Aehling, Kathrin; Heister, Martin; Rosenstiel, Wolfgang; Schiefer, Ulrich; Papageorgiou, Elena

    2014-01-01

Post-chiasmal visual pathway lesions and glaucomatous optic neuropathy cause binocular visual field defects (VFDs) that may critically interfere with quality of life and driving licensure. The aims of this study were (i) to assess the on-road driving performance of patients suffering from binocular visual field loss using a dual-brake vehicle, and (ii) to investigate the related compensatory mechanisms. A driving instructor, blinded to the participants' diagnosis, rated the driving performance (passed/failed) of ten patients with homonymous visual field defects (HP), including four patients with right (HR) and six patients with left homonymous visual field defects (HL), ten glaucoma patients (GP), and twenty age- and gender-matched ophthalmologically healthy control subjects (C) during a 40-minute driving task on a pre-specified public on-road parcours. In order to investigate the subjects' visual exploration ability, eye movements were recorded by means of a mobile eye tracker. Two additional cameras were used to monitor the driving scene and record head and shoulder movements. The study is thus novel in combining a quantitative assessment of eye movements with an additional evaluation of head and shoulder movements. Six out of ten HP and four out of ten GP were rated as fit to drive by the driving instructor, despite their binocular visual field loss. Three out of 20 control subjects failed the on-road assessment. The extent of the visual field defect was of minor importance with regard to driving performance. The site of the homonymous visual field defect (HVFD) critically interfered with driving ability: all failed HP subjects suffered from left homonymous visual field loss (HL) due to right hemispheric lesions. Patients who failed the driving assessment had difficulties mainly with lane keeping and gap judgment. Patients who passed the test displayed different exploration patterns than those who failed.
Patients who passed focused longer on the central area of the visual field than patients who failed the test. In addition, patients who passed the test performed more glances towards the area of their visual field defect. In conclusion, our findings support the hypothesis that the extent of visual field per se cannot predict driving fitness, because some patients with HVFDs and advanced glaucoma can compensate for their deficit by effective visual scanning. Head movements appeared to be superior to eye and shoulder movements in predicting the outcome of the driving test under the present study scenario. PMID:24523869
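The glance analysis described in this record can be sketched in a few lines. The hemifield rule, the data format, and the function name below are illustrative assumptions, not the study's actual protocol:

```python
# Sketch: classify horizontal gaze samples by hemifield and compute the
# proportion of glances directed into a (left or right) visual field defect.
# The midline rule and degree units are illustrative assumptions.

def defect_glance_ratio(gaze_x, defect_side, midline=0.0):
    """gaze_x: horizontal gaze coordinates (deg, negative = left).
    defect_side: 'left' or 'right' homonymous field defect."""
    if not gaze_x:
        return 0.0
    if defect_side == 'left':
        into_defect = [x for x in gaze_x if x < midline]
    else:
        into_defect = [x for x in gaze_x if x > midline]
    return len(into_defect) / len(gaze_x)

# Example: a driver with a left-sided defect who scans leftward often
samples = [-12.0, -5.5, 3.0, -8.2, 1.1, -15.0, -2.4, 6.3]
ratio = defect_glance_ratio(samples, 'left')   # 5 of 8 samples -> 0.625
```

A higher ratio would correspond to the compensatory scanning toward the defect side that distinguished patients who passed from those who failed.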

  12. The eye movements of dyslexic children during reading and visual search: impact of the visual attention span.

    PubMed

    Prado, Chloé; Dubois, Matthieu; Valdois, Sylviane

    2007-09-01

    The eye movements of 14 French dyslexic children having a VA span reduction and 14 normal readers were compared in two tasks of visual search and text reading. The dyslexic participants made a higher number of rightward fixations in reading only. They simultaneously processed the same low number of letters in both tasks, whereas normal readers processed far more letters in reading. Importantly, the children's VA span abilities related to the number of letters simultaneously processed in reading. The atypical eye movements of some dyslexic readers in reading thus appear to reflect difficulties in increasing their VA span according to the task demands.

  13. Displays. [three dimensional analog visual system for aiding pilot space perception]

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An experimental investigation made to determine the depth cue of a head movement perspective and image intensity as a function of depth is summarized. The experiment was based on the use of a hybrid computer generated contact analog visual display in which various perceptual depth cues are included on a two dimensional CRT screen. The system's purpose was to impart information, in an integrated and visually compelling fashion, about the vehicle's position and orientation in space. Results show head movement gives a 40% improvement in depth discrimination when the display is between 40 and 100 cm from the subject; intensity variation resulted in as much improvement as head movement.

  14. Towards photorealistic and immersive virtual-reality environments for simulated prosthetic vision: integrating recent breakthroughs in consumer hardware and software.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J

    2014-01-01

    Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures for task performance. Kinect motion tracking resulted in lower performance as compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling the virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity with established input methods.
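A minimal sketch of how artificial vision is commonly rendered in SPV studies of this kind: a camera frame is block-averaged down to a coarse electrode grid and quantized to a few brightness levels. The grid size and number of levels below are illustrative assumptions, not the authors' simulator:

```python
import numpy as np

# Sketch of a phosphene-style SPV rendering: block-average a grayscale image
# into a coarse grid and quantize each cell's brightness. Grid dimensions
# and level count are illustrative assumptions.

def phosphenize(img, grid=(10, 10), levels=4):
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    gh, gw = grid
    # Block-average into gh x gw cells (assumes h and w divisible by grid)
    cells = img.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Quantize each cell to a small number of brightness levels
    return np.round(cells / cells.max() * (levels - 1)) / (levels - 1)

frame = np.random.default_rng(0).random((100, 100))
ph = phosphenize(frame)   # 10 x 10 coarse "phosphene" image in [0, 1]
```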

  15. Using an auditory sensory substitution device to augment vision: evidence from eye movements.

    PubMed

    Wright, Thomas D; Margolis, Aaron; Ward, Jamie

    2015-03-01

    Sensory substitution devices convert information normally associated with one sense into another sense (e.g. converting vision into sound). This is often done to compensate for an impaired sense. The present research uses a multimodal approach in which both natural vision and sound-from-vision ('soundscapes') are simultaneously presented. Although there is a systematic correspondence between what is seen and what is heard, we introduce a local discrepancy between the signals (the presence of a target object that is heard but not seen) that the participant is required to locate. In addition to behavioural responses, the participants' gaze is monitored with eye-tracking. Although the target object is only presented in the auditory channel, behavioural performance is enhanced when visual information relating to the non-target background is presented. In this instance, vision may be used to generate predictions about the soundscape that enhances the ability to detect the hidden auditory object. The eye-tracking data reveal that participants look for longer in the quadrant containing the auditory target even when they subsequently judge it to be located elsewhere. As such, eye movements generated by soundscapes reveal the knowledge of the target location that does not necessarily correspond to the actual judgment made. The results provide a proof of principle that multimodal sensory substitution may be of benefit to visually impaired people with some residual vision and, in normally sighted participants, for guiding search within complex scenes.
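Vision-to-sound soundscapes of the kind used here typically scan image columns over time, mapping row position to pitch and brightness to loudness (in the spirit of devices such as The vOICe). A minimal sketch, in which all parameters (frequency range, scan duration, sample rate) are illustrative assumptions:

```python
import numpy as np

# Sketch of a column-scan soundscape: columns are played left to right,
# row position sets pitch (top = high), pixel brightness sets amplitude.
# All mapping parameters are illustrative assumptions.

def image_to_soundscape(img, fs=8000, duration=1.0, fmin=200.0, fmax=2000.0):
    img = np.asarray(img, dtype=float)
    rows, cols = img.shape
    col_len = int(fs * duration / cols)
    t = np.arange(col_len) / fs
    freqs = np.geomspace(fmax, fmin, rows)   # log-spaced, top row highest
    out = []
    for c in range(cols):
        column = img[:, c][:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        out.append(column.sum(axis=0))
    return np.concatenate(out)

img = np.zeros((8, 4)); img[2, 1] = 1.0   # one bright pixel, second column
wave = image_to_soundscape(img)           # tone audible only in that segment
```

Because the mapping is systematic, a listener (or here, a reader of the waveform) can localize the bright pixel in time (horizontal position) and pitch (vertical position).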

  16. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    PubMed Central

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In 12 out of 20 tasks, end-users in the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant’s MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642

  17. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    PubMed

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In 12 out of 20 tasks, end-users in the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation.
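The ERD measure in these two records follows a standard definition: band power during the motor-imagery interval is expressed relative to a resting baseline, and a power drop (negative value) is a desynchronization. A minimal sketch with simulated signals; the sampling rate and signal model are assumptions, only the 10-12 Hz band comes from the abstract:

```python
import numpy as np

# Sketch of the standard ERD computation: upper-alpha band power during the
# task interval relative to a resting baseline. Negative => desynchronization.
# The signals and sampling rate here are simulated, not real EEG.

def band_power(x, fs, f_lo, f_hi):
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[mask].sum()

def erd_percent(baseline, task, fs, f_lo=10.0, f_hi=12.0):
    p_ref = band_power(baseline, fs, f_lo, f_hi)
    p_task = band_power(task, fs, f_lo, f_hi)
    return (p_task - p_ref) / p_ref * 100.0

fs = 250
t = np.arange(fs * 2) / fs
rng = np.random.default_rng(1)
baseline = np.sin(2 * np.pi * 11 * t) + 0.1 * rng.standard_normal(len(t))
task = 0.4 * np.sin(2 * np.pi * 11 * t) + 0.1 * rng.standard_normal(len(t))
erd = erd_percent(baseline, task, fs)   # strongly negative: a clear ERD
```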

  18. Consequences of Beauty: Effects of Rater Sex and Sexual Orientation on the Visual Exploration and Evaluation of Attractiveness in Real World Scenes

    PubMed Central

    Mitrovic, Aleksandra; Tinio, Pablo P. L.; Leder, Helmut

    2016-01-01

    One of the key behavioral effects of attractiveness is increased visual attention to attractive people. This effect is often explained in terms of evolutionary adaptations, such as attractiveness being an indicator of good health. Other factors could influence this effect. In the present study, we explored the modulating role of sexual orientation on the effects of attractiveness on exploratory visual behavior. Heterosexual and homosexual men and women viewed natural-looking scenes that depicted either two women or two men who varied systematically in levels of attractiveness (based on a pre-study). Participants’ eye movements and attractiveness ratings toward the faces of the depicted people were recorded. The results showed that although attractiveness had the largest influence on participants’ behaviors, participants’ sexual orientations strongly modulated the effects. With the exception of homosexual women, all participant groups looked longer and more often at attractive faces that corresponded with their sexual orientations. Interestingly, heterosexual and homosexual men and homosexual women looked longer and more often at the less attractive face of their non-preferred sex than the less attractive face of their preferred sex, evidence that less attractive faces of the preferred sex might have an aversive character. These findings provide evidence for the important role that sexual orientation plays in guiding visual exploratory behavior and evaluations of the attractiveness of others. PMID:27047365

  19. Consequences of Beauty: Effects of Rater Sex and Sexual Orientation on the Visual Exploration and Evaluation of Attractiveness in Real World Scenes.

    PubMed

    Mitrovic, Aleksandra; Tinio, Pablo P L; Leder, Helmut

    2016-01-01

    One of the key behavioral effects of attractiveness is increased visual attention to attractive people. This effect is often explained in terms of evolutionary adaptations, such as attractiveness being an indicator of good health. Other factors could influence this effect. In the present study, we explored the modulating role of sexual orientation on the effects of attractiveness on exploratory visual behavior. Heterosexual and homosexual men and women viewed natural-looking scenes that depicted either two women or two men who varied systematically in levels of attractiveness (based on a pre-study). Participants' eye movements and attractiveness ratings toward the faces of the depicted people were recorded. The results showed that although attractiveness had the largest influence on participants' behaviors, participants' sexual orientations strongly modulated the effects. With the exception of homosexual women, all participant groups looked longer and more often at attractive faces that corresponded with their sexual orientations. Interestingly, heterosexual and homosexual men and homosexual women looked longer and more often at the less attractive face of their non-preferred sex than the less attractive face of their preferred sex, evidence that less attractive faces of the preferred sex might have an aversive character. These findings provide evidence for the important role that sexual orientation plays in guiding visual exploratory behavior and evaluations of the attractiveness of others.

  20. Foveal analysis and peripheral selection during active visual sampling

    PubMed Central

    Ludwig, Casimir J. H.; Davies, J. Rhys; Eckstein, Miguel P.

    2014-01-01

    Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. PMID:24385588
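The noise-classification logic described in this abstract is a form of reverse correlation: noise is injected into a stimulus feature, the observer makes binary decisions, and correlating the noise with the decisions reveals which time points (or locations) drove behavior. A minimal sketch with a simulated observer; the observer model is an illustrative assumption, not the authors' exact method:

```python
import numpy as np

# Sketch of noise classification (reverse correlation): the classification
# image is the mean noise on "yes" trials minus "no" trials, which recovers
# the frames the (simulated) observer actually used.

rng = np.random.default_rng(0)
n_trials, n_frames = 5000, 10
noise = rng.standard_normal((n_trials, n_frames))

# Simulated observer: only frames 3-5 influence the decision
template = np.zeros(n_frames)
template[3:6] = 1.0
decisions = (noise @ template + 0.5 * rng.standard_normal(n_trials)) > 0

# Classification image: conditional mean difference of the noise
ci = noise[decisions].mean(axis=0) - noise[~decisions].mean(axis=0)
influential = np.argsort(ci)[-3:]   # frames with the largest weights
```

The same conditional-averaging idea, applied within single fixations, is what lets the authors separate the uptake of foveal (orientation) and peripheral (contrast) information in time.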

  1. Optical phonetics and visual perception of lexical and phrasal stress in English.

    PubMed

    Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L; Cho, Taehong; Alwan, Abeer

    2009-01-01

    In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of facial movement, which were generally larger and faster in the stressed condition. In a visual perception experiment, 16 perceivers identified the location of stress in forced-choice judgments of video clips of these utterances (without audio). Phrasal stress was better perceived than lexical stress. The relation of the visual intelligibility of the prosody of these utterances to the optical characteristics of their production was analyzed to determine which cues are associated with successful visual perception. While most optical measures were correlated with perception performance, chin measures, especially Chin Opening Displacement, contributed the most to correct perception independently of the other measures. Thus, our results indicate that the information for visual stress perception is mainly associated with mouth opening movements.

  2. Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    PubMed Central

    Rigoulot, Simon; Pell, Marc D.

    2012-01-01

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality) which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance (Someone migged the pazing) uttered in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows ([0–1250 ms], [1250–2500 ms], [2500–5000 ms]) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions. PMID:22303454

  3. Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.

    PubMed

    Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli

    2018-06-08

    Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Biases in rhythmic sensorimotor coordination: effects of modality and intentionality.

    PubMed

    Debats, Nienke B; Ridderikhoff, Arne; de Boer, Betteco J; Peper, C Lieke E

    2013-08-01

    Sensorimotor biases were examined for intentional (tracking task) and unintentional (distractor task) rhythmic coordination. The tracking task involved unimanual tracking of either an oscillating visual signal or the passive movements of the contralateral hand (proprioceptive signal). In both conditions the required coordination patterns (isodirectional and mirror-symmetric) were defined relative to the body midline and the hands were not visible. For proprioceptive tracking the two patterns did not differ in stability, whereas for visual tracking the isodirectional pattern was performed more stably than the mirror-symmetric pattern. However, when visual feedback about the unimanual hand movements was provided during visual tracking, the isodirectional pattern ceased to be dominant. Together these results indicated that the stability of the coordination patterns did not depend on the modality of the target signal per se, but on the combination of sensory signals that needed to be processed (unimodal vs. cross-modal). The distractor task entailed rhythmic unimanual movements during which a rhythmic visual or proprioceptive distractor signal had to be ignored. The observed biases were similar as for intentional coordination, suggesting that intentionality did not affect the underlying sensorimotor processes qualitatively. Intentional tracking was characterized by active sensory pursuit, through muscle activity in the passively moved arm (proprioceptive tracking task) and rhythmic eye movements (visual tracking task). Presumably this pursuit afforded predictive information serving the coordination process. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex

    PubMed Central

    Sunkara, Adhira

    2015-01-01

    As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417

  6. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  7. Creative Movement Classes for Visually Handicapped Children in a Public School Setting

    ERIC Educational Resources Information Center

    Resnick, Rose

    1973-01-01

    To counteract the lack of healthful physical activities offered for visually handicapped children in San Francisco public schools, a creative movement program was implemented for eight girls and four boys, 6 to 20 years of age, who were blind or partially sighted, and ranged in intelligence from retarded to bright. (MC)

  8. The Need for Motor Development Programs for Visually Impaired Preschoolers.

    ERIC Educational Resources Information Center

    Palazesi, Margot A.

    1986-01-01

    The paper advocates the development of movement programs for preschool visually impaired children to compensate for their orientation deficits. The author asserts that skills necessary for acquisition of spatial concepts should be taught through movement programs at an early age in the normal developmental sequence instead of attempting to remedy…

  9. Speaker Identity Supports Phonetic Category Learning

    ERIC Educational Resources Information Center

    Mani, Nivedita; Schneider, Signe

    2013-01-01

    Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…

  10. Visual Short-Term Memory During Smooth Pursuit Eye Movements

    ERIC Educational Resources Information Center

    Kerzel, Dirk; Ziegler, Nathalie E.

    2005-01-01

    Visual short-term memory (VSTM) was probed while observers performed smooth pursuit eye movements. Smooth pursuit keeps a moving object stabilized in the fovea. VSTM capacity for position was reduced during smooth pursuit compared with a condition with eye fixation. There was no difference between a condition in which the items were approximately…

  11. Effects of kinesthetic versus visual imagery practice on two technical dance movements: a pilot study.

    PubMed

    Girón, Elizabeth Coker; McIsaac, Tara; Nilsen, Dawn

    2012-03-01

    Motor imagery is a type of mental practice that involves imagining the body performing a movement in the absence of motor output. Dance training traditionally incorporates mental practice techniques, but quantitative effects of motor imagery on the performance of dance movements are largely unknown. This pilot study compared the effects of two different imagery modalities, external visual imagery and kinesthetic imagery, on pelvis and hip kinematics during two technical dance movements, plié and sauté. Each of three female dance students (mean age = 19.7 years, mean years of training = 10.7) was assigned to use a type of imagery practice: visual imagery, kinesthetic imagery, or no imagery. Effects of motor imagery on peak external hip rotation varied by both modality and task. Kinesthetic imagery increased peak external hip rotation for pliés, while visual imagery increased peak external hip rotation for sautés. Findings suggest that the success of motor imagery in improving performance may be task-specific. Dancers may benefit from matching imagery modality to technical tasks in order to improve alignment and thereby avoid chronic injury.

  12. Semantic Enrichment of Movement Behavior with Foursquare--A Visual Analytics Approach.

    PubMed

    Krueger, Robert; Thom, Dennis; Ertl, Thomas

    2015-08-01

    In recent years, many approaches have been developed that efficiently and effectively visualize movement data, e.g., by providing suitable aggregation strategies to reduce visual clutter. Analysts can use them to identify distinct movement patterns, such as trajectories with similar direction, form, length, and speed. However, less effort has been spent on finding the semantics behind movements, i.e., why somebody or something is moving. This can be of great value for different applications, such as product usage and consumer analysis, to better understand urban dynamics, and to improve situational awareness. Unfortunately, semantic information often gets lost when data is recorded. Thus, we suggest enriching trajectory data with POI information using social media services and show how semantic insights can be gained. Furthermore, we show how to handle semantic uncertainties in time and space, which result from noisy, imprecise, and missing data, by introducing a POI decision model in combination with highly interactive visualizations. Finally, we evaluate our approach with two case studies on a large electric scooter data set and test our model on data with known ground truth.
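The core enrichment step can be sketched as matching each trajectory stop point to the nearest point of interest within a distance threshold, with a confidence that shrinks with distance. The POI list, threshold, and scoring rule below are illustrative assumptions, not the paper's decision model:

```python
import math

# Sketch: annotate a trajectory stop with the nearest POI within max_dist
# meters. Distance is great-circle (haversine); the linear confidence rule
# is an illustrative stand-in for a real uncertainty model.

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def annotate_stop(stop, pois, max_dist=100.0):
    """stop: (lat, lon); pois: list of (name, lat, lon) tuples."""
    best = None
    for name, lat, lon in pois:
        d = haversine_m(stop[0], stop[1], lat, lon)
        if d <= max_dist and (best is None or d < best[1]):
            best = (name, d)
    if best is None:
        return None
    return {'poi': best[0], 'dist_m': round(best[1], 1),
            'confidence': 1.0 - best[1] / max_dist}

pois = [('cafe', 48.7758, 9.1829), ('museum', 48.7790, 9.1800)]
label = annotate_stop((48.7759, 9.1830), pois)   # nearest within range: 'cafe'
```

Ambiguity arises when several POIs fall inside the threshold, which is exactly where the paper's interactive decision model takes over.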

  13. Game-Based Augmented Visual Feedback for Enlarging Speech Movements in Parkinson's Disease.

    PubMed

    Yunusova, Yana; Kearney, Elaine; Kulkarni, Madhura; Haworth, Brandon; Baljko, Melanie; Faloutsos, Petros

    2017-06-22

    The purpose of this pilot study was to demonstrate the effect of augmented visual feedback on acquisition and short-term retention of a relatively simple instruction to increase movement amplitude during speaking tasks in patients with dysarthria due to Parkinson's disease (PD). Nine patients diagnosed with PD, hypokinetic dysarthria, and impaired speech intelligibility participated in a training program aimed at increasing the size of their articulatory (tongue) movements during sentences. Two sessions were conducted: a baseline and training session, followed by a retention session 48 hr later. At baseline, sentences were produced at normal, loud, and clear speaking conditions. Game-based visual feedback regarding the size of the articulatory working space (AWS) was presented during training. Eight of nine participants benefited from training, increasing their sentence AWS to a greater degree following feedback as compared with the baseline loud and clear conditions. The majority of participants were able to demonstrate the learned skill at the retention session. This study demonstrated the feasibility of augmented visual feedback via articulatory kinematics for training movement enlargement in patients with hypokinesia due to PD. https://doi.org/10.23641/asha.5116840.
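The articulatory working space (AWS) fed back to patients is commonly quantified as the convex hull of tongue-sensor positions during speech. A minimal 2D sketch using Andrew's monotone chain and the shoelace formula; in practice the measure is typically computed in 3D from electromagnetic articulography data, and the sample positions below are invented:

```python
# Sketch: 2D articulatory working space as the area of the convex hull of
# tongue positions. Real AWS measures are usually 3D; this is illustrative.

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(hull):                   # shoelace formula
    n = len(hull)
    s = sum(hull[i][0]*hull[(i+1) % n][1] - hull[(i+1) % n][0]*hull[i][1]
            for i in range(n))
    return abs(s) / 2.0

# Invented tongue positions (mm): a 10 mm square plus an interior point
positions = [(0, 0), (10, 0), (10, 10), (0, 10), (5, 5)]
aws = hull_area(convex_hull(positions))   # 100.0 mm^2
```

Feedback then amounts to comparing this area against a target: larger hulls correspond to the enlarged articulatory movements the training aims for.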

  14. Kinesthetic information disambiguates visual motion signals.

    PubMed

    Hu, Bo; Knill, David C

    2010-05-25

    Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.
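The aperture problem invoked here has a simple geometric core: through an aperture, only the velocity component normal to a 1D grating is recoverable, so many distinct true motions produce the same percept. A short sketch (the example vectors are illustrative):

```python
import numpy as np

# Sketch of the aperture problem: the perceived velocity of a 1D grating seen
# through an aperture is the projection of the true velocity onto the grating
# normal, so different true motions can yield identical percepts.

def perceived_velocity(true_v, grating_normal):
    n = np.asarray(grating_normal, dtype=float)
    n /= np.linalg.norm(n)
    return np.dot(true_v, n) * n

n = (1.0, 0.0)   # vertical grating; normal points rightward
v1 = perceived_velocity(np.array([2.0, 0.0]), n)
v2 = perceived_velocity(np.array([2.0, 5.0]), n)   # same percept despite drift
```

It is this ambiguity that the kinesthetic signal from the moving hand resolves, pulling the perceived direction toward the hand's motion.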

  15. Memory and decision making in the frontal cortex during visual motion processing for smooth pursuit eye movements.

    PubMed

    Shichinohe, Natsuko; Akao, Teppei; Kurkin, Sergei; Fukushima, Junko; Kaneko, Chris R S; Fukushima, Kikuro

    2009-06-11

    Cortical motor areas are thought to contribute "higher-order processing," but what that processing might include is unknown. Previous studies of the smooth pursuit-related discharge of supplementary eye field (SEF) neurons have not distinguished activity associated with the preparation for pursuit from discharge related to processing or memory of the target motion signals. Using a memory-based task designed to separate these components, we show that the SEF contains signals coding retinal image-slip-velocity, memory, and assessment of visual motion direction, the decision of whether to pursue, and the preparation for pursuit eye movements. Bilateral muscimol injection into SEF resulted in directional errors in smooth pursuit, errors of whether to pursue, and impairment of initial correct eye movements. These results suggest an important role for the SEF in memory and assessment of visual motion direction and the programming of appropriate pursuit eye movements.

  16. Gunslinger Effect and Müller-Lyer Illusion: Examining Early Visual Information Processing for Late Limb-Target Control.

    PubMed

    Roberts, James W; Lyons, James; Garcia, Daniel B L; Burgess, Raquel; Elliott, Digby

    2017-07-01

    The multiple process model contends that there are two forms of online control for manual aiming: impulse regulation and limb-target control. This study examined the impact of visual information processing on limb-target control. We amalgamated the Gunslinger protocol (i.e., faster movements following a reaction to an external trigger compared with the spontaneous initiation of movement) and Müller-Lyer target configurations into the same aiming protocol. The results showed that the Gunslinger effect was isolated to the early portions of the movement (peak acceleration and peak velocity). Reacted aims reached a longer displacement at peak deceleration, but showed no differences at movement termination. The target configurations manifested terminal biases consistent with the illusion. We suggest that the visual information processing demands imposed by reacted aims can be accommodated by integrating early feedforward information for limb-target control.

  17. Moving Stimuli Facilitate Synchronization But Not Temporal Perception

    PubMed Central

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419

  19. AMERICAN STANDARD GUIDE FOR SCHOOL LIGHTING.

    ERIC Educational Resources Information Center

    Illuminating Engineering Society, New York, NY.

    This is a guide for school lighting, designed for educators as well as architects. It makes use of recent research, notably the Blackwell report on evaluation of visual tasks. The guide begins with an overview of changing goals and needs of school lighting, and a tabulation of common classroom visual tasks that require variations in lighting.…

  20. Cognitive and Psychiatric Phenotypes of Movement Disorders in Children: A Systematic Review

    ERIC Educational Resources Information Center

    Ben-Pazi, Hilla; Jaworowski, Solomon; Shalev, Ruth S

    2011-01-01

    Aim: The cognitive and psychiatric aspects of adult movement disorders are well established, but specific behavioural profiles for paediatric movement disorders have not been delineated. Knowledge of non-motor phenotypes may guide treatment and determine which symptoms are suggestive of a specific movement disorder and which indicate medication…
