Sample records for visually targeted reaching

  1. Visual cortex activation in kinesthetic guidance of reaching.

    PubMed

    Darling, W G; Seitz, R J; Peltier, S; Tellmann, L; Butler, A J

    2007-06-01

    The purpose of this research was to determine the cortical circuit involved in encoding and controlling kinesthetically guided reaching movements. We used (15)O-butanol positron emission tomography in ten blindfolded able-bodied volunteers in a factorial experiment in which the arm (left/right) used to encode the target location and to reach back to the remembered location, and the hemispace of the target location (left/right side of the midsagittal plane), were varied systematically. During encoding of a target the experimenter guided the hand to touch the index fingertip to an external target and then returned the hand to the start location. After a short delay the subject voluntarily moved the same hand back to the remembered target location. SPM99 analysis of the PET data contrasting left versus right hand reaching showed increased (P < 0.05, corrected) neural activity in the sensorimotor cortex, premotor cortex and posterior parietal lobule (PPL) contralateral to the moving hand. Additional neural activation was observed in the prefrontal cortex and in visual association areas of the occipital and parietal lobes, both contralateral and ipsilateral to the reaching hand. There was no statistically significant effect of target location in the left versus right hemispace, nor was there an interaction of hand and hemispace effects. Structural equation modeling showed that parietal lobe visual association areas contributed to kinesthetic processing by both hands, but occipital lobe visual areas contributed only during dominant-hand kinesthetic processing. This visual processing may also involve visualization of the kinesthetically guided target location and, when reaching to kinesthetic targets, use of the same network employed to guide reaches to visual targets. The present work clearly demonstrates a network for kinesthetic processing that includes higher visual processing areas in the PPL for both upper limbs and processing in occipital lobe visual areas for the dominant limb.

  2. Effect of Visual Field Presentation on Action Planning (Estimating Reach) in Children

    ERIC Educational Resources Information Center

    Gabbard, Carl; Cordova, Alberto

    2012-01-01

    In this article, the authors examined the effects of target information presented in different visual fields (lower, upper, central) on estimates of reach via use of motor imagery in children (5-11 years old) and young adults. Results indicated an advantage for estimating reach movements for targets placed in the lower visual field (LoVF), with all…

  3. Seeing the hand while reaching speeds up on-line responses to a sudden change in target position

    PubMed Central

    Reichenbach, Alexandra; Thielscher, Axel; Peer, Angelika; Bülthoff, Heinrich H; Bresciani, Jean-Pierre

    2009-01-01

    Goal-directed movements are executed under the permanent supervision of the central nervous system, which continuously processes sensory afferents and triggers on-line corrections if movement accuracy seems to be compromised. For arm reaching movements, visual information about the hand plays an important role in this supervision, notably improving reaching accuracy. Here, we tested whether visual feedback of the hand affects the latency of on-line responses to an external perturbation when reaching for a visual target. Two types of perturbation were used: the visual perturbation consisted of changing the spatial location of the target, and the kinesthetic perturbation of applying a force step to the reaching arm. For both types of perturbation, the hand trajectory and the electromyographic (EMG) activity of shoulder muscles were analysed to assess whether visual feedback of the hand speeds up on-line corrections. Without visual feedback of the hand, on-line responses to the visual perturbation exhibited the longest latency. This latency was reduced by about 10% when visual feedback of the hand was provided. On the other hand, the latency of on-line responses to the kinesthetic perturbation was independent of the availability of visual feedback of the hand. In a control experiment, we tested the effect of visual feedback of the hand on visual and kinesthetic two-choice reaction times – for which coordinate transformation is not critical. Two-choice reaction times were never facilitated by visual feedback of the hand. Taken together, our results suggest that visual feedback of the hand speeds up on-line corrections when the position of the visual target with respect to the body must be re-computed during movement execution. This facilitation probably results from the possibility of mapping hand- and target-related information in a common visual reference frame. PMID:19675067

  4. Calibrating Reach Distance to Visual Targets

    ERIC Educational Resources Information Center

    Mon-Williams, Mark; Bingham, Geoffrey P.

    2007-01-01

    The authors investigated the calibration of reach distance by gradually distorting the haptic feedback obtained when participants grasped visible target objects. The authors found that the modified relationship between visually specified distance and reach distance could be captured by a straight-line mapping function. Thus, the relation could be…
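
The straight-line mapping described in this record can be sketched with a toy fit (all numbers here are invented for illustration, not the authors' data): simulated reach distances are generated as a linear function of visually specified distance, and an ordinary least-squares line recovers the slope and intercept of the distorted visual-to-reach relation.

```python
import numpy as np

# Hypothetical sketch: distorted haptic feedback makes the reach
# distance a linear function of the visually specified distance.
visual = np.linspace(20.0, 50.0, 7)    # visually specified distances (cm)
reach = 0.8 * visual + 3.0             # simulated calibrated reach distances (cm)

# A straight-line (degree-1) fit captures the modified mapping.
slope, intercept = np.polyfit(visual, reach, deg=1)
```

With noiseless data the fit recovers the generating slope and intercept exactly; with real reach data the same fit would summarize the calibrated relation.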

  5. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model.

    PubMed

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander

    2015-04-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.
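
The kind of model described here can be caricatured with a simple delta-rule sketch (my own simplification, not the authors' network; all parameters invented): a motor-goal map built from Gaussian radial-basis units is updated by the target error at one trained target, and the learned correction then generalizes locally around that target. This sketch reproduces only the local character of generalization, not the reported skew.

```python
import numpy as np

centers = np.arange(0.0, 360.0, 10.0)       # preferred target directions (deg)

def basis(theta, width=20.0):
    """Normalized Gaussian tuning over circular target direction."""
    d = np.abs((theta - centers + 180.0) % 360.0 - 180.0)
    b = np.exp(-0.5 * (d / width) ** 2)
    return b / b.sum()

w = np.zeros_like(centers)                  # learned goal-correction weights
trained, jump, lr = 90.0, 6.0, 0.1          # trained target and jump size (deg)

for _ in range(500):                        # repeated perturbed reaches
    b = basis(trained)
    error = jump - w @ b                    # residual target error
    w += lr * error * b                     # delta-rule weight update

def correction(theta):
    """Learned reach correction (deg) at an arbitrary probe target."""
    return w @ basis(theta)
```

Because the update is proportional to the local basis activation, the correction approaches the full jump at the trained target, decays for nearby probes, and vanishes for distant ones.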

  6. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model

    PubMed Central

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher

    2015-01-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. PMID:25609106

  7. Online control of reaching and pointing to visual, auditory, and multimodal targets: Effects of target modality and method of determining correction latency.

    PubMed

    Holmes, Nicholas P; Dakwar, Azar R

    2015-12-01

    Movements aimed towards objects occasionally have to be adjusted when the object moves. These online adjustments can be very rapid, occurring in as little as 100ms. More is known about the latency and neural basis of online control of movements to visual than to auditory target objects. We examined the latency of online corrections in reaching-to-point movements to visual and auditory targets that could change side and/or modality at movement onset. Visual or auditory targets were presented on the left or right sides, and participants were instructed to reach and point to them as quickly and as accurately as possible. On half of the trials, the targets changed side at movement onset, and participants had to correct their movements to point to the new target location as quickly as possible. Given different published approaches to measuring the latency for initiating movement corrections, we examined several different methods systematically. What we describe here as the optimal methods involved fitting a straight-line model to the velocity of the correction movement, rather than using a statistical criterion to determine correction onset. In the multimodal experiment, these model-fitting methods produced significantly lower latencies for correcting movements away from the auditory targets than away from the visual targets. Our results confirm that rapid online correction is possible for auditory targets, but further work is required to determine whether the underlying control system for reaching and pointing movements is the same for auditory and visual targets. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. The visual properties of proximal and remote distractors differentially influence reaching planning times: evidence from pro- and antipointing tasks.

    PubMed

    Heath, Matthew; DeSimone, Jesse C

    2016-11-01

    The saccade literature has consistently reported that the presentation of a distractor remote to a target increases reaction time (i.e., the remote distractor effect: RDE). As well, some studies have shown that a proximal distractor facilitates saccade reaction time. The lateral inhibition hypothesis attributes the aforementioned findings to the inhibition/facilitation of target selection mechanisms operating in the intermediate layers of the superior colliculus (SC). Although the impact of remote and proximal distractors has been extensively examined in the saccade literature, a paucity of work has examined whether such findings generalize to reaching responses, and to our knowledge, no work has directly contrasted reaching RTs for remote and proximal distractors. To that end, the present investigation had participants complete reaches in target-only trials (i.e., TO) and when distractors were presented at "remote" (i.e., the opposite visual field) and "proximal" (i.e., the same visual field) locations along the same horizontal meridian as the target. As well, participants reached to the target's veridical (i.e., propointing) and mirror-symmetrical (i.e., antipointing) location. The basis for contrasting pro- and antipointing was to determine whether the distractor's visual- or motor-related activity influences reaching RTs. Results demonstrated that remote and proximal distractors, respectively, increased and decreased reaching RTs, and the effect was consistent for pro- and antipointing. Accordingly, the results indicate that the RDE and the facilitatory effects of a proximal distractor are effector independent and provide behavioral support for the contention that the SC serves as a general target selection mechanism. As well, the comparable distractor-related effects for pro- and antipointing trials indicate that the visual properties of remote and proximal distractors respectively inhibit and facilitate target selection.

  9. Role of the posterior parietal cortex in updating reaching movements to a visual target.

    PubMed

    Desmurget, M; Epstein, C M; Turner, R S; Prablanc, C; Alexander, G E; Grafton, S T

    1999-06-01

    The exact role of posterior parietal cortex (PPC) in visually directed reaching is unknown. We propose that, by building an internal representation of instantaneous hand location, PPC computes a dynamic motor error used by motor centers to correct the ongoing trajectory. With unseen right hands, five subjects pointed to visual targets that either remained stationary or moved during saccadic eye movements. Transcranial magnetic stimulation (TMS) was applied over the left PPC during target presentation. Stimulation disrupted path corrections that normally occur in response to target jumps, but had no effect on those directed at stationary targets. Furthermore, left-hand movement corrections were not blocked, ruling out visual or oculomotor effects of stimulation.

  10. A solution to the online guidance problem for targeted reaches: proportional rate control using relative disparity tau.

    PubMed

    Anderson, Joe; Bingham, Geoffrey P

    2010-09-01

    We provide a solution to a major problem in visually guided reaching. Research has shown that binocular vision plays an important role in the online visual guidance of reaching, but the visual information and strategy used to guide a reach remains unknown. We propose a new theory of visual guidance of reaching including a new information variable, tau(alpha) (relative disparity tau), and a novel control strategy that allows actors to guide their reach trajectories visually by maintaining a constant proportion between tau(alpha) and its rate of change. The dynamical model couples the information to the reaching movement to generate trajectories characteristic of human reaching. We tested the theory in two experiments in which participants reached under conditions of darkness to guide a visible point either on a sliding apparatus or on their finger to a point-light target in depth. Slider apparatus controlled for a simple mapping from visual to proprioceptive space. When reaching with their finger, participants were forced, by perturbation of visual information used for feedforward control, to use online control with only binocular disparity-based information for guidance. Statistical analyses of trajectories strongly supported the theory. Simulations of the model were compared statistically to actual reaching trajectories. The results supported the theory, showing that tau(alpha) provides a source of information for the control of visually guided reaching and that participants use this information in a proportional rate control strategy.
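
A feel for tau-based guidance can be had from a minimal simulation. The code below implements the classic constant tau-dot law from tau theory, a close relative of the strategy described here (the paper's variable is relative-disparity tau and its proportional rate strategy; this sketch uses a generic first-order time-to-contact with made-up numbers, purely to illustrate the idea of regulating tau's rate of change).

```python
import numpy as np

def tau_dot_reach(x0, tau0, k, dt=0.001):
    """Close a gap x while its time-to-contact tau = x / v is made to
    fall linearly (constant tau-dot = -k): a classic tau-theory control
    law, used here only as an illustrative stand-in."""
    xs, x, t = [x0], x0, 0.0
    while x > 1e-4:
        tau = tau0 - k * t
        if tau <= dt:            # tau exhausted: contact reached
            break
        v = x / tau              # closure velocity demanded by current tau
        x -= v * dt
        t += dt
        xs.append(x)
    return np.array(xs)

# Invented initial gap (m), initial tau (s), and coupling constant.
traj = tau_dot_reach(x0=0.3, tau0=1.0, k=0.5)
```

With k = 0.5 the analytic gap profile is x0*(1 - k*t/tau0)^2, a smooth deceleration to contact, which is the qualitative trajectory shape such laws are meant to produce.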

  11. Goal-directed action is automatically biased towards looming motion

    PubMed Central

    Moher, Jeff; Sit, Jonathan; Song, Joo-Hyun

    2014-01-01

    It is known that looming motion can capture attention regardless of an observer’s intentions. Real-world behavior, however, frequently involves not just attentional selection, but selection for action. Thus, it is important to understand the impact of looming motion on goal-directed action to gain a broader perspective on how stimulus properties bias human behavior. We presented participants with a visually-guided reaching task in which they pointed to a target letter presented among non-target distractors. On some trials, one of the pre-masks at the location of the upcoming search objects grew rapidly in size, creating the appearance of a “looming” target or distractor. Even though looming motion did not predict the target location, the time required to reach to the target was shorter when the target loomed compared to when a distractor loomed. Furthermore, reach movement trajectories were pulled towards the location of a looming distractor when one was present, a pull that was greater still when the looming motion was on a collision path with the participant. We also contrast reaching data with data from a similarly designed visual search task requiring keypress responses. This comparison underscores the sensitivity of visually-guided reaching data, as some experimental manipulations, such as looming motion path, affected reach trajectories but not keypress measures. Together, the results demonstrate that looming motion biases visually-guided action regardless of an observer’s current behavioral goals, affecting not only the time required to reach to targets but also the path of the observer’s hand movement itself. PMID:25159287

  12. Spatial effects of shifting prisms on properties of posterior parietal cortex neurons

    PubMed Central

    Karkhanis, Anushree N; Heider, Barbara; Silva, Fabian Muñoz; Siegel, Ralph M

    2014-01-01

    The posterior parietal cortex contains neurons that respond to visual stimulation and motor behaviour. The objective of the current study was to test short-term adaptation of neurons in macaque area 7a and the dorsal prelunate during visually guided reaching using Fresnel prisms that displaced the visual field. The visual perturbation shifted the eye position and created a mismatch between perceived and actual reach location. Two non-human primates were trained to reach to visual targets before, during and after prism exposure while fixating the reach target in different locations. They were required to reach to the physical location of the reach target and not the perceived, displaced location. While behavioural adaptation to the prisms occurred within a few trials, the majority of neurons responded to the distortion either with substantial changes in spatial eye position tuning or with changes in overall firing rate. These changes persisted even after prism removal. The spatial changes were not correlated with the direction of the induced prism shift. The transformation of gain fields between conditions was estimated by calculating the translation and rotation in Euler angles. Rotations and translations of the horizontal and vertical spatial components occurred in a systematic manner for the population of neurons, suggesting that the posterior parietal cortex retains a constant representation of the visual field, remapping between experimental conditions. PMID:24928956

  13. Reaching with cerebral tunnel vision.

    PubMed

    Rizzo, M; Darling, W

    1997-01-01

    We studied reaching movements in a 48-year-old man with bilateral lesions of the calcarine cortex which spared the foveal representation and caused severe tunnel vision. Three-dimensional (3D) reconstruction of brain MR images showed no evidence of damage beyond area 18. The patient could not see his hand during reaching movements, providing a unique opportunity to test the role of peripheral visual cues in limb control. Optoelectronic recordings of upper limb movements showed normal hand paths and trajectories to fixated extrinsic targets. There was no slowing, tremor, or ataxia. Self-bound movements were also preserved. Analyses of limb orientation at the endpoints of reaches showed that the patient could transform an extrinsic target's visual coordinates to an appropriate upper limb configuration for target acquisition. There was no disadvantage created by blocking the view of the reaching arm. Moreover, the patient could not locate targets presented in the hemianopic fields by pointing. Thus, residual nonconscious vision or 'blindsight' in the aberrant fields was not a factor in our patient's reaching performance. The findings in this study show that peripheral visual cues on the position and velocity of the moving limb are not critical to the control of goal directed reaches, at least not until the hand is close to target. Other cues such as kinesthetic feedback can suffice. It also appears that the visuomotor transformations for reaching do not take place before area 19 in humans.

  14. Neural correlates of target selection for reaching movements in superior colliculus

    PubMed Central

    McPeek, Robert M.

    2014-01-01

    We recently demonstrated that inactivation of the primate superior colliculus (SC) causes a deficit in target selection for arm-reaching movements when the reach target is located in the inactivated field (Song JH, Rafal RD, McPeek RM. Proc Natl Acad Sci USA 108: E1433–E1440, 2011). This is consistent with the notion that the SC is part of a general-purpose target selection network beyond eye movements. To understand better the role of SC activity in reach target selection, we examined how individual SC neurons in the intermediate layers discriminate a reach target from distractors. Monkeys reached to touch a color oddball target among distractors while maintaining fixation. We found that many SC neurons robustly discriminate the goal of the reaching movement before the onset of the reach even though no saccade is made. To identify these cells in the context of conventional SC cell classification schemes, we also recorded visual, delay-period, and saccade-related responses in a delayed saccade task. On average, SC cells that discriminated the reach target from distractors showed significantly higher visual and delay-period activity than nondiscriminating cells, but there was no significant difference in saccade-related activity. Whereas a majority of SC neurons that discriminated the reach target showed significant delay-period activity, all nondiscriminating cells lacked such activity. We also found that some cells without delay-period activity did discriminate the reach target from distractors. We conclude that the majority of intermediate-layer SC cells discriminate a reach target from distractors, consistent with the idea that the SC contains a priority map used for effector-independent target selection. PMID:25505107

  15. Memory for Spatial Locations in a Patient with Near Space Neglect and Optic Ataxia: Involvement of the Occipitotemporal Stream

    PubMed Central

    Chieffi, Sergio; Messina, Giovanni; Messina, Antonietta; Villano, Ines; Monda, Vincenzo; Ambra, Ferdinando Ivano; Garofalo, Elisabetta; Romano, Felice; Mollica, Maria Pina; Monda, Marcellino; Iavarone, Alessandro

    2017-01-01

    Previous studies suggested that the occipitoparietal stream orients attention toward the near/lower space and is involved in immediate reaching, whereas the occipitotemporal stream orients attention toward the far/upper space and is involved in delayed reaching. In the present study, we investigated the role of the occipitotemporal stream in attention orienting and delayed reaching in a patient (GP) with bilateral damage to the occipitoparietal areas and optic ataxia. GP and healthy controls took part in three experiments. In experiment 1, the participants bisected lines oriented along radial, vertical, and horizontal axes. GP bisected radial lines farther, and vertical lines more above, than the controls, consistent with an attentional bias toward the far/upper space and near/lower space neglect. Experiment 2 consisted of two tasks: (1) an immediate reaching task, in which GP reached target locations under visual control, and (2) a delayed visual reaching task, in which GP and controls were asked to reach remembered target locations that had been visually presented. We measured constant and variable distance and direction errors. In the immediate reaching task, GP accurately reached target locations. In the delayed reaching task, GP overshot remembered target locations, whereas the controls undershot them. Furthermore, variable errors were greater in GP than in the controls. In experiment 3, GP and controls performed a delayed proprioceptive reaching task. Constant reaching errors did not differ between GP and the controls. However, variable direction errors were greater in GP than in the controls. We suggest that the occipitoparietal damage, and the relatively intact occipitotemporal region, produced in GP an attentional orienting bias toward the far/upper space (experiment 1). In turn, this attentional bias selectively shifted remembered visual (experiment 2), but not proprioceptive (experiment 3), target locations toward the far space. As a whole, these findings further support the hypothesis that the occipitotemporal stream is involved in delayed reaching. Furthermore, the observation that variable errors were greater in GP than in the controls in both delayed reaching tasks suggests that optic ataxia involves not only a visuo-motor but also a proprioceptivo-motor integration deficit. PMID:28620345

  16. Behavioral Investigation on the Frames of Reference Involved in Visuomotor Transformations during Peripheral Arm Reaching

    PubMed Central

    Pelle, Gina; Perrucci, Mauro Gianni; Galati, Gaspare; Fattori, Patrizia; Galletti, Claudio; Committeri, Giorgia

    2012-01-01

    Background: Several psychophysical experiments found evidence for the involvement of gaze-centered and/or body-centered coordinates in arm-movement planning and execution. Here we aimed at investigating the frames of reference involved in the visuomotor transformations for reaching towards visual targets in space by taking target eccentricity and performing hand into account. Methodology/Principal Findings: We examined several performance measures while subjects reached, in complete darkness, toward memorized targets situated at different locations relative to the gaze and/or to the body, thus distinguishing between an eye-centered and a body-centered frame of reference involved in the computation of the movement vector. The errors seem to be mainly affected by the visual hemifield of the target, independently of its location relative to the body, with an overestimation error in the horizontal reaching dimension (retinal exaggeration effect). The use of several target locations within the perifoveal visual field allowed us to reveal a novel finding, that is, a positive linear correlation between horizontal overestimation errors and target retinal eccentricity. In addition, we found an independent influence of the performing hand on the visuomotor transformation process, with each hand misreaching towards the ipsilateral side. Conclusions: While supporting the existence of an internal mechanism of target-effector integration in multiple frames of reference, the present data, especially the linear overshoot at small target eccentricities, clearly indicate the primary role of gaze-centered coding of target location in the visuomotor transformation for reaching. PMID:23272180

  17. Congenitally blind individuals rapidly adapt to coriolis force perturbations of their reaching movements

    NASA Technical Reports Server (NTRS)

    DiZio, P.; Lackner, J. R.

    2000-01-01

    Reaching movements made to visual targets in a rotating room are initially deviated in path and endpoint in the direction of transient Coriolis forces generated by the motion of the arm relative to the rotating environment. With additional reaches, movements become progressively straighter and more accurate. Such adaptation can occur even in the absence of visual feedback about movement progression or terminus. Here we examined whether congenitally blind and sighted subjects without visual feedback would demonstrate adaptation to Coriolis forces when they pointed to a haptically specified target location. Subjects were tested pre-, per-, and postrotation at 10 rpm counterclockwise. Reaching to straight ahead targets prerotation, both groups exhibited slightly curved paths. Per-rotation, both groups showed large initial deviations of movement path and curvature but within 12 reaches on average had returned to prerotation curvature levels and endpoints. Postrotation, both groups showed mirror image patterns of curvature and endpoint to the per-rotation pattern. The groups did not differ significantly on any of the performance measures. These results provide compelling evidence that motor adaptation to Coriolis perturbations can be achieved on the basis of proprioceptive, somatosensory, and motor information in the complete absence of visual experience.

  18. Choice reaching with a LEGO arm robot (CoRLEGO): The motor system guides visual attention to movement-relevant information

    PubMed Central

    Strauss, Soeren; Woodgate, Philip J.W.; Sami, Saber A.; Heinke, Dietmar

    2015-01-01

    We present an extension of a neurobiologically inspired robotics model, termed CoRLEGO (Choice reaching with a LEGO arm robot). CoRLEGO models experimental evidence from choice reaching tasks (CRT). In a CRT, participants are asked to rapidly reach out and touch an item presented on the screen. These experiments show that non-target items can divert the reaching movement away from the ideal trajectory to the target item. This is seen as evidence that attentional selection of reaching targets can leak into the motor system. Using competitive target selection and topological representations of motor parameters (dynamic neural fields), CoRLEGO is able to mimic this leakage effect. Furthermore, if the reaching target is determined by its colour oddity (i.e. a green square among red squares or vice versa), the reaching trajectories become straighter with repetitions of the target colour (colour streaks). This colour priming effect can also be modelled with CoRLEGO. The paper also presents an extension of CoRLEGO that mimics findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). The results with the new CoRLEGO suggest that feedback connections from the motor system to the brain's attentional system (parietal cortex) guide visual attention to extract movement-relevant information (i.e. colour) from visual stimuli. This paper adds to growing evidence that there is a close interaction between the motor system and the attention system. This evidence contradicts the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages. At the end of the paper, we discuss CoRLEGO's predictions and lessons for neurobiologically inspired robotics emerging from this work. PMID:26667353
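
The competitive target selection at the core of such models can be caricatured with two leaky accumulators under mutual inhibition (a drastic reduction of a dynamic neural field to two nodes; all parameters are invented for illustration): the node receiving the stronger input, the target, suppresses the weaker one, the distractor.

```python
# Two leaky accumulators with mutual inhibition: a toy caricature of
# competitive target selection in a dynamic-neural-field-style model.
# All parameters are invented for illustration.
u_target, u_distractor = 0.0, 0.0
I_target, I_distractor = 1.0, 0.6      # input strengths (target is stronger)
dt, inhibition = 0.05, 0.8

for _ in range(2000):
    out_t = max(u_target, 0.0)         # rectified outputs drive inhibition
    out_d = max(u_distractor, 0.0)
    u_target += dt * (-u_target + I_target - inhibition * out_d)
    u_distractor += dt * (-u_distractor + I_distractor - inhibition * out_t)
```

The target node settles near its input level while the distractor node is pushed below threshold; residual subthreshold distractor activity is the sort of signal that, in the full model, leaks into the reach trajectory.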

  19. Gaze anchoring guides real but not pantomime reach-to-grasp: support for the action-perception theory.

    PubMed

    Kuntz, Jessica R; Karl, Jenni M; Doan, Jon B; Whishaw, Ian Q

    2018-04-01

    Reach-to-grasp movements feature the integration of a reach directed by the extrinsic (location) features of a target and a grasp directed by the intrinsic (size, shape) features of a target. The action-perception theory suggests that integration and scaling of a reach-to-grasp movement, including its trajectory and the concurrent digit shaping, are features that depend upon online action pathways of the dorsal visuomotor stream. Scaling is much less accurate for a pantomime reach-to-grasp movement, a pretend reach with the target object absent. Thus, the action-perception theory proposes that pantomime movement is mediated by perceptual pathways of the ventral visuomotor stream. A distinguishing visual feature of a real reach-to-grasp movement is gaze anchoring, in which a participant visually fixates the target throughout the reach and disengages, often by blinking or looking away/averting the head, at about the time that the target is grasped. The present study examined whether gaze anchoring is associated with pantomime reaching. The eye and hand movements of participants were recorded as they reached for a ball of one of three sizes, located on a pedestal at arm's length, or pantomimed the same reach with the ball and pedestal absent. The kinematic measures for real reach-to-grasp movements were coupled to the location and size of the target, whereas the kinematic measures for pantomime reach-to-grasp, although grossly reflecting target features, were significantly altered. Gaze anchoring was also tightly coupled to the target for real reach-to-grasp movements, but there was no systematic focus for gaze, either in relation to the virtual target, the previous location of the target, or the participant's reaching hand, for pantomime reach-to-grasp. The presence of gaze anchoring in real reach-to-grasp vs. its absence in pantomime reach-to-grasp supports the action-perception theory that real, but not pantomime, reaches are online visuomotor actions and is discussed in relation to the neural control of real and pantomime reach-to-grasp movements.

  20. Reaching a Moveable Visual Target: Dissociations in Brain Tumour Patients

    ERIC Educational Resources Information Center

    Buiatti, Tania; Skrap, Miran; Shallice, Tim

    2013-01-01

    Damage to the posterior parietal cortex (PPC) can lead to Optic Ataxia (OA), in which patients misreach to peripheral targets. Recent research suggested that the PPC might be involved not only in simple reaching tasks toward peripheral targets, but also in changing the hand movement trajectory in real time if the target moves. The present study…

  1. Allocentric information is used for memory-guided reaching in depth: A virtual reality study.

    PubMed

    Klinghammer, Mathias; Schütz, Immo; Blohm, Gunnar; Fiehler, Katja

    2016-12-01

    Previous research has demonstrated that humans use allocentric information when reaching to remembered visual targets, but most of the studies are limited to 2D space. Here, we study allocentric coding of memorized reach targets in 3D virtual reality. In particular, we investigated the use of allocentric information for memory-guided reaching in depth and the role of binocular and monocular (object size) depth cues for coding object locations in 3D space. To this end, we presented a scene with objects on a table which were located at different distances from the observer and served as reach targets or allocentric cues. After free visual exploration of this scene and a short delay, the scene reappeared, but with one object missing (the reach target). In addition, the remaining objects were shifted horizontally or in depth. When objects were shifted in depth, we also independently manipulated object size by either magnifying or reducing it. After the scene vanished, participants reached to the remembered target location on the blank table. Reaching endpoints deviated systematically in the direction of object shifts, similar to our previous results from 2D presentations. This deviation was stronger for object shifts in depth than in the horizontal plane and independent of observer-target distance. Reaching endpoints also systematically varied with changes in object size. Our results suggest that allocentric information is used for coding targets for memory-guided reaching in depth. Thereby, retinal disparity and vergence as well as object size provide important binocular and monocular depth cues. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Touch the table before the target: contact with an underlying surface may assist the development of precise visually controlled reach and grasp movements in human infants.

    PubMed

    Karl, Jenni M; Wilson, Alexis M; Bertoli, Marisa E; Shubear, Noor S

    2018-05-24

    Multiple motor channel theory posits that skilled hand movements arise from the coordinated activation of separable neural circuits in parietofrontal cortex, each of which produces a distinct movement and responds to different sensory inputs. Prehension, the act of reaching to grasp an object, consists of at least two movements: a reach movement that transports the hand to a target location and a grasp movement that shapes and closes the hand for target acquisition. During early development, discrete pre-reach and pre-grasp movements are refined based on proprioceptive and tactile feedback, but are gradually coordinated together into a singular hand preshaping movement under feedforward visual control. The neural and behavioural factors that enable this transition are currently unknown. In an attempt to identify such factors, the present descriptive study used frame-by-frame video analysis to examine 9-, 12-, and 15-month-old infants, along with sighted and unsighted adults, as they reached to grasp small ring-shaped pieces of cereal (Cheerios) resting on a table. Compared to sighted adults, infants and unsighted adults were more likely to make initial contact with the underlying table before they contacted the target. The way in which they did so was also similar in that they generally contacted the table with the tip of the thumb and/or pinky finger, a relatively open hand, and poor reach accuracy. Despite this, infants were similar to sighted adults in that they tended to use a pincer digit, defined as the tip of the thumb or index finger, to subsequently contact the target. Only in infants was this ability related to their having made prior contact with the underlying table. The results are discussed in relation to the idea that initial contact with an underlying table or surface may assist infants in learning to use feedforward visual control to direct their digits towards a precise visual target.

  3. The consummatory origins of visually guided reaching in human infants: a dynamic integration of whole-body and upper-limb movements.

    PubMed

    Foroud, Afra; Whishaw, Ian Q

    2012-06-01

    Reaching-to-eat (skilled reaching) is a natural behaviour that involves reaching for, grasping, and withdrawing a target to be placed into the mouth for eating. It is an action performed daily by adults and is among the first complex behaviours to develop in infants. During development, visually guided reaching becomes increasingly refined to the point that grasping of small objects with precision grips of the digits occurs at about one year of age. Integration of the hand, upper limbs, and whole body is required for successful reaching, but the ontogeny of this integration has not been described. The present longitudinal study used Laban Movement Analysis, a behavioural descriptive method, to investigate the developmental progression of the use and integration of axial, proximal, and distal movements performed during visually guided reaching. Four infants (from 7 to 40 weeks of age) were presented with graspable objects (toys or food items). The first prereaching stage was associated with activation of mouth, limb, and hand movements to a visually presented target. Next, reaching attempts consisted first of advancing the head with an opening mouth, and then of advancing the head, trunk, and opening mouth together. Eventually, the axial movements gave way to the refined action of one upper limb supported by axial adjustments. These findings are discussed in relation to the biological objective of reaching, the evolutionary origins of reaching, and the decomposition of reaching after neurological injury. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Effect of visuomotor-map uncertainty on visuomotor adaptation.

    PubMed

    Saijo, Naoki; Gomi, Hiroaki

    2012-03-01

    Vision and proprioception contribute to generating hand movements. When a conflict between visual and proprioceptive feedback of hand position is introduced, the reaching movement is disturbed initially but recovers after training. Although previous studies have predominantly investigated the adaptive change in motor output, it is unclear whether the contributions of visual and proprioceptive feedback control to the reaching movement are modified by visuomotor adaptation. To investigate this, we focused on the change in proprioceptive feedback control associated with visuomotor adaptation. After adaptation to a gradually introduced visuomotor rotation, the hand reached toward the shifted position of the visual target so as to move the cursor to the visual target correctly. When the cursor feedback was occasionally eliminated (probe trial), the end point of the hand movement was biased in the visual-target direction, while the movement was initiated in the adapted direction, suggesting incomplete adaptation of proprioceptive feedback control. Moreover, after learning of an uncertain visuomotor rotation, in which the rotation angle fluctuated randomly on a trial-by-trial basis, the end-point bias in the probe trial increased, but the initial movement direction was not affected, suggesting a reduction in the adaptation level of proprioceptive feedback control. These results suggest that a change in the relative contribution of visual and proprioceptive feedback control to the reaching movement in response to visuomotor-map uncertainty is involved in visuomotor adaptation, whereas feedforward control might adapt in a manner different from that of feedback control.
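    The rotated-cursor manipulation described in such visuomotor-rotation paradigms can be sketched in a few lines. This is a generic illustration, not the authors' code; the function names, the 90° target, and the 30° rotation are illustrative assumptions:

    ```python
    import math

    def rotate_cursor(hand_xy, angle_deg):
        """Rotate the visual cursor feedback relative to the true hand
        position, as in visuomotor-rotation paradigms (illustrative)."""
        a = math.radians(angle_deg)
        x, y = hand_xy
        return (x * math.cos(a) - y * math.sin(a),
                x * math.sin(a) + y * math.cos(a))

    def adapted_hand_direction(target_deg, rotation_deg):
        """After full adaptation, the hand aims opposite to the imposed
        rotation so that the rotated cursor lands on the target."""
        return target_deg - rotation_deg
    ```

    For example, under a 30° counterclockwise rotation, a fully adapted reach to a target at 90° is aimed at 60°; a probe-trial end point drifting back toward 90° would reflect the visual-target bias the abstract describes.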

  5. Updating Target Location at the End of an Orienting Saccade Affects the Characteristics of Simple Point-to-Point Movements

    ERIC Educational Resources Information Center

    Desmurget, Michel; Turner, Robert S.; Prablanc, Claude; Russo, Gary S.; Alexander, Garret E.; Grafton, Scott T.

    2005-01-01

    Six results are reported. (a) Reaching accuracy increases when visual capture of the target is allowed (e.g., target on vs. target off at saccade onset). (b) Whatever the visual condition, trajectories diverge only after peak acceleration, suggesting that accuracy is improved through feedback mechanisms. (c) Feedback corrections are smoothly…

  6. Pivots for Pointing: Visually-Monitored Pointing Has Higher Arm Elevations than Pointing Blindfolded

    ERIC Educational Resources Information Center

    Wnuczko, Marta; Kennedy, John M.

    2011-01-01

    Observers pointing to a target viewed directly may elevate their fingertip close to the line of sight. However, pointing blindfolded, after viewing the target, they may pivot lower, from the shoulder, aligning the arm with the target as if reaching to the target. Indeed, in Experiment 1 participants elevated their arms more in visually monitored…

  7. Goal-directed reaching: the allocentric coding of target location renders an offline mode of control.

    PubMed

    Manzone, Joseph; Heath, Matthew

    2018-04-01

    Reaching to a veridical target permits an egocentric spatial code (i.e., absolute limb and target position) to effect fast and effective online trajectory corrections supported via the visuomotor networks of the dorsal visual pathway. In contrast, a response entailing decoupled spatial relations between stimulus and response is thought to be primarily mediated via an allocentric code (i.e., the position of a target relative to another external cue) laid down by the visuoperceptual networks of the ventral visual pathway. Because the ventral stream renders a temporally durable percept, it is thought that an allocentric code does not support a primarily online mode of control, but instead supports a mode wherein a response is evoked largely in advance of movement onset via central planning mechanisms (i.e., offline control). Here, we examined whether reaches defined via ego- and allocentric visual coordinates are supported via distinct control modes (i.e., online versus offline). Participants performed target-directed and allocentric reaches in limb-visible and limb-occluded conditions. Notably, in the allocentric task, participants reached to a location that matched the position of a target stimulus relative to a reference stimulus, and to examine online trajectory amendments, we computed the proportion of variance explained (i.e., R² values) by the spatial position of the limb at 75% of movement time relative to a response's ultimate movement endpoint. Target-directed trials performed with limb vision showed more online corrections and greater endpoint precision than their limb-occluded counterparts, which in turn were associated with performance metrics comparable to allocentric trials performed with and without limb vision. Accordingly, we propose that the absence of ego-motion cues (i.e., limb vision) and/or the specification of a response via an allocentric code renders motor output served via the 'slow' visuoperceptual networks of the ventral visual pathway.
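    The online-control metric described in this abstract (proportion of variance in endpoints explained by limb position at 75% of movement time) amounts to an R² from a simple linear regression. The sketch below is a generic illustration of that computation, not the authors' analysis code:

    ```python
    def r_squared(x, y):
        """R² of a simple linear regression of y on x: here, x would be limb
        position at 75% of movement time and y the final movement endpoint
        across trials (illustrative)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        syy = sum((yi - my) ** 2 for yi in y)
        return (sxy ** 2) / (sxx * syy)
    ```

    A high R² means the late limb position already predicts the endpoint (little online correction after 75% of movement time); a low R² implies trajectory amendments late in the reach.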

  8. Subsystems of sensory attention for skilled reaching: vision for transport and pre-shaping and somatosensation for grasping, withdrawal and release.

    PubMed

    Sacrey, Lori-Ann R; Whishaw, Ian Q

    2012-06-01

    Skilled reaching is a forelimb movement in which a subject reaches for a piece of food that is placed in the mouth for eating. It is a natural movement used by many animal species and is a routine, daily activity for humans. Its prominent features include transport of the hand to a target, shaping the digits in preparation for grasping, grasping, and withdrawal of the hand to place the food in the mouth. Studies on normal human adults show that skilled reaching is mediated by at least two sensory attention processes. Hand transport to the target and hand shaping are temporally coupled with visual fixation on the target. Grasping, withdrawal, and placing the food into the mouth are associated with visual disengagement and somatosensory guidance. Studies on nonhuman animal species illustrate that shared visual and somatosensory attention likely evolved in the primate lineage. Studies on developing infants illustrate that shared attention requires both experience and maturation. Studies on subjects with Parkinson's disease and Huntington's disease illustrate that decomposition of shared attention also features compensatory visual guidance. The evolutionary, developmental, and neural control of skilled reaching suggests that associative learning processes are importantly related to normal adult attention sharing and so can be used in remediation. The economical use of sensory attention in the different phases of skilled reaching ensures efficiency in eating, reduces sensory interference between sensory reference frames, and provides efficient neural control of the advance and withdrawal components of skilled reaching movements. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Independent development of the Reach and the Grasp in spontaneous self-touching by human infants in the first 6 months.

    PubMed

    Thomas, Brittany L; Karl, Jenni M; Whishaw, Ian Q

    2014-01-01

    The Dual Visuomotor Channel Theory proposes that visually guided reaching is a composite of two movements, a Reach that advances the hand to contact the target and a Grasp that shapes the digits for target purchase. The theory is supported by biometric analyses of adult reaching, evolutionary contrasts, and differential developmental patterns for the Reach and the Grasp in visually guided reaching in human infants. The present ethological study asked whether there is evidence for a dissociated development of the Reach and the Grasp in nonvisual hand use in very early infancy. The study documents a rich array of spontaneous self-touching behavior in infants during the first 6 months of life and subjected the Reach movements to an analysis in relation to body target, contact type, and Grasp. Video recordings were made of resting alert infants biweekly from birth to 6 months. In younger infants, self-touching targets included the head and trunk. As infants aged, targets became more caudal and included the hips, then legs, and eventually the feet. In younger infants hand contact was mainly made with the dorsum of the hand, but as infants aged, contacts included palmar contacts and eventually grasp and manipulation contacts with the body and clothes. The relative incidence of caudal contacts and palmar contacts increased concurrently and were significantly correlated throughout the period of study. Developmental increases in self-grasping contacts occurred a few weeks after the increase in caudal and palmar contacts. The behavioral and temporal patterns of these spontaneous self-touching movements suggest that the Reach, in which the hand extends to make a palmar self-contact, and the Grasp, in which the digits close and make manipulatory movements, have partially independent developmental profiles. The results additionally suggest that self-touching behavior is an important developmental phase that allows the coordination of the Reach and the Grasp prior to and concurrent with their use under visual guidance.

  10. Choice reaching with a LEGO arm robot (CoRLEGO): The motor system guides visual attention to movement-relevant information.

    PubMed

    Strauss, Soeren; Woodgate, Philip J W; Sami, Saber A; Heinke, Dietmar

    2015-12-01

    We present an extension of a neurobiologically inspired robotics model, termed CoRLEGO (Choice reaching with a LEGO arm robot). CoRLEGO models experimental evidence from choice reaching tasks (CRTs). In a CRT, participants are asked to rapidly reach and touch an item presented on the screen. These experiments show that non-target items can divert the reaching movement away from the ideal trajectory to the target item. This is seen as evidence that attentional selection of reaching targets can leak into the motor system. Using competitive target selection and topological representations of motor parameters (dynamic neural fields), CoRLEGO is able to mimic this leakage effect. Furthermore, if the reaching target is determined by its colour oddity (i.e. a green square among red squares or vice versa), the reaching trajectories become straighter with repetitions of the target colour (colour streaks). This colour priming effect can also be modelled with CoRLEGO. The paper also presents an extension of CoRLEGO. This extension mimics findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). The results with the new CoRLEGO suggest that feedback connections from the motor system to the brain's attentional system (parietal cortex) guide visual attention to extract movement-relevant information (i.e. colour) from visual stimuli. This paper adds to growing evidence that there is a close interaction between the motor system and the attention system. This evidence contradicts the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages. At the end of the paper we discuss CoRLEGO's predictions and also lessons for neurobiologically inspired robotics emerging from this work. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  11. Memory-guided reaching in a patient with visual hemiagnosia.

    PubMed

    Cornelsen, Sonja; Rennig, Johannes; Himmelbach, Marc

    2016-06-01

    The two-visual-systems hypothesis (TVSH) postulates that memory-guided movements rely on intact functions of the ventral stream. Its particular importance for memory-guided actions was initially inferred from behavioral dissociations in the well-known patient DF. Despite rather accurate reaching and grasping movements to visible targets, she demonstrated grossly impaired memory-guided grasping as well as impaired memory-guided reaching. These dissociations were later complemented by apparently reversed dissociations in patients with dorsal damage and optic ataxia. However, grasping studies in DF and optic ataxia patients differed with respect to the retinotopic position of target objects, questioning the interpretation of the respective findings as a double dissociation. In contrast, the findings for reaching errors in both types of patients came from similar peripheral target presentations. However, new data on brain structural changes and visuomotor deficits in DF also questioned the validity of a double dissociation in reaching. A severe visuospatial short-term memory deficit in DF further questioned the specificity of her memory-guided reaching deficit. Therefore, we compared movement accuracy in visually guided and memory-guided reaching in a new patient who suffered confined unilateral damage to the ventral visual system due to stroke. Our results indeed support previous descriptions of inaccuracies of memory-guided movements in DF. Furthermore, our data suggest that the recently discovered optic-ataxia-like misreaching in DF is most likely caused by her parieto-occipital and not by her ventral stream damage. Finally, multiple visuospatial memory measurements in HWS suggest that inaccuracies in memory-guided reaching tasks in patients with ventral damage cannot be explained by visuospatial short-term memory or perceptual deficits, but by a specific deficit in visuomotor processing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. A Robotics-Based Approach to Modeling of Choice Reaching Experiments on Visual Attention

    PubMed Central

    Strauss, Soeren; Heinke, Dietmar

    2012-01-01

    The paper presents a robotics-based model for choice reaching experiments on visual attention. In these experiments participants were asked to make rapid reach movements toward a target in an odd-color search task, i.e., reaching for a green square among red squares and vice versa (e.g., Song and Nakayama, 2008). Interestingly these studies found that in a high number of trials movements were initially directed toward a distractor and only later were adjusted toward the target. These “curved” trajectories occurred particularly frequently when the target in the directly preceding trial had a different color (priming effect). Our model is embedded in a closed-loop control of a LEGO robot arm aiming to mimic these reach movements. The model is based on our earlier work which suggests that target selection in visual search is implemented through parallel interactions between competitive and cooperative processes in the brain (Heinke and Humphreys, 2003; Heinke and Backhaus, 2011). To link this model with the control of the robot arm we implemented a topological representation of movement parameters following the dynamic field theory (Erlhagen and Schoener, 2002). The robot arm is able to mimic the results of the odd-color search task including the priming effect and also generates human-like trajectories with a bell-shaped velocity profile. Theoretical implications and predictions are discussed in the paper. PMID:22529827
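    The dynamic-field representation referenced above (Erlhagen and Schoener, 2002) can be illustrated with a minimal one-dimensional field in which local excitation and global inhibition let a stimulated location form a self-stabilizing activation peak. All parameters and the step-function firing rate below are illustrative assumptions, not the values used in the robot model:

    ```python
    import math

    def dnf_step(u, stimulus, dt=0.1, tau=1.0, h=-2.0,
                 w_exc=2.0, w_inh=0.5, sigma=2.0):
        """One Euler step of a 1-D dynamic neural field (Amari-style):
        du/dt = (-u + h + stimulus + lateral interaction) / tau,
        with Gaussian local excitation and constant global inhibition."""
        n = len(u)
        f = [1.0 if ui > 0 else 0.0 for ui in u]  # step-function firing rate
        u_new = []
        for i in range(n):
            interaction = sum(
                (w_exc * math.exp(-((i - j) ** 2) / (2 * sigma ** 2)) - w_inh)
                * f[j]
                for j in range(n)
            )
            du = (-u[i] + h + stimulus[i] + interaction) / tau
            u_new.append(u[i] + dt * du)
        return u_new
    ```

    Driving such a field with a localized stimulus produces a single peak at the stimulated location while the rest of the field stays subthreshold, which is the competitive target-selection behavior the model builds on.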

  13. Expectation affects verbal judgments but not reaches to visually perceived egocentric distances.

    PubMed

    Pagano, Christopher C; Isenhower, Robert W

    2008-04-01

    Two response measures for reporting visually perceived egocentric distances, verbal judgments and blind manual reaches, were compared using a within-trial methodology. The expected range of possible target distances was manipulated by instructing the subjects that the targets would be between .50 and 1.00 of their maximum arm reach in one session and between .25 and .90 in another session. The actual range of target distances was always .50-.90. Verbal responses varied as a function of the range of expected distances, whereas simultaneous reaches remained unaffected. These results suggest that verbal responses are subject to a cognitive influence that does not affect actions. It is suggested that action responses are indicative of absolute perception, whereas cognitive responses may reflect only relative perception. The results also indicate that the dependent variable utilized for the study of depth perception will influence the obtained results.

  14. Effects of Anisometropic Amblyopia on Visuomotor Behavior, Part 2: Visually Guided Reaching

    PubMed Central

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C.; Chandrakumar, Manokaraananthan; Hirji, Zahra; Crawford, J. Douglas; Wong, Agnes M. F.

    2016-01-01

    Purpose The effects of impaired spatiotemporal vision in amblyopia on visuomotor skills have rarely been explored in detail. The goal of this study was to examine the influences of amblyopia on visually guided reaching. Methods Fourteen patients with anisometropic amblyopia and 14 control subjects were recruited. Participants executed reach-to-touch movements toward targets presented randomly 5° or 10° to the left or right of central fixation in three viewing conditions: binocular, monocular amblyopic eye, and monocular fellow eye viewing (left and right monocular viewing for control subjects). Visual feedback of the target was removed on 50% of the trials at the initiation of reaching. Results Reaching accuracy was comparable between patients and control subjects during all three viewing conditions. Patients’ reaching responses were slightly less precise during amblyopic eye viewing, but their precision was normal during binocular or fellow eye viewing. Reaching reaction time was not affected by amblyopia. The duration of the acceleration phase was longer in patients than in control subjects under all viewing conditions, whereas the duration of the deceleration phase was unaffected. Peak acceleration and peak velocity were also reduced in patients. Conclusions Amblyopia affects both the programming and the execution of visually guided reaching. The increased duration of the acceleration phase, as well as the reduced peak acceleration and peak velocity, might reflect a strategy or adaptation of feedforward/feedback control of the visuomotor system to compensate for degraded spatiotemporal vision in amblyopia, allowing patients to optimize their reaching performance. PMID:21051723
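    The kinematic decomposition used in such analyses (an acceleration phase up to peak velocity, a deceleration phase after it) can be sketched as follows. This is a generic illustration of the analysis, not the authors' code; sampling rate and units are illustrative:

    ```python
    def kinematic_phases(velocity, dt):
        """Split a reach velocity profile into acceleration and deceleration
        phase durations around peak velocity (illustrative; velocity samples
        in arbitrary units, dt the sampling interval in seconds)."""
        i_peak = max(range(len(velocity)), key=lambda i: velocity[i])
        accel_duration = i_peak * dt
        decel_duration = (len(velocity) - 1 - i_peak) * dt
        return velocity[i_peak], accel_duration, decel_duration
    ```

    In these terms, the pattern reported for the amblyopia group (longer acceleration phase, reduced peak velocity) would appear as a larger acceleration duration and a smaller peak value.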

  15. Eye-Hand Coordination during Visuomotor Adaptation with Different Rotation Angles: Effects of Terminal Visual Feedback

    PubMed Central

    Rand, Miya K.; Rentsch, Sebastian

    2016-01-01

    This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task under the use of terminal visual feedback. Young adults made reaching movements to targets on a digitizer while looking at targets on a monitor where the rotated feedback (a cursor) of hand movements appeared after each movement. Three rotation angles (30°, 75°, and 150°) were examined in three groups in order to vary the task difficulty. The results showed that the 30° group gradually reduced direction errors of reaching with practice and adapted well to the visuomotor rotation. The 75° group made large direction errors of reaching, and the 150° group applied a 180° reversal shift from early practice. The 75° and 150° groups, however, overcompensated the respective rotations at the end of practice. Despite these group differences in adaptive changes of reaching, all groups gradually adapted gaze directions prior to reaching from the target area to the areas related to the final positions of reaching during the course of practice. The adaptive changes of both hand and eye movements in all groups mainly reflected adjustments of movement directions based on explicit knowledge of the applied rotation acquired through practice. Only the 30° group showed small implicit adaptation in both effectors. The results suggest that by adapting gaze directions from the target to the final position of reaching based on explicit knowledge of the visuomotor rotation, the oculomotor system supports the limb-motor system in making precise preplanned adjustments of reaching directions during learning of visuomotor rotation under terminal visual feedback. PMID:27812093

  16. Inhibition in movement plan competition: reach trajectories curve away from remembered and task-irrelevant present but not from task-irrelevant past visual stimuli.

    PubMed

    Moehler, Tobias; Fiehler, Katja

    2017-11-01

    The current study investigated the role of automatic encoding and maintenance of remembered, past, and present visual distractors for reach movement planning. Previous research on eye movements showed that saccades curve away from locations actively kept in working memory and also from task-irrelevant perceptually present visual distractors, but not from task-irrelevant past distractors. Curvature away has been associated with an inhibitory mechanism resolving the competition between multiple active movement plans. Here, we examined whether reach movements are subject to a similar inhibitory mechanism and thus show systematic modulation of reach trajectories when the location of a previously presented distractor has to be (a) maintained in working memory or (b) ignored, or (c) when the distractor is perceptually present. Participants performed vertical reach movements on a computer monitor from a home to a target location. Distractors appeared laterally and near or far from the target (equidistant from central fixation). We found that reaches curved away from distractors located close to the target when the distractor location had to be memorized and when it was perceptually present, but not when the past distractor had to be ignored. Our findings suggest that automatically encoding present distractors and actively maintaining the location of past distractors in working memory evoke a similar response competition resolved by inhibition, as has been previously shown for saccadic eye movements.

  17. Effect of visual field presentation on action planning (estimating reach) in children.

    PubMed

    Gabbard, Carl; Cordova, Alberto

    2012-01-01

    In this article, the authors examined the effects of target information presented in different visual fields (lower, upper, central) on estimates of reach via use of motor imagery in children (5-11 years old) and young adults. Results indicated an advantage for estimating reach movements for targets placed in the lower visual field (LoVF), with all groups having greater difficulty in the upper visual field (UpVF) condition, especially 5- and 7-year-olds. Complementing these results was an overall age-related increase in accuracy. Based in part on the equivalence hypothesis suggesting that motor imagery and motor planning and execution are similar, the findings support previous work on executed behaviors showing that there is a LoVF bias for motor skill actions of the hand. Given that previous research hints that the UpVF may be biased toward visuospatial (perceptual) qualities, research in that area and its association with visuomotor processing (LoVF) should be considered.

  18. Helping Children with Visual and Motor Impairments Make the Most of Their Visual Abilities.

    ERIC Educational Resources Information Center

    Amerson, Marie J.

    1999-01-01

    Lists strategies for promoting functional vision use in children with visual and motor impairments, including providing postural stability, presenting visual attention tasks when energy level is the highest, using a slanted work surface, placing target items in varied locations within reach, and determining the most effective visual adaptations.…

  19. Impact of online visual feedback on motor acquisition and retention when learning to reach in a force field.

    PubMed

    Batcho, C S; Gagné, M; Bouyer, L J; Roy, J S; Mercier, C

    2016-11-19

    When subjects learn a novel motor task, several sources of feedback (proprioceptive, visual or auditory) contribute to performance. Over the past few years, several studies have investigated the role of visual feedback in motor learning, yet evidence remains conflicting. The aim of this study was therefore to investigate the role of online visual feedback (VFb) on the acquisition and retention stages of motor learning associated with training in a reaching task. Thirty healthy subjects made ballistic reaching movements with their dominant arm toward two targets, on 2 consecutive days, using a robotized exoskeleton (KINARM). They were randomly assigned to a group with (VFb) or without (NoVFb) VFb of index position during movement. On day 1, the task was performed before (baseline) and during the application of a velocity-dependent resistive force field (adaptation). To assess retention, participants repeated the task with the force field on day 2. Motor learning was characterized by: (1) the final endpoint error (movement accuracy) and (2) the initial angle (iANG) of deviation (motor planning). Even though both groups showed motor adaptation, the NoVFb group exhibited slower learning and higher final endpoint error than the VFb group. In some conditions, subjects trained without visual feedback used more curved initial trajectories to anticipate the perturbation. This observation suggests that learning to reach targets in a velocity-dependent resistive force field is possible even when feedback is limited. However, the absence of VFb led to different strategies that were only apparent when reaching toward the most challenging target. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Haptic guidance of overt visual attention.

    PubMed

    List, Alexandra; Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru

    2014-11-01

    Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held an item of a specific shape in their hands (never viewed and kept out of sight). In two experiments, we demonstrated that the time for the eyes to reach a target (a measure of overt visual attention) was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.

  1. Neuronal responses to target onset in oculomotor and somatomotor parietal circuits differ markedly in a choice task.

    PubMed

    Kubanek, J; Wang, C; Snyder, L H

    2013-11-01

    We often look at and sometimes reach for visible targets. Looking at a target is fast and relatively easy. By comparison, reaching for an object is slower and is associated with a larger cost. We hypothesized that, as a result of these differences, abrupt visual onsets may drive the circuits involved in saccade planning more directly and with less intermediate regulation than the circuits involved in reach planning. To test this hypothesis, we recorded discharge activity of neurons in the parietal oculomotor system (area LIP) and in the parietal somatomotor system (area PRR) while monkeys performed a visually guided movement task and a choice task. We found that in the visually guided movement task LIP neurons show a prominent transient response to target onset. PRR neurons also show a transient response, although this response is reduced in amplitude, is delayed, and has a slower rise time compared with LIP. A more striking difference is observed in the choice task. The transient response of PRR neurons is almost completely abolished and replaced with a slow buildup of activity, while the LIP response is merely delayed and reduced in amplitude. Our findings suggest that the oculomotor system is more closely and obligatorily coupled to the visual system, whereas the somatomotor system operates in a more discriminating manner.

  2. A Computational Model for Aperture Control in Reach-to-Grasp Movement Based on Predictive Variability

    PubMed Central

    Takemura, Naohiro; Fukui, Takao; Inui, Toshio

    2015-01-01

    In human reach-to-grasp movement, visual occlusion of a target object leads to a larger peak grip aperture compared to conditions where online vision is available. However, no previous computational and neural network models for reach-to-grasp movement explain the mechanism of this effect. We simulated the effect of online vision on the reach-to-grasp movement by proposing a computational control model based on the hypothesis that the grip aperture is controlled to compensate for both motor variability and sensory uncertainty. In this model, the aperture is formed to achieve a target aperture size that is sufficiently large to accommodate the actual target; it also includes a margin to ensure proper grasping despite sensory and motor variability. To this end, the model considers: (i) the variability of the grip aperture, which is predicted by the Kalman filter, and (ii) the uncertainty of the object size, which is affected by visual noise. Using this model, we simulated experiments in which the effect of the duration of visual occlusion was investigated. The simulation replicated the experimental result wherein the peak grip aperture increased when the target object was occluded, especially in the early phase of the movement. Both predicted motor variability and sensory uncertainty play important roles in the online visuomotor process responsible for grip aperture control. PMID:26696874
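    The aperture-control hypothesis in this record (a margin sized to cover both motor variability and sensory uncertainty) can be sketched in a few lines. This is only an illustrative sketch under the assumption that the two noise sources are independent; the margin factor and all numbers are invented, not the authors' fitted model.

```python
import math

def target_aperture(estimated_size, motor_sd, sensory_sd, margin_factor=2.0):
    """Planned peak aperture (mm): object-size estimate plus a safety
    margin covering combined motor and sensory variability (assumed
    independent, so variances add)."""
    combined_sd = math.sqrt(motor_sd**2 + sensory_sd**2)
    return estimated_size + margin_factor * combined_sd

# Occluding vision inflates sensory uncertainty, so the planned peak
# aperture grows, mirroring the occlusion effect the model simulates.
online = target_aperture(60.0, motor_sd=2.0, sensory_sd=1.0)
occluded = target_aperture(60.0, motor_sd=2.0, sensory_sd=4.0)
assert occluded > online
```

In the full model the motor-variability term would come from a Kalman-filter prediction rather than a fixed constant; the structure of the margin is the point here.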

  3. Effects of strabismic amblyopia and strabismus without amblyopia on visuomotor behavior: III. Temporal eye-hand coordination during reaching.

    PubMed

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C; Chandrakumar, Manokaraananthan; Wong, Agnes M F

    2014-11-11

    To examine the effects of strabismic amblyopia and strabismus only, without amblyopia, on the temporal patterns of eye-hand coordination during both the planning and execution stages of visually-guided reaching. Forty-six adults (16 with strabismic amblyopia, 14 with strabismus only, and 16 visually normal) executed reach-to-touch movements toward targets presented randomly 5° or 10° to the left or right of central fixation. Viewing conditions were binocular, monocular viewing with the amblyopic eye, and monocular viewing with the fellow eye (dominant and nondominant viewing for participants without amblyopia). Temporal coordination between eye and hand movements was examined during reach planning (interval between the initiation of saccade and reaching, i.e., saccade-to-reach planning interval) and reach execution (interval between the initiation of saccade and reach peak velocity [PV], i.e., saccade-to-reach PV interval). The frequency and dynamics of secondary reach-related saccades were also examined. The temporal patterns of eye-hand coordination prior to reach initiation were comparable among participants with strabismic amblyopia, strabismus only, and visually normal adults. However, the reach acceleration phase following target fixation (saccade-to-reach PV interval) was longer in participants with strabismic amblyopia and in those with strabismus only than in visually normal participants (P < 0.05). This effect was evident under all viewing conditions. The saccade-to-reach planning interval and the saccade-to-reach PV interval were not significantly different among participants with amblyopia with different levels of acuity and stereo acuity loss. Participants with strabismic amblyopia and strabismus only initiated secondary reach-related saccades significantly more frequently than visually normal participants. 
The amplitude and peak velocity of these saccades were significantly greater during amblyopic eye viewing in participants with amblyopia who also had negative stereopsis. Adults with strabismic amblyopia and strabismus only showed an altered pattern of temporal eye-hand coordination during the reach acceleration phase, which might affect their ability to modify reach trajectory using early online control. Secondary reach-related saccades may provide a compensatory mechanism with which to facilitate the late online control process in order to ensure relatively good reaching performance during binocular and fellow eye viewing. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  4. Spatial updating depends on gaze direction even after loss of vision.

    PubMed

    Reuschel, Johanna; Rösler, Frank; Henriques, Denise Y P; Fiehler, Katja

    2012-02-15

    Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of vision for spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here, we investigated the role of visual experience for localizing and updating targets as a function of intervening gaze shifts in humans. People who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target while facing it. Before they reached to the target's remembered location, they turned their head toward an eccentric direction that also induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a function of shift in gaze direction only in participants with early visual experience (sighted and late blind). In the late blind, this effect was solely present in people with moveable eyes but not in people with at least one glass eye. Our results suggest that the effect of gaze shifts on spatial updating develops on the basis of visual experience early in life and remains even after loss of vision as long as feedback from the eyes and head is available.

  5. Missing depth cues in virtual reality limit performance and quality of three dimensional reaching movements

    PubMed Central

    Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter

    2018-01-01

    Background Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues is limited, which may affect reaching performance and quality of reaching movements. Methods We assessed three-dimensional reaching movements in five experimental groups each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which were realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second and third screen groups, respectively. Additionally, they could rely on stereopsis and motion parallax due to head-movements. Results and conclusion All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance. Only the screen group with rendered handhelds could outperform the other screen groups. Thus, if reaching performance in virtual environments is in the main scope of a study, we suggest applying a head-mounted display. 
Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by subjects’ motor skills. PMID:29293512

  6. Allocentrically implied target locations are updated in an eye-centred reference frame.

    PubMed

    Thompson, Aidan A; Glover, Christopher V; Henriques, Denise Y P

    2012-04-18

    When reaching to remembered target locations following an intervening eye movement, a systematic pattern of error is found, indicating eye-centred updating of visuospatial memory. Here we investigated whether implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location, and reached to the remembered "target" location. Irrespective of the type of stimulus, reaching errors to these implicit targets were gaze-dependent, and did not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. Changes to online control and eye-hand coordination with healthy ageing.

    PubMed

    O'Rielly, Jessica L; Ma-Wyatt, Anna

    2018-06-01

    Goal directed movements are typically accompanied by a saccade to the target location. Online control plays an important part in correction of a reach, especially if the target or goal of the reach moves during the reach. While there are notable changes to visual processing and motor control with healthy ageing, there is limited evidence about how eye-hand coordination during online updating changes with healthy ageing. We sought to quantify differences between older and younger people for eye-hand coordination during online updating. Participants completed a double step reaching task implemented under time pressure. The target perturbation could occur 200, 400 and 600 ms into a reach. We measured eye position and hand position throughout the trials to investigate changes to saccade latency, movement latency, movement time, reach characteristics and eye-hand latency and accuracy. Both groups were able to update their reach in response to a target perturbation that occurred at 200 or 400 ms into the reach. All participants demonstrated incomplete online updating for the 600 ms perturbation time. Saccade latencies, measured from the first target presentation, were generally longer for older participants. Older participants had significantly increased movement times but there was no significant difference between groups for touch accuracy. We speculate that the longer movement times enable the use of new visual information about the target location for online updating towards the end of the movement. Interestingly, older participants also produced a greater proportion of secondary saccades within the target perturbation condition and had generally shorter eye-hand latencies. This is perhaps a compensatory mechanism as there was no significant group effect on final saccade accuracy. Overall, the pattern of results suggests that online control of movements may be qualitatively different in older participants. Crown Copyright © 2018. Published by Elsevier B.V. 
All rights reserved.

  9. Visuomotor Map Determines How Visually Guided Reaching Movements are Corrected Within and Across Trials.

    PubMed

    Hayashi, Takuji; Yokoi, Atsushi; Hirashima, Masaya; Nozaki, Daichi

    2016-01-01

    When a visually guided reaching movement is unexpectedly perturbed, it is implicitly corrected in two ways: immediately after the perturbation by feedback control (online correction) and in the next movement by adjusting feedforward motor commands (offline correction or motor adaptation). Although recent studies have revealed a close relationship between feedback and feedforward controls, the nature of this relationship is not yet fully understood. Here, we show that both implicit online and offline movement corrections utilize the same visuomotor map for feedforward movement control that transforms the spatial location of visual objects into appropriate motor commands. First, we artificially distorted the visuomotor map by applying opposite visual rotations to the cursor representing the hand position while human participants reached for two different targets. This procedure implicitly altered the visuomotor map so that changes in the movement direction to the target location were more insensitive or more sensitive. Then, we examined how such visuomotor map distortion influenced online movement correction by suddenly changing the target location. The magnitude of online movement correction was altered according to the shape of the visuomotor map. We also examined offline movement correction; the aftereffect induced by visual rotation in the previous trial was modulated according to the shape of the visuomotor map. These results highlighted the importance of the visuomotor map as a foundation for implicit motor control mechanisms and the intimate relationship between feedforward control, feedback control, and motor adaptation.
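    The core manipulation in this study (reading the online correction off a locally distorted visuomotor map) can be caricatured in a few lines. This is only an illustrative sketch, not the authors' model; the linear gain and all values are invented for the example.

```python
def online_correction(target_jump_deg, map_slope):
    """Correction read out from the visuomotor map: the same mid-movement
    target jump produces a larger or smaller hand-path correction depending
    on the locally adapted slope of the map (slope values are invented)."""
    return map_slope * target_jump_deg

jump = 10.0                               # deg, sudden target displacement
baseline = online_correction(jump, 1.0)   # unadapted map
flattened = online_correction(jump, 0.6)  # training made the map insensitive
steepened = online_correction(jump, 1.4)  # training made the map sensitive
assert flattened < baseline < steepened
```

The paper's finding is that both the online correction and the next-trial aftereffect scale with this same local slope, which is what ties feedback control, feedforward control, and adaptation to a single map.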

  10. Control of aperture closure initiation during reach-to-grasp movements under manipulations of visual feedback and trunk involvement in Parkinson's disease.

    PubMed

    Rand, Miya Kato; Lemay, Martin; Squire, Linda M; Shimansky, Yury P; Stelmach, George E

    2010-03-01

    The present project was aimed at investigating how two distinct and important difficulties in Parkinson's disease (PD), impaired coordination and pronounced dependence on visual feedback, interact in coordinating hand transport toward an object with the initiation of finger closure during reach-to-grasp movements. Subjects with PD and age-matched healthy subjects made reach-to-grasp movements to a dowel under conditions in which the target object and/or the hand were either visible or not visible. The involvement of the trunk in task performance was manipulated by positioning the target object within or beyond the participant's outstretched arm to evaluate the effects of increasing the complexity of intersegmental coordination under different conditions related to the availability of visual feedback in subjects with PD. General kinematic characteristics of the reach-to-grasp movements of the subjects with PD were altered substantially by the removal of target object visibility. Compared with the controls, the subjects with PD considerably lengthened transport time, especially during the aperture closure period, and decreased peak velocity of wrist and trunk movement without target object visibility. Most of these differences were accentuated when the trunk was involved. In contrast, these kinematic parameters did not change with hand visibility in either group. The transport-aperture coordination was assessed in terms of the control law according to which the initiation of aperture closure during the reach occurred when the hand distance-to-target crossed a hand-target distance threshold for grasp initiation that is a function of peak aperture, hand velocity and acceleration, trunk velocity and acceleration, and trunk-target distance at the time of aperture closure initiation. 
When the hand or the target object was not visible, both groups increased the hand-target distance threshold for grasp initiation compared to its value under full visibility, implying an increase in the hand-target distance-related safety margin for grasping. The increase in the safety margin due to the absence of target object vision or the absence of hand vision was accentuated in the subjects with PD compared to that in the controls. The pronounced increase in the safety margin due to absence of target object vision for the subjects with PD was further accentuated when the trunk was involved compared to when it was not involved. The results imply that individuals with PD have significant limitations regarding neural computations required for efficient utilization of internal representations of target object location and hand motion as well as proprioceptive information about the hand to compensate for the lack of visual information during the performance of complex multisegment movements.
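    The control law described above can be sketched as a threshold on hand-target distance. The abstract gives no equation, so the linear form and every coefficient below are hypothetical placeholders; only the structure (a threshold that grows with a vision-dependent safety margin) comes from the record.

```python
def closure_threshold(peak_aperture, hand_speed, trunk_speed,
                      base=40.0, k_ap=0.5, k_hv=0.02, k_tv=0.05,
                      safety_margin=0.0):
    """Hand-target distance (mm) at which finger closure is initiated.
    Removing vision of the hand or target is modeled as an added
    safety margin; all coefficients are invented for illustration."""
    return (base + k_ap * peak_aperture + k_hv * hand_speed
            + k_tv * trunk_speed + safety_margin)

full_vision = closure_threshold(90.0, 800.0, 50.0)
no_target_vision = closure_threshold(90.0, 800.0, 50.0, safety_margin=15.0)
assert no_target_vision > full_vision  # closure starts farther from the target
```

Under this reading, the PD-specific result is a larger `safety_margin` increment when target vision is removed, further accentuated when the trunk is involved.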

  11. Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach.

    PubMed

    Byrne, Patrick A; Crawford, J Douglas

    2010-06-01

    It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration, despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment, had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
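    The weighting scheme described above can be sketched with inverse-variance (MLE) weights plus a multiplicative stability term on the allocentric cue. The form of the stability factor and every number below are assumptions for illustration, not the paper's fitted parameters.

```python
def mle_weights(ego_var, allo_var, stability=1.0):
    """Inverse-variance (MLE) cue weights, with the allocentric
    reliability down-weighted by a landmark-stability factor in [0, 1]."""
    ego_rel = 1.0 / ego_var
    allo_rel = stability * (1.0 / allo_var)
    total = ego_rel + allo_rel
    return ego_rel / total, allo_rel / total

def reach_endpoint(ego_est, allo_est, ego_var, allo_var, stability=1.0):
    """Weighted combination of the two cues' target estimates."""
    w_ego, w_allo = mle_weights(ego_var, allo_var, stability)
    return w_ego * ego_est + w_allo * allo_est

# A landmark "shift" during the memory interval puts the cues in conflict;
# vibrating (unstable) landmarks pull the endpoint toward the egocentric
# estimate even when the two cues are equally reliable.
stable = reach_endpoint(0.0, 2.0, ego_var=1.0, allo_var=1.0, stability=1.0)
vibrating = reach_endpoint(0.0, 2.0, ego_var=1.0, allo_var=1.0, stability=0.3)
assert vibrating < stable
```

This captures the key dissociation in the abstract: vibration can change the weighting (through the stability term) without changing pointing variability (the variance terms).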

  12. Brain systems for visual perspective taking and action perception.

    PubMed

    Mazzarella, Elisabetta; Ramsey, Richard; Conson, Massimiliano; Hamilton, Antonia

    2013-01-01

    Taking another person's viewpoint and making sense of their actions are key processes that guide social behavior. Previous neuroimaging investigations have largely studied these processes separately. The current study used functional magnetic resonance imaging to examine how the brain incorporates another person's viewpoint and actions into visual perspective judgments. Participants made a left-right judgment about the location of a target object from their own (egocentric) or an actor's visual perspective (altercentric). Actor location varied around a table and the actor was either reaching or not reaching for the target object. Analyses examined brain regions engaged in the egocentric and altercentric tasks, brain regions where response magnitude tracked the orientation of the actor in the scene and brain regions sensitive to the action performed by the actor. The blood oxygen level-dependent (BOLD) response in dorsomedial prefrontal cortex (dmPFC) was sensitive to actor orientation in the altercentric task, whereas the response in right inferior frontal gyrus (IFG) was sensitive to actor orientation in the egocentric task. Thus, dmPFC and right IFG may play distinct but complementary roles in visual perspective taking (VPT). Observation of a reaching actor compared to a non-reaching actor yielded activation in lateral occipitotemporal cortex, regardless of task, showing that these regions are sensitive to body posture independent of social context. By considering how an observed actor's location and action influence the neural bases of visual perspective judgments, the current study supports the view that multiple neurocognitive "routes" operate during VPT.

  13. Reach Trajectories Characterize Tactile Localization for Sensorimotor Decision Making.

    PubMed

    Brandes, Janina; Heed, Tobias

    2015-10-07

    Spatial target information for movement planning appears to be coded in a gaze-centered reference frame. In touch, however, location is initially coded with reference to the skin. Therefore, the tactile spatial location must be derived by integrating skin location and posture. It has been suggested that this recoding is impaired when the limb is placed in the opposite hemispace, for example, by limb crossing. Here, human participants reached toward visual and tactile targets located at uncrossed and crossed feet in a sensorimotor decision task. We characterized stimulus recoding by analyzing the timing and spatial profile of hand reaches. For tactile targets at crossed feet, skin-based information implicates the incorrect side, and only recoded information points to the correct location. Participants initiated straight reaches and redirected the hand toward a target presented in midflight. Trajectories to visual targets were unaffected by foot crossing. In contrast, trajectories to tactile targets were redirected later with crossed than uncrossed feet. Reaches to crossed feet usually continued straight until they were directed toward the correct tactile target and were not biased toward the skin-based target location. Occasional, far deflections toward the incorrect target were most likely when this target was implicated by trial history. These results are inconsistent with the suggestion that spatial transformations in touch are impaired by limb crossing, but are consistent with tactile location being recoded rapidly and efficiently, followed by integration of skin-based and external information to specify the reach target. This process may be implemented in a bounded integrator framework. How do you touch yourself, for instance, to scratch an itch? The place you need to reach is defined by a sensation on the skin, but our bodies are flexible, so this skin location could be anywhere in 3D space. 
The movement toward the tactile sensation must therefore be specified by merging skin location and body posture. By investigating human hand reach trajectories toward tactile stimuli on the feet, we provide experimental evidence that this transformation process is quick and efficient, and that its output is integrated with the original skin location in a fashion consistent with bounded integrator decision-making frameworks. Copyright © 2015 the authors 0270-6474/15/3513648-11$15.00/0.
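    A bounded integrator of the kind invoked above can be sketched as a noisy accumulator in which transient skin-based evidence competes with recoded external evidence until a bound is hit. Drift rates, decay, bound, and noise are all invented for illustration; the abstract names the framework but specifies no parameters.

```python
import random

def reach_decision(drift_recoded=0.25, drift_skin=0.40, skin_decay=0.5,
                   bound=1.0, noise=0.05, seed=1):
    """Accumulate net evidence for the recoded (external) target location.
    Skin-based evidence pulls the other way early but decays as recoding
    completes, so the accumulator first stalls, then heads to the correct
    bound, mimicking late redirection with crossed feet."""
    rng = random.Random(seed)
    x, t, skin = 0.0, 0, drift_skin
    while abs(x) < bound:
        x += drift_recoded - skin + rng.gauss(0.0, noise)
        skin *= skin_decay  # skin-based code fades once recoding completes
        t += 1
    return ("recoded" if x > 0 else "skin-based"), t

choice, crossing_time = reach_decision()
assert choice == "recoded"
```

The early negative drift is what would occasionally produce the far deflections toward the skin-based location reported in the abstract, if noise or trial history transiently dominated.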

  14. Inactivation of Parietal Reach Region Affects Reaching But Not Saccade Choices in Internally Guided Decisions.

    PubMed

    Christopoulos, Vassilios N; Bonaiuto, James; Kagan, Igor; Andersen, Richard A

    2015-08-19

    The posterior parietal cortex (PPC) has traditionally been considered important for awareness, spatial perception, and attention. However, recent findings provide evidence that the PPC also encodes information important for making decisions. These findings have sparked a debate over whether the PPC is critically involved in decision making. To examine this issue, we reversibly inactivated the parietal reach region (PRR), the area of the PPC that is specialized for reaching movements, while two monkeys performed a memory-guided reaching or saccade task. The task included choices between two equally rewarded targets presented simultaneously in opposite visual fields. Free-choice trials were interleaved with instructed trials, in which a single cue presented in the peripheral visual field defined the reach and saccade target unequivocally. We found that PRR inactivation led to a strong reduction of contralesional choices, but only for reaches. On the other hand, saccade choices were not affected by PRR inactivation. Importantly, reaching and saccade movements to single instructed targets remained largely intact. These results cannot be explained as an effector-nonspecific deficit in spatial attention or awareness, since the temporary "lesion" had an impact only on reach choices. Hence, the PRR is part of a network for reach decisions and not just reach planning. There has been an ongoing debate on whether the posterior parietal cortex (PPC) represents only spatial awareness, perception, and attention or whether it is also involved in decision making for actions. In this study we explore whether the parietal reach region (PRR), the region of the PPC that is specialized for reaches, is involved in the decision process. We inactivated the PRR while two monkeys performed reach and saccade choices between two targets presented simultaneously in both hemifields. We found that inactivation affected only the reach choices, while leaving saccade choices intact. 
These results cannot be explained as a deficit in attention, since the temporary lesion affected only the reach choices. Thus, PRR is part of a network for making reach decisions.

  15. Visuomotor adaptation needs a validation of prediction error by feedback error

    PubMed Central

    Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle

    2014-01-01

    The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, subjects were allowed to view their hand only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and attributed pointing errors to themselves, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects.
They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated. PMID:25408644

  16. Pivots for pointing: visually-monitored pointing has higher arm elevations than pointing blindfolded.

    PubMed

    Wnuczko, Marta; Kennedy, John M

    2011-10-01

    Observers pointing to a target viewed directly may elevate their fingertip close to the line of sight. However, pointing blindfolded, after viewing the target, they may pivot lower, from the shoulder, aligning the arm with the target as if reaching to it. Indeed, in Experiment 1 participants elevated their arms more in visually monitored than in blindfolded pointing. In Experiment 2, when pointing to a visible target, they elevated a short pointer more than a long one, raising its tip to the line of sight. In Experiment 3, the experimenter aligned the participant's arm with the target. Participants judged they were pointing below a visually monitored target. In Experiment 4, participants viewing another person pointing, eyes open or eyes closed, judged the target was aligned with the pointing arm. In Experiment 5, participants viewed their arm and the target via a mirror and posed their arm so that it was aligned with the target. Arm elevation was higher when pointing directly.

  17. Perceived Reachability in Hemispace

    ERIC Educational Resources Information Center

    Gabbard, C.; Ammar, D.; Rodrigues, L.

    2005-01-01

    A common observation in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate. Of the studies noted, reaching tasks have been presented in the general midline range. In the present study, strong right-handers were asked to judge the reachability of visual targets projected onto a table…

  18. Perceived reachability in hemispace.

    PubMed

    Gabbard, Carl; Ammar, Diala; Rodrigues, Luis

    2005-07-01

    A common observation in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate. Of the studies noted, reaching tasks have been presented in the general midline range. In the present study, strong right-handers were asked to judge the reachability of visual targets projected onto a table surface at midline, right- (RVF), and left-visual fields (LVF). Midline results support those of previous studies, showing an overestimation bias. In contrast, participants tended to underestimate their reachability in the RVF and LVF. These findings are discussed from the perspective of actor 'confidence' (a cognitive state) possibly associated with visual information, perceived ability, and perceived task demands.

  19. Gunslinger Effect and Müller-Lyer Illusion: Examining Early Visual Information Processing for Late Limb-Target Control.

    PubMed

    Roberts, James W; Lyons, James; Garcia, Daniel B L; Burgess, Raquel; Elliott, Digby

    2017-07-01

    The multiple process model contends that there are two forms of online control for manual aiming: impulse regulation and limb-target control. This study examined the impact of visual information processing on limb-target control. We amalgamated the Gunslinger protocol (i.e., faster movements following a reaction to an external trigger compared with the spontaneous initiation of movement) and Müller-Lyer target configurations into the same aiming protocol. The results showed that the Gunslinger effect was isolated to the early portions of the movement (peak acceleration and peak velocity). Reacted aims showed greater displacement at peak deceleration, but there were no differences at movement termination. The target configurations manifested terminal biases consistent with the illusion. We suggest the visual information processing demands imposed by reacted aims can be accommodated by integrating early feedforward information for limb-target control.

  20. Inactivation of the Parietal Reach Region Causes Optic Ataxia, Impairing Reaches but Not Saccades

    PubMed Central

    Hwang, Eun Jung; Hauschild, Markus; Wilke, Melanie; Andersen, Richard A.

    2013-01-01

    SUMMARY Lesions in human posterior parietal cortex can cause optic ataxia (OA), in which reaches but not saccades to visual objects are impaired, suggesting separate visuomotor pathways for the two effectors. In monkeys, one potentially crucial area for reach control is the parietal reach region (PRR), in which neurons respond preferentially during reach planning as compared to saccade planning. However, direct causal evidence linking the monkey PRR to the deficits observed in OA is missing. We thus inactivated part of the macaque PRR, in the medial wall of the intraparietal sulcus, and produced the hallmarks of OA, misreaching for peripheral targets but unimpaired saccades. Furthermore, reach errors were larger for the targets preferred by the neural population local to the injection site. These results demonstrate that PRR is causally involved in reach-specific visuomotor pathways, and reach goal disruption in PRR can be a neural basis of OA. PMID:23217749

  1. Estimation of reach in peripersonal and extrapersonal space: a developmental view.

    PubMed

    Gabbard, Carl; Cordova, Alberto; Ammar, Diala

    2007-01-01

    This study explored the developmental nature of action processing via estimation of reach in peripersonal and extrapersonal space. Children 5 to 11 years of age and adults were tested for estimates of reach to targets presented randomly at seven midline locations. Target distances were scaled to the individual based on absolute maximum reach. While there was no difference between age groups for total error, a significant distinction emerged in reference to space. With children, significantly more error was exhibited in extrapersonal space; no difference was found with adults. The groups did not differ in peripersonal space; however, adults were substantially more accurate with extrapersonal targets. Furthermore, children displayed a greater tendency to overestimate. In essence, these data reveal a body-scaling problem in children in estimating reach in extrapersonal space. Future work should focus on possible developmental differences in use of visual information and state of confidence.

  2. Adaptation to Coriolis force perturbation of movement trajectory; role of proprioceptive and cutaneous somatosensory feedback

    NASA Technical Reports Server (NTRS)

    Lackner, James R.; DiZio, Paul

    2002-01-01

    Subjects exposed to constant velocity rotation in a large, fully enclosed rotating room initially make large reaching errors in pointing to targets. The paths and endpoints of their reaches are deviated in the direction of the transient lateral Coriolis forces generated by the forward velocity of their reaches. With additional reaches, subjects soon reach in straighter paths and become more accurate at landing on target, even in the absence of visual feedback about their movements. Two factors contribute to this adaptation: first, muscle spindle and Golgi tendon organ feedback, interpreted in relation to efferent commands, provides information about movement trajectory; and second, somatosensory stimulation of the fingertip at the completion of a reach provides information about the location of the fingertip relative to the torso.
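
    The transient lateral Coriolis force described above follows directly from the rotating-frame relation F = -2m(ω × v). A minimal sketch, with illustrative values that are not taken from the study:

```python
import numpy as np

def coriolis_force(mass_kg, omega_rad_s, velocity_m_s):
    """Coriolis force F = -2 m (omega x v) acting in the rotating frame."""
    return -2.0 * mass_kg * np.cross(omega_rad_s, velocity_m_s)

# Illustrative values (not from the study): effective arm mass 2 kg,
# room rotating at 1 rad/s about the vertical (z) axis,
# reach moving forward (y) at 1.5 m/s.
omega = np.array([0.0, 0.0, 1.0])
v = np.array([0.0, 1.5, 0.0])
F = coriolis_force(2.0, omega, v)
# The resulting force is purely lateral (x), deviating the reach sideways,
# which matches the lateral path deviations the abstract reports.
```

    Because the force depends on the hand's instantaneous velocity, it vanishes when the arm is at rest, which is why it leaves no sensed perturbation before or after the movement.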

  3. Reaching for the Unreachable: Reorganization of Reaching with Walking

    PubMed Central

    Grzyb, Beata J.; Smith, Linda B.; del Pobil, Angel P.

    2015-01-01

    Previous research suggests that reaching and walking behaviors may be linked developmentally, as reaching changes at the onset of walking. Here we report new evidence on an apparent loss of the distinction between reachable and nonreachable distances as children start walking. The experiment compared nonwalkers, walkers with help, and independent walkers in a reaching task with targets at varying distances. Reaching attempts, contact, leaning, and communication behaviors were recorded. Most of the children reached for the unreachable objects the first time they were presented. Nonwalkers, however, reached less on subsequent trials, showing clear adjustment of their reaching decisions with the failures. In contrast, walkers consistently attempted reaches to targets at unreachable distances. We suggest that these reaching errors may result from inappropriate integration of reaching and locomotor actions, attention control, and near/far visual space. We propose a reward-mediated model, implemented on a NAO humanoid robot, that replicates the main results from our study, showing an increase in reaching attempts to nonreachable distances after the onset of walking. PMID:26110046

  4. Spatial Context and Visual Perception for Action

    ERIC Educational Resources Information Center

    Coello, Yann

    2005-01-01

    In this paper, evidence is reviewed that visuo-spatial perception in the peri-personal space is not an abstract, disembodied phenomenon but rather is shaped by action constraints. Locating a visual target with the intention of reaching it requires that the relevant spatial information is considered in relation with the body-part that will be…

  5. Pilot study to test effectiveness of video game on reaching performance in stroke.

    PubMed

    Acosta, Ana Maria; Dewald, Hendrik A; Dewald, Jules P A

    2011-01-01

    Robotic systems currently used in upper-limb rehabilitation following stroke rely on some form of visual feedback as part of the intervention program. We evaluated the effect of a video game environment (air hockey) on reaching in stroke with various levels of arm support. We used the Arm Coordination Training 3D system to provide variable arm support and to control the hockey stick. We instructed seven subjects to reach to one of three targets covering the workspace of the impaired arm during the reaching task and to reach as far as possible while playing the video game. The results from this study showed that across subjects, support levels, and targets, the reaching distances achieved with the reaching task were greater than those covered with the video game. This held even after further restricting the mapped workspace of the arm to the area most affected by the flexion synergy (effectively forcing subjects to fight the synergy to reach the hockey puck). The results from this study highlight the importance of designing video games that include specific reaching targets in the workspace compromised by the expression of the flexion synergy. Such video games would also adapt the target location online as a subject's success rate increases.

  6. Pilot study to test effectiveness of video game on reaching performance in stroke

    PubMed Central

    Acosta, Ana Maria; Dewald, Hendrik A.; Dewald, Jules P. A.

    2012-01-01

    Robotic systems currently used in upper-limb rehabilitation following stroke rely on some form of visual feedback as part of the intervention program. We evaluated the effect of a video game environment (air hockey) on reaching in stroke with various levels of arm support. We used the Arm Coordination Training 3D system to provide variable arm support and to control the hockey stick. We instructed seven subjects to reach to one of three targets covering the workspace of the impaired arm during the reaching task and to reach as far as possible while playing the video game. The results from this study showed that across subjects, support levels, and targets, the reaching distances achieved with the reaching task were greater than those covered with the video game. This held even after further restricting the mapped workspace of the arm to the area most affected by the flexion synergy (effectively forcing subjects to fight the synergy to reach the hockey puck). The results from this study highlight the importance of designing video games that include specific reaching targets in the workspace compromised by the expression of the flexion synergy. Such video games would also adapt the target location online as a subject’s success rate increases. PMID:21674392

  7. Motor imagery in reaching: is there a left-hemispheric advantage?

    PubMed

    Gabbard, Carl; Ammar, Diala; Rodrigues, Luis

    2005-06-01

    The study of motor imagery affords an attractive approach in the quest to identify the specific cognitive and neuromotor mechanisms and relationships involved in action processing. Here, the authors investigated the recently reported finding that, compared to the left hemisphere, the right brain is at a significant disadvantage for mentally simulating reaching movements. The authors investigated this observation with strong right-handers who were asked to estimate the imagined reachability of visual targets (presented at 150 ms) at multiple points in the midline, right-, and left-visual fields; responses were compared to actual maximum reaching distance. Results indicated that individuals are relatively accurate at imagined reachability, with no significant distinction between visual field responses. Therefore, these data provide no evidence to support the claim that the right hemisphere is significantly inferior to the left hemisphere in estimations of motor imagery for reaching. The authors do acknowledge differences in the experimental task and subject characteristics compared to earlier work using split-brain and stroke patients.

  8. Target size matters: target errors contribute to the generalization of implicit visuomotor learning.

    PubMed

    Reichenthal, Maayan; Avraham, Guy; Karniel, Amir; Shmuelof, Lior

    2016-08-01

    The process of sensorimotor adaptation is considered to be driven by errors. While sensory prediction errors, defined as the difference between the planned and the actual movement of the cursor, drive implicit learning processes, target errors (e.g., the distance of the cursor from the target) are thought to drive explicit learning mechanisms. This distinction has mainly been studied in the context of arm reaching tasks where the position and the size of the target were constant. We hypothesize that in a dynamic reaching environment, where subjects have to hit moving targets and the targets' dynamic characteristics affect task success, implicit processes will benefit from target errors as well. We examine the effect of target errors on learning of an unnoticed perturbation during unconstrained reaching movements. Subjects played a Pong game, in which they had to hit a moving ball by moving a paddle controlled by their hand. During the game, the movement of the paddle was gradually rotated with respect to the hand, reaching a final rotation of 25°. Subjects were assigned to one of two groups: the high-target-error group played the Pong with a small ball, and the low-target-error group played with a big ball. Before and after the Pong game, subjects performed open-loop reaching movements toward static targets with no visual feedback. While both groups adapted to the rotation, the postrotation reaching movements were directionally biased only in the small-ball group. This result provides evidence that implicit adaptation is sensitive to target errors.
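
    The visuomotor rotation used in such tasks can be illustrated with a plain 2-D rotation of the hand position. The target coordinates and the compensation step below are hypothetical, chosen only to show why adapted movements end up biased in the direction opposite the perturbation:

```python
import math

def rotate(x, y, deg):
    """Rotate a 2-D point about the origin by deg degrees (counterclockwise)."""
    r = math.radians(deg)
    return (x * math.cos(r) - y * math.sin(r),
            x * math.sin(r) + y * math.cos(r))

# Hand moves straight toward a (hypothetical) target at (10, 0); under the
# final 25-degree perturbation the display shows the paddle rotated away.
cursor = rotate(10.0, 0.0, 25.0)

# To place the paddle on the target, an adapted hand movement must aim in
# the opposite direction; applying the perturbation to that adapted aim
# brings the paddle back onto the target.
hand = rotate(10.0, 0.0, -25.0)     # adapted, counter-rotated aim
compensated = rotate(*hand, 25.0)   # paddle lands back on (10, 0)
```

    The counter-rotated `hand` aim is exactly the directional bias the open-loop post-tests measure once the visual perturbation is removed.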

  9. Two wrongs make a right: linear increase of accuracy of visually-guided manual pointing, reaching, and height-matching with increase in hand-to-body distance.

    PubMed

    Li, Wenxun; Matin, Leonard

    2005-03-01

    Measurements were made of the accuracy of open-loop manual pointing and height-matching to a visual target whose elevation was perceptually mislocalized. Accuracy increased linearly with distance of the hand from the body, approaching complete accuracy at full extension; with the hand close to the body (within the midfrontal plane), the manual errors equaled the magnitude of the perceptual mislocalization. The visual inducing stimulus responsible for the perceptual errors was a single pitched-from-vertical line that was long (50 degrees), eccentrically-located (25 degrees horizontal), and viewed in otherwise total darkness. The line induced perceptual errors in the elevation of a small, circular visual target set to appear at eye level (VPEL), a setting that changed linearly with the change in the line's visual pitch as has been previously reported (pitch: -30 degrees top-backward to 30 degrees top-forward); the elevation errors measured by VPEL settings varied systematically with pitch through an 18 degrees range. In a fourth experiment the visual inducing stimulus responsible for the perceptual errors was shown to induce separately-measured errors in the manual setting of the arm to feel horizontal that were also distance-dependent. The distance-dependence of the visually-induced changes in felt arm position accounts quantitatively for the distance-dependence of the manual errors in pointing/reaching and height matching to the visual target: The near equality of the changes in felt horizontal and changes in pointing/reaching with the finger at the end of the fully extended arm is responsible for the manual accuracy of the fully-extended point; with the finger in the midfrontal plane their large difference is responsible for the inaccuracies of the midfrontal-plane point.
The results are inconsistent with the widely held but controversial theory that the visual spatial information employed for perception and for action is dissociated, with no illusory visual influence on action. A different two-system theory, the Proximal/Distal model, employing the same signals from vision and from the body-referenced mechanism, with different weights for different hand-to-body distances, accounts for both the perceptual and the manual results in the present experiments.
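
    The distance-dependent weighting at the heart of the Proximal/Distal account can be sketched as a linear interpolation: the full perceptual mislocalization drives the manual error at the body, and its influence falls to zero at full extension. The function and values below are an illustrative reading of the abstract, not the authors' fitted model:

```python
def manual_error(hand_distance, max_reach, perceptual_error):
    """Illustrative linear-weighting sketch of the Proximal/Distal idea:
    at the body (distance 0) the manual error equals the perceptual
    mislocalization; it decreases linearly to zero at full extension."""
    w = 1.0 - min(max(hand_distance / max_reach, 0.0), 1.0)
    return w * perceptual_error

# Hypothetical numbers: a 10-degree perceptual elevation error, 0.7 m reach.
# Pointing from the midfrontal plane reproduces the full error; pointing
# at full arm extension is accurate; intermediate distances interpolate.
error_near = manual_error(0.0, 0.7, 10.0)   # full perceptual error
error_far = manual_error(0.7, 0.7, 10.0)    # accurate at full extension
```

    The linear form matches the abstract's report that accuracy "increased linearly with distance of the hand from the body."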

  10. Viewer-centered and body-centered frames of reference in direct visuomotor transformations.

    PubMed

    Carrozzo, M; McIntyre, J; Zago, M; Lacquaniti, F

    1999-11-01

    It has been hypothesized that the end-point position of reaching may be specified in an egocentric frame of reference. In most previous studies, however, reaching was toward a memorized target, rather than an actual target. Thus, the role played by sensorimotor transformation could not be dissociated from the role played by storage in short-term memory. In the present study the direct process of sensorimotor transformation was investigated in reaching toward continuously visible targets that need not be stored in memory. A virtual reality system was used to present visual targets in different three-dimensional (3D) locations in two different tasks, one with visual feedback of the hand and arm position (Seen Hand) and the other without such feedback (Unseen Hand). In the Seen Hand task, the axes of maximum variability and of maximum contraction converge toward the mid-point between the eyes. In the Unseen Hand task only the maximum contraction correlates with the sight-line and the axes of maximum variability are not viewer-centered but rotate anti-clockwise around the body and the effector arm during the move from the right to the left workspace. The bulk of findings from these and previous experiments support the hypothesis of a two-stage process, with a gradual transformation from viewer-centered to body-centered and arm-centered coordinates. Retinal, extra-retinal and arm-related signals appear to be progressively combined in superior and inferior parietal areas, giving rise to egocentric representations of the end-point position of reaching.

  11. Decision theory, motor planning, and visual memory: deciding where to reach when memory errors are costly.

    PubMed

    Lerch, Rachel A; Sims, Chris R

    2016-06-01

    Limitations in visual working memory (VWM) have been extensively studied in psychophysical tasks, but not well understood in terms of how these memory limits translate to performance in more natural domains. For example, in reaching to grasp an object based on a spatial memory representation, overshooting the intended target may be more costly than undershooting, such as when reaching for a cup of hot coffee. The current body of literature lacks a detailed account of how the costs or consequences of memory error influence what we encode in visual memory and how we act on the basis of remembered information. Here, we study how externally imposed monetary costs influence behavior in a motor decision task that involves reach planning based on recalled information from VWM. We approach this from a decision theoretic perspective, viewing decisions of where to aim in relation to the utility of their outcomes given the uncertainty of memory representations. Our results indicate that subjects accounted for the uncertainty in their visual memory, showing a significant difference in their reach planning when monetary costs were imposed for memory errors. However, our findings indicate that subjects' memory representations per se were not biased by the imposed costs; rather, subjects adopted a near-optimal post-mnemonic decision strategy in their motor planning.
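
    The decision-theoretic framing in this abstract — choosing an aim point that maximizes expected gain (minimizes expected loss) under memory uncertainty and asymmetric costs — can be sketched with a small Monte-Carlo search. The noise level and cost ratio below are hypothetical, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_loss(aim, target, sigma, over_cost=3.0, under_cost=1.0, n=100_000):
    """Monte-Carlo expected loss for reaching to a remembered target with
    Gaussian memory/motor noise and asymmetric overshoot/undershoot costs."""
    endpoints = aim + rng.normal(0.0, sigma, n)
    err = endpoints - target
    # Overshoot (err > 0) is penalized more heavily than undershoot.
    return np.mean(np.where(err > 0, over_cost * err, -under_cost * err))

target, sigma = 10.0, 1.0
aims = np.linspace(8.0, 12.0, 81)
best = aims[np.argmin([expected_loss(a, target, sigma) for a in aims])]
# With overshoot 3x as costly, the loss-minimizing aim falls short of the
# target: a biased aim point without any bias in the memory itself.
```

    This mirrors the paper's conclusion: the imposed costs shift the post-mnemonic aim-point decision, not the stored memory representation.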

  12. Neural Substrates of Visual Spatial Coding and Visual Feedback Control for Hand Movements in Allocentric and Target-Directed Tasks

    PubMed Central

    Thaler, Lore; Goodale, Melvyn A.

    2011-01-01

    Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. 
In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements. PMID:21941474

  13. Sub-diffraction limit resolution in microscopy

    NASA Technical Reports Server (NTRS)

    Cheng, Ming (Inventor); Chen, Weinong (Inventor)

    2007-01-01

    A method and apparatus for visualizing sub-micron-size particles employs a polarizing microscope wherein a focused beam of polarized light is projected onto a target and a portion of the illuminating light is blocked from reaching the specimen, thereby producing a shadow region; diffracted light from the target is then projected onto the shadow region.

  14. Learning feedback and feedforward control in a mirror-reversed visual environment.

    PubMed

    Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi; Diedrichsen, Jörn

    2015-10-01

    When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relates to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as a common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers.

  15. Learning feedback and feedforward control in a mirror-reversed visual environment

    PubMed Central

    Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi

    2015-01-01

    When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relates to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as a common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers. PMID:26245313

  16. A unified dynamic neural field model of goal directed eye movements

    NASA Astrophysics Data System (ADS)

    Quinton, J. C.; Goffart, L.

    2018-01-01

    Primates heavily rely on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. The interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target with low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). This estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary, (2) interceptive and catch-up saccades when peaks build and relax on the neural field, (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits the maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
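
    The peak dynamics the abstract describes can be illustrated with a minimal one-dimensional Amari-style dynamic neural field, in which a localized stimulus builds a self-stabilized peak of activity marking the target location. The parameters below are generic textbook values, not those of the proposed model:

```python
import numpy as np

def dog_kernel(n, c_exc=1.2, s_exc=3.0, c_inh=0.6, s_inh=9.0):
    """Difference-of-Gaussians lateral interaction kernel on a periodic field."""
    x = np.arange(n)
    d = np.minimum(np.abs(x[:, None] - x[None, :]),
                   n - np.abs(x[:, None] - x[None, :]))
    return c_exc * np.exp(-d**2 / (2 * s_exc**2)) - c_inh * np.exp(-d**2 / (2 * s_inh**2))

def dnf_step(u, stim, w, dt=0.05, tau=1.0, h=-1.0):
    """One Euler step of a 1-D Amari field: tau du/dt = -u + h + stim + w f(u)."""
    f = (u > 0).astype(float)        # step-function firing rate
    return u + dt / tau * (-u + h + stim + w @ f)

n = 101
w = dog_kernel(n)
u = np.full(n, -1.0)                 # field starts at its resting level h
stim = 2.0 * np.exp(-(np.arange(n) - 60.0) ** 2 / (2 * 4.0 ** 2))  # target at site 60
for _ in range(200):
    u = dnf_step(u, stim, w)
peak = int(np.argmax(u))             # the self-stabilized peak localizes the target
```

    Local excitation and broader inhibition keep the peak narrow and stable once it forms, which is the fixed-point behaviour the abstract associates with smooth pursuit.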

  17. Orientation Behavior Using Registered Topographic Maps

    DTIC Science & Technology

    2006-01-01

    integrated with the ability to reach for visual targets (Marjanovic, Scassellati, & Williamson 1996). The same is true for social skills where the robot...behavior with reaching and manipulation tasks currently under parallel development by other members of the group (Marjanovic et al. 1996). 8 Conclusions...in alphabetical order): Mike Binnard, Rod Brooks, Robert Irie, Eleni Kapogannis, Matt Marjanovic, Yoky Matsuoka, Brian Scassellati, Nick Shectman

  18. Very Slow Search and Reach: Failure to Maximize Expected Gain in an Eye-Hand Coordination Task

    PubMed Central

    Zhang, Hang; Morvan, Camille; Etezad-Heydari, Louis-Alexandre; Maloney, Laurence T.

    2012-01-01

    We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt. PMID:23071430
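The gain structure of such a search-reach task can be sketched with a simple expected-gain calculation over the number of fixations taken before launching the reach. The function `expected_gain` and all parameter values below are hypothetical illustrations of the speed-accuracy trade-off, not the paper's actual decision-theoretic model:

```python
def expected_gain(n_items=6, p_detect=0.4, t_fix=0.3, t_reach=0.5,
                  reward=10.0, cost_per_s=2.0, k_fixations=3):
    """Expected gain when launching the reach after k fixations.

    Assumes each fixation independently identifies the target with
    probability p_detect; if it is still unidentified, the observer
    guesses among the n_items candidates. Reward is reduced in
    proportion to total elapsed time, as in the task described above.
    """
    p_found = 1.0 - (1.0 - p_detect) ** k_fixations
    p_hit = p_found + (1.0 - p_found) / n_items   # guess if not found
    total_time = k_fixations * t_fix + t_reach
    return p_hit * reward - cost_per_s * total_time

# The optimal strategy picks the k that maximizes expected gain:
best_k = max(range(11), key=lambda k: expected_gain(k_fixations=k))
```

Too few fixations make guessing likely; too many waste time, so expected gain peaks at an intermediate number of fixations.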

  19. Similar prevalence and magnitude of auditory-evoked and visually evoked activity in the frontal eye fields: implications for multisensory motor control.

    PubMed

    Caruso, Valeria C; Pages, Daniel S; Sommer, Marc A; Groh, Jennifer M

    2016-06-01

    Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75 and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals undergo tailoring to match roughly the strength of visual signals present in the FEF, facilitating accessing of a common motor output pathway. Copyright © 2016 the American Physiological Society.

  20. Action Control: Independent Effects of Memory and Monocular Viewing on Reaching Accuracy

    ERIC Educational Resources Information Center

    Westwood, D.A.; Robertson, C.; Heath, M.

    2005-01-01

    Evidence suggests that perceptual networks in the ventral visual pathway are necessary for action control when targets are viewed with only one eye, or when the target must be stored in memory. We tested whether memory-linked (i.e., open-loop versus memory-guided actions) and monocular-linked effects (i.e., binocular versus monocular actions) on…

  1. Matching Accuracy in Hemiparetic Cerebral Palsy during Unimanual and Bimanual Movements with (Mirror) Visual Feedback

    ERIC Educational Resources Information Center

    Smorenburg, Ana R. P.; Ledebt, Annick; Deconinck, Frederik J. A.; Savelsbergh, Geert J. P.

    2012-01-01

    In the present study participants with Spastic Hemiparetic Cerebral Palsy (SHCP) were asked to match the position of a target either with the impaired arm only (unimanual condition) or with both arms at the same time (bimanual condition). The target was placed at 4 different locations scaled to the individual maximum reaching distance. To test the…

  2. The Last Meter: Blind Visual Guidance to a Target.

    PubMed

    Manduchi, Roberto; Coughlan, James M

    2014-01-01

    Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.

  3. Effect of travel speed on the visual control of steering toward a goal.

    PubMed

    Chen, Rongrong; Niehorster, Diederick C; Li, Li

    2018-03-01

    Previous studies have proposed that people can use visual cues such as the instantaneous direction (i.e., heading) or future path trajectory of travel specified by optic flow or target visual direction in egocentric space to steer or walk toward a goal. In the current study, we examined what visual cues people use to guide their goal-oriented locomotion and whether their reliance on such visual cues changes as travel speed increases. We presented participants with optic flow displays that simulated their self-motion toward a target at various travel speeds under two viewing conditions in which we made target egocentric direction available or unavailable for steering. We found that for both viewing conditions, participants did not steer along a curved path toward the target such that the actual and the required path curvature to reach the target would converge when approaching the target. At higher travel speeds, participants showed a faster and larger reduction in target-heading angle and more accurate and precise steady-state control of aligning their heading specified by optic flow with the target. These findings support the claim that people use heading and target egocentric direction but not path for goal-oriented locomotion control, and their reliance on heading increases at higher travel speeds. The increased reliance on heading for goal-oriented locomotion control could be due to an increased reliability in perceiving heading from optic flow as the magnitude of flow increases with travel speed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  4. Light localization with low-contrast targets in a patient implanted with a suprachoroidal-transretinal stimulation retinal prosthesis.

    PubMed

    Endo, Takao; Fujikado, Takashi; Hirota, Masakazu; Kanda, Hiroyuki; Morimoto, Takeshi; Nishida, Kohji

    2018-04-20

    To evaluate the improvement in targeted reaching movements toward targets of various contrasts in a patient implanted with a suprachoroidal-transretinal stimulation (STS) retinal prosthesis. An STS retinal prosthesis was implanted in the right eye of a 42-year-old man with advanced Stargardt disease (visual acuity: right eye, light perception; left eye, hand motion). In localization tests during the 1-year follow-up period, the patient attempted to touch the center of a white square target (visual angle, 10°; contrast, 96, 85, or 74%) displayed at a random position on a monitor. The distance between the touched point and the center of the target (the absolute deviation) was averaged over 20 trials with the STS system on or off. With the left eye occluded, the absolute deviation was not consistently lower with the system on than off for high-contrast (96%) targets, but was consistently lower with the system on for low-contrast (74%) targets. With both eyes open, the absolute deviation was consistently lower with the system on than off for 85%-contrast targets. With the system on and 96%-contrast targets, we detected a shorter response time while covering the right eye, which had been implanted with the STS, compared to covering the left eye (2.41 ± 2.52 vs 8.45 ± 3.78 s, p < 0.01). Performance of a reaching movement improved in a patient with an STS retinal prosthesis implanted in an eye with residual natural vision. Patients with a retinal prosthesis may be able to improve their visual performance by using both artificial vision and their residual natural vision. Beginning date of the trial: Feb. 20, 2014 Date of registration: Jan. 4, 2014 Trial registration number: UMIN000012754 Registration site: UMIN Clinical Trials Registry (UMIN-CTR) http://www.umin.ac.jp/ctr/index.htm.

  5. The effect of sensory uncertainty due to amblyopia (lazy eye) on the planning and execution of visually-guided 3D reaching movements.

    PubMed

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C; Chandrakumar, Manokaraananthan; Wong, Agnes M F

    2012-01-01

    Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50-100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R2) which correlates the spatial position of the limb during the movement to endpoint position. Patients with amblyopia had reduced precision of the motor plan in all viewing conditions as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R2 values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory especially along the depth axis, which could be due to their abnormal stereopsis.
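The R2 measure of online control used in this record (correlating limb position during the movement with endpoint position across trials) can be computed as sketched below. The helper name `online_control_r2` and the synthetic usage are illustrative assumptions:

```python
import numpy as np

def online_control_r2(trajectories, frac=0.7):
    """R^2 between limb position at `frac` of movement time and the
    endpoint, across trials (one spatial axis per call).

    trajectories: array of shape (n_trials, n_time_samples).
    A high R^2 implies the endpoint is largely determined early,
    i.e. little online correction occurs after that time point.
    """
    trajectories = np.asarray(trajectories, dtype=float)
    n_samples = trajectories.shape[1]
    idx = int(frac * (n_samples - 1))         # e.g. 70% of movement time
    mid = trajectories[:, idx]
    end = trajectories[:, -1]
    r = np.corrcoef(mid, end)[0, 1]
    return r ** 2

# Synthetic illustration: straight-line reaches to varied endpoints
# (no online correction) give R^2 = 1 at 70% of movement time.
rng = np.random.default_rng(0)
ends = rng.normal(0.0, 1.0, 50)
straight = np.outer(ends, np.linspace(0.0, 1.0, 11))
```

Trajectories that are corrected late in flight decorrelate the 70% position from the endpoint, lowering R2, which is why higher R2 values in the severe-amblyopia group indicate less effective online control.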

  6. The Effect of Sensory Uncertainty Due to Amblyopia (Lazy Eye) on the Planning and Execution of Visually-Guided 3D Reaching Movements

    PubMed Central

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C.; Chandrakumar, Manokaraananthan; Wong, Agnes M. F.

    2012-01-01

    Background Impairment of spatiotemporal visual processing in amblyopia has been studied extensively, but its effects on visuomotor tasks have rarely been examined. Here, we investigate how visual deficits in amblyopia affect motor planning and online control of visually-guided, unconstrained reaching movements. Methods Thirteen patients with mild amblyopia, 13 with severe amblyopia and 13 visually-normal participants were recruited. Participants reached and touched a visual target during binocular and monocular viewing. Motor planning was assessed by examining spatial variability of the trajectory at 50–100 ms after movement onset. Online control was assessed by examining the endpoint variability and by calculating the coefficient of determination (R2) which correlates the spatial position of the limb during the movement to endpoint position. Results Patients with amblyopia had reduced precision of the motor plan in all viewing conditions as evidenced by increased variability of the reach early in the trajectory. Endpoint precision was comparable between patients with mild amblyopia and control participants. Patients with severe amblyopia had reduced endpoint precision along azimuth and elevation during amblyopic eye viewing only, and along the depth axis in all viewing conditions. In addition, they had significantly higher R2 values at 70% of movement time along the elevation and depth axes during amblyopic eye viewing. Conclusion Sensory uncertainty due to amblyopia leads to reduced precision of the motor plan. The ability to implement online corrections depends on the severity of the visual deficit, viewing condition, and the axis of the reaching movement. Patients with mild amblyopia used online control effectively to compensate for the reduced precision of the motor plan. 
In contrast, patients with severe amblyopia were not able to use online control as effectively to amend the limb trajectory especially along the depth axis, which could be due to their abnormal stereopsis. PMID:22363549

  7. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    PubMed

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
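The equalization via Stevens' power law mentioned in this record can be sketched as follows: perceived magnitude grows as a power of physical intensity, and inverting the law yields the physical intensity needed to match a given percept across modalities. The exponents used here (0.67 and 0.33) are common textbook illustrations, not the values fitted in the study:

```python
def stevens(intensity, k=1.0, a=0.67):
    """Stevens' power law: perceived magnitude = k * intensity**a."""
    return k * intensity ** a

def match_intensity(target_percept, k=1.0, a=0.33):
    """Physical intensity that yields a given perceived magnitude
    under Stevens' law (inverse of the power function)."""
    return (target_percept / k) ** (1.0 / a)

# Illustrative cross-modal match: find the visual intensity whose
# perceived magnitude equals that of an auditory stimulus.
aud_percept = stevens(4.0, a=0.67)                 # perceived loudness
vis_intensity = match_intensity(aud_percept, a=0.33)
```

Because the exponents differ across modalities, equal physical intensities do not feel equally strong; matching percepts rather than intensities is what makes the visual and auditory BF conditions comparable.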

  8. Patient DF's visual brain in action: Visual feedforward control in visual form agnosia.

    PubMed

    Whitwell, Robert L; Milner, A David; Cavina-Pratesi, Cristiana; Barat, Masihullah; Goodale, Melvyn A

    2015-05-01

    Patient DF, who developed visual form agnosia following ventral-stream damage, is unable to discriminate the width of objects, performing at chance, for example, when asked to open her thumb and forefinger a matching amount. Remarkably, however, DF adjusts her hand aperture to accommodate the width of objects when reaching out to pick them up (grip scaling). While this spared ability to grasp objects is presumed to be mediated by visuomotor modules in her relatively intact dorsal stream, it is possible that it may rely abnormally on online visual or haptic feedback. We report here that DF's grip scaling remained intact when her vision was completely suppressed during grasp movements, and it still dissociated sharply from her poor perceptual estimates of target size. We then tested whether providing trial-by-trial haptic feedback after making such perceptual estimates might improve DF's performance, but found that they remained significantly impaired. In a final experiment, we re-examined whether DF's grip scaling depends on receiving veridical haptic feedback during grasping. In one condition, the haptic feedback was identical to the visual targets. In a second condition, the haptic feedback was of a constant intermediate width while the visual target varied trial by trial. Despite this incongruent feedback, DF still scaled her grip aperture to the visual widths of the target blocks, showing only normal adaptation to the false haptically-experienced width. Taken together, these results strengthen the view that DF's spared grasping relies on a normal mode of dorsal-stream functioning, based chiefly on visual feedforward processing. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Visual cues and perceived reachability.

    PubMed

    Gabbard, Carl; Ammar, Diala

    2005-12-01

    A rather consistent finding in studies of perceived (imagined) compared to actual movement in a reaching paradigm is the tendency to overestimate at midline. Explanations of such behavior have focused primarily on perceptions of postural constraints and the notion that individuals calibrate reachability in reference to multiple degrees of freedom, also known as the whole-body explanation. The present study examined the role of visual information in the form of binocular and monocular cues in perceived reachability. Right-handed participants judged the reachability of visual targets at midline with both eyes open, dominant eye occluded, and the non-dominant eye covered. Results indicated that participants were relatively accurate with condition responses not being significantly different in regard to total error. Analysis of the direction of error (mean bias) revealed effective accuracy across conditions with only a marginal distinction between monocular and binocular conditions. Therefore, within the task conditions of this experiment, it appears that binocular and monocular cues provide sufficient visual information for effective judgments of perceived reach at midline.

  10. Cerebellar inactivation impairs memory of learned prism gaze-reach calibrations.

    PubMed

    Norris, Scott A; Hathaway, Emily N; Taylor, Jordan A; Thach, W Thomas

    2011-05-01

    Three monkeys performed a visually guided reach-touch task with and without laterally displacing prisms. The prisms offset the normally aligned gaze/reach and subsequent touch. Naive monkeys showed adaptation, such that on repeated prism trials the gaze-reach angle widened and touches hit nearer the target. On the first subsequent no-prism trial the monkeys exhibited an aftereffect, such that the widened gaze-reach angle persisted and touches missed the target in the direction opposite that of initial prism-induced error. After 20-30 days of training, monkeys showed long-term learning and storage of the prism gaze-reach calibration: they switched between prism and no-prism and touched the target on the first trials without adaptation or aftereffect. Injections of lidocaine into posterolateral cerebellar cortex or muscimol or lidocaine into dentate nucleus temporarily inactivated these structures. Immediately after injections into cortex or dentate, reaches were displaced in the direction of prism-displaced gaze, but no-prism reaches were relatively unimpaired. There was little or no adaptation on the day of injection. On days after injection, there was no adaptation and both prism and no-prism reaches were horizontally, and often vertically, displaced. A single permanent lesion (kainic acid) in the lateral dentate nucleus of one monkey immediately impaired only the learned prism gaze-reach calibration and in subsequent days disrupted both learning and performance. This effect persisted for the 18 days of observation, with little or no adaptation.

  11. Cerebellar inactivation impairs memory of learned prism gaze-reach calibrations

    PubMed Central

    Hathaway, Emily N.; Taylor, Jordan A.; Thach, W. Thomas

    2011-01-01

    Three monkeys performed a visually guided reach-touch task with and without laterally displacing prisms. The prisms offset the normally aligned gaze/reach and subsequent touch. Naive monkeys showed adaptation, such that on repeated prism trials the gaze-reach angle widened and touches hit nearer the target. On the first subsequent no-prism trial the monkeys exhibited an aftereffect, such that the widened gaze-reach angle persisted and touches missed the target in the direction opposite that of initial prism-induced error. After 20–30 days of training, monkeys showed long-term learning and storage of the prism gaze-reach calibration: they switched between prism and no-prism and touched the target on the first trials without adaptation or aftereffect. Injections of lidocaine into posterolateral cerebellar cortex or muscimol or lidocaine into dentate nucleus temporarily inactivated these structures. Immediately after injections into cortex or dentate, reaches were displaced in the direction of prism-displaced gaze, but no-prism reaches were relatively unimpaired. There was little or no adaptation on the day of injection. On days after injection, there was no adaptation and both prism and no-prism reaches were horizontally, and often vertically, displaced. A single permanent lesion (kainic acid) in the lateral dentate nucleus of one monkey immediately impaired only the learned prism gaze-reach calibration and in subsequent days disrupted both learning and performance. This effect persisted for the 18 days of observation, with little or no adaptation. PMID:21389311

  12. Task-dependent vestibular feedback responses in reaching.

    PubMed

    Keyser, Johannes; Medendorp, W Pieter; Selen, Luc P J

    2017-07-01

    When reaching for an earth-fixed object during self-rotation, the motor system should appropriately integrate vestibular signals and sensory predictions to compensate for the intervening motion and its induced inertial forces. While it is well established that this integration occurs rapidly, it is unknown whether vestibular feedback is specifically processed dependent on the behavioral goal. Here, we studied whether vestibular signals evoke fixed responses with the aim to preserve the hand trajectory in space or are processed more flexibly, correcting trajectories only in task-relevant spatial dimensions. We used galvanic vestibular stimulation to perturb reaching movements toward a narrow or a wide target. Results show that the same vestibular stimulation led to smaller trajectory corrections to the wide than the narrow target. We interpret this reduced compensation as a task-dependent modulation of vestibular feedback responses, tuned to minimally intervene with the task-irrelevant dimension of the reach. These task-dependent vestibular feedback corrections are in accordance with a central prediction of optimal feedback control theory and mirror the sophistication seen in feedback responses to mechanical and visual perturbations of the upper limb. NEW & NOTEWORTHY Correcting limb movements for external perturbations is a hallmark of flexible sensorimotor behavior. While visual and mechanical perturbations are corrected in a task-dependent manner, it is unclear whether a vestibular perturbation, naturally arising when the body moves, is selectively processed in reach control. We show, using galvanic vestibular stimulation, that reach corrections to vestibular perturbations are task dependent, consistent with a prediction of optimal feedback control theory. Copyright © 2017 the American Physiological Society.

  13. Disruption of State Estimation in the Human Lateral Cerebellum

    PubMed Central

    Miall, R. Chris; Christensen, Lars O. D.; Cain, Owen; Stanley, James

    2007-01-01

    The cerebellum has been proposed to be a crucial component in the state estimation process that combines information from motor efferent and sensory afferent signals to produce a representation of the current state of the motor system. Such a state estimate of the moving human arm would be expected to be used when the arm is rapidly and skillfully reaching to a target. We now report the effects of transcranial magnetic stimulation (TMS) over the ipsilateral cerebellum as healthy humans were made to interrupt a slow voluntary movement to rapidly reach towards a visually defined target. Errors in the initial direction and in the final finger position of this reach-to-target movement were significantly higher for cerebellar stimulation than they were in control conditions. The average directional errors in the cerebellar TMS condition were consistent with the reaching movements being planned and initiated from an estimated hand position that was 138 ms out of date. We suggest that these results demonstrate that the cerebellum is responsible for estimating the hand position over this time interval and that TMS disrupts this state estimate. PMID:18044990
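The idea that reaches were "planned and initiated from an estimated hand position that was 138 ms out of date", and that an intact cerebellar state estimator would extrapolate across that delay, can be sketched as follows. The helper names and numerical values are illustrative assumptions:

```python
import numpy as np

def stale_estimate(current_pos, vel, delay_s):
    """Hand position as it was sensed `delay_s` ago: what planning
    would use if the delayed sensory signal were not compensated."""
    return np.asarray(current_pos, float) - delay_s * np.asarray(vel, float)

def forward_estimate(sensed_pos, vel, delay_s):
    """Forward-model compensation: extrapolate the delayed sensory
    estimate by the known delay to recover the current position."""
    return np.asarray(sensed_pos, float) + delay_s * np.asarray(vel, float)

def aim_direction(hand_pos, target):
    """Unit vector from the (estimated) hand position to the target."""
    v = np.asarray(target, float) - np.asarray(hand_pos, float)
    return v / np.linalg.norm(v)

# Hand moving at 0.3 m/s; a 138 ms stale estimate is off by ~4 cm,
# which biases the initial reach direction, as reported after TMS.
vel = [0.3, 0.0]
true_pos = [0.2, 0.0]
sensed = stale_estimate(true_pos, vel, 0.138)
```

Under this sketch, disrupting the extrapolation step (as cerebellar TMS is argued to do) leaves planning with the stale position, producing exactly the kind of directional error the record describes.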

  14. Combined influence of visual scene and body tilt on arm pointing movements: gravity matters!

    PubMed

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R; Bourdin, Christophe; Mestre, Daniel R; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., 'combined' tilts equal to the sum of 'single' tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues.
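The additive models compared in this record ('combined' tilt errors equal to the sum of 'single' tilt errors, with the scene's orientation coded either relative to the body or relative to gravity) can be sketched as below. The per-degree error gains are illustrative placeholders, not fitted values from the study:

```python
def combined_error_additive(body_tilt_deg, scene_tilt_in_space_deg,
                            err_per_deg_body=0.1, err_per_deg_scene=0.2,
                            frame="gravity"):
    """Additive prediction of final pointing error (deg) for combined
    body and visual-scene tilts.

    frame="gravity": the scene's orientation is coded relative to
    vertical (the model the study found to explain the data).
    frame="body": the scene's orientation is coded relative to the
    observer, so a scene tilted with the body contributes nothing.
    """
    if frame == "gravity":
        scene_component = scene_tilt_in_space_deg
    else:  # body-centered reference frame
        scene_component = scene_tilt_in_space_deg - body_tilt_deg
    return (err_per_deg_body * body_tilt_deg
            + err_per_deg_scene * scene_component)
```

The two models diverge exactly when body and scene are tilted together: the body-centered model predicts no scene contribution, while the gravity-centered model still does, which is the kind of contrast that let the authors reject the body-centered account.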

  15. Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!

    PubMed Central

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371

  16. From genes to brain oscillations: is the visual pathway the epigenetic clue to schizophrenia?

    PubMed

    González-Hernández, J A; Pita-Alcorta, C; Cedeño, I R

    2006-01-01

    Molecular and gene expression data, and more recently mitochondrial genes and possible epigenetic regulation by non-coding genes, are revolutionizing our views on schizophrenia. Genes and epigenetic mechanisms are triggered by cell-cell interactions and by external stimuli. A number of recent clinical and molecular observations indicate that epigenetic factors may be operational in the origin of the illness. Based on these molecular insights, gene expression profiles, and the epigenetic regulation of genes, we returned to neurophysiology (brain oscillations) and found a putative role for visual experience (i.e., visual stimuli) as an epigenetic factor. The functional evidence provided here establishes a direct link between the striate and extrastriate unimodal visual cortex and the neurobiology of schizophrenia. This result supports the hypothesis that visual experience plays a potential role as an epigenetic factor and contributes to triggering and/or maintaining the progression of schizophrenia. In this case, candidate genes sensitive to the visual 'insult' may be located within the visual cortex, including associative areas, while the integrity of the visual pathway before it reaches the primary visual cortex is preserved. The same effect can be expected if target genes are localized within the visual pathway, which is in fact more sensitive to 'insult' during early life than the cortex per se. If this process affects gene expression at these sites, a stable, sensory-specific 'insult' (i.e., distorted visual information) enters the visual system and is propagated to fronto-temporo-parietal multimodal areas even from early maturation periods. The difference in the timing of postnatal neuroanatomical events between such areas and the primary visual cortex in humans (with the former reaching the same developmental landmarks later in life than the latter) is 'optimal' for establishing abnormal 'cell communication' mediated by the visual system that may further interfere with local physiology. In this context, the strategy for finding target genes needs to be rearranged and redirected toward vision-related genes. In addition, psychophysical studies combining functional neuroimaging and electrophysiology are strongly recommended in the search for epigenetic clues that will enable carrier gene association studies in schizophrenia.

  17. Neural mechanisms of limb position estimation in the primate brain.

    PubMed

    Shi, Ying; Buneo, Christopher A

    2011-01-01

    Understanding the neural mechanisms of limb position estimation is important both for comprehending the neural control of goal-directed arm movements and for developing neuroprosthetic systems designed to replace lost limb function. Here we examined the role of area 5 of the posterior parietal cortex in estimating limb position based on visual and somatic (proprioceptive, efference copy) signals. Single-unit recordings were obtained as monkeys reached to visual targets presented in a semi-immersive virtual reality environment. On half of the trials, animals were required to maintain their limb position at these targets while receiving both visual and non-visual feedback of their arm position, while on the other trials visual feedback was withheld. When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace, but very few neurons modulated their firing rates based on the presence/absence of visual feedback. At the population level, however, decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level.
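Population-level decoding of limb position of the kind reported here is commonly done with a linear readout from firing rates. The least-squares sketch below is a generic illustration on synthetic tuning data, not the study's actual decoder:

```python
import numpy as np

def fit_decoder(rates, positions):
    """Least-squares linear decoder mapping population firing rates
    (trials x neurons) to limb position (trials x 2)."""
    X = np.column_stack([rates, np.ones(len(rates))])  # add bias term
    W, *_ = np.linalg.lstsq(X, positions, rcond=None)
    return W

def decode(rates, W):
    """Read out limb position from firing rates with a fitted decoder."""
    X = np.column_stack([rates, np.ones(len(rates))])
    return X @ W

# Synthetic population: each neuron's rate is a (noisy) linear
# function of 2-D limb position.
rng = np.random.default_rng(1)
positions = rng.uniform(-1.0, 1.0, size=(100, 2))
gains = rng.normal(0.0, 1.0, size=(2, 20))          # 20 neurons
rates = positions @ gains + rng.normal(0.0, 0.01, size=(100, 20))
W = fit_decoder(rates, positions)
```

In this framing, a condition effect that is invisible in single neurons (such as presence of visual feedback) can still show up as a change in population decoding accuracy, as the record reports.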

  18. Self-motivated visual scanning predicts flexible navigation in a virtual environment.

    PubMed

    Ploran, Elisabeth J; Bevitt, Jacob; Oshiro, Jaris; Parasuraman, Raja; Thompson, James C

    2014-01-01

The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks to reach a learned target from a new position) may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined whether visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position, as long as the landmarks within the environment remained consistent with the period of original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.

  19. Proprioceptive recalibration in the right and left hands following abrupt visuomotor adaptation.

    PubMed

    Salomonczyk, Danielle; Henriques, Denise Y P; Cressman, Erin K

    2012-03-01

    Previous studies have demonstrated that after reaching with misaligned visual feedback of the hand, one adapts his or her reaches and partially recalibrates proprioception, such that sense of felt hand position is shifted to match the seen hand position. However, to date, this has only been demonstrated in the right (dominant) hand following reach training with a visuomotor distortion in which the rotated cursor distortion was introduced gradually. As reach adaptation has been shown to differ depending on how the distortion is introduced (gradual vs. abrupt), we sought to examine proprioceptive recalibration following reach training with a cursor that was abruptly rotated 30° clockwise relative to hand motion. Furthermore, because the left and right arms have demonstrated selective advantages when matching visual and proprioceptive targets, respectively, we assessed proprioceptive recalibration in right-handed subjects following training with either the right or the left hand. On average, we observed shifts in felt hand position of approximately 7.6° following training with misaligned visual feedback of the hand, which is consistent with our previous findings in which the distortion was introduced gradually. Moreover, no difference was observed in proprioceptive recalibration across the left and right hands. These findings suggest that proprioceptive recalibration is a robust process that arises symmetrically in the two hands following visuomotor adaptation regardless of the initial magnitude of the error signal.

  20. [Image fusion: use in the control of the distribution of prostatic biopsies].

    PubMed

    Mozer, Pierre; Baumann, Michaël; Chevreau, Grégoire; Troccaz, Jocelyne

    2008-02-01

    Prostate biopsies are performed under 2D TransRectal UltraSound (US) guidance by sampling the prostate according to a predefined pattern. Modern image processing tools allow better control of biopsy distribution. We evaluated the accuracy of a single operator performing a pattern of 12 ultrasound-guided biopsies by registering 3D ultrasound control images acquired after each biopsy. For each patient, prostate image alignment was performed automatically with a voxel-based registration algorithm allowing visualization of each biopsy trajectory in a single ultrasound reference volume. On average, the operator reached the target in 60% of all cases. This study shows that it is difficult to accurately reach targets in the prostate using 2D ultrasound. In the near future, real-time fusion of MRI and US images will allow selection of a target in previously acquired MR images and biopsy of this target by US guidance.

  1. Functional anatomy of nonvisual feedback loops during reaching: a positron emission tomography study.

    PubMed

    Desmurget, M; Gréa, H; Grethe, J S; Prablanc, C; Alexander, G E; Grafton, S T

    2001-04-15

    Reaching movements performed without vision of the moving limb are continuously monitored, during their execution, by feedback loops (designated nonvisual). In this study, we investigated the functional anatomy of these nonvisual loops using positron emission tomography (PET). Seven subjects had to "look at" (eye) or "look and point to" (eye-arm) visual targets whose location either remained stationary or changed undetectably during the ocular saccade (when vision is suppressed). Slightly changing the target location during gaze shift causes an increase in the amount of correction to be generated. Functional anatomy of nonvisual feedback loops was identified by comparing the reaching condition involving large corrections (jump) with the reaching condition involving small corrections (stationary), after subtracting the activations associated with saccadic movements and hand movement planning [(eye-arm-jumping minus eye-jumping) minus (eye-arm-stationary minus eye-stationary)]. Behavioral data confirmed that the subjects were both accurate at reaching to the stationary targets and able to update their movement smoothly and early in response to the target jump. PET difference images showed that these corrections were mediated by a restricted network involving the left posterior parietal cortex, the right anterior intermediate cerebellum, and the left primary motor cortex. These results are consistent with our knowledge of the functional properties of these areas and more generally with models emphasizing parietal-cerebellar circuits for processing a dynamic motor error signal.

  2. Effects of continuous visual feedback during sitting balance training in chronic stroke survivors.

    PubMed

    Pellegrino, Laura; Giannoni, Psiche; Marinelli, Lucio; Casadio, Maura

    2017-10-16

Postural control deficits are common in stroke survivors, and rehabilitation programs often include balance training based on visual feedback to improve the control of body position or of the voluntary shift of body weight in space. In the present work, a group of chronic stroke survivors, while sitting on a force plate, exercised the ability to control their Center of Pressure with a training based on continuous visual feedback. The goal of this study was to test if and to what extent chronic stroke survivors were able to learn the task and transfer the learned ability to a condition without visual feedback and to directions and displacement amplitudes different from those experienced during training. Eleven chronic stroke survivors (5 male, 6 female; age: 59.72 ± 12.84 years) participated in this study. Subjects were seated on a stool positioned on top of a custom-built force platform. Their Center of Pressure positions were mapped to the coordinates of a cursor on a computer monitor. During training, the cursor position was always displayed and subjects had to reach targets by shifting their Center of Pressure by moving their trunk. Before and after training, subjects were required to reach, without visual feedback of the cursor, the training targets as well as other targets positioned in different directions and at different displacement amplitudes. During training, most stroke survivors were able to perform the required task and to improve their performance in terms of duration, smoothness, and movement extent, although not in terms of movement direction. However, when we removed the visual feedback, most of them had no improvement with respect to their pre-training performance. This study suggests that postural training based exclusively on continuous visual feedback provides limited benefits for stroke survivors if administered alone.
However, the positive gains observed during training justify the integration of this technology-based protocol in a well-structured and personalized physiotherapy training, where the combination of the two approaches may lead to functional recovery.

  3. Active Guidance of a Handheld Micromanipulator using Visual Servoing.

    PubMed

    Becker, Brian C; Voros, Sandrine; Maclachlan, Robert A; Hager, Gregory D; Riviere, Cameron N

    2009-05-12

    In microsurgery, a surgeon often deals with anatomical structures of sizes that are close to the limit of the human hand accuracy. Robotic assistants can help to push beyond the current state of practice by integrating imaging and robot-assisted tools. This paper demonstrates control of a handheld tremor reduction micromanipulator with visual servo techniques, aiding the operator by providing three behaviors: snap-to, motion-scaling, and standoff-regulation. A stereo camera setup viewing the workspace under high magnification tracks the tip of the micromanipulator and the desired target object being manipulated. Individual behaviors activate in task-specific situations when the micromanipulator tip is in the vicinity of the target. We show that the snap-to behavior can reach and maintain a position at a target with an accuracy of 17.5 ± 0.4μm Root Mean Squared Error (RMSE) distance between the tip and target. Scaling the operator's motions and preventing unwanted contact with non-target objects also provides a larger margin of safety.
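The snap-to behavior described here is, at heart, a proportional control law on the visually measured tip-to-target error. A minimal sketch (the gain, activation radius, and 2-D geometry are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def servo_step(tip, target, gain=0.5, snap_radius=200.0):
    """One proportional visual-servoing update (toy sketch, units in um).

    The 'snap-to' behavior engages only when the tracked tip is within
    `snap_radius` of the target; it then commands a step toward the
    target proportional to the remaining error.
    """
    tip, target = np.asarray(tip, float), np.asarray(target, float)
    error = target - tip
    if np.linalg.norm(error) > snap_radius:
        return tip                  # outside the activation zone: no correction
    return tip + gain * error       # proportional correction toward the target

# Iterating the update from inside the snap zone converges on the target.
tip, target = np.array([100.0, 50.0]), np.array([0.0, 0.0])
for _ in range(20):
    tip = servo_step(tip, target)
```

In the actual system the correction drives the manipulator actuators rather than a displayed point, and the tip and target coordinates come from the stereo camera tracker.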

  4. Looking and touching: What extant approaches reveal about the structure of early word knowledge

    PubMed Central

    Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret

    2014-01-01

    The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants’ responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. PMID:25444711

  5. Chess players' eye movements reveal rapid recognition of complex visual patterns: Evidence from a chess-related visual search task.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-03-01

    To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., "the Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.

  6. Anti-pointing is mediated by a perceptual bias of target location in left and right visual space.

    PubMed

    Heath, Matthew; Maraj, Anika; Gradkowski, Ashlee; Binsted, Gordon

    2009-01-01

We sought to determine whether mirror-symmetrical limb movements (so-called anti-pointing) elicit a pattern of endpoint bias commensurate with perceptual judgments. In particular, we examined whether asymmetries related to the perceptual over- and under-estimation of target extent in respective left and right visual space impact the trajectories of anti-pointing. In Experiment 1, participants completed direct (i.e. pro-pointing) and mirror-symmetrical (i.e. anti-pointing) responses to targets in left and right visual space with their right hand. In line with the anti-saccade literature, anti-pointing yielded longer reaction times than pro-pointing: a result suggesting increased top-down processing for the sensorimotor transformations underlying a mirror-symmetrical response. Most interestingly, pro-pointing yielded comparable endpoint accuracy in left and right visual space; however, anti-pointing produced an under- and overshooting bias in respective left and right visual space. In Experiment 2, we replicated the findings from Experiment 1 and further demonstrated that the endpoint bias of anti-pointing is independent of the reaching limb (i.e. left vs. right hand) and of between-task differences in saccadic drive. We thus propose that the visual field-specific endpoint bias observed here is related to the cognitive (i.e. top-down) nature of anti-pointing and the corollary use of visuo-perceptual networks to support the sensorimotor transformations underlying such actions.

  7. Real-time vision, tactile cues, and visual form agnosia: removing haptic feedback from a “natural” grasping task induces pantomime-like grasps

    PubMed Central

    Whitwell, Robert L.; Ganel, Tzvi; Byrne, Caitlin M.; Goodale, Melvyn A.

    2015-01-01

    Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. “Natural” prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object (“haptics-based object information”) once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets (“grip scaling”) when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF’s grip scaling slopes. In the second experiment, we examined an “unnatural” grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. 
We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time-pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased RT, slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real time-pantomime grasping task. These effects seem to be independent of those that arise from using the mirror in general as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that the grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to the widths of the target. Thus, using the mirror has real consequences on grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts. PMID:25999834

  9. Online Control of Prehension Predicts Performance on a Standardized Motor Assessment Test in 8- to 12-Year-Old Children

    PubMed Central

    Blanchard, Caroline C. V.; McGlashan, Hannah L.; French, Blandine; Sperring, Rachel J.; Petrocochino, Bianca; Holmes, Nicholas P.

    2017-01-01

    Goal-directed hand movements are guided by sensory information and may be adjusted ‘online,’ during the movement. If the target of a movement unexpectedly changes position, trajectory corrections can be initiated in as little as 100 ms in adults. This rapid visual online control is impaired in children with developmental coordination disorder (DCD), and potentially in other neurodevelopmental conditions. We investigated the visual control of hand movements in children in a ‘center-out’ double-step reaching and grasping task, and examined how parameters of this visuomotor control co-vary with performance on standardized motor tests often used with typically and atypically developing children. Two groups of children aged 8–12 years were asked to reach and grasp an illuminated central ball on a vertically oriented board. On a proportion of trials, and at movement onset, the illumination switched unpredictably to one of four other balls in a center-out configuration (left, right, up, or down). When the target moved, all but one of the children were able to correct their movements before reaching the initial target, at least on some trials, but the latencies to initiate these corrections were longer than those typically reported in the adult literature, ranging from 211 to 581 ms. These later corrections may be due to less developed motor skills in children, or to the increased cognitive and biomechanical complexity of switching movements in four directions. In the first group (n = 187), reaching and grasping parameters significantly predicted standardized movement scores on the MABC-2, most strongly for the aiming and catching component. In the second group (n = 85), these same parameters did not significantly predict scores on the DCDQ′07 parent questionnaire. Our reaching and grasping task provides a sensitive and continuous measure of movement skill that predicts scores on standardized movement tasks used to screen for DCD. PMID:28360874

  10. Learning visuomotor transformations for gaze-control and grasping.

    PubMed

    Hoffmann, Heiko; Schenck, Wolfram; Möller, Ralf

    2005-08-01

    For reaching to and grasping of an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target's position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity in having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at arbitrary position on a table in 94% of trials.
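The arm controller's mapping-by-completion can be sketched, under the strong simplifying assumption that the joint sensor-motor density is a single Gaussian (the paper's density model is richer, precisely to handle redundant postures), as conditioning on the given sensor part:

```python
import numpy as np

# Toy paired data: a 1-D "target position" sensor value x and a
# correlated 1-D "arm posture" q (both invented for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
q = 0.8 * x + 0.05 * rng.standard_normal(500)
data = np.stack([x, q], axis=1)

# Fit the joint density: here just a single Gaussian (mean + covariance).
mu = data.mean(axis=0)
cov = np.cov(data.T)

def complete_pattern(x_given):
    """Fill in the missing motor part of a partially given (sensor, motor)
    pattern: the conditional mean E[q | x] of the fitted joint Gaussian."""
    return mu[1] + cov[1, 0] / cov[0, 0] * (x_given - mu[0])

q_hat = complete_pattern(0.5)   # posture completed for a target at x = 0.5
```

Because conditioning works in either direction, the same fitted density could equally complete the sensor part from a given motor pattern; resolving a set of redundant arm postures for one target is what motivates a multimodal density model rather than this single Gaussian.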

  11. Frazzled promotes growth cone attachment at the source of a Netrin gradient in the Drosophila visual system

    PubMed Central

    Akin, Orkun; Zipursky, S Lawrence

    2016-01-01

    Axon guidance is proposed to act through a combination of long- and short-range attractive and repulsive cues. The ligand-receptor pair, Netrin (Net) and Frazzled (Fra) (DCC, Deleted in Colorectal Cancer, in vertebrates), is recognized as the prototypical effector of chemoattraction, with roles in both long- and short-range guidance. In the Drosophila visual system, R8 photoreceptor growth cones were shown to require Net-Fra to reach their target, the peak of a Net gradient. Using live imaging, we show, however, that R8 growth cones reach and recognize their target without Net, Fra, or Trim9, a conserved binding partner of Fra, but do not remain attached to it. Thus, despite the graded ligand distribution along the guidance path, Net-Fra is not used for chemoattraction. Based on findings in other systems, we propose that adhesion to substrate-bound Net underlies both long- and short-range Net-Fra-dependent guidance in vivo, thereby eroding the distinction between them. DOI: http://dx.doi.org/10.7554/eLife.20762.001 PMID:27743477

  12. Visual error augmentation enhances learning in three dimensions.

    PubMed

    Sharp, Ian; Huang, Felix; Patton, James

    2011-09-02

Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancement, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal: rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates were reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 s and 0.5 cm maximum perpendicular trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions and smaller errors for this group. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removal of the reversal, all subjects returned to baseline rapidly, within 6 trials.
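Visually enhancing deviation from a straight-line path amounts to redrawing the cursor with the perpendicular (off-path) component of the hand position magnified. A sketch, where the gain value is an assumption rather than the study's setting:

```python
import numpy as np

def augment_error(hand_pos, path_start, path_end, gain=2.0):
    """Displayed cursor position with deviation from the straight-line
    path visually magnified (toy sketch; `gain` is a made-up value).

    The component of the hand position along the path is shown as-is;
    the perpendicular deviation is scaled by `gain`.
    """
    hand_pos = np.asarray(hand_pos, float)
    a, b = np.asarray(path_start, float), np.asarray(path_end, float)
    u = (b - a) / np.linalg.norm(b - a)   # unit vector along the path
    rel = hand_pos - a
    along = np.dot(rel, u) * u            # on-path component (unchanged)
    perp = rel - along                    # off-path deviation (amplified)
    return a + along + gain * perp

# A hand 0.1 off a unit path along x is displayed 0.2 off the path.
shown = augment_error([0.5, 0.1, 0.0], [0, 0, 0], [1, 0, 0])
```

With `gain > 1` the subject sees, and therefore corrects, errors they might otherwise leave uncorrected; the hand itself is never displaced.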

  13. Eye movements and the span of the effective stimulus in visual search.

    PubMed

    Bertera, J H; Rayner, K

    2000-04-01

    The span of the effective stimulus during visual search through an unstructured alphanumeric array was investigated by using eye-contingent-display changes while the subjects searched for a target letter. In one condition, a window exposing the search array moved in synchrony with the subjects' eye movements, and the size of the window was varied. Performance reached asymptotic levels when the window was 5 degrees. In another condition, a foveal mask moved in synchrony with each eye movement, and the size of the mask was varied. The foveal mask conditions were much more detrimental to search behavior than the window conditions, indicating the importance of foveal vision during search. The size of the array also influenced performance, but performance reached asymptote for all array sizes tested at the same window size, and the effect of the foveal mask was the same for all array sizes. The results indicate that both acuity and difficulty of the search task influenced the span of the effective stimulus during visual search.

  14. Perceived reachability in single- and multiple-degree-of-freedom workspaces.

    PubMed

    Gabbard, Carl; Ammar, Diala; Lee, Sunghan

    2006-11-01

    In comparisons of perceived (imagined) and actual reaches, investigators consistently find a tendency to overestimate. A primary explanation for that phenomenon is that individuals reach as a "whole-body engagement" involving multiple degrees of freedom (m-df). The authors examined right-handers (N = 28) in 1-df and m-df workspaces by having them judge the reachability of targets at midline, right, and left visual fields. Response profiles were similar for total error. Both conditions reflected an overestimation bias, although the bias was significantly greater in the m-df condition. Midline responses differed (greater overestimation) from those of right and left visual fields, which were similar. Although the authors would have predicted better performance in the m-df condition, it seems plausible that if individuals think in terms of m-df, they may feel more confident in that condition and thereby exhibit greater overestimation. Furthermore, the authors speculate that the reduced bias at the side fields may be attributed to a more conservative strategy based in part on perceived reach constraints.

  15. Prism adaptation speeds reach initiation in the direction of the prism after-effect.

    PubMed

    Striemer, Christopher L; Borza, Carley A

    2017-10-01

    Damage to the temporal-parietal cortex in the right hemisphere often leads to spatial neglect-a disorder in which patients are unable to attend to sensory input from their contralesional (left) side. Neglect has been associated with both attentional and premotor deficits. That is, in addition to having difficulty with attending to the left side, patients are often slower to initiate leftward vs. rightward movements (i.e., directional hypokinesia). Previous research has indicated that a brief period of adaptation to rightward shifting prisms can reduce symptoms of neglect by adjusting the patient's movements leftward, toward the neglected field. Although prism adaptation has been shown to reduce spatial attention deficits in patients with neglect, very little work has examined the effects of prisms on premotor symptoms. In the current study, we examined this in healthy individuals using leftward shifting prisms to induce a rightward shift in the egocentric reference frame, similar to neglect patients prior to prism adaptation. Specifically, we examined the speed with which healthy participants initiated leftward and rightward reaches (without visual feedback) prior to and following adaptation to either 17° leftward (n = 16) or 17° rightward (n = 15) shifting prisms. Our results indicated that, following adaptation, participants were significantly faster to initiate reaches towards targets located in the direction opposite the prism shift. That is, participants were faster to initiate reaches to right targets following leftward prism adaptation and were faster to initiate reaches to left targets following rightward prism adaptation. Overall, these results are consistent with the idea that prism adaptation can influence the speed with which a reach can be initiated toward a target in the direction opposite the prism shift, possibly through altering activity in neural circuits involved in reach planning.

  16. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    PubMed

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
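The entropy measure described above can be sketched as the Shannon entropy of an image's intensity histogram: a near-uniform view of a single object yields low entropy, while a cluttered scene spreads intensity across many bins. The function names and the decision threshold below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (bits) of a grayscale image's intensity histogram."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def classify_view(gray, threshold=4.0):
    """Low entropy -> likely a single object (candidate landmark);
    high entropy -> several objects (treat as a potential obstacle).
    The 4.0-bit threshold is purely illustrative, not the authors' value."""
    return "landmark-candidate" if image_entropy(gray) < threshold else "clutter"
```

A flat, single-surface view has entropy near zero, while a noisy multi-object scene approaches the 8-bit maximum for 256 intensity levels; thresholding that value gives the landmark-vs-obstacle decision the abstract describes.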

  18. Task-induced Changes in Idiopathic Infantile Nystagmus Vary with Gaze.

    PubMed

    Salehi Fadardi, Marzieh; Bathke, Arne C; Harrar, Solomon W; Abel, Larry Allen

    2017-05-01

    Investigations of infantile nystagmus syndrome (INS) at center or at the null position have reported that INS worsens when visual demand is combined with internal states, e.g. stress. Visual function and INS parameters such as foveation time, frequency, amplitude, and intensity can also be influenced by gaze position. We hypothesized that increases from baseline in visual demand and mental load would affect INS parameters at the null position differently than at other gaze positions. Eleven participants with idiopathic INS were asked to determine the direction of Tumbling-E targets, whose visual demand was varied through changes in size and contrast, using a staircase procedure. Targets appeared between ±25° in 5° steps. The task was repeated with both mental arithmetic and time restriction to impose higher mental load, confirmed through subjective ratings and concurrent physiological measurements. Within-subject comparisons were limited to the null and 15° away from it. No significant main effects of task on any INS parameters were found. At both locations, high mental load worsened task performance metrics, i.e. lowest contrast (P = .001) and smallest optotype size reached (P = .012). There was a significant interaction between mental load and gaze position for foveation time (P = .02) and for the smallest optotype reached (P = .028). The increase in threshold optotype size from the low to high mental load was greater at the null than away from it. During high visual demand, foveation time significantly decreased from baseline at the null as compared to away from it (mean difference ± SE: 14.19 ± 0.7 msec; P = .010). Under high visual demand, the effects of increased mental load on foveation time and visual task performance differed at the null as compared to 15° away from it. Assessment of these effects could be valuable when evaluating INS clinically and when considering its impact on patients' daily activities.
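The abstract does not specify which staircase rule or step sizes were used; a minimal 1-up/1-down staircase of the kind commonly used to estimate contrast or size thresholds might look like the following sketch, in which every name and parameter is illustrative.

```python
def staircase(respond, start=1.0, step=0.1, floor=0.0, reversals_needed=8):
    """Minimal 1-up/1-down adaptive staircase (illustrative only; the
    study's exact rule and stopping criterion are not in the abstract).
    `respond(level)` returns True if the observer answered correctly.
    Returns the threshold estimate: the mean level at the reversals."""
    level, going_down = start, True
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        correct = respond(level)          # correct -> make the task harder
        if correct != going_down:         # direction change = a reversal
            reversal_levels.append(level)
            going_down = correct
        level = max(floor, level - step) if correct else level + step
    return sum(reversal_levels) / len(reversal_levels)
```

Run against a simulated observer with a fixed threshold, the track oscillates around that threshold and the reversal average converges near it.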

  19. Neural correlates of learning and trajectory planning in the posterior parietal cortex

    PubMed Central

    Torres, Elizabeth B.; Quian Quiroga, Rodrigo; Cui, He; Buneo, Christopher A.

    2013-01-01

    The posterior parietal cortex (PPC) is thought to play an important role in the planning of visually-guided reaching movements. However, the relative roles of the various subdivisions of the PPC in this function are still poorly understood. For example, studies of dorsal area 5 point to a representation of reaches in both extrinsic (endpoint) and intrinsic (joint or muscle) coordinates, as evidenced by partial changes in preferred directions and positional discharge with changes in arm posture. In contrast, recent findings suggest that the adjacent medial intraparietal area (MIP) is involved in more abstract representations, e.g., encoding reach target in visual coordinates. Such a representation is suitable for planning reach trajectories involving shortest distance paths to targets straight ahead. However, it is currently unclear how MIP contributes to the planning of other types of trajectories, including those with various degrees of curvature. Such curved trajectories recruit different joint excursions and might help us address whether their representation in the PPC is purely in extrinsic coordinates or in intrinsic ones as well. Here we investigated the role of the PPC in these processes during an obstacle avoidance task for which the animals had not been explicitly trained. We found that PPC planning activity was predictive of both the spatial and temporal aspects of upcoming trajectories. The same PPC neurons predicted the upcoming trajectory in both endpoint and joint coordinates. The predictive power of these neurons remained stable and accurate despite concomitant motor learning across task conditions. These findings suggest the role of the PPC can be extended from specifying abstract movement goals to expressing these plans as corresponding trajectories in both endpoint and joint coordinates. Thus, the PPC appears to contribute to reach planning and approach-avoidance arm motions at multiple levels of representation. PMID:23730275

  20. When Kinesthesia Becomes Visual: A Theoretical Justification for Executing Motor Tasks in Visual Space

    PubMed Central

    Tagliabue, Michele; McIntyre, Joseph

    2013-01-01

    Several experimental studies in the literature have shown that even when performing purely kinesthetic tasks, such as reaching for a kinesthetically felt target with a hidden hand, the brain reconstructs a visual representation of the movement. In our previous studies, however, we did not observe any role of a visual representation of the movement in a purely kinesthetic task. This apparent contradiction could be related to a fundamental difference between the studied tasks. In our study subjects used the same hand to both feel the target and to perform the movement, whereas in most other studies, pointing to a kinesthetic target consisted of pointing with one hand to the finger of the other, or to some other body part. We hypothesize, therefore, that it is the necessity of performing inter-limb transformations that induces a visual representation of purely kinesthetic tasks. To test this hypothesis we asked subjects to perform the same purely kinesthetic task in two conditions: INTRA and INTER. In the former they used the right hand to both perceive the target and to reproduce its orientation. In the latter, subjects perceived the target with the left hand and responded with the right. To quantify the use of a visual representation of the movement we measured deviations induced by an imperceptible conflict that was generated between visual and kinesthetic reference frames. Our hypothesis was confirmed by the observed deviations of responses due to the conflict in the INTER, but not in the INTRA, condition. To reconcile these observations with recent theories of sensori-motor integration based on maximum likelihood estimation, we propose here a new model formulation that explicitly considers the effects of covariance between sensory signals that are directly available and internal representations that are ‘reconstructed’ from those inputs through sensori-motor transformations. PMID:23861903
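The maximum-likelihood framework the authors build on combines two independent estimates by weighting each with its inverse variance. The sketch below shows that standard formulation only; the authors' extension adds covariance terms between direct and reconstructed signals, which are not modeled here, and the numbers are illustrative.

```python
def ml_combine(x_vis, var_vis, x_kin, var_kin):
    """Standard maximum-likelihood combination of two independent cues:
    each estimate is weighted by its inverse variance (reliability)."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_kin)
    x_hat = w_vis * x_vis + (1 - w_vis) * x_kin
    var_hat = 1 / (1 / var_vis + 1 / var_kin)  # never exceeds either input
    return x_hat, var_hat
```

For example, with a visual estimate of 10° (variance 4) and a kinesthetic estimate of 14° (variance 12), the visual cue gets weight 0.75, the combined estimate is 11°, and the combined variance drops to 3, below either single-cue variance.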

  1. The coordination patterns observed when two hands reach-to-grasp separate objects.

    PubMed

    Bingham, Geoffrey P; Hughes, Kirstie; Mon-Williams, Mark

    2008-01-01

What determines coordination patterns when both hands reach to grasp separate objects at the same time? It is known that synchronous timing is preferred as the most stable mode of bimanual coordination. Nonetheless, normal unimanual prehension behaviour predicts asynchrony when the two hands reach towards unequal targets, with synchrony restricted to targets equal in size and distance. Additionally, sufficiently separated targets require sequential looking. Does synchrony occur in all cases because it is preferred in bimanual coordination, or does asynchrony occur because of unimanual task constraints and the need for sequential looking? We investigated coordinative timing when participants (n = 8) moved their right (preferred) hand to the same object at a fixed distance but the left hand to objects of different width (3, 5, and 7 cm) and grip surface size (1, 2, and 3 cm) placed at different distances (20, 30, and 40 cm) over 270 randomised trials. The hand movements consisted of two components: (1) an initial component (IC) during which the hand reached towards the target while forming an appropriate grip aperture, stopping at (but not touching) the object; (2) a completion component (CC) during which the finger and thumb closed on the target. The two limbs started the IC together but did not interact until the deceleration phase, when evidence of synchronisation began to appear. Nonetheless, asynchronous timing was present at the end of the IC and preserved through the CC even with equidistant targets. Thus, a tendency toward synchrony was present, but the requirements for visual information ultimately yielded asynchronous coordinative timing.

  2. Rescuing Stimuli from Invisibility: Inducing a Momentary Release from Visual Masking with Pre-Target Entrainment

    ERIC Educational Resources Information Center

    Mathewson, Kyle E.; Fabiani, Monica; Gratton, Gabriele; Beck, Diane M.; Lleras, Alejandro

    2010-01-01

    At near-threshold levels of stimulation, identical stimulus parameters can result in very different phenomenal experiences. Can we manipulate which stimuli reach consciousness? Here we show that consciousness of otherwise masked stimuli can be experimentally induced by sensory entrainment. We preceded a backward-masked stimulus with a series of…

  3. The effect of response-delay on estimating reachability.

    PubMed

    Gabbard, Carl; Ammar, Diala

    2008-11-01

The experiment was conducted to compare visual imagery (VI) and motor imagery (MI) reaching tasks in a response-delay paradigm designed to explore the hypothesized dissociation between vision for perception and vision for action. Although the visual systems work cooperatively in motor control, theory suggests that they operate under different temporal constraints. From this perspective, we expected that delay would affect MI but not VI, because MI operates in real time whereas VI is postulated to be memory-driven. Following measurement of actual reach, right-handers were presented with seven (imagery) targets at midline in eight conditions: MI and VI with 0-, 1-, 2-, and 4-s delays. Results indicated that delay affected the ability to estimate reachability with MI but not with VI. These results support a general distinction between vision for perception and vision for action.

  4. Getting a grip on reality: Grasping movements directed to real objects and images rely on dissociable neural representations.

    PubMed

    Freud, Erez; Macdonald, Scott N; Chen, Juan; Quinlan, Derek J; Goodale, Melvyn A; Culham, Jody C

    2018-01-01

In the current era of touchscreen technology, humans commonly execute visually guided actions directed to two-dimensional (2D) images of objects. Although real, three-dimensional (3D) objects and images of the same objects share a high degree of visual similarity, they differ fundamentally in the actions that can be performed on them. Indeed, previous behavioral studies have suggested that simulated grasping of images relies on different representations than actual grasping of real 3D objects. Yet the neural underpinnings of this phenomenon have not been investigated. Here we used functional magnetic resonance imaging (fMRI) to investigate how brain activation patterns differed for grasping and reaching actions directed toward real 3D objects compared to images. Multivoxel Pattern Analysis (MVPA) revealed that the left anterior intraparietal sulcus (aIPS), a key region for visually guided grasping, discriminates between both the format in which objects were presented (real/image) and the motor task performed on them (grasping/reaching). Interestingly, during action planning, the representations of real 3D objects versus images differed more for grasping movements than reaching movements, likely because grasping real 3D objects involves fine-grained planning and anticipation of the consequences of a real interaction. Importantly, this dissociation was evident in the planning phase, before movement initiation, and was not found in any other regions, including motor and somatosensory cortices. This suggests that the dissociable representations in the left aIPS were not based on haptic, motor or proprioceptive feedback. Together, these findings provide novel evidence that actions, particularly grasping, are affected by the realness of the target objects during planning, perhaps because real targets require a more elaborate forward model based on visual cues to predict the consequences of real manipulation.

  5. Subconscious Visual Cues during Movement Execution Allow Correct Online Choice Reactions

    PubMed Central

    Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram; Gollhofer, Albert; Nielsen, Jens Bo; Taube, Wolfgang

    2012-01-01

    Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked the question whether visual information, which is not consciously perceived, could influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or had to abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues, which they were not conscious about. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task. PMID:23049749

  6. Looking and touching: what extant approaches reveal about the structure of early word knowledge.

    PubMed

    Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret

    2015-09-01

The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants' responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life.

  7. Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2,000 ms.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2018-04-25

    Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.

  8. Optoporation of impermeable molecules and genes for visualization and activation of cells

    NASA Astrophysics Data System (ADS)

    Dhakal, Kamal; Batbyal, Subrata; Kim, Young-Tae; Mohanty, Samarendra

    2015-03-01

Visualization, activation, and detection of the cell(s) and their electrical activity require delivery of exogenous impermeable molecules and targeted expression of genes encoding labeling proteins, ion-channels and voltage indicators. While genes can be delivered by viral vector to cells, delivery of other impermeable molecules into the cytoplasm of targeted cells requires microinjection by mechanical needle or microelectrodes, which poses a significant challenge to the viability of the cells. Further, it will be useful to localize the expression of the targeted molecules not only in specific cell types, but also in specific cells in restricted spatial regions. Here, we report the use of a focused near-infrared (NIR) femtosecond laser beam to transiently perforate targeted cell membranes to insert genes encoding blue light activatable channelrhodopsin-2 (ChR2) and the red-shifted opsin ReaChR. Optoporation of nanomolar concentrations of rhodamine phalloidin (an impermeable dye molecule for staining filamentous actin) into targeted living mammalian cells (both HEK cells and primary cortical neurons) is also achieved, allowing imaging of the dynamics and intact morphology of cellular structures without requiring fixation.

  9. Effects of Reduced Acuity and Stereo Acuity on Saccades and Reaching Movements in Adults With Amblyopia and Strabismus.

    PubMed

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C; Colpa, Linda; Chandrakumar, Manokaraananthan; Wong, Agnes M F

    2017-02-01

Our previous work has shown that amblyopia disrupts the planning and execution of visually-guided saccadic and reaching movements. We investigated the association between the clinical features of amblyopia and aspects of visuomotor behavior that are disrupted by amblyopia. A total of 55 adults with amblyopia (22 anisometropic, 18 strabismic, 15 mixed mechanism), 14 adults with strabismus without amblyopia, and 22 visually-normal control participants completed a visuomotor task while their eye and hand movements were recorded. Univariate and multivariate analyses were performed to assess the association between three clinical predictors of amblyopia (amblyopic eye [AE] acuity, stereo sensitivity, and eye deviation) and seven kinematic outcomes, including saccadic and reach latency, interocular saccadic and reach latency difference, saccadic and reach precision, and PA/We ratio (an index of reach control strategy efficacy using online feedback correction). Amblyopic eye acuity explained 28% of the variance in saccadic latency, and 48% of the variance in mean saccadic latency difference between the amblyopic and fellow eyes (i.e., interocular latency difference). In contrast, for reach latency, AE acuity explained only 10% of the variance. Amblyopic eye acuity was associated with reduced endpoint saccadic (23% of variance) and reach (22% of variance) precision in the amblyopic group. In the strabismus without amblyopia group, stereo sensitivity and eye deviation did not explain any significant variance in saccadic and reach latency or precision. Stereo sensitivity was the best clinical predictor of deficits in reach control strategy, explaining 23% of total variance of PA/We ratio in the amblyopic group and 12% of variance in the strabismus without amblyopia group when viewing with the amblyopic/nondominant eye. Deficits in eye and limb movement initiation (latency) and target localization (precision) were associated with amblyopic acuity deficit, whereas changes in the sensorimotor reach strategy were associated with deficits in stereopsis. Importantly, more than 50% of variance was not explained by the measured clinical features. Our findings suggest that other factors, including higher order visual processing and attention, may have an important role in explaining the kinematic deficits observed in amblyopia.
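Statements such as "AE acuity explained 28% of the variance" refer to the coefficient of determination (R²) of a regression. For the univariate case it can be computed as below; this is a synthetic illustration of the statistic, not the study's data or its multivariate models.

```python
import numpy as np

def variance_explained(x, y):
    """R^2 of a univariate least-squares fit: the fraction of variance
    in y explained by predictor x (1 - residual variance / total variance)."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()
```

A perfectly linear relationship gives R² = 1; as noise unrelated to the predictor grows, R² falls toward 0, which is why a large unexplained share (here over 50%) points to factors beyond the measured clinical features.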

  10. The role of shared visual information for joint action coordination.

    PubMed

    Vesper, Cordula; Schmitz, Laura; Safra, Lou; Sebanz, Natalie; Knoblich, Günther

    2016-08-01

Previous research has identified a number of coordination processes that enable people to perform joint actions. But what determines which coordination processes joint action partners rely on in a given situation? The present study tested whether varying the shared visual information available to co-actors can trigger a shift in coordination processes. Pairs of participants performed a movement task that required them to synchronously arrive at a target from separate starting locations. When participants in a pair received only auditory feedback about the time their partner reached the target they held their movement duration constant to facilitate coordination. When they received additional visual information about each other's movements they switched to a fundamentally different coordination process, exaggerating the curvature of their movements to communicate their arrival time. These findings indicate that the availability of shared perceptual information is a major factor in determining how individuals coordinate their actions to obtain joint outcomes.

  11. Examining the effect of state anxiety on compensatory and strategic adjustments in the planning of goal-directed aiming.

    PubMed

    Roberts, James W; Wilson, Mark R; Skultety, Jessica K; Lyons, James L

    2018-04-01

The anxiety-perceptual-motor performance relationship may be enriched by investigations involving discrete manual responses due to the definitive demarcation of planning and control processes, which comprise the early and late portions of movement, respectively. To further examine the explanatory power of self-focus and distraction theories, we explored the potential of anxiety to cause changes in movement planning that accommodate anticipated negative effects on online control. Accordingly, we posed two hypotheses: anxiety causes performers either to initially undershoot the target, allowing more time to use visual feedback ("play-it-safe"), or to fire a ballistic reach that covers a greater distance without later online control ("go-for-it"). Participants were tasked with an upper-limb movement to a single target under counter-balanced instructions to execute fast and accurate responses (low/normal anxiety) with non-contingent negative performance feedback (high anxiety). The results replicated the previously identified negative impact of anxiety on online control. While anxiety caused a longer displacement to reach peak velocity and a greater tendency to overshoot the target, there appeared to be no shift in the attempts to utilise online visual feedback. Thus, the tendency to initially overshoot may manifest from an inefficient auxiliary procedure that manages to uphold overall movement time and response accuracy.

  12. Coordinated turn-and-reach movements. I. Anticipatory compensation for self-generated coriolis and interaction torques

    NASA Technical Reports Server (NTRS)

    Pigeon, Pascale; Bortolami, Simone B.; DiZio, Paul; Lackner, James R.

    2003-01-01

When reaching movements involve simultaneous trunk rotation, additional interaction torques are generated on the arm that are absent when the trunk is stable. To explore whether the CNS compensates for such self-generated interaction torques, we recorded hand trajectories in reaching tasks involving various amplitudes and velocities of arm extension and trunk rotation. Subjects pointed to three targets on a surface slightly above waist level. Two of the target locations were chosen so that a similar arm configuration relative to the trunk would be required for reaching to them, one of these targets requiring substantial trunk rotation, the other very little. Significant trunk rotation was necessary to reach the third target, but the arm's radial distance to the body remained virtually unchanged. Subjects reached at two speeds, a natural pace (slow) and rapidly (fast), under normal lighting and in total darkness. Trunk angular velocity and finger velocity relative to the trunk were higher in the fast conditions but were not affected by the presence or absence of vision. Peak trunk velocity increased with increasing trunk rotation up to a maximum of 200°/s. In slow movements, peak finger velocity relative to the trunk was smaller when trunk rotation was necessary to reach the targets. In fast movements, peak finger velocity was approximately 1.7 m/s for all targets. Finger trajectories were more curved when reaching movements involved substantial trunk rotation; however, the terminal errors and the maximal deviation of the trajectory from a straight line were comparable in slow and fast movements. This pattern indicates that the larger Coriolis, centripetal, and inertial interaction torques generated during rapid reaches were compensated by additional joint torques. Trajectory characteristics did not vary with the presence or absence of vision, indicating that visual feedback was unnecessary for anticipatory compensations. In all reaches involving trunk rotation, the finger movement generally occurred entirely during the trunk movement, indicating that the CNS did not minimize Coriolis forces incumbent on trunk rotation by sequencing the arm and trunk motions into a turn followed by a reach. A simplified model of the arm/trunk system revealed that additional interaction torques generated on the arm during voluntary turning and reaching were equivalent to ≤1.8 g (1 g = 9.81 m/s²) of external force at the elbow but did not degrade performance. In slow-rotation room studies involving reaching movements during passive rotation, Coriolis forces as small as 0.2 g greatly deflect movement trajectories and endpoints. We conclude that compensatory motor innervations are engaged in a predictive fashion to counteract impending self-generated interaction torques during voluntary reaching movements.
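The reported 1.8 g bound can be sanity-checked from the peak values in the abstract using the Coriolis acceleration a = 2ωv. This is only a rough order-of-magnitude check, not the authors' full arm/trunk model, and it ignores the centripetal and inertial terms.

```python
import math

# Order-of-magnitude check of the interaction-torque figures using the
# Coriolis acceleration a = 2 * omega * v, with the peak values reported
# above: 200 deg/s trunk rotation and 1.7 m/s finger velocity.
G = 9.81                               # m/s^2
omega = math.radians(200.0)            # peak trunk angular velocity, rad/s
v = 1.7                                # peak finger velocity relative to trunk, m/s
a_coriolis = 2 * omega * v             # m/s^2
print(round(a_coriolis / G, 2))        # ~1.2 g, within the stated 1.8 g bound
```

The Coriolis term alone lands around 1.2 g, comfortably inside the stated bound and some six times larger than the 0.2 g forces that visibly deflect reaches during passive rotation, underscoring how effective the anticipatory compensation is.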

  13. Coordinated turn-and-reach movements. I. Anticipatory compensation for self-generated coriolis and interaction torques.

    PubMed

    Pigeon, Pascale; Bortolami, Simone B; DiZio, Paul; Lackner, James R

    2003-01-01

    When reaching movements involve simultaneous trunk rotation, additional interaction torques are generated on the arm that are absent when the trunk is stable. To explore whether the CNS compensates for such self-generated interaction torques, we recorded hand trajectories in reaching tasks involving various amplitudes and velocities of arm extension and trunk rotation. Subjects pointed to three targets on a surface slightly above waist level. Two of the target locations were chosen so that a similar arm configuration relative to the trunk would be required for reaching to them, one of these targets requiring substantial trunk rotation, the other very little. Significant trunk rotation was necessary to reach the third target, but the arm's radial distance to the body remained virtually unchanged. Subjects reached at two speeds-a natural pace (slow) and rapidly (fast)-under normal lighting and in total darkness. Trunk angular velocity and finger velocity relative to the trunk were higher in the fast conditions but were not affected by the presence or absence of vision. Peak trunk velocity increased with increasing trunk rotation up to a maximum of 200 degrees /s. In slow movements, peak finger velocity relative to the trunk was smaller when trunk rotation was necessary to reach the targets. In fast movements, peak finger velocity was approximately 1.7 m/s for all targets. Finger trajectories were more curved when reaching movements involved substantial trunk rotation; however, the terminal errors and the maximal deviation of the trajectory from a straight line were comparable in slow and fast movements. This pattern indicates that the larger Coriolis, centripetal, and inertial interaction torques generated during rapid reaches were compensated by additional joint torques. Trajectory characteristics did not vary with the presence or absence of vision, indicating that visual feedback was unnecessary for anticipatory compensations. 
In all reaches involving trunk rotation, the finger movement generally occurred entirely during the trunk movement, indicating that the CNS did not minimize Coriolis forces incumbent on trunk rotation by sequencing the arm and trunk motions into a turn followed by a reach. A simplified model of the arm/trunk system revealed that additional interaction torques generated on the arm during voluntary turning and reaching were equivalent to ≤1.8 g (1 g = 9.81 m/s²) of external force at the elbow but did not degrade performance. In slow-rotation room studies involving reaching movements during passive rotation, Coriolis forces as small as 0.2 g greatly deflect movement trajectories and endpoints. We conclude that compensatory motor innervations are engaged in a predictive fashion to counteract impending self-generated interaction torques during voluntary reaching movements.
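As a rough consistency check on the interaction-torque magnitudes quoted above, the sketch below applies only the textbook Coriolis relation a = 2ωv to the peak values reported in the abstract (200°/s trunk rotation, ~1.7 m/s finger velocity relative to the trunk). It ignores the centripetal and inertial terms included in the authors' arm/trunk model, so it should land somewhat below their ≤1.8 g bound:

```python
import math

G = 9.81  # m/s^2

def coriolis_accel(omega_deg_s, v_m_s):
    """Peak Coriolis acceleration a = 2*omega*v, for velocity
    perpendicular to the rotation axis (magnitudes only)."""
    omega = math.radians(omega_deg_s)  # convert to rad/s
    return 2.0 * omega * v_m_s         # m/s^2

# Peak values from the abstract: 200 deg/s trunk rotation,
# ~1.7 m/s finger velocity relative to the trunk (fast reaches).
a = coriolis_accel(200.0, 1.7)
print(f"peak Coriolis acceleration: {a:.2f} m/s^2 = {a / G:.2f} g")
```

The result (~1.2 g) is consistent in magnitude with the abstract's figure once the neglected centripetal and inertial contributions are added.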

  14. The Effects of Optical Illusions in Perception and Action in Peripersonal and Extrapersonal Space.

    PubMed

    Shim, Jaeho; van der Kamp, John

    2017-09-01

While the two visual system hypothesis tells a fairly compelling story about perception and action in peripersonal space (i.e., within arm's reach), its validity for extrapersonal space is very limited and highly controversial. Hence, the present purpose was to assess whether perception and action differences in peripersonal space hold in extrapersonal space and are modulated by the same factors. To this end, the effects of an optical illusion in perception and action in both peripersonal and extrapersonal space were compared in three groups that threw balls toward a target at a distance under different target eccentricity (i.e., with the target fixated and in the peripheral field), viewing (i.e., binocular and monocular viewing), and delay conditions (i.e., immediate and delayed action). The illusory bias was smaller in action than in perception in peripersonal space, but this difference was significantly reduced in extrapersonal space, primarily because of a weakening bias in perception. No systematic modulation by target eccentricity, viewing, or delay arose. The findings suggest that the two visual system hypothesis is also valid for extrapersonal space.

  15. Reducing Trunk Compensation in Stroke Survivors: A Randomized Crossover Trial Comparing Visual and Force Feedback Modalities.

    PubMed

    Valdés, Bulmaro Adolfo; Schneider, Andrea Nicole; Van der Loos, H F Machiel

    2017-10-01

To investigate whether the compensatory trunk movements of stroke survivors observed during reaching tasks can be decreased by force and visual feedback, and to examine whether one of these feedback modalities is more efficacious than the other in reducing this compensatory tendency. Randomized crossover trial. University research laboratory. Community-dwelling older adults (N=15; 5 women; mean age, 64±11y) with hemiplegia from nontraumatic hemorrhagic or ischemic stroke (>3mo poststroke), recruited from stroke recovery groups, the research group's website, and the community. In a single session, participants received augmented feedback about their trunk compensation during a bimanual reaching task. Visual feedback (60 trials) was delivered through a computer monitor, and force feedback (60 trials) was delivered through 2 robotic devices. The primary outcome measure was change in anterior trunk displacement, measured by a motion-tracking camera. Secondary outcomes included trunk rotation, index of curvature (a measure of the straightness of the hands' path toward the target), root mean square error of the hands' movement (differences between hand position on every iteration of the program), completion time for each trial, and a posttest questionnaire to evaluate users' experience and the system's usability. Both visual (-45.6% [45.8 SD] change from baseline, P=.004) and force (-41.1% [46.1 SD], P=.004) feedback were effective in reducing trunk compensation. Scores on secondary outcome measures did not improve with either feedback modality. Neither feedback condition was superior. Visual and force feedback show promise as 2 modalities that could be used to decrease trunk compensation in stroke survivors during reaching tasks. It remains to be established which of these 2 feedback modalities is more efficacious as a cue to reduce compensatory trunk movement. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  16. The use of peripheral vision to guide perturbation-evoked reach-to-grasp balance-recovery reactions

    PubMed Central

    King, Emily C.; McKay, Sandra M.; Cheng, Kenneth C.

    2016-01-01

For a reach-to-grasp reaction to prevent a fall, it must be executed very rapidly, but with sufficient accuracy to achieve a functional grip. Recent findings suggest that the CNS may avoid potential time delays associated with saccade-guided arm movements by instead relying on peripheral vision (PV). However, studies of volitional arm movements have shown that reaching is slower and/or less accurate when guided by PV, rather than central vision (CV). The present study investigated how the CNS resolves speed-accuracy trade-offs when forced to use PV to guide perturbation-evoked reach-to-grasp balance-recovery reactions. These reactions were evoked, in 12 healthy young adults, via sudden unpredictable anteroposterior platform translation (barriers deterred stepping reactions). In PV trials, subjects were required to look straight ahead at a visual target while a small cylindrical handhold (length approximately 25% greater than hand width) moved intermittently and unpredictably along a transverse axis before stopping at a visual angle of 20°, 30°, or 40°. The perturbation was then delivered after a random delay. In CV trials, subjects fixated on the handhold throughout the trial. A concurrent visuo-cognitive task was performed in 50% of PV trials but had little impact on reach-to-grasp timing or accuracy. Forced reliance on PV did not significantly affect response initiation times, but did lead to longer movement times, longer time after peak velocity, and less direct trajectories (compared with CV trials) at the larger visual angles. Despite these effects, forced reliance on PV did not compromise the ability to achieve a functional grasp and recover equilibrium, for the moderately large perturbations and healthy young adults tested in this initial study. PMID:20957351

  17. Motor selection dynamics in FEF explain the reaction time variance of saccades to single targets

    PubMed Central

    Hauser, Christopher K; Zhu, Dantong; Stanford, Terrence R

    2018-01-01

    In studies of voluntary movement, a most elemental quantity is the reaction time (RT) between the onset of a visual stimulus and a saccade toward it. However, this RT demonstrates extremely high variability which, in spite of extensive research, remains unexplained. It is well established that, when a visual target appears, oculomotor activity gradually builds up until a critical level is reached, at which point a saccade is triggered. Here, based on computational work and single-neuron recordings from monkey frontal eye field (FEF), we show that this rise-to-threshold process starts from a dynamic initial state that already contains other incipient, internally driven motor plans, which compete with the target-driven activity to varying degrees. The ensuing conflict resolution process, which manifests in subtle covariations between baseline activity, build-up rate, and threshold, consists of fundamentally deterministic interactions, and explains the observed RT distributions while invoking only a small amount of intrinsic randomness. PMID:29652247
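The rise-to-threshold account described in this abstract can be caricatured in a few lines: RT is set by how far a variable baseline (reflecting incipient, internally driven motor plans) sits below threshold, and by a variable build-up rate. The parameter values below are purely illustrative and are not fitted to FEF data:

```python
import random

def simulate_rt(n_trials=10000, threshold=1.0, seed=1):
    """Toy rise-to-threshold saccade model: activity starts from a
    variable baseline and climbs at a variable build-up rate; a
    saccade fires when it hits threshold. Illustrative values only."""
    random.seed(seed)
    rts = []
    for _ in range(n_trials):
        baseline = random.uniform(0.0, 0.5)      # dynamic initial state
        rate = max(random.gauss(5.0, 1.0), 0.5)  # activity units per s
        rts.append((threshold - baseline) / rate * 1000.0)  # RT in ms
    return rts

rts = simulate_rt()
print(f"mean RT ~= {sum(rts) / len(rts):.0f} ms")
```

Even this crude sketch produces a broad, skewed RT distribution from mostly deterministic ingredients, which is the qualitative point the study makes about baseline/build-up/threshold covariation.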

  18. Comparison of virtual reality versus physical reality on movement characteristics of persons with Parkinson's disease: effects of moving targets.

    PubMed

    Wang, Ching-Yi; Hwang, Wen-Juh; Fang, Jing-Jing; Sheu, Ching-Fan; Leong, Iat-Fai; Ma, Hui-Ing

    2011-08-01

    To compare the performance of reaching for stationary and moving targets in virtual reality (VR) and physical reality in persons with Parkinson's disease (PD). A repeated-measures design in which all participants reached in physical reality and VR under 5 conditions: 1 stationary ball condition and 4 conditions with the ball moving at different speeds. University research laboratory. Persons with idiopathic PD (n=29) and age-matched controls (n=25). Not applicable. Success rates and kinematics of arm movement (movement time, amplitude of peak velocity, and percentage of movement time for acceleration phase). In both VR and physical reality, the PD group had longer movement time (P<.001) and lower peak velocity (P<.001) than the controls when reaching for stationary balls. When moving targets were provided, the PD group improved more than the controls did in movement time (P<.001) and peak velocity (P<.001), and reached a performance level similar to that of the controls. Except for the fastest moving ball condition (0.5-s target viewing time), which elicited worse performance in VR than in physical reality, most cueing conditions in VR elicited performance generally similar to those in physical reality. Although slower than the controls when reaching for stationary balls, persons with PD increased movement speed in response to fast moving balls in both VR and physical reality. This suggests that with an appropriate choice of cueing speed, VR is a promising tool for providing visual motion stimuli to improve movement speed in persons with PD. More research on the long-term effect of this type of VR training program is needed. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  19. Prediction of the body rotation-induced torques on the arm during reaching movements: evidence from a proprioceptively deafferented subject.

    PubMed

    Guillaud, Etienne; Simoneau, Martin; Blouin, Jean

    2011-06-01

    Reaching for a target while rotating the trunk generates substantial Coriolis and centrifugal torques that push the arm in the opposite direction of the rotations. These torques rarely perturb movement accuracy, suggesting that they are compensated for during the movement. Here we tested whether signals generated during body motion (e.g., vestibular) can be used to predict the torques induced by the body rotation and to modify the motor commands accordingly. We asked a deafferented subject to reach for a memorized visual target in darkness. At the onset of the reaching, the patient was rotated 25° or 40° in the clockwise or the counterclockwise directions. During the rotation, the patient's head remained either fixed in space (Head-Fixed condition) or fixed on the trunk (Head Rotation condition). At the rotation onset, the deafferented patient's hand largely deviated from the mid-sagittal plane in both conditions. The hand deviations were compensated for in the Head Rotation condition only. These results highlight the computational faculty of the brain and show that body rotation-related information can be processed for predicting the consequence of the rotation dynamics on the reaching arm movements. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Gravitoinertial force background level affects adaptation to coriolis force perturbations of reaching movements

    NASA Technical Reports Server (NTRS)

    Lackner, J. R.; Dizio, P.

    1998-01-01

    We evaluated the combined effects on reaching movements of the transient, movement-dependent Coriolis forces and the static centrifugal forces generated in a rotating environment. Specifically, we assessed the effects of comparable Coriolis force perturbations in different static force backgrounds. Two groups of subjects made reaching movements toward a just-extinguished visual target before rotation began, during 10 rpm counterclockwise rotation, and after rotation ceased. One group was seated on the axis of rotation, the other 2.23 m away. The resultant of gravity and centrifugal force on the hand was 1.0 g for the on-center group during 10 rpm rotation, and 1.031 g for the off-center group because of the 0.25 g centrifugal force present. For both groups, rightward Coriolis forces, approximately 0.2 g peak, were generated during voluntary arm movements. The endpoints and paths of the initial per-rotation movements were deviated rightward for both groups by comparable amounts. Within 10 subsequent reaches, the on-center group regained baseline accuracy and straight-line paths; however, even after 40 movements the off-center group had not resumed baseline endpoint accuracy. Mirror-image aftereffects occurred when rotation stopped. These findings demonstrate that manual control is disrupted by transient Coriolis force perturbations and that adaptation can occur even in the absence of visual feedback. An increase, even a small one, in background force level above normal gravity does not affect the size of the reaching errors induced by Coriolis forces nor does it affect the rate of reacquiring straight reaching paths; however, it does hinder restoration of reaching accuracy.
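The force levels quoted in this abstract follow directly from the rotation parameters and can be checked with the standard centrifugal (ω²r) and Coriolis (2ωv) relations. The ~1 m/s hand speed below is an assumed typical reach velocity, not a value reported by the study:

```python
import math

G = 9.81                              # m/s^2
omega = 10.0 * 2.0 * math.pi / 60.0   # 10 rpm in rad/s (~1.047)

# Static centrifugal acceleration at the off-center seat (r = 2.23 m),
# and its resultant with gravity, expressed in g.
r = 2.23
a_cent = omega ** 2 * r
resultant = math.hypot(G, a_cent) / G

# Movement-dependent Coriolis acceleration for a reach at ~1 m/s
# (assumed hand speed).
v_hand = 1.0
a_cor = 2.0 * omega * v_hand

print(f"centrifugal: {a_cent / G:.3f} g, resultant with gravity: {resultant:.3f} g")
print(f"Coriolis at {v_hand} m/s: {a_cor / G:.2f} g")
```

This reproduces the abstract's 0.25 g centrifugal force, 1.031 g resultant, and ~0.2 g peak Coriolis figure.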

  1. Motor adaptation to Coriolis force perturbations of reaching movements: endpoint but not trajectory adaptation transfers to the nonexposed arm

    NASA Technical Reports Server (NTRS)

    Dizio, P.; Lackner, J. R.

    1995-01-01

    1. Reaching movements made in a rotating room generate Coriolis forces that are directly proportional to the cross product of the room's angular velocity and the arm's linear velocity. Such Coriolis forces are inertial forces not involving mechanical contact with the arm. 2. We measured the trajectories of arm movements made in darkness to a visual target that was extinguished at the onset of each reach. Prerotation subjects pointed with both the right and left arms in alternating sets of eight movements. During rotation at 10 rpm, the subjects reached only with the right arm. Postrotation, the subjects pointed with the left and right arms, starting with the left, in alternating sets of eight movements. 3. The initial perrotary reaching movements of the right arm were highly deviated both in movement path and endpoint relative to the prerotation reaches of the right arm. With additional movements, subjects rapidly regained straight movement paths and accurate endpoints despite the absence of visual or tactile feedback about reaching accuracy. The initial postrotation reaches of the left arm followed straight paths to the wrong endpoint. The initial postrotation reaches of the right arm had paths with mirror image curvature to the initial perrotation reaches of the right arm but went to the correct endpoint. 4. These observations are inconsistent with current equilibrium point models of movement control. Such theories predict accurate reaches under our experimental conditions. Our observations further show independent implementation of movement and posture, as evidenced by transfer of endpoint adaptation to the nonexposed arm without transfer of path adaptation. 
Endpoint control may occur at a relatively central stage that represents general constraints such as gravitoinertial force background or egocentric direction relative to both arms, and control of path may occur at a more peripheral stage that represents moments of inertia and muscle dynamics unique to each limb. 5. Endpoint and path adaptation occur despite the absence both of mechanical contact cues about the perturbing force and visual or tactile cues about movement accuracy. These findings point to the importance of muscle spindle signals, monitoring of motor commands, and possibly joint and tendon receptors in a detailed trajectory monitoring process. Muscle spindle primary and secondary afferent signals may differentially influence adaptation of movement shape and endpoint, respectively.

  2. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    PubMed Central

    Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292

  3. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking.

    PubMed

    Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.

  4. PEPSI spectro-polarimeter for the LBT

    NASA Astrophysics Data System (ADS)

    Strassmeier, Klaus G.; Hofmann, Axel; Woche, Manfred F.; Rice, John B.; Keller, Christoph U.; Piskunov, N. E.; Pallavicini, Roberto

    2003-02-01

PEPSI (Potsdam Echelle Polarimetric and Spectroscopic Instrument) is to use the unique feature of the LBT and its powerful double-mirror configuration to provide high and extremely high spectral resolution full-Stokes four-vector spectra in the wavelength range 450-1100 nm. For the given aperture of 8.4 m in single-mirror mode and 11.8 m in double-mirror mode, and at a spectral resolution of 40,000-300,000 as designed for the fiber-fed Echelle spectrograph, a polarimetric accuracy between 10⁻⁴ and 10⁻² can be reached for targets with visual magnitudes of up to 17th magnitude. A polarimetric accuracy better than 10⁻⁴ can only be reached either for targets brighter than approximately 10th magnitude, together with a substantial trade-off in spectral resolution, or with spectrum deconvolution techniques. At 10⁻², however, we will be able to observe the brightest AGNs down to 17th magnitude.
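For context on why extreme polarimetric accuracy demands bright targets: in the photon-noise limit the polarimetric uncertainty scales as σ_P ≈ 1/√N, so the required photon count grows as the inverse square of the target accuracy. This is an idealized bound that ignores instrumental and systematic errors, and is not taken from the PEPSI design documents:

```python
def required_photons(sigma_p):
    """Photon-noise floor for polarimetry: sigma_P ~ 1/sqrt(N),
    so N ~ sigma_P**(-2) collected photons per measurement.
    Idealized: ignores instrumental/systematic error sources."""
    return sigma_p ** -2

for sigma in (1e-2, 1e-3, 1e-4):
    print(f"sigma_P = {sigma:.0e} -> N >= {required_photons(sigma):.0e} photons")
```

Going from 10⁻² to 10⁻⁴ accuracy thus costs a factor of 10⁴ in photons, which is why the highest accuracy is restricted to the brightest targets or requires deconvolution techniques.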

  5. A bio-inspired kinematic controller for obstacle avoidance during reaching tasks with real robots.

    PubMed

    Srinivasa, Narayan; Bhattacharyya, Rajan; Sundareswara, Rashmi; Lee, Craig; Grossberg, Stephen

    2012-11-01

    This paper describes a redundant robot arm that is capable of learning to reach for targets in space in a self-organized fashion while avoiding obstacles. Self-generated movement commands that activate correlated visual, spatial and motor information are used to learn forward and inverse kinematic control models while moving in obstacle-free space using the Direction-to-Rotation Transform (DIRECT). Unlike prior DIRECT models, the learning process in this work was realized using an online Fuzzy ARTMAP learning algorithm. The DIRECT-based kinematic controller is fault tolerant and can handle a wide range of perturbations such as joint locking and the use of tools despite not having experienced them during learning. The DIRECT model was extended based on a novel reactive obstacle avoidance direction (DIRECT-ROAD) model to enable redundant robots to avoid obstacles in environments with simple obstacle configurations. However, certain configurations of obstacles in the environment prevented the robot from reaching the target with purely reactive obstacle avoidance. To address this complexity, a self-organized process of mental rehearsals of movements was modeled, inspired by human and animal experiments on reaching, to generate plans for movement execution using DIRECT-ROAD in complex environments. These mental rehearsals or plans are self-generated by using the Fuzzy ARTMAP algorithm to retrieve multiple solutions for reaching each target while accounting for all the obstacles in its environment. The key aspects of the proposed novel controller were illustrated first using simple examples. Experiments were then performed on real robot platforms to demonstrate successful obstacle avoidance during reaching tasks in real-world environments. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    PubMed

    Born, Jannis; Galeazzi, Juan M; Stringer, Simon M

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.
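The core mechanism described above, a pure Hebbian rule that binds shifted views of the same hand-object configuration through their spatial overlap, can be sketched generically. This is a toy illustration of that rule class (Hebbian update plus divisive weight normalization), not the actual VisNet implementation; the patterns and parameters are hypothetical:

```python
import random

random.seed(0)

def hebbian_step(w, pre, post, alpha=0.1):
    """One Hebbian update dw_i = alpha * post * pre_i, followed by
    divisive weight normalization. Generic sketch, not VisNet."""
    w = [wi + alpha * post * p for wi, p in zip(w, pre)]
    norm = sum(wi ** 2 for wi in w) ** 0.5
    return [wi / norm for wi in w]

# Two overlapping "retinal" views of the same hand-object configuration
# at shifted locations; the shared active inputs (units 1 and 2) are
# what continuous transformation learning exploits.
pattern_a = [1.0, 1.0, 1.0, 0.0, 0.0]
pattern_b = [0.0, 1.0, 1.0, 1.0, 0.0]

w = [0.1 + 0.1 * random.random() for _ in range(5)]
for _ in range(20):
    for pre in (pattern_a, pattern_b):
        post = sum(wi * p for wi, p in zip(w, pre))  # linear output
        w = hebbian_step(w, pre, post)

# The weights concentrate on the inputs shared by both views, so the
# same output neuron responds to the configuration at either location.
print("learned weights:", [round(wi, 3) for wi in w])
```

Note that no memory trace of past activity is needed: the overlap between successive patterns alone links the two views to one output neuron, which is the distinction from trace-rule models that the abstract emphasizes.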

  7. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system

    PubMed Central

    Born, Jannis; Stringer, Simon M.

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet. PMID:28562618

  8. Target selection biases from recent experience transfer across effectors.

    PubMed

    Moher, Jeff; Song, Joo-Hyun

    2016-02-01

    Target selection is often biased by an observer's recent experiences. However, not much is known about whether these selection biases influence behavior across different effectors. For example, does looking at a red object make it easier to subsequently reach towards another red object? In the current study, we asked observers to find the uniquely colored target object on each trial. Randomly intermixed pre-trial cues indicated the mode of action: either an eye movement or a visually guided reach movement to the target. In Experiment 1, we found that priming of popout, reflected in faster responses following repetition of the target color on consecutive trials, occurred regardless of whether the effector was repeated from the previous trial or not. In Experiment 2, we examined whether an inhibitory selection bias away from a feature could transfer across effectors. While priming of popout reflects both enhancement of the repeated target features and suppression of the repeated distractor features, the distractor previewing effect isolates a purely inhibitory component of target selection in which a previewed color is presented in a homogenous display and subsequently inhibited. Much like priming of popout, intertrial suppression biases in the distractor previewing effect transferred across effectors. Together, these results suggest that biases for target selection driven by recent trial history transfer across effectors. This indicates that representations in memory that bias attention towards or away from specific features are largely independent from their associated actions.

  9. The use of visual cues in gravity judgements on parabolic motion.

    PubMed

    Jörges, Björn; Hagenfeld, Lena; López-Moliner, Joan

    2018-06-21

Evidence suggests that humans rely on an earth-gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments employing earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a two-interval forced-choice design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions of 13% to beyond 30%. We identified the rate of change of the elevation angle (ẏ) and the visual angle (θ) as major cues. Experiment 2 suggests furthermore that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on ẏ and decreases reliance on θ. All in all, even though we use all available information, humans display low precision when extracting the governing gravity from a visual scene, which might further impact our capabilities of adapting to earth-discrepant gravity conditions with visual information alone. Copyright © 2018. Published by Elsevier Ltd.
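For readers unfamiliar with the threshold measure used here: a Weber fraction is the just-noticeable difference (JND) divided by the standard stimulus value. A minimal sketch, using Earth gravity as the standard purely for illustration:

```python
def weber_fraction(jnd, standard):
    """Weber fraction = just-noticeable difference / standard value."""
    return jnd / standard

# Illustrative only: with Earth gravity (9.81 m/s^2) as the standard,
# the reported Weber fractions imply these gravity JNDs.
g = 9.81
for wf in (0.13, 0.30):
    print(f"Weber fraction {wf:.0%} -> JND ~= {wf * g:.2f} m/s^2")
```

A 13% Weber fraction thus corresponds to failing to notice gravity differences of roughly 1.3 m/s² around the terrestrial value, which underlines the study's "low precision" conclusion.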

  10. Transient visual responses reset the phase of low-frequency oscillations in the skeletomotor periphery.

    PubMed

    Wood, Daniel K; Gu, Chao; Corneil, Brian D; Gribble, Paul L; Goodale, Melvyn A

    2015-08-01

    We recorded muscle activity from an upper limb muscle while human subjects reached towards peripheral targets. We tested the hypothesis that the transient visual response sweeps not only through the central nervous system, but also through the peripheral nervous system. Like the transient visual response in the central nervous system, stimulus-locked muscle responses (< 100 ms) were sensitive to stimulus contrast, and were temporally and spatially dissociable from voluntary orienting activity. Also, the arrival of visual responses reduced the variability of muscle activity by resetting the phase of ongoing low-frequency oscillations. This latter finding critically extends the emerging evidence that the feedforward visual sweep reduces neural variability via phase resetting. We conclude that, when sensory information is relevant to a particular effector, detailed information about the sensorimotor transformation, even from the earliest stages, is found in the peripheral nervous system. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
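The variability-reduction mechanism described here, phase resetting of ongoing low-frequency oscillations, can be illustrated with a toy model: if each trial's oscillation is reset to a common phase at stimulus onset, the across-trial variance of the signal at any fixed post-stimulus time collapses. Frequency and timing values below are arbitrary:

```python
import math
import random

def trial_variance(n_trials=500, reset=True, seed=0):
    """Across-trial variance of a 2 Hz oscillation sampled 100 ms
    after a 'stimulus', with or without phase resetting. Toy model
    with arbitrary parameters, not fitted to the EMG data."""
    random.seed(seed)
    vals = []
    for _ in range(n_trials):
        # Reset forces a common phase at stimulus onset; otherwise the
        # ongoing oscillation has a random phase on each trial.
        phase = 0.0 if reset else random.uniform(0.0, 2.0 * math.pi)
        t = 0.1  # seconds after stimulus
        vals.append(math.sin(2.0 * math.pi * 2.0 * t + phase))
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

print(f"variance with reset:    {trial_variance(reset=True):.3f}")
print(f"variance without reset: {trial_variance(reset=False):.3f}")
```

With resetting, the post-stimulus signal is identical across trials (variance ~0); without it, random phases leave the full oscillation variance (~0.5), mirroring the reduction in muscle-activity variability the study reports.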

  11. Rapid and sensitive detection of early esophageal squamous cell carcinoma with fluorescence probe targeting dipeptidylpeptidase IV

    PubMed Central

    Onoyama, Haruna; Kamiya, Mako; Kuriki, Yugo; Komatsu, Toru; Abe, Hiroyuki; Tsuji, Yosuke; Yagi, Koichi; Yamagata, Yukinori; Aikou, Susumu; Nishida, Masato; Mori, Kazuhiko; Yamashita, Hiroharu; Fujishiro, Mitsuhiro; Nomura, Sachiyo; Shimizu, Nobuyuki; Fukayama, Masashi; Koike, Kazuhiko; Urano, Yasuteru; Seto, Yasuyuki

    2016-01-01

    Early detection of esophageal squamous cell carcinoma (ESCC) is an important prognosticator, but is difficult to achieve by conventional endoscopy. Conventional lugol chromoendoscopy and equipment-based image-enhanced endoscopy, such as narrow-band imaging (NBI), have various practical limitations. Since fluorescence-based visualization is considered a promising approach, we aimed to develop an activatable fluorescence probe to visualize ESCCs. First, based on the fact that various aminopeptidase activities are elevated in cancer, we screened freshly resected specimens from patients with a series of aminopeptidase-activatable fluorescence probes. The results indicated that dipeptidylpeptidase IV (DPP-IV) is specifically activated in ESCCs, and would be a suitable molecular target for detection of esophageal cancer. Therefore, we designed, synthesized and characterized a series of DPP-IV-activatable fluorescence probes. When the selected probe was topically sprayed onto endoscopic submucosal dissection (ESD) or surgical specimens, tumors were visualized within 5 min, and when the probe was sprayed on biopsy samples, the sensitivity, specificity and accuracy reached 96.9%, 85.7% and 90.5%. We believe that DPP-IV-targeted activatable fluorescence probes are practically translatable as convenient tools for clinical application to enable rapid and accurate diagnosis of early esophageal cancer during endoscopic or surgical procedures. PMID:27245876
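The sensitivity, specificity, and accuracy figures reported for the biopsy samples follow the standard confusion-matrix definitions. A minimal sketch of that arithmetic, using hypothetical counts chosen only for illustration (the abstract does not report the underlying confusion matrix):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard screening metrics from confusion-matrix counts:
    tp/fn = diseased samples called positive/negative,
    tn/fp = healthy samples called negative/positive."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration only.
sens, spec, acc = diagnostic_metrics(tp=30, fn=2, tn=25, fp=5)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}")
```

The trade-off visible even in this toy example (high sensitivity with lower specificity) is typical of probes optimized to miss as few tumors as possible.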

  12. Visual and efficient immunosensor technique for advancing biomedical applications of quantum dots on Salmonella detection and isolation

    NASA Astrophysics Data System (ADS)

    Tang, Feng; Pang, Dai-Wen; Chen, Zhi; Shao, Jian-Bo; Xiong, Ling-Hong; Xiang, Yan-Ping; Xiong, Yan; Wu, Kai; Ai, Hong-Wu; Zhang, Hui; Zheng, Xiao-Li; Lv, Jing-Rui; Liu, Wei-Yong; Hu, Hong-Bing; Mei, Hong; Zhang, Zhen; Sun, Hong; Xiang, Yun; Sun, Zi-Yong

    2016-02-01

    It is a great challenge in nanotechnology for fluorescent nanobioprobes to be applied to visually detect and directly isolate pathogens in situ. A novel and visual immunosensor technique for efficient detection and isolation of Salmonella was established here by applying fluorescent nanobioprobes on a specially-designed cellulose-based swab (a solid-phase enrichment system). The selective and chromogenic medium used on this swab can achieve the ultrasensitive amplification of target bacteria and form chromogenic colonies in situ based on a simple biochemical reaction. More importantly, because this swab can serve as an attachment site for the targeted pathogens to immobilize and immunologically capture nanobioprobes, our mAb-conjugated QD bioprobes were successfully applied on the solid-phase enrichment system to capture the fluorescence of targeted colonies under a designed excitation light instrument based on blue light-emitting diodes combined with stereomicroscopy or laser scanning confocal microscopy. Compared with the traditional methods using 4-7 days to isolate Salmonella from the bacterial mixture, this method took only 2 days to do this, and the process of initial screening and preliminary diagnosis can be completed in only one and a half days. Furthermore, the limit of detection can reach as low as 101 cells per mL Salmonella on the background of 105 cells per mL non-Salmonella (Escherichia coli, Proteus mirabilis or Citrobacter freundii, respectively) in experimental samples, and even in human anal ones. The visual and efficient immunosensor technique may be proved to be a favorable alternative for screening and isolating Salmonella in a large number of samples related to public health surveillance.It is a great challenge in nanotechnology for fluorescent nanobioprobes to be applied to visually detect and directly isolate pathogens in situ. 
A novel and visual immunosensor technique for efficient detection and isolation of Salmonella was established here by applying fluorescent nanobioprobes on a specially-designed cellulose-based swab (a solid-phase enrichment system). The selective and chromogenic medium used on this swab can achieve the ultrasensitive amplification of target bacteria and form chromogenic colonies in situ based on a simple biochemical reaction. More importantly, because this swab can serve as an attachment site for the targeted pathogens to immobilize and immunologically capture nanobioprobes, our mAb-conjugated QD bioprobes were successfully applied on the solid-phase enrichment system to capture the fluorescence of targeted colonies under a designed excitation light instrument based on blue light-emitting diodes combined with stereomicroscopy or laser scanning confocal microscopy. Compared with the traditional methods using 4-7 days to isolate Salmonella from the bacterial mixture, this method took only 2 days to do this, and the process of initial screening and preliminary diagnosis can be completed in only one and a half days. Furthermore, the limit of detection can reach as low as 101 cells per mL Salmonella on the background of 105 cells per mL non-Salmonella (Escherichia coli, Proteus mirabilis or Citrobacter freundii, respectively) in experimental samples, and even in human anal ones. The visual and efficient immunosensor technique may be proved to be a favorable alternative for screening and isolating Salmonella in a large number of samples related to public health surveillance. Electronic supplementary information (ESI) available: One additional figure (Fig. S1), two additional tables (Tables S1 and S2) and additional information. See DOI: 10.1039/c5nr07424j

  13. Predictive Models of Human Visual Processes in Aerosystems.

    DTIC Science & Technology

    1979-11-01

    Physiology, 190:139-154. Wiesel, T. N. and D. H. Hubel, 1966. Spatial and chromatic interactions in the lateral geniculate body of the rhesus monkey...receiving a disproportionate share as reflected in the magnification factor in the retinotopic map of the dorsal lateral geniculate (Malpeli and Baker...optic chiasm before reaching its targets in the dorsal region of the lateral geniculate of the thalmus and the superior colliculus in the brain stem

  14. Fingerprints selection for topological localization

    NASA Astrophysics Data System (ADS)

    Popov, Vladimir

    2017-07-01

    Problems of visual navigation are extensively studied in contemporary robotics. In particular, we can mention different problems of visual landmarks selection, the problem of selection of a minimal set of visual landmarks, selection of partially distinguishable guards, the problem of placement of visual landmarks. In this paper, we consider one-dimensional color panoramas. Such panoramas can be used for creating fingerprints. Fingerprints give us unique identifiers for visually distinct locations by recovering statistically significant features. Fingerprints can be used as visual landmarks for the solution of various problems of mobile robot navigation. In this paper, we consider a method for automatic generation of fingerprints. In particular, we consider the bounded Post correspondence problem and applications of the problem to consensus fingerprints and topological localization. We propose an efficient approach to solve the bounded Post correspondence problem. In particular, we use an explicit reduction from the decision version of the problem to the satisfiability problem. We present the results of computational experiments for different satisfiability algorithms. In robotic experiments, we consider the average accuracy of reaching of the target point for different lengths of routes and types of fingerprints.

  15. The influence of visual feedback from the recent past on the programming of grip aperture is grasp-specific, shared between hands, and mediated by sensorimotor memory not task set.

    PubMed

    Tang, Rixin; Whitwell, Robert L; Goodale, Melvyn A

    2015-05-01

    Goal-directed movements, such as reaching out to grasp an object, are necessarily constrained by the spatial properties of the target such as its size, shape, and position. For example, during a reach-to-grasp movement, the peak width of the aperture formed by the thumb and fingers in flight (peak grip aperture, PGA) is linearly related to the target's size. Suppressing vision throughout the movement (visual open loop) has a small though significant effect on this relationship. Visual open loop conditions also produce a large increase in the PGA compared to when vision is available throughout the movement (visual closed loop). Curiously, this differential effect of the availability of visual feedback is influenced by the presentation order: the difference in PGA between closed- and open-loop trials is smaller when these trials are intermixed (an effect we have called 'homogenization'). Thus, grasping movements are affected not only by the availability of visual feedback (closed loop or open loop) but also by what happened on the previous trial. It is not clear, however, whether this carry-over effect is mediated through motor (or sensorimotor) memory or through the interference of different task sets for closed-loop and open-loop feedback that determine when the movements are fully specified. We reasoned that sensorimotor memory, but not a task set for closed and open loop feedback, would be specific to the type of response. We tested this prediction in a condition in which pointing to targets was alternated with grasping those same targets. Critically, in this condition, when pointing was performed in open loop, grasping was always performed in closed loop (and vice versa). Despite the fact that closed- and open-loop trials were alternating in this condition, we found no evidence for homogenization of the PGA. 
Homogenization did occur, however, in a follow-up experiment in which grasping movements and visual feedback were alternated between the left and the right hand, indicating that sensorimotor (or motor) memory can operate both within and between hands when the response type is kept the same. In a final experiment, we ruled out the possibility that simply alternating the hand used to perform the grasp interferes with motor or sensorimotor memory. We did this by showing that when the hand was alternated within a block of exclusively closed- or open-loop trials, homogenization of the PGA did not occur. Taken together, the results suggest that (1) interference from simply switching between task sets for closed or open-loop feedback or from switching between the hands cannot account homogenization in the PGA and that (2) the programming and execution of grasps can borrow not only from grasping movements executed in the past by the same hand, but also from grasping movements executed with the other hand. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. The Role of the Caudal Superior Parietal Lobule in Updating Hand Location in Peripheral Vision: Further Evidence from Optic Ataxia

    PubMed Central

    Granek, Joshua A.; Pisella, Laure; Blangero, Annabelle; Rossetti, Yves; Sergio, Lauren E.

    2012-01-01

    Patients with optic ataxia (OA), who are missing the caudal portion of their superior parietal lobule (SPL), have difficulty performing visually-guided reaches towards extra-foveal targets. Such gaze and hand decoupling also occurs in commonly performed non-standard visuomotor transformations such as the use of a computer mouse. In this study, we test two unilateral OA patients in conditions of 1) a change in the physical location of the visual stimulus relative to the plane of the limb movement, 2) a cue that signals a required limb movement 180° opposite to the cued visual target location, or 3) both of these situations combined. In these non-standard visuomotor transformations, the OA deficit is not observed as the well-documented field-dependent misreach. Instead, OA patients make additional eye movements to update hand and goal location during motor execution in order to complete these slow movements. Overall, the OA patients struggled when having to guide centrifugal movements in peripheral vision, even when they were instructed from visual stimuli that could be foveated. We propose that an intact caudal SPL is crucial for any visuomotor control that involves updating ongoing hand location in space without foveating it, i.e. from peripheral vision, proprioceptive or predictive information. PMID:23071599

  17. The Sander parallelogram illusion dissociates action and perception despite control for the litany of past confounds.

    PubMed

    Whitwell, Robert L; Goodale, Melvyn A; Merritt, Kate E; Enns, James T

    2018-01-01

    The two visual systems hypothesis proposes that human vision is supported by an occipito-temporal network for the conscious visual perception of the world and a fronto-parietal network for visually-guided, object-directed actions. Two specific claims about the fronto-parietal network's role in sensorimotor control have generated much data and controversy: (1) the network relies primarily on the absolute metrics of target objects, which it rapidly transforms into effector-specific frames of reference to guide the fingers, hands, and limbs, and (2) the network is largely unaffected by scene-based information extracted by the occipito-temporal network for those same targets. These two claims lead to the counter-intuitive prediction that in-flight anticipatory configuration of the fingers during object-directed grasping will resist the influence of pictorial illusions. The research confirming this prediction has been criticized for confounding the difference between grasping and explicit estimates of object size with differences in attention, sensory feedback, obstacle avoidance, metric sensitivity, and priming. Here, we address and eliminate each of these confounds. We asked participants to reach out and pick up 3D target bars resting on a picture of the Sander Parallelogram illusion and to make explicit estimates of the length of those bars. Participants performed their grasps without visual feedback, and were permitted to grasp the targets after making their size-estimates to afford them an opportunity to reduce illusory error with haptic feedback. The results show unequivocally that the effect of the illusion is stronger on perceptual judgments than on grasping. Our findings from the normally-sighted population provide strong support for the proposal that human vision is comprised of functionally and anatomically dissociable systems. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Effects of sport expertise on representational momentum during timing control.

    PubMed

    Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu

    2015-04-01

    Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.

  19. Cassini Scientist for a Day: a tactile experience

    NASA Astrophysics Data System (ADS)

    Canas, L.; Altobelli, N.

    2012-09-01

    In September 2011, the Cassini spacecraft took images of three targets and a challenge was launched to all students: to choose the one target they thought would provide the best science and to write an essay explaining their reasons (more information on the "Cassini Scientist for a Day" essay contest official webpage in: http://saturn.jpl.nasa.gov/education/scientistforaday10thedition/, run by NASA/JPL) The three targets presented were: Hyperion, Rhea and Titan, and Saturn. The idea behind "Cassini Scientist for a Day: a tactile experience" was to transform each of these images into schematic tactile images, highlighting relevant features apprehended through a tactile key, accompanied by a small text in Braille with some additional information. This initial approach would allow reach a broader community of students, more specifically those with visual impairment disabilities. Through proper implementation and careful study cases the adapted images associated with an explanatory key provide more resources in tactile astronomy. As the 2012 edition approaches a new set of targeted objet images will be once again transformed and adapted to visually impaired students and will aim to reach more students into participate in this international competition and to engage them in a quest to expand their knowledge in the amazing Cassini discoveries and the wonders of Saturn and its moons. As the winning essays will be published on the Cassini website and contest winners invited to participate in a dedicated teleconference with Cassini scientists from NASA's Jet Propulsion Laboratory, this initiative presents a great chance to all visually impaired students and teachers to participate in an exciting experience. These initiatives must be complemented with further information to strengthen the learning experience. However they stand as a good starting point to tackle further astronomical concepts in the classroom, especially this field that sometimes lacks the resources. 
Although the images are ready, any feedback received is paramount. With this initiative we would like to make a call to all interested in participating in the implementation of this project in their country. All interested parties will have the images provided in their native languages by sending the text on your native language translated from the English version.

  20. Effect of allocentric landmarks on primate gaze behavior in a cue conflict task.

    PubMed

    Li, Jirui; Sajad, Amirsaman; Marino, Robert; Yan, Xiaogang; Sun, Saihong; Wang, Hongying; Crawford, J Douglas

    2017-05-01

    The relative contributions of egocentric versus allocentric cues on goal-directed behavior have been examined for reaches, but not saccades. Here, we used a cue conflict task to assess the effect of allocentric landmarks on gaze behavior. Two head-unrestrained macaques maintained central fixation while a target flashed in one of eight radial directions, set against a continuously present visual landmark (two horizontal/vertical lines spanning the visual field, intersecting at one of four oblique locations 11° from the target). After a 100-ms delay followed by a 100-ms mask, the landmark was displaced by 8° in one of eight radial directions. After a second delay (300-700 ms), the fixation point extinguished, signaling for a saccade toward the remembered target. When the landmark was stable, saccades showed a significant but small (mean 15%) pull toward the landmark intersection, and endpoint variability was significantly reduced. When the landmark was displaced, gaze endpoints shifted significantly, not toward the landmark, but partially (mean 25%) toward a virtual target displaced like the landmark. The landmark had a larger influence when it was closer to initial fixation, and when it shifted away from the target, especially in saccade direction. These findings suggest that internal representations of gaze targets are weighted between egocentric and allocentric cues, and this weighting is further modulated by specific spatial parameters.

  1. The role of lower peripheral visual cues in the visuomotor coordination of locomotion and prehension.

    PubMed

    Graci, Valentina

    2011-10-01

    It has been previously suggested that coupled upper and limb movements need visuomotor coordination to be achieved. Previous studies have not investigated the role that visual cues may play in the coordination of locomotion and prehension. The aim of this study was to investigate if lower peripheral visual cues provide online control of the coordination of locomotion and prehension as they have been showed to do during adaptive gait and level walking. Twelve subjects reached a semi-empty or a full glass with their dominant or non-dominant hand at gait termination. Two binocular visual conditions were investigated: normal vision and lower visual occlusion. Outcome measures were determined using 3D motion capture techniques. Results showed that although the subjects were able to successfully complete the task without spilling the water from the glass under lower visual occlusion, they increased the margin of safety between final foot placements and glass. These findings suggest that lower visual cues are mainly used online to fine tune the trajectory of the upper and lower limbs moving toward the target. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Children's Visual Processing of Egocentric Cues in Action Planning for Reach

    ERIC Educational Resources Information Center

    Cordova, Alberto; Gabbard, Carl

    2011-01-01

    In this study the authors examined children's ability to code visual information into an egocentric frame of reference for planning reach movements. Children and adults estimated reach distance via motor imagery in immediate and response-delay conditions. Actual maximum reach was compared to estimates in multiple locations in peripersonal and…

  3. Feedback and feedforward adaptation to visuomotor delay during reaching and slicing movements.

    PubMed

    Botzer, Lior; Karniel, Amir

    2013-07-01

    It has been suggested that the brain and in particular the cerebellum and motor cortex adapt to represent the environment during reaching movements under various visuomotor perturbations. It is well known that significant delay is present in neural conductance and processing; however, the possible representation of delay and adaptation to delayed visual feedback has been largely overlooked. Here we investigated the control of reaching movements in human subjects during an imposed visuomotor delay in a virtual reality environment. In the first experiment, when visual feedback was unexpectedly delayed, the hand movement overshot the end-point target, indicating a vision-based feedback control. Over the ensuing trials, movements gradually adapted and became accurate. When the delay was removed unexpectedly, movements systematically undershot the target, demonstrating that adaptation occurred within the vision-based feedback control mechanism. In a second experiment designed to broaden our understanding of the underlying mechanisms, we revealed similar after-effects for rhythmic reversal (out-and-back) movements. We present a computational model accounting for these results based on two adapted forward models, each tuned for a specific modality delay (proprioception or vision), and a third feedforward controller. The computational model, along with the experimental results, refutes delay representation in a pure forward vision-based predictor and suggests that adaptation occurred in the forward vision-based predictor, and concurrently in the state-based feedforward controller. Understanding how the brain compensates for conductance and processing delays is essential for understanding certain impairments concerning these neural delays as well as for the development of brain-machine interfaces. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    PubMed

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) are sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprosthesis. We suggest that this method-that is, localization of targets of interest in the scene-may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  5. Allothetic and idiothetic sensor fusion in rat-inspired robot localization

    NASA Astrophysics Data System (ADS)

    Weitzenfeld, Alfredo; Fellous, Jean-Marc; Barrera, Alejandra; Tejera, Gonzalo

    2012-06-01

    We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress.

  6. Decoding the cortical transformations for visually guided reaching in 3D space.

    PubMed

    Blohm, Gunnar; Keith, Gerald P; Crawford, J Douglas

    2009-06-01

    To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.

  7. Two receptor tyrosine phosphatases dictate the depth of axonal stabilizing layer in the visual system

    PubMed Central

    Takechi, Hiroki; Kawamura, Hinata

    2017-01-01

    Formation of a functional neuronal network requires not only precise target recognition, but also stabilization of axonal contacts within their appropriate synaptic layers. Little is known about the molecular mechanisms underlying the stabilization of axonal connections after reaching their specifically targeted layers. Here, we show that two receptor protein tyrosine phosphatases (RPTPs), LAR and Ptp69D, act redundantly in photoreceptor afferents to stabilize axonal connections to the specific layers of the Drosophila visual system. Surprisingly, by combining loss-of-function and genetic rescue experiments, we found that the depth of the final layer of stable termination relied primarily on the cumulative amount of LAR and Ptp69D cytoplasmic activity, while specific features of their ectodomains contribute to the choice between two synaptic layers, M3 and M6, in the medulla. These data demonstrate how the combination of overlapping downstream but diversified upstream properties of two RPTPs can shape layer-specific wiring. PMID:29116043

  8. Relationship between visual binding, reentry and awareness.

    PubMed

    Koivisto, Mika; Silvanto, Juha

    2011-12-01

    Visual feature binding has been suggested to depend on reentrant processing. We addressed the relationship between binding, reentry, and visual awareness by asking the participants to discriminate the color and orientation of a colored bar (presented either alone or simultaneously with a white distractor bar) and to report their phenomenal awareness of the target features. The success of reentry was manipulated with object substitution masking and backward masking. The results showed that late reentrant processes are necessary for successful binding but not for phenomenal awareness of the bound features. Binding errors were accompanied by phenomenal awareness of the misbound feature conjunctions, demonstrating that they were experienced as real properties of the stimuli (i.e., illusory conjunctions). Our results suggest that early preattentive binding and local recurrent processing enable features to reach phenomenal awareness, while later attention-related reentrant iterations modulate the way in which the features are bound and experienced in awareness. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. An electrocorticographic BCI using code-based VEP for control in video applications: a single-subject study

    PubMed Central

    Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph

    2014-01-01

    A brain-computer-interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with a LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and its capability as a continuous control interface for augmentation of video applications. One 35 year old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented that allowed to suppress false positive selections and this allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed. PMID:25147509

  10. Visual, motor and attentional influences on proprioceptive contributions to perception of hand path rectilinearity during reaching

    PubMed Central

    Scheidt, Robert A.; Lillis, Kyle P.; Emerson, Scott J.

    2010-01-01

    We examined how proprioceptive contributions to perception of hand path straightness are influenced by visual, motor and attentional sources of performance variability during horizontal planar reaching. Subjects held the handle of a robot that constrained goal-directed movements of the hand to paths of controlled curvature. Subjects attempted to detect the presence of hand path curvature during both active (subject-driven) and passive (robot-driven) movements that either required active muscle force production or not. Subjects were less able to discriminate curved from straight paths when actively reaching for a target vs. when the robot moved their hand through the same curved paths. This effect was especially evident during robot-driven movements requiring concurrent activation of lengthening but not shortening muscles. Subjects were less likely to report curvature and were more variable in reporting when movements appeared straight in a novel “visual channel” condition previously shown to block adaptive updating of motor commands in response to deviations from a straight-line hand path. Similarly compromised performance was obtained when subjects simultaneously performed a distracting secondary task (key pressing with the contralateral hand). The effects compounded when these last two treatments were combined. It is concluded that environmental, intrinsic and attentional factors all impact the ability to detect deviations from a rectilinear hand path during goal-directed movement by decreasing proprioceptive contributions to limb state estimation. In contrast, response variability increased only in experimental conditions thought to impose additional attentional demands on the observer. Implications of these results for perception and other sensorimotor behaviors are discussed. PMID:20532489

  11. Robotic System for MRI-Guided Stereotactic Neurosurgery

    PubMed Central

    Li, Gang; Cole, Gregory A.; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Pilitsis, Julie G.; Fischer, Gregory S.

    2015-01-01

    Stereotaxy is a neurosurgical technique that can take several hours to reach a specific target, typically utilizing a mechanical frame and guided by preoperative imaging. An error in any one of the numerous steps, or deviation of the target anatomy from the preoperative plan such as brain shift (up to 20 mm), may affect the targeting accuracy and thus the treatment effectiveness. Moreover, because the procedure is typically performed through a small burr hole opening in the skull that prevents tissue visualization, the intervention is essentially “blind” for the operator, with limited means of intraoperative confirmation, which may reduce accuracy and safety. The presented system is intended to address the clinical needs for enhanced efficiency, accuracy, and safety of image-guided stereotactic neurosurgery for Deep Brain Stimulation (DBS) lead placement. The work describes a magnetic resonance imaging (MRI)-guided, robotically actuated stereotactic neural intervention system for the deep brain stimulation procedure, which offers the potential of reducing procedure duration while improving targeting accuracy and enhancing safety. This is achieved through simultaneous robotic manipulation of the instrument and interactively updated in situ MRI guidance that enables visualization of the anatomy and interventional instrument. During simultaneous actuation and imaging, the system demonstrated less than 15% signal-to-noise ratio (SNR) variation and less than 0.20% geometric distortion artifact, without affecting the imaging usability to visualize and guide the procedure. Optical tracking and MRI phantom experiments validated the clinical workflow of the prototype system, corroborating targeting accuracy with a 3-axis root mean square error of 1.38 ± 0.45 mm in tip position and 2.03 ± 0.58° in insertion angle. PMID:25376035
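
    The 1.38 mm figure is a 3-axis root-mean-square (RMS) error over repeated insertions. One conventional way to compute such a metric (the data below are hypothetical; the paper's exact aggregation is not given in the abstract) is:

```python
import numpy as np

def rms_error(measured, desired):
    """3-axis RMS error between measured and desired tip positions,
    both of shape (n_trials, 3), in mm."""
    d = np.linalg.norm(measured - desired, axis=1)  # per-trial Euclidean error
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical phantom trials: desired targets vs. measured tip positions.
desired = np.zeros((3, 3))
measured = np.array([[1.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 2.0]])
err = rms_error(measured, desired)  # sqrt((1 + 4 + 4) / 3) = sqrt(3) mm
```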

  12. Automatic Online Motor Control Is Intact in Parkinson's Disease With and Without Perceptual Awareness.

    PubMed

    Merritt, Kate E; Seergobin, Ken N; Mendonça, Daniel A; Jenkins, Mary E; Goodale, Melvyn A; MacDonald, Penny A

    2017-01-01

    In the double-step paradigm, healthy human participants automatically correct reaching movements when targets are displaced. Motor deficits are prominent in Parkinson's disease (PD) patients. In the lone investigation of online motor correction in PD using the double-step task, a recent study found that PD patients performed unconscious adjustments appropriately but seemed impaired for consciously-perceived modifications. Conscious perception of target movement was achieved by linking displacement to movement onset. PD-related bradykinesia disproportionately prolonged preparatory phases for movements to original target locations for patients, potentially accounting for deficits. Eliminating this confound in a double-step task, we evaluated the effect of conscious awareness of trajectory change on online motor corrections in PD. On and off dopaminergic therapy, PD patients ( n = 14) and healthy controls ( n = 14) reached to peripheral visual targets that remained stationary or unexpectedly moved during an initial saccade. Saccade latencies in PD are comparable to controls'. Hence, target displacements occurred at equal times across groups. Target jump size affected conscious awareness, confirmed in an independent target displacement judgment task. Small jumps were subliminal, but large target displacements were consciously perceived. Contrary to the previous result, PD patients performed online motor corrections normally and automatically, irrespective of conscious perception. Patients evidenced equivalent movement durations for jump and stay trials, and trajectories for patients and controls were identical, irrespective of conscious perception. Dopaminergic therapy had no effect on performance. In summary, online motor control is intact in PD, unaffected by conscious perceptual awareness. The basal ganglia are not implicated in online corrective responses.

  13. Automatic Online Motor Control Is Intact in Parkinson’s Disease With and Without Perceptual Awareness

    PubMed Central

    Seergobin, Ken N.; Mendonça, Daniel A.

    2017-01-01

    Abstract In the double-step paradigm, healthy human participants automatically correct reaching movements when targets are displaced. Motor deficits are prominent in Parkinson’s disease (PD) patients. In the lone investigation of online motor correction in PD using the double-step task, a recent study found that PD patients performed unconscious adjustments appropriately but seemed impaired for consciously-perceived modifications. Conscious perception of target movement was achieved by linking displacement to movement onset. PD-related bradykinesia disproportionately prolonged preparatory phases for movements to original target locations for patients, potentially accounting for deficits. Eliminating this confound in a double-step task, we evaluated the effect of conscious awareness of trajectory change on online motor corrections in PD. On and off dopaminergic therapy, PD patients (n = 14) and healthy controls (n = 14) reached to peripheral visual targets that remained stationary or unexpectedly moved during an initial saccade. Saccade latencies in PD are comparable to controls’. Hence, target displacements occurred at equal times across groups. Target jump size affected conscious awareness, confirmed in an independent target displacement judgment task. Small jumps were subliminal, but large target displacements were consciously perceived. Contrary to the previous result, PD patients performed online motor corrections normally and automatically, irrespective of conscious perception. Patients evidenced equivalent movement durations for jump and stay trials, and trajectories for patients and controls were identical, irrespective of conscious perception. Dopaminergic therapy had no effect on performance. In summary, online motor control is intact in PD, unaffected by conscious perceptual awareness. The basal ganglia are not implicated in online corrective responses. PMID:29085900

  14. The effect of visual context on manual localization of remembered targets

    NASA Technical Reports Server (NTRS)

    Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.

    1997-01-01

    This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.

  15. Amblyopia (For Parents)

    MedlinePlus

    ... stay. Eye Exams for Kids Kids reach "visual maturity" by about 8 years old; after that, vision ... problems are caught before a child reaches visual maturity. Most screenings are done at the pediatrician's office ...

  16. Interference and Shaping in Sensorimotor Adaptations with Rewards

    PubMed Central

    Darshan, Ran; Leblois, Arthur; Hansel, David

    2014-01-01

    When a perturbation is applied in a sensorimotor transformation task, subjects can adapt and maintain performance by either relying on sensory feedback, or, in the absence of such feedback, on information provided by rewards. For example, in a classical rotation task where movement endpoints must be rotated to reach a fixed target, human subjects can successfully adapt their reaching movements solely on the basis of binary rewards, although this proves much more difficult than with visual feedback. Here, we investigate such a reward-driven sensorimotor adaptation process in a minimal computational model of the task. The key assumption of the model is that synaptic plasticity is gated by the reward. We study how the learning dynamics depend on the target size, the movement variability, the rotation angle and the number of targets. We show that when the movement is perturbed for multiple targets, the adaptation process for the different targets can interfere destructively or constructively depending on the similarities between the sensory stimuli (the targets) and the overlap in their neuronal representations. Destructive interferences can result in a drastic slowdown of the adaptation. As a result of interference, the time to adapt varies non-linearly with the number of targets. Our analysis shows that these interferences are weaker if the reward varies smoothly with the subject's performance instead of being binary. We demonstrate how shaping the reward or shaping the task can accelerate the adaptation dramatically by reducing the destructive interferences. We argue that experimentally investigating the dynamics of reward-driven sensorimotor adaptation for more than one sensory stimulus can shed light on the underlying learning rules. PMID:24415925
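
    The model's key assumption, reward-gated plasticity, and its claim that shaping the task accelerates adaptation can be illustrated with a much-reduced toy: a scalar aim angle whose exploratory perturbations are consolidated only on (binary) reward. This is a sketch inspired by, not identical to, the paper's network model; all parameter values are illustrative:

```python
import numpy as np

def adapt(rotation=30.0, shaped=True, half_width=5.0,
          sigma=4.0, n_trials=500, seed=1):
    """Toy reward-gated adaptation to a visuomotor rotation (degrees).
    Each trial adds motor exploration to the current aim; the perturbed
    aim is kept only when a binary reward signals a target hit. With
    shaped=True the rotation ramps up gradually (task shaping); with
    shaped=False it is applied abruptly."""
    rng = np.random.default_rng(seed)
    aim = 0.0  # ideal compensation is -rotation
    for t in range(n_trials):
        rot = rotation * (t + 1) / n_trials if shaped else rotation
        trial_aim = aim + sigma * rng.standard_normal()  # motor exploration
        if abs(trial_aim + rot) < half_width:            # binary reward: hit
            aim = trial_aim                              # gated consolidation
    return aim

shaped_aim = adapt(shaped=True)    # tracks the ramp, ends near -30
abrupt_aim = adapt(shaped=False)   # exploration alone rarely finds the target
```

    With an abrupt 30° rotation and small exploration noise, rewarded trials are vanishingly rare and the aim never moves, whereas a gradual ramp keeps the target within reach of exploration throughout, echoing the shaping result.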

  17. Auditory and visual orienting responses in listeners with and without hearing-impairment

    PubMed Central

    Brimijoin, W. Owen; McShefferty, David; Akeroyd, Michael A.

    2015-01-01

    Head movements are intimately involved in sound localization and may provide information that could aid an impaired auditory system. Using an infrared camera system, head position and orientation were measured for 17 normal-hearing and 14 hearing-impaired listeners seated at the center of a ring of loudspeakers. Listeners were asked to orient their heads as quickly as was comfortable toward a sequence of visual targets, or were blindfolded and asked to orient toward a sequence of loudspeakers playing a short sentence. To attempt to elicit natural orienting responses, listeners were not asked to reorient their heads to the 0° loudspeaker between trials. The results demonstrate that hearing impairment is associated with several changes in orienting responses. Hearing-impaired listeners showed a larger difference in auditory versus visual fixation position and a substantial increase in initial and fixation latency for auditory targets. Peak velocity reached roughly 140 degrees per second in both groups, corresponding to a rate of change of approximately 1 microsecond of interaural time difference per millisecond of time. Most notably, hearing impairment was associated with a large change in the complexity of the movement, changing from smooth sigmoidal trajectories to ones characterized by abruptly-changing velocities, directional reversals, and frequent fixation angle corrections. PMID:20550266
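
    The stated conversion, 140 deg/s of head rotation is roughly 1 µs of interaural time difference (ITD) per ms, can be checked with the Woodworth spherical-head model, ITD(θ) = (a/c)(θ + sin θ), whose slope near the midline is 2a/c per radian. The head radius and speed of sound below are typical textbook assumptions, not values from the paper:

```python
import math

a = 0.0875           # assumed head radius, m
c = 343.0            # assumed speed of sound, m/s
slope = 2 * a / c    # ITD slope near the midline, s per radian of azimuth
itd_rate_us_per_ms = slope * math.radians(140) * 1e6 / 1e3
# roughly 1 us of ITD per ms of time at 140 deg/s, matching the abstract
```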

  18. Temporal Dynamics of Visual Attention Measured with Event-Related Potentials

    PubMed Central

    Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi

    2013-01-01

    How attentional modulation of brain activity determines behavioral performance has been one of the most important issues in cognitive neuroscience. This issue has been addressed by comparing the temporal relationship between attentional modulations of neural activity and behavior. Our previous study measured the time course of attention with the amplitude and phase coherence of the steady-state visual evoked potential (SSVEP) and found that the modulation latency of phase coherence, rather than that of amplitude, was consistent with the latency of behavioral performance. In this study, as a complementary report, we compared the time course of visual attention shifts measured by event-related potentials (ERPs) with that measured by a target detection task. We developed a novel technique to compare ERPs with behavioral results and analyzed the EEG data from our previous study. Two sets of flickering stimuli at different frequencies were presented in the left and right visual hemifields, and a target or distracter pattern was presented randomly at various moments after an attention-cue presentation. The observers were asked to detect targets on the attended stimulus after the cue. We found that two ERP components, P300 and N2pc, were elicited by the target presented at the attended location. Time-course analyses revealed that attentional modulation of the P300 and N2pc amplitudes increased gradually until reaching a maximum and lasted at least 1.5 s after the cue onset, similar to the temporal dynamics of behavioral performance. However, attentional modulation of these ERP components started later than that of behavioral performance. Rather, the time course of attentional modulation of behavioral performance was more closely associated with that of the concurrently recorded SSVEPs. These results suggest that the neural activity reflected by the SSVEPs, rather than by the P300 or N2pc, is the source of the attentional modulation of behavioral performance. PMID:23976966

  19. The effect of aborting ongoing movements on end point position estimation.

    PubMed

    Itaguchi, Yoshihiro; Fukuzawa, Kazuyoshi

    2013-11-01

    The present study investigated the impact of motor commands to abort ongoing movement on position estimation. Participants carried out visually guided reaching movements on a horizontal plane with their eyes open. By setting a mirror above their arm, however, they could not see the arm, only the start and target points. They estimated the position of their fingertip based solely on proprioception after their reaching movement was stopped before reaching the target. The participants stopped reaching as soon as they heard an auditory cue or were mechanically prevented from moving any further by an obstacle in their path. These reaching movements were carried out at two different speeds (fast or slow). It was assumed that additional motor commands to abort ongoing movement were required and that their magnitude was high, low, and zero, in the auditory-fast condition, the auditory-slow condition, and both the obstacle conditions, respectively. There were two main results. (1) When the participants voluntarily stopped a fast movement in response to the auditory cue (the auditory-fast condition), they showed more underestimates than in the other three conditions. This underestimate effect was positively related to movement velocity. (2) An inverted-U-shaped bias pattern as a function of movement distance was observed consistently, except in the auditory-fast condition. These findings indicate that voluntarily stopping fast ongoing movement created a negative bias in the position estimate, supporting the idea that additional motor commands or efforts to abort planned movement are involved with the position estimation system. In addition, spatially probabilistic inference and signal-dependent noise may explain the underestimate effect of aborting ongoing movement.

  20. Independent digit movements and precision grip patterns in 1-5-month-old human infants: hand-babbling, including vacuous then self-directed hand and digit movements, precedes targeted reaching.

    PubMed

    Wallace, Patricia S; Whishaw, Ian Q

    2003-01-01

    Previous work has described human reflexive grasp patterns in early infancy and visually guided reaching and grasping in late infancy. There has been no examination of hand movements in the intervening period. This was the purpose of the present study. We video recorded the spontaneous hand and digit movements made by alert infants over their first 5 months of age. Over this period, spontaneous hand and digit movements developed from fists to almost continuous, vacuous movements and then to self-directed grasping movements. Amongst the many hand and digit movements observed, four grasping patterns emerged during this period: fists, pre-precision grips associated with numerous digit postures, precision grips including the pincer grasp, and self-directed grasps. The finding that a wide range of independent digit movements and grasp patterns are displayed spontaneously by infants within their first 5 months of age is discussed in relation to the development of the motor system, including the suggestion that direct connections of the pyramidal tract are functional relatively early in infancy. It is also suggested that hand babbling, consisting of first vacuous and then self-directed movements, is preparatory to targeted reaching.

  1. Reaching to virtual targets: The oblique effect reloaded in 3-D.

    PubMed

    Kaspiris-Rousellis, Christos; Siettos, Constantinos I; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2017-02-20

    Perceiving and reproducing the direction of visual stimuli in 2-D space produces the visual oblique effect, which manifests as increased precision in the reproduction of cardinal compared to oblique directions. A second, cognitive oblique effect emerges when stimulus information is degraded (such as when reproducing stimuli from memory) and manifests as a systematic distortion in which reproduced directions close to the cardinal axes deviate toward the oblique, leading to space expansion at cardinal and contraction at oblique axes. We studied the oblique effect in 3-D using a virtual reality system to present a large number of stimuli, covering the surface of an imaginary half sphere, to which subjects had to reach. We used two conditions, one with no delay (no-memory condition) and one where a three-second delay intervened between stimulus presentation and movement initiation (memory condition). A visual oblique effect was observed for the reproduction of cardinal directions compared to oblique, which did not differ between memory conditions. A cognitive oblique effect also emerged, which was significantly larger in the memory than in the no-memory condition, leading to distortion of directional space with expansion near the cardinal axes and compression near the oblique axes on the hemispherical surface. This effect provides evidence that existing models of 2-D directional space categorization could be extended to natural 3-D space. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Development of voice navigation system for the visually impaired by using IC tags.

    PubMed

    Takatori, Norihiko; Nojima, Kengo; Matsumoto, Masashi; Yanashima, Kenji; Magatani, Kazushige

    2006-01-01

    There are about 300,000 visually impaired persons in Japan. Most of them are elderly and cannot become skillful in using a white cane, even if they make an effort to learn how. Therefore, guiding systems that support the independent activities of the visually impaired are required. In this paper, we describe a white cane system we developed that supports independent walking of the visually impaired in indoor spaces. The system is composed of colored navigation lines that include IC tags and an intelligent white cane that contains a navigation computer. Colored navigation lines put on the floor of the target space from the start point to the destination, together with IC tags set at landmark points, indicate the route to the destination. The white cane has a color sensor, an IC tag transceiver, and a computer system that includes a voice processor. The cane senses the navigation line with the color sensor; when the sensor finds the target color, the cane informs the user by vibration that he/she is on the navigation line, so that simply by following this vibration the user can reach the destination. At some landmark points, however, additional guidance is necessary. At these points, an IC tag is set under the navigation line; the cane communicates with the tag and informs the user about the landmark point by pre-recorded voice. Ten blindfolded normal subjects were tested with the developed system. All of them could walk along the navigation line, and the IC tag information system worked well. We therefore conclude that our system will be very valuable in supporting the activities of the visually impaired.

  3. Reaching back: the relative strength of the retroactive emotional attentional blink

    PubMed Central

    Ní Choisdealbha, Áine; Piech, Richard M.; Fuller, John K.; Zald, David H.

    2017-01-01

    Visual stimuli with emotional content appearing in close temporal proximity either before or after a target stimulus can hinder conscious perceptual processing of the target via an emotional attentional blink (EAB). This occurs for targets that appear after the emotional stimulus (forward EAB) and for those appearing before the emotional stimulus (retroactive EAB). Additionally, the traditional attentional blink (AB) occurs because detection of any target hinders detection of a subsequent target. The present study investigated the relations between these different attentional processes. Rapid sequences of landscape images were presented to thirty-one male participants with occasional landscape targets (rotated images). For the forward EAB, emotional or neutral distractor images of people were presented before the target; for the retroactive EAB, such images were also targets and presented after the landscape target. In the latter case, this design allowed investigation of the AB as well. Erotic and gory images caused more EABs than neutral images, but there were no differential effects on the AB. This pattern is striking because while using different target categories (rotated landscapes, people) appears to have eliminated the AB, the retroactive EAB still occurred, offering additional evidence for the power of emotional stimuli over conscious attention. PMID:28255172

  4. Words, shape, visual search and visual working memory in 3-year-old children.

    PubMed

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  5. Basin Visual Estimation Technique (BVET) and Representative Reach Approaches to Wadeable Stream Surveys: Methodological Limitations and Future Directions

    Treesearch

    Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor

    2004-01-01

    Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...

  6. Asynchronous BCI control using high-frequency SSVEP.

    PubMed

    Diez, Pablo F; Mut, Vicente A; Avila Perona, Enrique M; Laciar Leber, Eric

    2011-07-14

    The Steady-State Visual Evoked Potential (SSVEP) is a visual cortical response evoked by repetitive stimuli from a light source flickering at frequencies above 4 Hz; these frequencies can be classified into three ranges: low (up to 12 Hz), medium (12-30 Hz) and high (> 30 Hz). SSVEP-based Brain-Computer Interfaces (BCIs) have focused principally on the low and medium frequency ranges; there are only a few projects in the high-frequency range, and those only evaluate the performance of different methods to extract the SSVEP. This research proposes a high-frequency SSVEP-based asynchronous BCI to control the navigation of a mobile object on the screen through a scenario and reach its final destination. This could help impaired people to navigate a robotic wheelchair. There were three different scenarios with different difficulty levels (easy, medium and difficult). The signal processing method is based on the Fourier transform and three EEG measurement channels. The study obtained classification accuracies ranging from 65% to 100%, with information transfer rates varying from 9.4 to 45 bits/min. The proposed method allowed all subjects participating in the study to control the mobile object and to reach a final target without prior training.
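
    Information transfer rates like the reported 9.4-45 bits/min are commonly computed with the Wolpaw formula from the number of classes, the classification accuracy, and the time per selection (the abstract does not state which formula this study used, so this is a standard sketch rather than the authors' exact computation):

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits/min for an N-class BCI:
    bits/trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_seconds

# e.g. a perfectly classified 4-class selection every 3.15 s:
itr = wolpaw_itr(4, 1.0, 3.15)  # 2 bits per 3.15 s, i.e. about 38 bits/min
```

    Note that at chance accuracy the formula yields 0 bits/min, and it assumes equiprobable classes and uniform error distribution.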

  7. Lateralization of visually guided detour behaviour in the common chameleon, Chamaeleo chameleon, a reptile with highly independent eye movements.

    PubMed

    Lustig, Avichai; Ketter-Katz, Hadas; Katzir, Gadi

    2013-11-01

    Chameleons (Chamaeleonidae, reptilia), in common with most ectotherms, show full optic nerve decussation and sparse inter-hemispheric commissures. Chameleons are unique in their capacity for highly independent, large-amplitude eye movements. We address the question: Do common chameleons, Chamaeleo chameleon, during detour, show patterns of lateralization of motion and of eye use that differ from those shown by other ectotherms? To reach a target (prey) in passing an obstacle in a Y-maze, chameleons were required to make a left or a right detour. We analyzed the direction of detours and eye use and found that: (i) individuals differed in their preferred detour direction, (ii) eye use was lateralized at the group level, with significantly longer durations of viewing the target with the right eye, compared with the left eye, (iii) during left side, but not during right side, detours the durations of viewing the target with the right eye were significantly longer than the durations with the left eye. Thus, despite the uniqueness of chameleons' visual system, they display patterns of lateralization of motion and of eye use, typical of other ectotherms. These findings are discussed in relation to hemispheric functions. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Processing spatial layout by perception and sensorimotor interaction.

    PubMed

    Bridgeman, Bruce; Hoover, Merrit

    2008-06-01

    Everyone has the feeling that perception is usually accurate - we apprehend the layout of the world without significant error, and therefore we can interact with it effectively. Several lines of experimentation, however, show that perceived layout is seldom accurate enough to account for the success of visually guided behaviour. A visual world that has more texture on one side, for example, induces a shift of the body's straight ahead to that side and a mislocalization of a small target to the opposite side. Motor interaction with the target remains accurate, however, as measured by a jab with the finger. Slopes of hills are overestimated, even while matching the slopes of the same hills with the forearm is more accurate. The discrepancy shrinks as the estimated range is reduced, until the two estimates are hardly discrepant for a segment of a slope within arm's reach. From an evolutionary standpoint, the function of perception is not to provide an accurate physical layout of the world, but to inform the planning of future behaviour. Illusions - inaccuracies in perception - are perceived as such only when they can be verified by objective means, such as measuring the slope of a hill, the range of a landmark, or the location of a target. Normally such illusions are not checked and are accepted as reality without contradiction.

  9. Study of target and non-target interplay in spatial attention task.

    PubMed

    Sweeti; Joshi, Deepak; Panigrahi, B K; Anand, Sneh; Santhosh, Jayasree

    2018-02-01

    Selective visual attention is the ability to selectively pay attention to targets while inhibiting distractors. This paper studies the interplay of targets and non-targets in a spatial attention task in which subjects attend to a target object in one visual hemifield and ignore a distractor in the other. The paper performs averaged event-related potential (ERP) analysis and time-frequency analysis. The ERP analysis supports left-hemisphere superiority in late potentials for targets presented in the right visual hemifield. The time-frequency analysis yields two parameters, event-related spectral perturbation (ERSP) and inter-trial coherence (ITC). These parameters show the same properties for targets in either visual hemifield but differ between targets and non-targets. In this way, the study helps to visualise the differences between targets presented in the left and right visual hemifields, and between targets and non-targets in the left and right visual hemifields. These results could be used to monitor subjects' performance in brain-computer interfaces (BCI) and neurorehabilitation.
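
    Of the two time-frequency measures, inter-trial coherence (ITC) has a compact standard definition: the magnitude of the mean unit phasor across trials at a given time-frequency point, ranging from ~0 (random phase) to 1 (perfect phase locking). A minimal sketch (toy phase values, not the study's data):

```python
import numpy as np

def inter_trial_coherence(phases):
    """ITC: magnitude of the mean unit phasor across trials.
    `phases` holds one phase angle (radians) per trial at a fixed
    time-frequency point; 1 = perfectly phase-locked, ~0 = random."""
    return float(np.abs(np.mean(np.exp(1j * np.array(phases)))))

locked = inter_trial_coherence([0.1, -0.05, 0.02, 0.0])           # near 1
random_phase = inter_trial_coherence([0.0, np.pi / 2, np.pi,
                                      3 * np.pi / 2])              # near 0
```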

  10. Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms

    PubMed Central

    Calzolari, Elena; Albini, Federica; Bolognini, Nadia; Vallar, Giuseppe

    2017-01-01

    Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointing to visual targets while wearing optical prisms that displace the visual scene laterally. Here we explored differences in PA, and in its aftereffects (AEs), as a function of the sensory modality of the target. Visual, auditory, and multisensory (audio-visual) targets were used in the adaptation phase, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, and auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produced proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, just as typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointing procedure. Finally, pointing to auditory targets causes AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs than the 92-pointing procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears to be characterized by less accurate pointing and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs than the sensorimotor pointing activity per se. These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits. PMID:29213233

  11. Multiple asynchronous stimulus- and task-dependent hierarchies (STDH) within the visual brain's parallel processing systems.

    PubMed

    Zeki, Semir

    2016-10-01

    Results from a variety of sources, some many years old, lead ineluctably to a re-appraisal of the twin strategies of hierarchical and parallel processing used by the brain to construct an image of the visual world. Contrary to common supposition, there are at least three 'feed-forward' anatomical hierarchies that reach the primary visual cortex (V1) and the specialized visual areas outside it, in parallel. These anatomical hierarchies do not conform to the temporal order with which visual signals reach the specialized visual areas through V1. Furthermore, neither the anatomical hierarchies nor the temporal order of activation through V1 predict the perceptual hierarchies. The latter show that we see (and become aware of) different visual attributes at different times, with colour leading form (orientation) and directional visual motion, even though signals from fast-moving, high-contrast stimuli are among the earliest to reach the visual cortex (area V5). Parallel processing, on the other hand, is much more ubiquitous than commonly supposed but is subject to a barely noticed yet fundamental aspect of brain operations, namely that different parallel systems operate asynchronously with respect to each other and reach perceptual endpoints at different times. This re-assessment leads to the conclusion that the visual brain is constituted of multiple, parallel and asynchronously operating task- and stimulus-dependent hierarchies (STDH); which of these parallel anatomical hierarchies have temporal and perceptual precedence at any given moment is stimulus and task related, and dependent on the visual brain's ability to undertake multiple operations asynchronously. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  12. Selective uptake of single-walled carbon nanotubes by circulating monocytes for enhanced tumour delivery

    NASA Astrophysics Data System (ADS)

    Smith, Bryan Ronain; Ghosn, Eliver Eid Bou; Rallapalli, Harikrishna; Prescher, Jennifer A.; Larson, Timothy; Herzenberg, Leonore A.; Gambhir, Sanjiv Sam

    2014-06-01

    In cancer imaging, nanoparticle biodistribution is typically visualized in living subjects using `bulk' imaging modalities such as magnetic resonance imaging, computerized tomography and whole-body fluorescence. Accordingly, nanoparticle influx is observed only macroscopically, and the mechanisms by which they target cancer remain elusive. Nanoparticles are assumed to accumulate via several targeting mechanisms, particularly extravasation (leakage into tumour). Here, we show that, in addition to conventional nanoparticle-uptake mechanisms, single-walled carbon nanotubes are almost exclusively taken up by a single immune cell subset, Ly-6Chi monocytes (almost 100% uptake in Ly-6Chi monocytes, below 3% in all other circulating cells), and delivered to the tumour in mice. We also demonstrate that a targeting ligand (RGD) conjugated to nanotubes significantly enhances the number of single-walled carbon nanotube-loaded monocytes reaching the tumour (P < 0.001, day 7 post-injection). The remarkable selectivity of this tumour-targeting mechanism demonstrates an advanced immune-based delivery strategy for enhancing specific tumour delivery with substantial penetration.

  13. Subliminal number priming within and across the visual and auditory modalities.

    PubMed

    Kouider, Sid; Dehaene, Stanislas

    2009-01-01

    Whether masked number priming involves a low-level sensorimotor route or an amodal semantic level of processing remains highly debated. Several alternative interpretations have been put forward, proposing either that masked number priming is solely a byproduct of practice with numbers, or that stimulus awareness was underestimated. In a series of four experiments, we studied whether repetition and congruity priming for numbers reliably extend to novel (i.e., unpracticed) stimuli and whether priming transfers from a visual prime to an auditory target, even when carefully controlling for stimulus awareness. While we consistently observed cross-modal priming, the generalization to novel stimuli was weaker and reached significance only when considering the whole set of experiments. We conclude that number priming does involve an amodal, semantic level of processing, but is also modulated by task settings.

  14. PMv Neuronal Firing May Be Driven by a Movement Command Trajectory within Multidimensional Gaussian Fields.

    PubMed

    Agarwal, Rahul; Thakor, Nitish V; Sarma, Sridevi V; Massaquoi, Steve G

    2015-06-24

    The premotor cortex (PM) is known to be a site of visuo-somatosensory integration for the production of movement. We sought to better understand the ventral PM (PMv) by modeling its signal encoding in greater detail. Neuronal firing data were obtained from 110 PMv neurons in two male rhesus macaques executing four reach-grasp-manipulate tasks. We found that in the large majority of neurons (∼90%) the firing patterns across the four tasks could be explained by assuming that a high-dimensional position/configuration trajectory-like signal, evolving ∼250 ms before movement, was encoded within a multidimensional Gaussian field (MGF). Our findings are consistent with the possibility that PMv neurons process a visually specified reference command for the intended arm/hand position trajectory with respect to a proprioceptively or visually sensed initial configuration. The estimated MGFs were (hyper) disc-like, such that each neuron's firing modulated strongly only with commands that evolved along a single direction within position/configuration space. Thus, many neurons appeared to be tuned to slices of this input signal space that, as a collection, covered the space well. The MGF encoding models appear to be consistent with the arm-referent, bell-shaped, visual target tuning curves and target selectivity patterns observed in PMv visual-motor neurons. These findings suggest that PMv may implement a lookup table-like mechanism that helps translate intended movement trajectory into time-varying patterns of activation in motor cortex and spinal cord. MGFs provide an improved nonlinear framework for potentially decoding visually specified, intended multijoint arm/hand trajectories well in advance of movement. Copyright © 2015 the authors 0270-6474/15/359508-18$15.00/0.
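
    The "(hyper) disc-like" field idea described above can be illustrated with a toy model (the dimensions, parameters, and function below are hypothetical, not the fitted model from the paper): the neuron's firing depends on the high-dimensional command vector only through its projection onto a single tuned direction, so displacements orthogonal to that direction leave the rate unchanged.

```python
import numpy as np

def mgf_rate(x, mu, u, sigma, r_max=50.0):
    """Disc-like multidimensional Gaussian field: the rate modulates
    only with the component of the command vector x (relative to the
    field centre mu) along one tuned unit direction u."""
    proj = np.dot(x - mu, u)                  # signed distance along tuned axis
    return r_max * np.exp(-proj ** 2 / (2 * sigma ** 2))

mu = np.zeros(6)                              # field centre in a toy 6-D command space
u = np.eye(6)[0]                              # tuned direction (unit vector)

x_on_axis = np.array([0.5, 0.0, 0.0, 0.0, 0.0, 0.0])
x_off_axis = np.array([0.0, 3.0, 3.0, 0.0, 0.0, 0.0])
print(mgf_rate(x_on_axis, mu, u, sigma=1.0))   # reduced: command moved along u
print(mgf_rate(x_off_axis, mu, u, sigma=1.0))  # 50.0: insensitive to off-axis shifts
```

    A collection of such neurons, each with a different tuned direction u, would jointly tile the command space, consistent with the "slices" interpretation above.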

  15. How do visual and postural cues combine for self-tilt perception during slow pitch rotations?

    PubMed

    Scotto Di Cesare, C; Buloup, F; Mestre, D R; Bringoux, L

    2014-11-01

    Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°·s⁻¹) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at certain specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt contrary to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Functional dissociation between action and perception of object shape in developmental visual object agnosia.

    PubMed

    Freud, Erez; Ganel, Tzvi; Avidan, Galia; Gilaie-Dotan, Sharon

    2016-03-01

    According to the two visual systems model, the cortical visual system is segregated into a ventral pathway mediating object recognition, and a dorsal pathway mediating visuomotor control. In the present study we examined whether the visual control of action could develop normally even when visual perceptual abilities are compromised from early childhood onward. Using his fingers, LG, an individual with a rare developmental visual object agnosia, manually estimated (perceptual condition) the width of blocks that varied in width and length (but not in overall size), or simply picked them up across their width (grasping condition). LG's perceptual sensitivity to target width was profoundly impaired in the manual estimation task compared to matched controls. In contrast, the sensitivity to object shape during grasping, as measured by maximum grip aperture (MGA), the time to reach the MGA, the reaction time and the total movement time were all normal in LG. Further analysis, however, revealed that LG's sensitivity to object shape during grasping emerged at a later time stage during the movement compared to controls. Taken together, these results demonstrate a dissociation between action and perception of object shape, and also point to a distinction between different stages of the grasping movement, namely planning versus online control. Moreover, the present study implies that visuomotor abilities can develop normally even when perceptual abilities developed in a profoundly impaired fashion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. How visual cues for when to listen aid selective auditory attention.

    PubMed

    Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G

    2012-06-01

    Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.

  18. Comparison of visual survey and seining methods for estimating abundance of an endangered, benthic stream fish

    USGS Publications Warehouse

    Jordan, F.; Jelks, H.L.; Bortone, S.A.; Dorazio, R.M.

    2008-01-01

    We compared visual survey and seining methods for estimating abundance of endangered Okaloosa darters, Etheostoma okaloosae, in 12 replicate stream reaches during August 2001. For each 20-m stream reach, two divers systematically located and marked the position of darters and then a second crew of three to five people came through with a small-mesh seine and exhaustively sampled the same area. Visual surveys required little extra time to complete. Visual counts (24.2 ± 12.0; mean ± one SD) considerably exceeded seine captures (7.4 ± 4.8), and counts from the two methods were uncorrelated. Visual surveys, but not seines, detected the presence of Okaloosa darters at one site with low population densities. In 2003, we performed a depletion removal study in 10 replicate stream reaches to assess the accuracy of the visual survey method. Visual surveys detected 59% of Okaloosa darters present, and visual counts and removal estimates were positively correlated. Taken together, our comparisons indicate that visual surveys more accurately and precisely estimate abundance of Okaloosa darters than seining and more reliably detect presence at low population densities. We recommend evaluation of visual survey methods when designing programs to monitor abundance of benthic fishes in clear streams, especially for threatened and endangered species that may be sensitive to handling and habitat disturbance. © 2007 Springer Science+Business Media, Inc.
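
    The removal estimates mentioned above are not specified in detail in this record; a common choice for a two-pass depletion survey is the Leslie/Zippin-style estimator N = c₁²/(c₁ − c₂), sketched below with made-up catch counts (the study's actual estimator and data may differ).

```python
def two_pass_removal_estimate(c1, c2):
    """Two-pass removal (depletion) estimate of population size:
    N = c1^2 / (c1 - c2), assuming a closed population and equal
    capture probability on both passes. Requires c1 > c2."""
    if c1 <= c2:
        raise ValueError("first-pass catch must exceed second-pass catch")
    return c1 ** 2 / (c1 - c2)

# hypothetical counts for one stream reach: 24 caught on pass 1, 12 on pass 2
print(two_pass_removal_estimate(24, 12))   # 48.0 darters estimated present
```

    Comparing such per-reach estimates against diver counts is one way the visual survey's 59% detection rate could be derived.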

  19. Electromagnetic image guidance in gynecology: prospective study of a new laparoscopic imaging and targeting technique for the treatment of symptomatic uterine fibroids.

    PubMed

    Galen, Donald I

    2015-10-15

    Uterine fibroids occur singly or as multiple benign tumors originating in the myometrium. Because they vary in size and location, the approach and technique for their identification and surgical management vary. Reference images, such as ultrasound images, magnetic resonance images, and sonohystograms, do not provide real-time intraoperative findings. Electromagnetic image guidance, as incorporated in the Acessa Guidance System, has been cleared by the FDA to facilitate targeting and ablation of uterine fibroids during laparoscopic surgery. This is the first feasibility study to verify the features and usefulness of the guidance system in targeting symptomatic uterine fibroids-particularly hard-to-reach intramural fibroids and those abutting the endometrium. One gynecologic surgeon, who had extensive prior experience in laparoscopic ultrasound-guided identification of fibroids, treated five women with symptomatic uterine fibroids using the Acessa Guidance System. The surgeon evaluated the system and its features in terms of responses to prescribed statements; the responses were analyzed prospectively. The surgeon strongly agreed (96 %) or agreed (4 %) with statements describing the helpfulness of the transducer and handpiece's dynamic animation in targeting each fibroid, reaching the fibroid quickly, visualizing the positions of the transducer and handpiece within the pelvic cavity, and providing the surgeon with confidence when targeting the fibroid even during "out-of-plane" positioning of the handpiece. The surgeon's positive user experience was evident in the guidance system's facilitation of accurate handpiece tip placement during targeting and ablation of uterine fibroids. Continued study of electromagnetic image guidance in the laparoscopic identification and treatment of fibroids is warranted. ClinicalTrials.gov Identifier: NCT01842789.

  20. DVA as a Diagnostic Test for Vestibulo-Ocular Reflex Function

    NASA Technical Reports Server (NTRS)

    Wood, Scott J.; Appelbaum, Meghan

    2010-01-01

    The vestibulo-ocular reflex (VOR) stabilizes vision on earth-fixed targets by eliciting eye movements in response to changes in head position. How well the eyes perform this task can be functionally measured by the dynamic visual acuity (DVA) test. We designed a passive, horizontal DVA test to specifically study acuity and reaction time when looking at different target locations. Visual acuity was compared among 12 subjects using a standard Landolt C wall chart, a computerized static (no rotation) acuity test and a dynamic acuity test while oscillating at 0.8 Hz (+/-60 deg/s). In addition, five trials with yaw oscillation randomly presented a visual target in one of nine different locations, with the size and presentation duration of the visual target varying across trials. The results showed a significant difference between the static and dynamic threshold acuities, as well as a significant difference between visual targets presented in the horizontal plane versus those in the vertical plane when comparing accuracy of vision and reaction time of the response. Visual acuity increased in proportion to the size of the visual target and improved as presentation duration increased from 150 to 300 msec. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of rotation. This DVA test could be used as a functional diagnostic test for visual-vestibular and neuro-cognitive impairments by assessing both accuracy and reaction time to acquire visual targets.

  1. MATISSE a web-based tool to access, visualize and analyze high resolution minor bodies observation

    NASA Astrophysics Data System (ADS)

    Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo

    2016-07-01

    In recent years, planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids and comets) at a level of detail never reached before. Since these objects often have very irregular shapes (as in the case of comet 67P/Churyumov-Gerasimenko, target of the ESA Rosetta mission), "classical" two-dimensional projections of observations are difficult to interpret. With the aim of providing the scientific community with a tool to access, visualize and analyze data in a new way, the ASI Science Data Center started to develop MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration - http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can be either the straightforward projection of the selected observation onto the shape model of the target body or a higher-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files for use with GIS software (GeoTIFF and ENVI formats) and very high-resolution 3D files viewable with the free software Paraview. So far, the tool has most frequently been used to visualize data acquired by the VIRTIS-M instrument onboard Rosetta observing comet 67P. The success of this task, well represented by the number of published works that used images made with MATISSE, confirms the need for a different approach to correctly visualizing data from irregularly shaped bodies. In the near future the datasets available to MATISSE are planned to be extended, starting with the addition of VIR-Dawn observations of both Vesta and Ceres, and also by using standard protocols to access data stored in external repositories, such as NASA ODE and the Planetary VO.

  2. SOVEREIGN: An autonomous neural system for incrementally learning planned action sequences to navigate towards a rewarded goal.

    PubMed

    Gnadt, William; Grossberg, Stephen

    2008-06-01

    How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. 
Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.

  3. Sub-diffraction nano manipulation using STED AFM.

    PubMed

    Chacko, Jenu Varghese; Canale, Claudio; Harke, Benjamin; Diaspro, Alberto

    2013-01-01

    In the last two decades, nano manipulation has been recognized as a potential tool of scientific interest, especially in nanotechnology and nano-robotics. Contemporary optical (super-resolution) microscopy techniques have also reached nanometer-scale resolution, and combining super-resolution imaging with nano manipulation therefore gives a new perspective to the scenario. Here we demonstrate how the specificity and rapid structure determination provided by a stimulated emission depletion (STED) microscope can aid another microscopic tool with mechanical manoeuvring capability, such as an atomic force microscope (AFM), to obtain topological information or to target nano-scaled materials. We also give proof of principle of how high-resolution, real-time visualization can improve nano manipulation capability within a dense sample, and how STED-AFM is an optimal combination for this job. With this evidence, the article points to future precise nano dissections and maybe even to a nano-snooker game with an AFM tip and fluorospheres.

  4. Thin-slice perception develops slowly.

    PubMed

    Balas, Benjamin; Kanwisher, Nancy; Saxe, Rebecca

    2012-06-01

    Body language and facial gesture provide sufficient visual information to support high-level social inferences from "thin slices" of behavior. Given short movies of nonverbal behavior, adults make reliable judgments in a large number of tasks. Here we find that the high precision of adults' nonverbal social perception depends on the slow development, over childhood, of sensitivity to subtle visual cues. Children and adult participants watched short silent clips in which a target child played with Lego blocks either in the (off-screen) presence of an adult or alone. Participants judged whether the target was playing alone or not; that is, they detected the presence of a social interaction (from the behavior of one participant in that interaction). This task allowed us to compare performance across ages with the true answer. Children did not reach adult levels of performance on this task until 9 or 10 years of age, and we observed an interaction between age and video reversal. Adults and older children benefitted from the videos being played in temporal sequence, rather than reversed, suggesting that adults (but not young children) are sensitive to natural movement in social interactions. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Locations of serial reach targets are coded in multiple reference frames.

    PubMed

    Thompson, Aidan A; Henriques, Denise Y P

    2010-12-01

    Previous work from our lab, and elsewhere, has demonstrated that remembered target locations are stored and updated in an eye-fixed reference frame. That is, reach errors systematically vary as a function of gaze direction relative to a remembered target location, not only when the target is viewed in the periphery (Bock, 1986, known as the retinal magnification effect), but also when the target has been foveated, and the eyes subsequently move after the target has disappeared but prior to reaching (e.g., Henriques, Klier, Smith, Lowy, & Crawford, 1998; Sorrento & Henriques, 2008; Thompson & Henriques, 2008). These gaze-dependent errors, following intervening eye movements, cannot be explained by representations whose frame is fixed to the head, body or even the world. However, it is unknown whether targets presented sequentially would all be coded relative to gaze (i.e., egocentrically/absolutely), or if they would be coded relative to the previous target (i.e., allocentrically/relatively). It might be expected that the reaching movements to two targets separated by 5° would differ by that distance. But, if gaze were to shift between the first and second reaches, would the movement amplitude between the targets differ? If the target locations are coded allocentrically (i.e., the location of the second target coded relative to the first) then the movement amplitude should be about 5°. But, if the second target is coded egocentrically (i.e., relative to current gaze direction), then the reaches to this target and the distances between the subsequent movements should vary systematically with gaze as described above. 
We found that requiring an intervening saccade to the opposite side of 2 briefly presented targets between reaches to them resulted in a pattern of reaching error that systematically varied as a function of the distance between current gaze and target, and led to a systematic change in the distance between the sequential reach endpoints as predicted by an egocentric frame anchored to the eye. However, the amount of change in this distance was smaller than predicted by a pure eye-fixed representation, suggesting that relative positions of the targets or allocentric coding was also used in sequential reach planning. The spatial coding and updating of sequential reach target locations seems to rely on a combined weighting of multiple reference frames, with one of them centered on the eye. Copyright © 2010 Elsevier Ltd. All rights reserved.
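
    The contrast between the two coding schemes described above can be made concrete with a toy linear model (the gain k and all values below are hypothetical, purely for illustration): under eye-fixed coding each reach endpoint is biased in proportion to the target's retinal eccentricity, so the distance between sequential endpoints depends on where gaze was for each reach, whereas pure allocentric coding predicts a distance equal to the 5° target separation regardless of gaze.

```python
def eye_fixed_endpoint(target, gaze, k=0.1):
    """Toy eye-centred coding: the reach endpoint is biased in
    proportion to the target's retinal eccentricity (target - gaze).
    k is a hypothetical gain; all angles in degrees."""
    return target + k * (target - gaze)

def inter_reach_distance(t1, g1, t2, g2, k=0.1):
    """Distance between sequential reach endpoints when gaze may
    differ between the two reaches (g1 for the first, g2 for the second)."""
    return eye_fixed_endpoint(t2, g2, k) - eye_fixed_endpoint(t1, g1, k)

# Targets 5 deg apart; allocentric coding predicts 5.0 deg in both cases.
print(inter_reach_distance(0.0, 0.0, 5.0, 0.0))     # 5.5: no gaze shift
print(inter_reach_distance(0.0, 10.0, 5.0, -10.0))  # 7.5: intervening saccade
```

    The intermediate distances actually observed would correspond, in this sketch, to a weighted mixture of the eye-fixed prediction and the fixed 5° allocentric prediction.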

  6. Developing a de novo targeted knock-in method based on in utero electroporation into the mammalian brain.

    PubMed

    Tsunekawa, Yuji; Terhune, Raymond Kunikane; Fujita, Ikumi; Shitamukai, Atsunori; Suetsugu, Taeko; Matsuzaki, Fumio

    2016-09-01

    Genome-editing technology has revolutionized the field of biology. Here, we report a novel de novo gene-targeting method mediated by in utero electroporation into the developing mammalian brain. Electroporation of donor DNA with the CRISPR/Cas9 system vectors successfully leads to knock-in of the donor sequence, such as EGFP, to the target site via the homology-directed repair mechanism. We developed a targeting vector system optimized to prevent anomalous leaky expression of the donor gene from the plasmid, which otherwise often occurs depending on the donor sequence. The knock-in efficiency of the electroporated progenitors reached up to 40% in the early stage and 20% in the late stage of the developing mouse brain. Furthermore, we inserted different fluorescent markers into the target gene in each homologous chromosome, successfully distinguishing homozygous knock-in cells by color. We also applied this de novo gene targeting to the ferret model for the study of complex mammalian brains. Our results demonstrate that this technique is widely applicable for monitoring gene expression, visualizing protein localization, lineage analysis and gene knockout, all at the single-cell level, in developmental tissues. © 2016. Published by The Company of Biologists Ltd.

  7. Stereotypical reaching movements of the octopus involve both bend propagation and arm elongation.

    PubMed

    Hanassy, S; Botvinnik, A; Flash, T; Hochner, B

    2015-05-13

    The bend propagation involved in the stereotypical reaching movement of the octopus arm has been extensively studied. While these studies have analyzed the kinematics of bend propagation along the arm during its extension, possible length changes have been ignored. Here, the elongation profiles of the reaching movements of Octopus vulgaris were assessed using three-dimensional reconstructions. The analysis revealed that, in addition to bend propagation, arm extension movements involve elongation of the proximal part of the arm, i.e., the section from the base of the arm to the propagating bend. The elongations are quite substantial and highly variable, ranging from an average strain along the arm of -0.12 (i.e. shortening) up to 1.8 at the end of the movement (0.57 ± 0.41, n = 64 movements, four animals). Less variability was discovered in an additional set of experiments on reaching movements (0.64 ± 0.28, n = 30 movements, two animals), where target and octopus positions were kept more stationary. Visual observation and subsequent kinematic analysis suggest that the reaching movements can be broadly segregated into two groups. The first group involves bend propagation beginning at the base of the arm and propagating towards the arm tip. In the second, the bend is formed or present more distally and reaching is achieved mainly by elongation and straightening of the segment proximal to the bend. Only in the second type of movements is elongation significantly positively correlated with the distance of the bend from the target. We suggest that reaching towards a target is generated by a combination of both propagation of a bend along the arm and arm elongation. These two motor primitives may be combined to create a broad spectrum of reaching movements. 
A dynamical model recapitulating the biomechanics of the octopus muscular hydrostatic arm suggests that achieving the observed elongation requires an extremely low ratio of longitudinal to transverse muscle force (<0.0016 for an average strain along the arm of around 0.5). Such a ratio was not observed, and so low a value does not seem physiologically possible. Hence the assumptions made in applying the dynamic model to behaviors such as static arm stiffening (which leads to arm extension through bend propagation), and the activation patterns used to simulate such behaviors, should be modified to account for movements that combine bend propagation with arm elongation.
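
The strain measure reported above can be illustrated with a minimal sketch. The function name and the sample segment lengths below are assumptions for illustration, not data from the study.

```python
def strain(initial_length, final_length):
    """Engineering strain of the base-to-bend segment: relative elongation."""
    return (final_length - initial_length) / initial_length

# Hypothetical example: a proximal segment elongating from 10 cm to 15 cm
# gives a strain of 0.5, near the reported averages (0.57 and 0.64), and a
# segment that shortens yields a negative strain, as in the reported -0.12.
print(strain(10.0, 15.0))               # -> 0.5
print(round(strain(10.0, 8.8), 2))      # -> -0.12
```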

  8. Descending pathways controlling visually guided updating of reaching in cats.

    PubMed

    Pettersson, L-G; Perfiliev, S

    2002-10-01

This study uses a previously described paradigm (Pettersson et al., 1997) to investigate the ability of cats to change the direction of ongoing reaching when the target is shifted sideways; the effect of spinal cord lesions on the switching latency was investigated. Large ventral lesions transecting the ventral funiculus and the ventral half of the lateral funiculus gave a 20-30 ms latency prolongation of switching in the medial (right) direction, but less prolongation of switching directed laterally (left), and in one cat the latencies of switching directed laterally were unchanged. It may be inferred that the command for switching in the lateral direction can be mediated by the dorsally located cortico- and rubrospinal tracts, whereas the command for short-latency switching in the medial direction is mediated by ventral pathways. A restricted ventral lesion transecting the tectospinal pathway did not change the switching latency. Comparison of different ventral lesions revealed prolongation of the latency if the lesion included a region extending dorsally along the ventral horn and from there ventrally as a vertical strip, so it may be postulated that the command for fast, medially directed switching is mediated by a reticulospinal pathway within this location. A hypothesis is put forward suggesting that the visual control is exerted via ponto-cerebellar pathways.

  9. Extending the Cortical Grasping Network: Pre-supplementary Motor Neuron Activity During Vision and Grasping of Objects.

    PubMed

    Lanzilotto, Marco; Livi, Alessandro; Maranesi, Monica; Gerbella, Marzio; Barz, Falk; Ruther, Patrick; Fogassi, Leonardo; Rizzolatti, Giacomo; Bonini, Luca

    2016-12-01

    Grasping relies on a network of parieto-frontal areas lying on the dorsolateral and dorsomedial parts of the hemispheres. However, the initiation and sequencing of voluntary actions also requires the contribution of mesial premotor regions, particularly the pre-supplementary motor area F6. We recorded 233 F6 neurons from 2 monkeys with chronic linear multishank neural probes during reaching-grasping visuomotor tasks. We showed that F6 neurons play a role in the control of forelimb movements and some of them (26%) exhibit visual and/or motor specificity for the target object. Interestingly, area F6 neurons form 2 functionally distinct populations, showing either visually-triggered or movement-related bursts of activity, in contrast to the sustained visual-to-motor activity displayed by ventral premotor area F5 neurons recorded in the same animals and with the same task during previous studies. These findings suggest that F6 plays a role in object grasping and extend existing models of the cortical grasping network. © The Author 2016. Published by Oxford University Press.

  10. Neural control of visual search by frontal eye field: effects of unexpected target displacement on visual selection and saccade preparation.

    PubMed

    Murthy, Aditya; Ray, Supriya; Shorter, Stephanie M; Schall, Jeffrey D; Thompson, Kirk G

    2009-05-01

The dynamics of visual selection and saccade preparation by the frontal eye field was investigated in macaque monkeys performing a search-step task combining the classic double-step saccade task with visual search. Reward was earned for producing a saccade to a color singleton. On random trials the target and one distractor swapped locations before the saccade and monkeys were rewarded for shifting gaze to the new singleton location. A race model accounts for the probabilities and latencies of saccades to the initial and final singleton locations and provides a measure of the duration of a covert compensation process, the target-step reaction time. When the target stepped out of a movement field, noncompensated saccades to the original location were produced when movement-related activity grew rapidly to a threshold. Compensated saccades to the final location were produced when the growth of the original movement-related activity was interrupted within target-step reaction time and was replaced by activation of other neurons producing the compensated saccade. When the target stepped into a receptive field, visual neurons selected the new target location regardless of the monkeys' response. When the target stepped out of a receptive field, most visual neurons maintained the representation of the original target location, but a minority of visual neurons showed reduced activity. Chronometric analyses of the neural responses to the target step revealed that the modulation of visually responsive neurons and movement-related neurons occurred early enough to shift attention and saccade preparation from the old to the new target location. These findings indicate that visual activity in the frontal eye field signals the location of targets for orienting, whereas movement-related activity instantiates saccade preparation.
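
The race-model logic can be sketched as a small Monte Carlo simulation. All parameter values and names below are illustrative assumptions, not the paper's fitted estimates.

```python
import random

def p_compensated(step_delay_ms, tsrt_ms=90, rt_mean=250, rt_sd=40,
                  n_trials=20_000, seed=1):
    """Fraction of trials on which the covert compensation process finishes
    (step time + TSRT) before preparation of the original saccade completes."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        original_finish = rng.gauss(rt_mean, rt_sd)  # prep of original saccade
        if original_finish > step_delay_ms + tsrt_ms:
            wins += 1                                # compensated saccade wins
    return wins / n_trials

# The later the target steps, the less often gaze reaches the new location:
print(p_compensated(step_delay_ms=50))
print(p_compensated(step_delay_ms=150))
```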

  11. NASA's Astronomy Education Program: Reaching Diverse Audiences

    NASA Astrophysics Data System (ADS)

    Hasan, Hashima; Smith, Denise Anne; Hertz, Paul; Meinke, Bonnie

    2015-08-01

An overview will be given of the rich programs developed by NASA to inject the science from its Astrophysics missions into STEM activities targeted to diverse audiences. For example, Astro4Girls was started as a pilot program during IYA2009. This program partners NASA astrophysics education programs with public libraries to provide NASA-themed hands-on education activities for girls and their families, and has been executed across the country. School curricula and NASA websites have been translated into Spanish; Braille books have been developed for the visually impaired; programs have been developed for the hearing impaired. Special effort has been made to reach underrepresented minorities. Audiences include students, teachers, and the general public through formal and informal education settings, social media and other outlets. NASA Astrophysics education providers include teams embedded in its space flight missions; professionals selected through peer-reviewed programs; as well as the Science Mission Directorate Astrophysics Education forum. Representative examples will be presented to demonstrate the reach of NASA education programs, as well as an evaluation of the effectiveness of these programs.

  12. Visualizing Energy on Target: Molecular Dynamics Simulations

    DTIC Science & Technology

    2017-12-01

ARL-TR-8234 ● DEC 2017 ● US Army Research Laboratory. Technical Report, dates covered 1 October 2015–30 September 2016: Visualizing Energy on Target: Molecular Dynamics Simulations, by DeCarlos E...

  13. Impact of Target Distance, Target Size, and Visual Acuity on the Video Head Impulse Test.

    PubMed

    Judge, Paul D; Rodriguez, Amanda I; Barin, Kamran; Janky, Kristen L

    2018-05-01

The video head impulse test (vHIT) assesses the vestibulo-ocular reflex. Few studies have evaluated whether environmental factors or visual acuity influence the vHIT. The purpose of this study was to evaluate the influence of target distance, target size, and visual acuity on vHIT outcomes. Thirty-eight normal controls and 8 subjects with vestibular loss (VL) participated. vHIT was completed at 3 distances and with 3 target sizes. Normal controls were subdivided on the basis of visual acuity. Corrective saccade frequency, corrective saccade amplitude, and gain were tabulated. In the normal control group, there were no significant effects of target size or visual acuity for any vHIT outcome parameters; however, gain increased as target distance decreased. The VL group demonstrated higher corrective saccade frequency and amplitude and lower gain as compared with controls. In conclusion, decreasing target distance increases gain for normal controls but not for subjects with VL. Preliminarily, visual acuity does not affect vHIT outcomes.
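
One plausible geometric account of the target-distance effect (an assumption on our part, not the paper's analysis) is that the eyes sit several centimeters anterior to the head's rotation axis, so stabilizing gaze on a near target requires an eye-to-head velocity ratio above 1. A minimal sketch:

```python
def ideal_vor_gain(target_distance_m, eye_axis_offset_m=0.09):
    """Approximate eye/head velocity ratio needed to hold a target:
    1 + (eye-to-rotation-axis offset) / (target distance).
    The 9 cm offset is an illustrative assumption."""
    return 1.0 + eye_axis_offset_m / target_distance_m

for d_m in (3.0, 1.0, 0.5):
    print(d_m, round(ideal_vor_gain(d_m), 2))   # gain rises as distance falls
```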

  14. Visual search performance among persons with schizophrenia as a function of target eccentricity.

    PubMed

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2010-03-01

The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also affected to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was evident only when targets were more eccentric; performance was more similar to that of healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia. Copyright 2010 APA, all rights reserved.

  15. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature-integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality.
In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.

  16. Mechanisms of Short-Term Training-Induced Reaching Improvement in Severely Hemiparetic Stroke Patients: A TMS Study

    PubMed Central

    Harris-Love, Michelle L.; Morton, Susanne M.; Perez, Monica A.; Cohen, Leonardo G.

    2011-01-01

    Background The neurophysiological mechanisms underlying improved upper-extremity motor skills have been partially investigated in patients with good motor recovery but are poorly understood in more impaired individuals, the majority of stroke survivors. Objective The authors studied changes in primary motor cortex (M1) excitability (motor evoked potentials [MEPs], contralateral and ipsilateral silent periods [CSPs and ISPs] using transcranial magnetic stimulation [TMS]) associated with training-induced reaching improvement in stroke patients with severe arm paresis (n = 11; Upper-Extremity Fugl-Meyer score (F-M) = 27 ± 6). Methods All patients underwent a single session of reaching training focused on moving the affected hand from a resting site to a target placed at 80% of maximum forward reaching amplitude in response to a visual “GO” cue. Triceps contribute primarily as agonist and biceps primarily as antagonist to the trained forward reaching movement. Response times were recorded for each reaching movement. Results Preceding training (baseline), greater interhemispheric inhibition (measured by ISP) in the affected triceps muscle, reflecting inhibition from the nonlesioned to the lesioned M1, was observed in patients with lower F-M scores (more severe motor impairment). Training-induced improvements in reaching were greater in patients with slower response times at baseline. Increased MEP amplitudes and decreased ISPs and CSPs were observed in the affected triceps but not in the biceps muscle after training. Conclusion These results indicate that along with training-induced motor improvements, training-specific modulation of intrahemispheric and interhemispheric mechanisms occurs after reaching practice in chronic stroke patients with substantial arm impairment. PMID:21343522

  17. Visuomotor signals for reaching movements in the rostro-dorsal sector of the monkey thalamic reticular nucleus.

    PubMed

    Saga, Yosuke; Nakayama, Yoshihisa; Inoue, Ken-Ichi; Yamagata, Tomoko; Hashimoto, Masashi; Tremblay, Léon; Takada, Masahiko; Hoshi, Eiji

    2017-05-01

    The thalamic reticular nucleus (TRN) collects inputs from the cerebral cortex and thalamus and, in turn, sends inhibitory outputs to the thalamic relay nuclei. This unique connectivity suggests that the TRN plays a pivotal role in regulating information flow through the thalamus. Here, we analyzed the roles of TRN neurons in visually guided reaching movements. We first used retrograde transneuronal labeling with rabies virus, and showed that the rostro-dorsal sector of the TRN (TRNrd) projected disynaptically to the ventral premotor cortex (PMv). In other experiments, we recorded neurons from the TRNrd or PMv while monkeys performed a visuomotor task. We found that neurons in the TRNrd and PMv showed visual-, set-, and movement-related activity modulation. These results indicate that the TRNrd, as well as the PMv, is involved in the reception of visual signals and in the preparation and execution of reaching movements. The fraction of neurons that were non-selective for the location of visual signals or the direction of reaching movements was greater in the TRNrd than in the PMv. Furthermore, the fraction of neurons whose activity increased from the baseline was greater in the TRNrd than in the PMv. The timing of activity modulation of visual-related and movement-related neurons was similar in TRNrd and PMv neurons. Overall, our data suggest that TRNrd neurons provide motor thalamic nuclei with inhibitory inputs that are predominantly devoid of spatial selectivity, and that these signals modulate how these nuclei engage in both sensory processing and motor output during visually guided reaching behavior. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Stimulus-dependent modulation of visual neglect in a touch-screen cancellation task.

    PubMed

    Keller, Ingo; Volkening, Katharina; Garbacenkaite, Ruta

    2015-05-01

Patients with left-sided neglect frequently show omissions and repetitive behavior on cancellation tests. Using a touch-screen-based cancellation task, we tested how visual feedback and distracters influence the number of omissions and perseverations. Eighteen patients with left-sided visual neglect and 18 healthy controls performed four different cancellation tasks on an iPad touch screen: no feedback (the display did not change during the task), visual feedback (touched targets changed their color from black to green), visual feedback with distracters (20 distracters were evenly embedded in the display; detected targets changed their color from black to green), and vanishing targets (touched targets disappeared from the screen). Except for the condition with vanishing targets, neglect patients had significantly more omissions and perseverations than healthy controls in the remaining three subtests. Both conditions providing feedback by changing the target color showed the highest number of omissions. Erasure of targets nearly eliminated omissions completely. The highest rate of perseverations was observed in the no-feedback condition. The implementation of distracters led to a moderate number of perseverations. Visual feedback without distracters and vanishing targets abolished perseverations nearly completely. Visual feedback and the presence of distracters aggravated hemispatial neglect. This finding is compatible with impaired disengagement from the ipsilesional side as an important factor in visual neglect. Improvement of cancellation behavior with vanishing targets could have therapeutic implications. (c) 2015 APA, all rights reserved.

  19. A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating.

    PubMed

    Knips, Guido; Zibner, Stephan K U; Reimann, Hendrik; Schöner, Gregor

    2017-01-01

Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. At any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population-level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. 
Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp.
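
The online-updating behavior described above can be caricatured with a toy attractor system (our simplified sketch, not the authors' Dynamic Field Theory implementation): the movement variable relaxes toward the current target, so a mid-movement target change is absorbed automatically.

```python
def reach(target, x=0.0, rate=0.2, steps=60, shift_at=None, new_target=None):
    """Discrete-time attractor dynamics: x moves a fixed fraction of the
    remaining distance to the target each step; the target may change online."""
    trajectory = [x]
    for t in range(steps):
        if shift_at is not None and t == shift_at:
            target = new_target            # online target update mid-movement
        x += rate * (target - x)           # relax toward the current target
        trajectory.append(x)
    return trajectory

traj = reach(target=1.0, shift_at=20, new_target=-0.5)
print(round(traj[-1], 3))                  # ends near the shifted target, -0.5
```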

  20. A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating

    PubMed Central

    Knips, Guido; Zibner, Stephan K. U.; Reimann, Hendrik; Schöner, Gregor

    2017-01-01

Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. At any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population-level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. 
Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp. PMID:28303100

  1. Modulation of error-sensitivity during a prism adaptation task in people with cerebellar degeneration

    PubMed Central

    Shadmehr, Reza; Ohminami, Shinya; Tsutsumi, Ryosuke; Shirota, Yuichiro; Shimizu, Takahiro; Tanaka, Nobuyuki; Terao, Yasuo; Tsuji, Shoji; Ugawa, Yoshikazu; Uchimura, Motoaki; Inoue, Masato; Kitazawa, Shigeru

    2015-01-01

    Cerebellar damage can profoundly impair human motor adaptation. For example, if reaching movements are perturbed abruptly, cerebellar damage impairs the ability to learn from the perturbation-induced errors. Interestingly, if the perturbation is imposed gradually over many trials, people with cerebellar damage may exhibit improved adaptation. However, this result is controversial, since the differential effects of gradual vs. abrupt protocols have not been observed in all studies. To examine this question, we recruited patients with pure cerebellar ataxia due to cerebellar cortical atrophy (n = 13) and asked them to reach to a target while viewing the scene through wedge prisms. The prisms were computer controlled, making it possible to impose the full perturbation abruptly in one trial, or build up the perturbation gradually over many trials. To control visual feedback, we employed shutter glasses that removed visual feedback during the reach, allowing us to measure trial-by-trial learning from error (termed error-sensitivity), and trial-by-trial decay of motor memory (termed forgetting). We found that the patients benefited significantly from the gradual protocol, improving their performance with respect to the abrupt protocol by exhibiting smaller errors during the exposure block, and producing larger aftereffects during the postexposure block. Trial-by-trial analysis suggested that this improvement was due to increased error-sensitivity in the gradual protocol. Therefore, cerebellar patients exhibited an improved ability to learn from error if they experienced those errors gradually. This improvement coincided with increased error-sensitivity and was present in both groups of subjects, suggesting that control of error-sensitivity may be spared despite cerebellar damage. PMID:26311179
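
Trial-by-trial analyses of this kind typically use a linear state-space model in which a retention factor captures forgetting and an error-sensitivity factor captures learning from error. A minimal sketch with illustrative parameter values (our assumption, not the paper's fits) shows why a gradual schedule keeps experienced errors small:

```python
def adapt(perturbations, retention=0.99, error_sensitivity=0.2):
    """x tracks the adaptation state; each trial it decays toward baseline
    (forgetting) and is corrected by a fraction of the observed error."""
    x, errors = 0.0, []
    for p in perturbations:
        e = p - x                          # error experienced on this trial
        errors.append(e)
        x = retention * x + error_sensitivity * e
    return x, errors

abrupt  = [30.0] * 100                                # full shift at once
gradual = [30.0 * (t + 1) / 100 for t in range(100)]  # shift built up slowly

x_ab, e_ab = adapt(abrupt)
x_gr, e_gr = adapt(gradual)
print(round(max(abs(e) for e in e_ab), 1))   # abrupt: large early errors
print(round(max(abs(e) for e in e_gr), 1))   # gradual: errors stay small
```

In this framework, the patients' improvement under the gradual schedule corresponds to a larger effective error-sensitivity parameter, estimated trial by trial.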

  2. Directional learning, but no spatial mapping by rats performing a navigational task in an inverted orientation

    PubMed Central

    Valerio, Stephane; Clark, Benjamin J.; Chan, Jeremy H. M.; Frost, Carlton P.; Harris, Mark J.; Taube, Jeffrey S.

    2010-01-01

Previous studies have identified neurons throughout the rat limbic system that fire as a function of the animal's head direction (HD). This HD signal is particularly robust when rats locomote in the horizontal and vertical planes, but is severely attenuated when locomoting upside-down (Calton & Taube, 2005). Given the hypothesis that the HD signal represents an animal's sense of its directional heading, we evaluated whether rats could accurately navigate in an inverted (upside-down) orientation. The task required the animals to find an escape hole while locomoting inverted on a circular platform suspended from the ceiling. In Experiment 1, Long-Evans rats were trained to navigate to the escape hole by locomoting from either one or four start points. Interestingly, no animals from the 4-start point group reached criterion, even after 30 days of training. Animals in the 1-start point group reached criterion after about 6 training sessions. In Experiment 2, probe tests revealed that animals navigating from either 1 or 2 start points utilized distal visual landmarks for accurate orientation. However, subsequent probe tests revealed that their performance was markedly attenuated when required to navigate to the escape hole from a novel starting point. This absence of flexibility while navigating upside-down was confirmed in Experiment 3, where we show that the rats do not learn to reach a place, but instead learn separate trajectories to the target hole(s). Based on these results we argue that inverted navigation primarily involves a simple directional strategy based on visual landmarks. PMID:20109566

  3. Musical space synesthesia: automatic, explicit and conceptual connections between musical stimuli and space.

    PubMed

    Akiva-Kabiri, Lilach; Linkovski, Omer; Gertner, Limor; Henik, Avishai

    2014-08-01

    In musical-space synesthesia, musical pitches are perceived as having a spatially defined array. Previous studies showed that symbolic inducers (e.g., numbers, months) can modulate response according to the inducer's relative position on the synesthetic spatial form. In the current study we tested two musical-space synesthetes and a group of matched controls on three different tasks: musical-space mapping, spatial cue detection and a spatial Stroop-like task. In the free mapping task, both synesthetes exhibited a diagonal organization of musical pitch tones rising from bottom left to the top right. This organization was found to be consistent over time. In the subsequent tasks, synesthetes were asked to ignore an auditory or visually presented musical pitch (irrelevant information) and respond to a visual target (i.e., an asterisk) on the screen (relevant information). Compatibility between musical pitch and the target's spatial location was manipulated to be compatible or incompatible with the synesthetes' spatial representations. In the spatial cue detection task participants had to press the space key immediately upon detecting the target. In the Stroop-like task, they had to reach the target by using a mouse cursor. In both tasks, synesthetes' performance was modulated by the compatibility between irrelevant and relevant spatial information. Specifically, the target's spatial location conflicted with the spatial information triggered by the irrelevant musical stimulus. These results reveal that for musical-space synesthetes, musical information automatically orients attention according to their specific spatial musical-forms. The present study demonstrates the genuineness of musical-space synesthesia by revealing its two hallmarks-automaticity and consistency. In addition, our results challenge previous findings regarding an implicit vertical representation for pitch tones in non-synesthete musicians. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Early visuomotor representations revealed from evoked local field potentials in motor and premotor cortical areas.

    PubMed

    O'Leary, John G; Hatsopoulos, Nicholas G

    2006-09-01

Local field potentials (LFPs) recorded from primary motor cortex (MI) have been shown to be tuned to the direction of visually guided reaching movements, but MI LFPs have not been shown to be tuned to the direction of an upcoming movement during the delay period that precedes movement in an instructed-delay reaching task. Also, LFPs in dorsal premotor cortex (PMd) have not been investigated in this context. We therefore recorded LFPs from MI and PMd of monkeys (Macaca mulatta) and investigated whether these LFPs were tuned to the direction of the upcoming movement during the delay period. In three frequency bands we identified LFP activity that was phase-locked to the onset of the instruction stimulus that specified the direction of the upcoming reach. The amplitude of this activity was often tuned to target direction, with tuning widths that varied across different electrodes and frequency bands. Single-trial decoding of LFPs demonstrated that prediction of target direction from this activity was possible well before the actual movement was initiated. Decoding performance was significantly better in the slowest-frequency band compared with that in the other two higher-frequency bands. Although these results demonstrate that task-related information is available in the local field potentials, correlations among these signals recorded from a densely packed array of electrodes suggest that adequate decoding performance for neural prosthesis applications may be limited as the number of simultaneous electrode recordings is increased.

  5. Calibration of visually guided reaching is driven by error-corrective learning and internal dynamics.

    PubMed

    Cheng, Sen; Sabes, Philip N

    2007-04-01

    The sensorimotor calibration of visually guided reaching changes on a trial-to-trial basis in response to random shifts in the visual feedback of the hand. We show that a simple linear dynamical system is sufficient to model the dynamics of this adaptive process. In this model, an internal variable represents the current state of sensorimotor calibration. Changes in this state are driven by error feedback signals, which consist of the visually perceived reach error, the artificial shift in visual feedback, or both. Subjects correct for ≥20% of the error observed on each movement, despite being unaware of the visual shift. The state of adaptation is also driven by internal dynamics, consisting of a decay back to a baseline state and a "state noise" process. State noise includes any source of variability that directly affects the state of adaptation, such as variability in sensory feedback processing, the computations that drive learning, or the maintenance of the state. This noise is accumulated in the state across trials, creating temporal correlations in the sequence of reach errors. These correlations allow us to distinguish state noise from sensorimotor performance noise, which arises independently on each trial from random fluctuations in the sensorimotor pathway. We show that these two noise sources contribute comparably to the overall magnitude of movement variability. Finally, the dynamics of adaptation measured with random feedback shifts generalize to the case of constant feedback shifts, allowing for a direct comparison of our results with more traditional blocked-exposure experiments.
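
    A linear state-space model of this kind can be sketched in a few lines (a minimal hypothetical simulation: the retention rate, learning rate, and noise magnitudes below are arbitrary illustrative values, not the parameters fitted in the study):

```python
import numpy as np

rng = np.random.default_rng(0)

a = 0.95            # retention: decay of the adaptation state toward baseline
b = 0.25            # learning rate: fraction of each observed error corrected
sigma_state = 0.3   # "state noise": accumulates in the state across trials
sigma_perf = 0.5    # performance noise: arises independently on each trial

n_trials = 500
shift = rng.normal(0, 1.0, n_trials)  # random shifts in visual feedback

x = 0.0                       # internal state of sensorimotor calibration
errors = np.empty(n_trials)
for t in range(n_trials):
    # perceived reach error = state + imposed feedback shift + performance noise
    errors[t] = x + shift[t] + rng.normal(0, sigma_perf)
    # state update: decay toward baseline, error-corrective learning, state noise
    x = a * x - b * errors[t] + rng.normal(0, sigma_state)

# state noise persists across trials, producing temporal correlations in errors
lag1 = np.corrcoef(errors[:-1], errors[1:])[0, 1]
print(round(lag1, 3))
```

    The lag-1 correlation of the simulated error sequence is the kind of statistic that lets state noise be distinguished from trial-independent performance noise.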

  6. Effects of strabismic amblyopia and strabismus without amblyopia on visuomotor behavior, I: saccadic eye movements.

    PubMed

    Niechwiej-Szwedo, Ewa; Chandrakumar, Manokaraananthan; Goltz, Herbert C; Wong, Agnes M F

    2012-11-01

    It has previously been shown that anisometropic amblyopia affects the programming and execution of saccades. The aim of the current study was to investigate the impact of strabismic amblyopia on saccade performance. Fourteen adults with strabismic amblyopia, 13 adults with strabismus without amblyopia, and 14 visually normal adults performed saccades and reach-to-touch movements to targets presented at ± 5° and ± 10° eccentricity during binocular and monocular viewing. Latency, amplitude, and peak velocity of primary and secondary saccades were measured. In contrast to visually normal participants who had shorter primary saccade latency during binocular viewing, no binocular advantage was found in patients with strabismus with or without amblyopia. Patients with amblyopia had longer saccade latency during amblyopic eye viewing (P < 0.0001); however, there were no significant differences in saccade amplitude precision among the three groups across viewing conditions. Further analysis showed that only patients with severe amblyopia and no stereopsis (n = 4) exhibited longer latency (which was more pronounced for more central targets; P < 0.0001), and they also had reduced amplitude precision during amblyopic eye viewing. In contrast, patients with mild amblyopia (n = 5) and no stereopsis had normal latency and reduced precision during amblyopic eye viewing (P < 0.001), whereas those with gross stereopsis (n = 5) had normal latency and precision. There were no differences in peak velocity among the groups. Distinct patterns of saccade performance according to different levels of visual acuity and stereoscopic losses in strabismic amblyopia were found. These findings were in contrast to those in anisometropic amblyopia in which the altered saccade performance was independent of the extent of visual acuity or stereoscopic deficits. These results were most likely due to different long-term sensory suppression mechanisms in strabismic versus anisometropic amblyopia.

  7. Priming and the guidance by visual and categorical templates in visual search.

    PubMed

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether, the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, when priming is controlled for, categorical- and visual-based templates enhance search guidance to a similar degree.

  8. Investigation of Neural Strategies of Visual Search

    NASA Technical Reports Server (NTRS)

    Krauzlis, Richard J.

    2003-01-01

    The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict the changes in the subject's performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we have recorded from neurons in the superior colliculus (SC) during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we have developed a model of eye movement decisions based on the principle of winner-take-all. The model incorporates the idea that the overt saccade choice reflects only one of the multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.
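
    A threshold-crossing, winner-take-all decision of this kind can be illustrated with a minimal race model (a hypothetical sketch only: the drive strengths, noise level, and margin below are illustrative, not values fitted to the SC recordings):

```python
import random

def winner_take_all(drives, margin=0.5, noise=0.1, max_steps=10_000, seed=1):
    """Noisy accumulators race; a choice is made when the leader's activity
    exceeds the runner-up's by a fixed margin (the winner-take-all threshold).

    Returns (winning index, latency in steps). Less discriminable targets
    (similar drives) yield longer latencies and more erroneous choices.
    """
    rng = random.Random(seed)
    acc = [0.0] * len(drives)
    for step in range(1, max_steps + 1):
        for i, d in enumerate(drives):
            acc[i] += d + rng.gauss(0.0, noise)  # drive plus accumulation noise
        ranked = sorted(range(len(acc)), key=lambda i: acc[i], reverse=True)
        if acc[ranked[0]] - acc[ranked[1]] >= margin:
            return ranked[0], step
    return ranked[0], max_steps

# an easy discrimination: the target location (index 0) is driven most strongly
choice, latency = winner_take_all([0.05, 0.01, 0.01])
print(choice, latency)
```

    Because both the choice and the crossing time fall out of the same race, a model of this family can account simultaneously for saccade latency and percent correct, which is the property highlighted in the abstract.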

  9. Tunnel vision: sharper gradient of spatial attention in autism.

    PubMed

    Robertson, Caroline E; Kravitz, Dwight J; Freyberg, Jan; Baron-Cohen, Simon; Baker, Chris I

    2013-04-17

    Enhanced perception of detail has long been regarded as a hallmark of autism spectrum conditions (ASC), but its origins are unknown. Normal sensitivity on all fundamental perceptual measures-visual acuity, contrast discrimination, and flicker detection-is strongly established in the literature. If individuals with ASC do not have superior low-level vision, how is perception of detail enhanced? We argue that this apparent paradox can be resolved by considering visual attention, which is known to enhance basic visual sensitivity, resulting in greater acuity and lower contrast thresholds. Here, we demonstrate that the focus of attention and the concomitant enhancement of perception are sharper in human individuals with ASC than in matched controls. Using a simple visual acuity task embedded in a standard cueing paradigm, we mapped the spatial and temporal gradients of attentional enhancement by varying the distance and onset time of visual targets relative to an exogenous cue, which obligatorily captures attention. Individuals with ASC demonstrated a greater fall-off in performance with distance from the cue than controls, indicating a sharper spatial gradient of attention. Further, this sharpness was highly correlated with the severity of autistic symptoms in ASC, as well as autistic traits across both ASC and control groups. These findings establish the presence of a form of "tunnel vision" in ASC, with far-reaching implications for our understanding of the social and neurobiological aspects of autism.

  10. Global Statistical Learning in a Visual Search Task

    ERIC Educational Resources Information Center

    Jones, John L.; Kaschak, Michael P.

    2012-01-01

    Locating a target in a visual search task is facilitated when the target location is repeated on successive trials. Global statistical properties also influence visual search, but have often been confounded with local regularities (i.e., target location repetition). In two experiments, target locations were not repeated for four successive trials,…

  11. Meet our Neighbours - a tactile experience

    NASA Astrophysics Data System (ADS)

    Canas, L.; Lobo Correia, A.

    2013-09-01

    Planetary science is a key field in astronomy that draws broad attention and engages large numbers of enthusiasts. In its essence it is a visual science, and the current resources and activities for the inclusion of visually impaired children, although increasing, are still costly and somewhat scarce. There is therefore a pressing need to develop more low-cost resources in order to provide experiences that can reach everyone, even the most socially deprived communities. "Meet our neighbours! - a tactile experience" aims to promote and provide inclusive activities for visually impaired children and their non-visually impaired peers through hands-on, low-cost astronomy activities. It is aimed at children from 6 to 12 years old and has produced a set of 13 tactile images of the main objects of the Solar System that can be used in schools, science centres and outreach associations. By addressing several common problems through tactile resources, this project presents ways to provide low-cost solutions (avoiding expensive tactile printing costs), to promote inclusion and interactive hands-on activities for visually impaired children and their non-visually impaired peers, and to create dynamic interactions between them based on oral knowledge transmission. Here we describe the process of implementing this initiative with target communities: establishing a bridge between scientists, children and teachers. We also discuss the struggles and challenges encountered during the project and the enriching experience of engaging these specific groups with astronomy, broadening horizons in an overall experience accessible to all.

  12. Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity.

    PubMed

    Loria, Tristan; de Grosbois, John; Tremblay, Luc

    2016-09-01

    At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighting of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (i.e., 20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. However, no such reductions were found for the visual or audiovisual feedback conditions. Further, providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate a visual and auditory cue at peak limb velocity, even when peak velocity is the critical part of the trajectory.

  13. Effects of Pictorial Cues on Reaching Depend on the Distinctiveness of Target Objects

    PubMed Central

    Himmelbach, Marc

    2013-01-01

    There is an ongoing debate about the conditions under which learned object sizes influence visuomotor control under preserved stereovision. Using meaningful objects (matchboxes of locally well-known brands in the UK), a previous study nicely showed that the recognition of these objects influences action programming, in terms of reach amplitude and grasp pre-shaping, even under binocular vision. Using the same paradigm, we demonstrated that short-term learning of colour-size associations was not sufficient to induce any visuomotor effects under binocular viewing conditions. Now we used the same matchboxes, for which the familiarity effect was shown in the UK, with German participants who had never seen these objects before. We asked whether a high degree of distinctiveness alone, or instead actual prior familiarity with these objects, is required to affect motor computations. We found that under monocular and binocular viewing conditions the learned size and location significantly influenced the amplitude of the reaching component. In contrast, the maximum grip aperture remained unaffected under binocular vision. We conclude that visual distinctiveness is sufficient to form reliable associations in short-term learning that influence reaching even with preserved stereovision. Grasp pre-shaping instead seems to be less susceptible to such perceptual effects. PMID:23382882

  14. Feedforward control strategies of subjects with transradial amputation in planar reaching.

    PubMed

    Metzger, Anthony J; Dromerick, Alexander W; Schabowsky, Christopher N; Holley, Rahsaan J; Monroe, Brian; Lum, Peter S

    2010-01-01

    The rate of upper-limb amputations is increasing, and the rejection rate of prosthetic devices remains high. People with upper-limb amputation do not fully incorporate prosthetic devices into their activities of daily living. By understanding the reaching behaviors of prosthesis users, researchers can alter prosthetic devices and develop training protocols to improve the acceptance of prosthetic limbs. By observing the reaching characteristics of the nondisabled arms of people with amputation, we can begin to understand how the brain alters its motor commands after amputation. We asked subjects to perform rapid reaching movements to two targets with and without visual feedback. Subjects performed the tasks with both their prosthetic and nondisabled arms. We calculated endpoint error, trajectory error, and variability and compared them with those of nondisabled control subjects. We found no significant abnormalities in the prosthetic limb. However, we found an abnormal leftward trajectory error (in right arms) in the nondisabled arm of prosthetic users in the vision condition. In the no-vision condition, the nondisabled arm displayed abnormal leftward endpoint errors and abnormally higher endpoint variability. In the vision condition, peak velocity was lower and movement duration was longer in both arms of subjects with amputation. These abnormalities may reflect the cortical reorganization associated with limb loss.

  15. There May Be More to Reaching than Meets the Eye: Re-Thinking Optic Ataxia

    ERIC Educational Resources Information Center

    Jackson, Stephen R.; Newport, Roger; Husain, Masud; Fowlie, Jane E.; O'Donoghue, Michael; Bajaj, Nin

    2009-01-01

    Optic ataxia (OA) is generally thought of as a disorder of visually guided reaching movements that cannot be explained by any simple deficit in visual or motor processing. In this paper we offer a new perspective on optic ataxia; we argue that the popular characterisation of this disorder is misleading and is unrepresentative of the pattern of…

  16. Search guidance is proportional to the categorical specificity of a target cue.

    PubMed

    Schmidt, Joseph; Zelinsky, Gregory J

    2009-10-01

    Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.

  17. Perception of 3-D location based on vision, touch, and extended touch

    PubMed Central

    Giudice, Nicholas A.; Klatzky, Roberta L.; Bennett, Christopher R.; Loomis, Jack M.

    2012-01-01

    Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate. PMID:23070234

  18. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    PubMed Central

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  20. Feasibility of 4D flow MR imaging of the brain with either Cartesian y-z radial sampling or k-t SENSE: comparison with 4D Flow MR imaging using SENSE.

    PubMed

    Sekine, Tetsuro; Amano, Yasuo; Takagi, Ryo; Matsumura, Yoshio; Murai, Yasuo; Kumita, Shinichiro

    2014-01-01

    A drawback of time-resolved 3-dimensional phase contrast magnetic resonance (4D Flow MR) imaging is its lengthy scan time for clinical application in the brain. We assessed the feasibility for flow measurement and visualization of 4D Flow MR imaging using Cartesian y-z radial sampling and using k-t sensitivity encoding (k-t SENSE), by comparison with the standard scan using SENSE. Sixteen volunteers underwent 3 types of 4D Flow MR imaging of the brain using a 3.0-tesla scanner. The standard scan, 4D Flow MR imaging with SENSE, was performed first, followed by the 2 types of acceleration scans: with Cartesian y-z radial sampling and with k-t SENSE. We measured peak systolic velocity (PSV) and blood flow volume (BFV) in 9 arteries and the percentage of particles arriving from the emitter plane at the target plane in 3 arteries, visually graded image quality in 9 arteries, and compared these quantitative and visual data between the standard scan and each acceleration scan. The 4D Flow MR imaging examinations were completed in all but one volunteer, who did not undergo the last examination because of headache. Each acceleration scan reduced scan time by 50% compared with the standard scan. The k-t SENSE imaging underestimated PSV and BFV (P < 0.05). There were significant correlations for PSV and BFV between the standard scan and each acceleration scan (P < 0.01). The percentage of particles reaching the target plane did not differ between the standard scan and each acceleration scan. On visual assessment, y-z radial sampling deteriorated the image quality of the 3 arteries. Cartesian y-z radial sampling is feasible for measuring flow, and k-t SENSE offers sufficient flow visualization; both allow acquisition of 4D Flow MR imaging with a shorter scan time.

  1. The Effects of Mirror Feedback during Target Directed Movements on Ipsilateral Corticospinal Excitability

    PubMed Central

    Yarossi, Mathew; Manuweera, Thushini; Adamovich, Sergei V.; Tunik, Eugene

    2017-01-01

    Mirror visual feedback (MVF) training is a promising technique to promote activation in the lesioned hemisphere following stroke, and aid recovery. However, current outcomes of MVF training are mixed, in part, due to variability in the task undertaken during MVF. The present study investigated the hypothesis that movements directed toward visual targets may enhance MVF modulation of motor cortex (M1) excitability ipsilateral to the trained hand compared to movements without visual targets. Ten healthy subjects participated in a 2 × 2 factorial design in which feedback (veridical, mirror) and presence of a visual target (target present, target absent) for a right index-finger flexion task were systematically manipulated in a virtual environment. To measure M1 excitability, transcranial magnetic stimulation (TMS) was applied to the hemisphere ipsilateral to the trained hand to elicit motor evoked potentials (MEPs) in the untrained first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles at rest prior to and following each of four 2-min blocks of 30 movements (B1–B4). Targeted movement kinematics without visual feedback was measured before and after training to assess learning and transfer. FDI MEPs were decreased in B1 and B2 when movements were made with veridical feedback and visual targets were absent. FDI MEPs were decreased in B2 and B3 when movements were made with mirror feedback and visual targets were absent. FDI MEPs were increased in B3 when movements were made with mirror feedback and visual targets were present. Significant MEP changes were not present for the uninvolved ADM, suggesting a task-specific effect. Analysis of kinematics revealed learning occurred in visual target-directed conditions, but transfer was not sensitive to mirror feedback. Results are discussed with respect to current theoretical mechanisms underlying MVF-induced changes in ipsilateral excitability. PMID:28553218

  2. Within-Hemifield Competition in Early Visual Areas Limits the Ability to Track Multiple Objects with Attention

    PubMed Central

    Alvarez, George A.; Cavanagh, Patrick

    2014-01-01

    It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651

  3. Challenges and opportunities in the high-resolution cryo-EM visualization of microtubules and their binding partners.

    PubMed

    Nogales, Eva; Kellogg, Elizabeth H

    2017-10-01

    As non-crystallizable polymers, microtubules have been the target of cryo-electron microscopy (cryo-EM) studies since the technique was first established. Over the years, image processing strategies have been developed that account for the unique, pseudo-helical symmetry of the microtubule. With recent progress in data quality and data processing, cryo-EM reconstructions are now reaching resolutions that allow the generation of atomic models of microtubules and the factors that bind them. These include cellular partners that contribute to microtubule cellular functions, or small ligands that interfere with those functions in the treatment of cancer. The stage is set to generate a family portrait of all identified microtubule-interacting proteins and to use cryo-EM as a drug development tool in the targeting of tubulin. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Repetition Blindness for Natural Images of Objects with Viewpoint Changes

    PubMed Central

    Buffat, Stéphane; Plantier, Justin; Roumes, Corinne; Lorenceau, Jean

    2013-01-01

    When stimuli are repeated in a rapid serial visual presentation (RSVP), observers sometimes fail to report the second occurrence of a target. This phenomenon is referred to as "repetition blindness" (RB). We report an RSVP experiment with photographs in which we manipulated object viewpoints between the first and second occurrences of a target (0°, 45°, or 90° changes), and spatial frequency (SF) content. Natural images were spatially filtered to produce low, medium, or high SF stimuli. RB was observed for all filtering conditions. Surprisingly, for full-spectrum (FS) images, RB increased significantly as the viewpoint change reached 90°. For filtered images, a similar pattern of results was found for all conditions except for medium SF stimuli. These findings suggest that object recognition in RSVP is subtended by viewpoint-specific representations for all spatial frequencies except medium ones. PMID:23346069

  5. Divided attention can enhance memory encoding: the attentional boost effect in implicit memory.

    PubMed

    Spataro, Pietro; Mulligan, Neil W; Rossi-Arnaud, Clelia

    2013-07-01

    Distraction during encoding has long been known to disrupt later memory performance. Contrary to this long-standing result, we show that detecting an infrequent target in a dual-task paradigm actually improves memory encoding for a concurrently presented word, above and beyond the performance reached in the full-attention condition. This absolute facilitation was obtained in 2 perceptual implicit tasks (lexical decision and word fragment completion) but not in a conceptual implicit task (semantic classification). In the case of recognition memory, the facilitation was relative, bringing accuracy in the divided attention condition up to the level of accuracy in the full attention condition. The findings follow from the hypothesis that the attentional boost effect reflects enhanced visual encoding of the study stimulus consequent to the transient orienting response to the dual-task target. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  6. Detailed fetal anatomy assessment in the first trimester at 11, 12 and 13 weeks of gestation.

    PubMed

    Luchi, Carlo; Schifano, Martina; Sacchini, Clara; Nanini, Chiara; Sceusa, Francesca; Capriello, Patrizio; Genazzani, Andrea R

    2012-06-01

    The aim of the present observational study was to evaluate the feasibility of a morphological scan and determine the detection rate of fetal organs, structures and systems in the first trimester of pregnancy. 977 women with singleton pregnancies attending our Fetal Medicine Section for first-trimester aneuploidy screening were enrolled and divided into three groups according to gestational age and crown-rump-length measurement. Scans targeting a total of 26 fetal anatomical structures were performed by a single operator. The overall detection rate was 96% at 11 weeks and reached 100% at 12 and 13 weeks, with a statistically significant difference between 11 and 12/13 weeks for the majority of the investigated fetal anatomical structures. Evaluation of most fetal anatomical structures is feasible with high accuracy in the first trimester. Visualization of the majority of the targeted fetal organs improves from 11 to 13 weeks.

  7. High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.

    PubMed

    Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min

    2012-01-01

    The difficulty and limitations of small target detection methods for high-resolution remote sensing data have become a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper constructs a model of information perception that exploits the fly's fast, accurate small-target detection in complex, varying natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing the prevailing simulation mechanisms of fly visual systems, we propose a fly-imitated visual information processing method for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the information acquisition, compression, and fusion mechanisms of the fly visual system, the function of its pool cells, and its nonlinear self-adaptive character. Experiments verify the feasibility and rationality of the proposed small target detection model and the fly-imitated visual perception method.

  8. Visual and non-visual motion information processing during pursuit eye tracking in schizophrenia and bipolar disorder.

    PubMed

    Trillenberg, Peter; Sprenger, Andreas; Talamo, Silke; Herold, Kirsten; Helmchen, Christoph; Verleger, Rolf; Lencer, Rebekka

    2017-04-01

    Despite many reports on visual processing deficits in psychotic disorders, studies are needed on the integration of visual and non-visual components of eye movement control to improve the understanding of sensorimotor information processing in these disorders. Non-visual inputs to eye movement control include prediction of future target velocity from extrapolation of past visual target movement and anticipation of future target movements. It is unclear whether non-visual input is impaired in patients with schizophrenia. We recorded smooth pursuit eye movements in 21 patients with schizophrenia spectrum disorder, 22 patients with bipolar disorder, and 24 controls. In a foveo-fugal ramp task, the target was either continuously visible or was blanked during movement. We determined peak gain (measuring overall performance), initial eye acceleration (measuring visually driven pursuit), deceleration after target extinction (measuring prediction), eye velocity drifts before onset of target visibility (measuring anticipation), and residual gain during blanking intervals (measuring anticipation and prediction). In both patient groups, initial eye acceleration was decreased and the ability to adjust eye acceleration to increasing target acceleration was impaired. In contrast, neither deceleration nor eye drift velocity was reduced in patients, implying unimpaired non-visual contributions to pursuit drive. Disturbances of eye movement control in psychotic disorders appear to be a consequence of deficits in sensorimotor transformation rather than a pure failure in adding cognitive contributions to pursuit drive in higher-order cortical circuits. More generally, this deficit might reflect a fundamental imbalance between processing external input and acting according to internal preferences.

  9. Integration of bio-inspired, control-based visual and olfactory data for the detection of an elusive target

    NASA Astrophysics Data System (ADS)

    Duong, Tuan A.; Duong, Nghi; Le, Duong

    2017-01-01

    In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be reliably detected from either sensory data stream alone. The bio-inspired visual system is based on a model of the extended visual pathway, consisting of saccadic eye movements and the visual pathway (vertebrate retina, lateral geniculate nucleus, and visual cortex), to enable powerful target detection from noisy, partial, and incomplete visual data. The olfactory receptor algorithm, namely spatial-invariant independent component analysis, which was developed from olfactory receptor-electronic nose (enose) data at Caltech, is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets a cornerstone for effective, low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.

  10. Combined visual illusion effects on the perceived index of difficulty and movement outcomes in discrete and continuous Fitts' tapping.

    PubMed

    Alphonsa, Sushma; Dai, Boyi; Benham-Deal, Tami; Zhu, Qin

    2016-01-01

    The speed-accuracy trade-off is a fundamental movement problem that has been extensively investigated. It is well established that the speed at which one can move to tap targets depends on how large the targets are and how far apart they are. These spatial properties of the targets can be quantified by the index of difficulty (ID). Two visual illusions are known to affect the perception of target size and movement amplitude: the Ebbinghaus illusion and the Müller-Lyer illusion. We created visual images that combined these two visual illusions to manipulate the perceived ID, and then examined people's visual perception of the targets in illusory context as well as their performance in tapping those targets in both discrete and continuous manners. The findings revealed that the combined visual illusions affected the perceived ID similarly in both discrete and continuous judgment conditions. However, the movement outcomes were affected by the combined visual illusions according to the tapping mode. In discrete tapping, the combined visual illusions affected both movement accuracy and movement amplitude such that the effective ID resembled the perceived ID. In continuous tapping, none of the movement outcomes were affected by the combined visual illusions: participants tapped the targets with equally high speed and accuracy in all visual conditions. Based on these findings, we concluded that distinct visual-motor control mechanisms are responsible for the execution of discrete and continuous Fitts' tapping: whereas discrete tapping relies on allocentric (object-centered) information to plan the action, continuous tapping relies on egocentric (self-centered) information to control it. These results support the planning-control model of rapid aiming movements.
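    The index of difficulty referred to in this abstract is conventionally computed from target amplitude and width; a minimal sketch using the Shannon formulation of Fitts' ID (the exact formulation and the numeric values used in the study are not given here, so the numbers below are purely illustrative):

```python
import math

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty, in bits.

    amplitude: center-to-center distance between the two targets
    width: target width along the axis of movement
    """
    return math.log2(amplitude / width + 1)

# Illustrative values: halving target width raises the ID,
# which predicts slower tapping at matched accuracy.
print(index_of_difficulty(240, 30))  # ~3.17 bits
print(index_of_difficulty(240, 15))  # ~4.09 bits
```

    The "effective ID" mentioned above is the same quantity recomputed from the movement amplitudes and endpoint spreads actually produced, rather than from the nominal target layout.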

  11. Recent results in visual servoing

    NASA Astrophysics Data System (ADS)

    Chaumette, François

    2008-06-01

    Visual servoing techniques use the data provided by a vision sensor to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, or aerial robots, but they can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile-target-tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from a single camera mounted on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing the desired degrees of freedom to be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties regarding stability, robustness with respect to noise or calibration errors, the robot's 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field by the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
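    The control law driving the features toward their desired value is classically the textbook image-based form v = -λ L⁺ (s - s*), where L is the interaction matrix relating feature velocities to the camera velocity twist. A minimal numpy sketch of that standard law (the matrix and feature values below are illustrative placeholders, not from the talk):

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Image-based visual servoing law: v = -lambda * pinv(L) @ (s - s*).

    s: current visual-feature vector, s_star: desired feature vector,
    L: interaction matrix (image Jacobian) mapping the camera's 6-DOF
    velocity twist to feature velocities.
    """
    error = s - s_star
    return -lam * np.linalg.pinv(L) @ error

# Toy setup: 2 image points (4 feature coordinates), 6-DOF camera.
L = np.random.default_rng(0).normal(size=(4, 6))
s = np.array([0.10, 0.20, -0.10, 0.05])
s_star = np.zeros(4)
v = ibvs_velocity(s, s_star, L)
print(v.shape)  # (6,)
```

    With an exact interaction matrix this drives the feature error exponentially to zero; the stability and robustness properties discussed in the abstract arise because L is only ever approximated in practice.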

  12. Virtual reality method to analyze visual recognition in mice.

    PubMed

    Young, Brent Kevin; Brennan, Jayden Nicole; Wang, Ping; Tian, Ning

    2018-01-01

    Behavioral tests have been extensively used to measure the visual function of mice. To determine how precisely mice perceive certain visual cues, it is necessary to have a quantifiable measurement of their behavioral responses. Recently, virtual reality tests have been utilized for a variety of purposes, from analyzing hippocampal cell functionality to measuring visual acuity. Despite the widespread use of these tests, the training required for mice to recognize a variety of different visual targets, and their performance on the behavioral tests, has not been thoroughly characterized. We have developed a virtual reality behavior testing approach that can assay a variety of different aspects of visual perception, including color/luminance and motion detection. When tested for the ability to detect a color/luminance target or a moving target, mice were able to discern the designated target after 9 days of continuous training. However, the quality of their performance was significantly affected by the complexity of the visual target and by their ability to navigate on a spherical treadmill. Importantly, mice retained memory of their visual recognition for at least three weeks after the end of their behavioral training.

  13. Preservative-free tafluprost/timolol fixed combination: a new opportunity in the treatment of glaucoma.

    PubMed

    Konstas, Anastasios G P; Holló, Gabor

    2016-06-01

    Medical therapy of glaucoma aims to maintain the patient's visual function and quality of life. This generally commences with monotherapy, but it is often difficult to reach the predetermined target pressure with this approach. Fixed combinations (FCs) are therefore selected as the next step of the medical therapy algorithm. By employing a prostaglandin/timolol fixed combination (PTFC) the desired target 24-hour intraocular pressure can be reached in many glaucoma patients with the convenience of once-a-day administration and the associated high rate of adherence. The current role and value of FCs in the medical therapy of glaucoma is critically appraised. Special attention is paid to the PTFCs and the emerging role of preservative-free PTFCs. This review summarizes existing information on the efficacy and tolerability of the new preservative-free tafluprost/timolol FC (Taptiqom®). The preservative-free tafluprost/timolol FC represents a promising stepwise treatment option for those patients whose intraocular pressure is insufficiently controlled with available monotherapy options. This novel FC has the potential to substantially improve glaucoma management and through evolution of the current glaucoma treatment paradigm, to become a core therapeutic option in the future. Nonetheless, future research is needed to better delineate the therapeutic role of current and future preservative-free FCs in glaucoma therapy.

  14. Effects of aging on pointing movements under restricted visual feedback conditions.

    PubMed

    Zhang, Liancun; Yang, Jiajia; Inai, Yoshinobu; Huang, Qiang; Wu, Jinglong

    2015-04-01

    The goal of this study was to investigate the effects of aging on pointing movements under restricted visual feedback of hand movement and target location. Fifteen young subjects and fifteen elderly subjects performed pointing movements under four visual feedback conditions: full visual feedback of hand movement and target location (FV), no visual feedback of hand movement or target location (NV), no visual feedback of hand movement (NM), and no visual feedback of target location (NT). This study suggested that Fitts' law holds for pointing movements of elderly adults under different visual restriction conditions. Moreover, a significant main effect of aging on movement time was found in all four tasks; peripheral and central changes may be the key factors underlying these differences. Furthermore, no significant main effect of age on mean accuracy rate was found under restricted visual feedback conditions. The present study suggested that the elderly subjects made very similar use of the available sensory information as the young subjects under restricted visual feedback conditions. In addition, during the pointing movement, information about the hand's movement was more useful than information about the target location for both young and elderly subjects. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Sounds Activate Visual Cortex and Improve Visual Discrimination

    PubMed Central

    Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.

    2014-01-01

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419

  16. Dividing time: concurrent timing of auditory and visual events by young and elderly adults.

    PubMed

    McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H

    2010-07-01

    This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.

  17. Comparative Effectiveness of Targeted Prostate Biopsy Using Magnetic Resonance Imaging Ultrasound Fusion Software and Visual Targeting: a Prospective Study.

    PubMed

    Lee, Daniel J; Recabal, Pedro; Sjoberg, Daniel D; Thong, Alan; Lee, Justin K; Eastham, James A; Scardino, Peter T; Vargas, Hebert Alberto; Coleman, Jonathan; Ehdaie, Behfar

    2016-09-01

    We compared the diagnostic outcomes of magnetic resonance-ultrasound fusion and visually targeted biopsy for targeting regions of interest on prostate multiparametric magnetic resonance imaging. Patients presenting for prostate biopsy with regions of interest on multiparametric magnetic resonance imaging underwent magnetic resonance imaging targeted biopsy. For each region of interest 2 visually targeted cores were obtained, followed by 2 cores using a magnetic resonance-ultrasound fusion device. Our primary end point was the difference in the detection of high grade (Gleason 7 or greater) and any grade cancer between visually targeted and magnetic resonance-ultrasound fusion, investigated using McNemar's method. Secondary end points were the difference in detection rate by biopsy location using a logistic regression model and the difference in median cancer length using the Wilcoxon signed rank test. We identified 396 regions of interest in 286 men. The difference in the detection of high grade cancer between magnetic resonance-ultrasound fusion biopsy and visually targeted biopsy was -1.4% (95% CI -6.4 to 3.6, p=0.6) and for any grade cancer the difference was 3.5% (95% CI -1.9 to 8.9, p=0.2). Median cancer length detected by magnetic resonance-ultrasound fusion and visually targeted biopsy was 5.5 vs 5.8 mm, respectively (p=0.8). Magnetic resonance-ultrasound fusion biopsy detected 15% more cancers in the transition zone (p=0.046) and visually targeted biopsy detected 11% more high grade cancer at the prostate base (p=0.005). Only 52% of all high grade cancers were detected by both techniques. We found no evidence of a significant difference in the detection of high grade or any grade cancer between visually targeted and magnetic resonance-ultrasound fusion biopsy. However, the performance of each technique varied in specific biopsy locations and the outcomes of both techniques were complementary. 
Combining visually targeted biopsy and magnetic resonance-ultrasound fusion biopsy may optimize the detection of prostate cancer. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
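    McNemar's method, as used for the primary end point above, compares two paired binary tests using only the discordant pairs (lesions positive by one technique but not the other). A minimal sketch of that standard statistic (the counts below are hypothetical, not the study's data):

```python
from math import erf, sqrt

def mcnemar(b, c):
    """McNemar chi-square (no continuity correction) for paired binary outcomes.

    b: count positive by method A only; c: count positive by method B only.
    Returns (statistic, two-sided p-value) using the chi-square(1) survival
    function, which equals 2 * (1 - Phi(sqrt(statistic))).
    """
    stat = (b - c) ** 2 / (b + c)
    z = sqrt(stat)
    p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
    return stat, p

# Hypothetical discordant counts: 12 lesions detected only by fusion,
# 17 only by visual targeting.
stat, p = mcnemar(12, 17)
```

    The concordant pairs (detected by both or by neither technique) do not enter the statistic, which is why the abstract's observation that only 52% of high grade cancers were found by both methods is a separate, complementary finding.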

  18. Within-hemifield competition in early visual areas limits the ability to track multiple objects with attention.

    PubMed

    Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick

    2014-08-27

    It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. Copyright © 2014 the authors.
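    The frequency-tagging readout described above is typically computed as the FFT amplitude at each stimulus's flicker frequency: a larger amplitude at a target's tag frequency than at a distractor's indicates attentional modulation. A minimal sketch on synthetic data (the frequencies, amplitudes, and sampling rate are illustrative, not the study's parameters):

```python
import numpy as np

def amplitude_at(signal, freq, fs):
    """Single-sided FFT amplitude of `signal` at `freq` (sampling rate fs)."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / n
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Synthetic EEG-like trace, 2 s at 500 Hz: an "attended" target tagged
# at 12 Hz with larger amplitude, a distractor tagged at 15 Hz, plus noise.
fs = 500
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
eeg = 2.0 * np.sin(2 * np.pi * 12 * t) + 1.0 * np.sin(2 * np.pi * 15 * t)
eeg += rng.normal(scale=0.5, size=t.size)
print(amplitude_at(eeg, 12, fs) > amplitude_at(eeg, 15, fs))  # True
```

    Choosing a recording length whose frequency resolution (here 0.5 Hz) places each tag frequency exactly on an FFT bin keeps the tagged responses from leaking into neighboring bins.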

  19. Peripersonal space representation develops independently from visual experience.

    PubMed

    Ricciardi, Emiliano; Menicagli, Dario; Leo, Andrea; Costantini, Marcello; Pietrini, Pietro; Sinigaglia, Corrado

    2017-12-15

    Our daily-life actions are typically driven by vision. When acting upon an object, we need to represent its visual features (e.g. shape, orientation, etc.) and to map them into our own peripersonal space. But what happens with people who have never had any visual experience? How can they map object features into their own peripersonal space? Do they do it differently from sighted agents? To tackle these questions, we carried out a series of behavioral experiments in sighted and congenitally blind subjects. We took advantage of a spatial alignment effect paradigm, which typically refers to a decrease of reaction times when subjects perform an action (e.g., a reach-to-grasp pantomime) congruent with that afforded by a presented object. To systematically examine peripersonal space mapping, we presented visual or auditory affording objects both within and outside subjects' reach. The results showed that sighted and congenitally blind subjects did not differ in mapping objects into their own peripersonal space. Strikingly, this mapping occurred also when objects were presented outside subjects' reach, but within the peripersonal space of another agent. This suggests that (the lack of) visual experience does not significantly affect the development of both one's own and others' peripersonal space representation.

  20. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery, such as target direction, and to assess the performance of the visual learning process itself.

  1. Learning robotic eye-arm-hand coordination from human demonstration: a coupled dynamical systems approach.

    PubMed

    Lukic, Luka; Santos-Victor, José; Billard, Aude

    2014-04-01

    We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance while the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of the workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way while reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye-arm-hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking similar control systems found in humans. We validate our model for visuomotor control of a humanoid robot.

  2. Hybrid foraging search: Searching for multiple instances of multiple types of target.

    PubMed

    Wolfe, Jeremy M; Aizenman, Avigael M; Boettcher, Sage E P; Cain, Matthew S

    2016-02-01

    This paper introduces the "hybrid foraging" paradigm. In typical visual search tasks, observers search for one instance of one target among distractors. In hybrid search, observers search through visual displays for one instance of any of several types of target held in memory. In foraging search, observers collect multiple instances of a single target type from visual displays. Combining these paradigms, in hybrid foraging tasks observers search visual displays for multiple instances of any of several types of target (as might be the case in searching the kitchen for dinner ingredients or an X-ray for different pathologies). In the present experiment, observers held 8-64 target objects in memory. They viewed displays of 60-105 randomly moving photographs of objects and used the computer mouse to collect multiple targets before choosing to move to the next display. Rather than selecting at random among available targets, observers tended to collect items in runs of one target type. Reaction time (RT) data indicate that searching again for the same item is more efficient than searching for any other target held in memory. Observers were trying to maximize collection rate. As a result, and consistent with optimal foraging theory, they tended to leave 25-33% of targets uncollected when moving to the next screen/patch. The pattern of RTs shows that while observers were collecting a target item, they had already begun searching memory and the visual display for additional targets, making the hybrid foraging task a useful way to investigate the interaction of visual and memory search. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Foot placement relies on state estimation during visually guided walking.

    PubMed

    Maeda, Rodrigo S; O'Connor, Shawn M; Donelan, J Maxwell; Marigold, Daniel S

    2017-02-01

    As we walk, we must accurately place our feet to stabilize our motion and to navigate our environment. We must also achieve this accuracy despite imperfect sensory feedback and unexpected disturbances. In this study we tested whether the nervous system uses state estimation to beneficially combine sensory feedback with forward model predictions to compensate for these challenges. Specifically, subjects wore prism lenses during a visually guided walking task, and we used trial-by-trial variation in the prism lenses to add uncertainty to visual feedback and induce a reweighting of this input. To expose the altered weighting, we added a consistent prism shift that required subjects to adapt their estimate of the visuomotor mapping between a perceived target location and the motor command necessary to step to that position. With added prism noise, subjects responded to the consistent prism shift with smaller initial foot placement error but took longer to adapt, compatible with our mathematical model of the walking task, which leverages state estimation to compensate for noise. Much as when we perform voluntary and discrete movements with our arms, it appears our nervous system uses state estimation during walking to accurately guide the foot to the ground. Accurate foot placement is essential for safe walking. We used computational models and human walking experiments to test how our nervous system achieves this accuracy. We find that our control of foot placement beneficially combines sensory feedback with internal forward model predictions to accurately estimate the body's state. Our results match recent computational neuroscience findings for reaching movements, suggesting that state estimation is a general mechanism of human motor control. Copyright © 2017 the American Physiological Society.
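    The state estimation the abstract describes is conventionally formalized as a precision-weighted blend of a forward-model prediction with noisy sensory feedback, i.e. a Kalman-style update; a minimal scalar sketch (the variances are illustrative, not the study's model parameters):

```python
def estimate(prediction, pred_var, observation, obs_var):
    """One-step state estimate: precision-weighted blend of a forward-model
    prediction with a noisy sensory observation (scalar Kalman update)."""
    gain = pred_var / (pred_var + obs_var)   # weight given to the observation
    value = prediction + gain * (observation - prediction)
    var = (1 - gain) * pred_var              # posterior variance shrinks
    return value, var

# When visual feedback is noisier (obs_var larger), the estimate leans on
# the internal prediction: the same visual error moves the estimate less,
# mirroring the smaller initial error but slower adaptation reported above.
low_noise_est, _ = estimate(0.0, 1.0, 1.0, 0.5)   # trusts vision more
high_noise_est, _ = estimate(0.0, 1.0, 1.0, 4.0)  # trusts prediction more
print(low_noise_est > high_noise_est)  # True
```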

  5. Prism adaptation does not alter configural processing of faces

    PubMed Central

    Bultitude, Janet H.; Downing, Paul E.; Rafal, Robert D.

    2013-01-01

    Patients with hemispatial neglect (‘neglect’) following a brain lesion show difficulty responding or orienting to objects and events on the left side of space. Substantial evidence supports the use of a sensorimotor training technique called prism adaptation as a treatment for neglect. Reaching for visual targets viewed through prismatic lenses that induce a rightward shift in the visual image results in a leftward recalibration of reaching movements that is accompanied by a reduction of symptoms in patients with neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Interestingly, prism adaptation can also alter aspects of non-lateralised spatial attention. We previously demonstrated that prism adaptation alters the extent to which neglect patients and healthy participants process local features versus global configurations of visual stimuli. Since deficits in non-lateralised spatial attention are thought to contribute to the severity of neglect symptoms, it is possible that the effect of prism adaptation on these deficits contributes to its efficacy. This study examines the pervasiveness of the effects of prism adaptation on perception by examining the effect of prism adaptation on configural face processing using a composite face task. The composite face task is a persuasive demonstration of the automatic global-level processing of faces: the top and bottom halves of two familiar faces form a seemingly new, unknown face when viewed together. Participants identified the top or bottom halves of composite faces before and after prism adaptation. Sensorimotor adaptation was confirmed by a significant pointing aftereffect; however, there was no significant change in the extent to which the irrelevant face half interfered with processing. The results support the proposal that the therapeutic effects of prism adaptation are limited to dorsal stream processing. PMID:25110574

  6. Prism adaptation does not alter configural processing of faces.

    PubMed

    Bultitude, Janet H; Downing, Paul E; Rafal, Robert D

    2013-01-01

    Patients with hemispatial neglect ('neglect') following a brain lesion show difficulty responding or orienting to objects and events on the left side of space. Substantial evidence supports the use of a sensorimotor training technique called prism adaptation as a treatment for neglect. Reaching for visual targets viewed through prismatic lenses that induce a rightward shift in the visual image results in a leftward recalibration of reaching movements that is accompanied by a reduction of symptoms in patients with neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Interestingly, prism adaptation can also alter aspects of non-lateralised spatial attention. We previously demonstrated that prism adaptation alters the extent to which neglect patients and healthy participants process local features versus global configurations of visual stimuli. Since deficits in non-lateralised spatial attention are thought to contribute to the severity of neglect symptoms, it is possible that the effect of prism adaptation on these deficits contributes to its efficacy. This study examines the pervasiveness of the effects of prism adaptation on perception by examining the effect of prism adaptation on configural face processing using a composite face task. The composite face task is a persuasive demonstration of the automatic global-level processing of faces: the top and bottom halves of two familiar faces form a seemingly new, unknown face when viewed together. Participants identified the top or bottom halves of composite faces before and after prism adaptation. Sensorimotor adaptation was confirmed by a significant pointing aftereffect; however, there was no significant change in the extent to which the irrelevant face half interfered with processing. The results support the proposal that the therapeutic effects of prism adaptation are limited to dorsal stream processing.

  7. Proof-of-concept of a laser mounted endoscope for touch-less navigated procedures

    PubMed Central

    Kral, Florian; Gueler, Oezguer; Perwoeg, Martina; Bardosi, Zoltan; Puschban, Elisabeth J; Riechelmann, Herbert; Freysinger, Wolfgang

    2013-01-01

    Background and Objectives During navigated procedures a tracked pointing device is used to define target structures in the patient and to visualize their position in a registered radiologic data set. When working with endoscopes in minimally invasive procedures, the target region is often difficult to reach, and changing instruments is disruptive at a challenging, crucial moment of the procedure. We developed a device for touch-less navigation during navigated endoscopic procedures. Materials and Methods A laser beam is delivered to the tip of a tracked endoscope at an angle to its axis. Thereby the position of the laser spot in the video-endoscopic images changes according to the distance between the tip of the endoscope and the target structure. A mathematical function is defined by a calibration process and is used to calculate the distance between the tip of the endoscope and the target. The tracked tip of the endoscope and the calculated distance are used to visualize the laser spot in the registered radiologic data set. Results Compared to the tracked instrument, touch-less target definition with the laser spot introduced an additional error of 0.12 mm. The overall application error in this experimental setup with a plastic head was 0.61 ± 0.97 mm (95% CI −1.3 to +2.5 mm). Conclusion Integrating a laser in an endoscope and then calculating the distance to a target structure by image processing of the video-endoscopic images is accurate. This technology eliminates the need for tracked probes intraoperatively and therefore allows navigation to be integrated seamlessly in clinical routine. However, it is an additional link in the chain of computer-assisted surgery and thus influences the application error. Lasers Surg. Med. 45:377–382, 2013. © 2013 Wiley Periodicals, Inc. PMID:23737122
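    Because the beam is angled to the endoscope axis, the laser spot's image position shifts monotonically with tip-to-target distance, so a calibration table can be inverted to recover distance. The sketch below illustrates that idea only; the paper does not give its calibration function, and all values and names here are invented:

```python
# Hypothetical sketch of the calibration idea: map the laser spot's pixel
# position to tip-to-target distance by interpolating a calibration table.
# The (pixel, distance) pairs below are invented for illustration.

def make_distance_lookup(calibration):
    """calibration: list of (pixel_x, distance_mm) pairs."""
    pts = sorted(calibration)

    def pixel_to_distance(px):
        # clamp outside the calibrated range
        if px <= pts[0][0]:
            return pts[0][1]
        if px >= pts[-1][0]:
            return pts[-1][1]
        # linear interpolation between bracketing calibration points
        for (x0, d0), (x1, d1) in zip(pts, pts[1:]):
            if x0 <= px <= x1:
                t = (px - x0) / (x1 - x0)
                return d0 + t * (d1 - d0)

    return pixel_to_distance

# invented calibration: spot moves right as the target gets farther away
lookup = make_distance_lookup([(100, 5.0), (200, 10.0), (400, 20.0)])
distance_mm = lookup(150)  # -> 7.5, halfway between the first two points
```

    A denser calibration table, or a fitted smooth function, would reduce interpolation error near the ends of the working range.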

  8. Who needs a referee? How incorrect basketball actions are automatically detected by basketball players' brain

    PubMed Central

    Proverbio, Alice Mado; Crotti, Nicola; Manfredi, Mirella; Adorni, Roberta; Zani, Alberto

    2012-01-01

    While the existence of a mirror neuron system (MNS) representing and mirroring simple purposeful actions (such as reaching) is known, neural mechanisms underlying the representation of complex actions (such as ballet, fencing, etc.) that are learned by imitation and exercise are not well understood. In this study, correct and incorrect basketball actions were visually presented to professional basketball players and naïve viewers while their EEG was recorded. The participants had to respond to rare targets (inanimate scenes). No category or group differences were found at the perceptual level, ruling out the possibility that correct actions might be more visually familiar. Large, anterior N400 responses of event-related brain potentials to incorrectly performed basketball actions were recorded in skilled brains only. The swLORETA inverse solution for the incorrect–correct contrast showed that the automatic detection of action ineffectiveness/incorrectness involved the fronto/parietal MNS, the cerebellum, the extra-striate body area, and the superior temporal sulcus. PMID:23181191

  9. Where's Wally: the influence of visual salience on referring expression generation.

    PubMed

    Clarke, Alasdair D F; Elsner, Micha; Rohde, Hannah

    2013-01-01

    Referring expression generation (REG) presents the converse problem to visual search: given a scene and a specified target, how does one generate a description which would allow somebody else to quickly and accurately locate the target? Previous work in psycholinguistics and natural language processing has failed to find an important and integrated role for vision in this task. That previous work, which relies largely on simple scenes, tends to treat vision as a pre-process for extracting feature categories that are relevant to disambiguation. However, the visual search literature suggests that some descriptions are better than others at enabling listeners to search efficiently within complex stimuli. This paper presents a study testing whether participants are sensitive to visual features that allow them to compose such "good" descriptions. Our results show that visual properties (salience, clutter, area, and distance) influence REG for targets embedded in images from the Where's Wally? books. Referring expressions for large targets are shorter than those for smaller targets, and expressions about targets in highly cluttered scenes use more words. We also find that participants are more likely to mention non-target landmarks that are large, salient, and in close proximity to the target. These findings identify a key role for visual salience in language production decisions and highlight the importance of scene complexity for REG.

  10. Measuring and tracking eye movements of a behaving archer fish by real-time stereo vision.

    PubMed

    Ben-Simon, Avi; Ben-Shahar, Ohad; Segev, Ronen

    2009-11-15

    The archer fish (Toxotes chatareus) exhibits unique visual behavior in that it is able to aim at and shoot down with a squirt of water insects resting on the foliage above water level and then feed on them. This extreme behavior requires excellent visual acuity, learning, and tight synchronization between the visual system and body motion. This behavior also raises many important questions, such as the fish's ability to compensate for air-water refraction and the neural mechanisms underlying target acquisition. While many such questions remain open, significant insights towards solving them can be obtained by tracking the eye and body movements of freely behaving fish. Unfortunately, existing tracking methods suffer from either a high level of invasiveness or low resolution. Here, we present a video-based eye tracking method for accurately and remotely measuring the eye and body movements of a freely moving behaving fish. Based on a stereo vision system and a unique triangulation method that corrects for air-glass-water refraction, we are able to measure a full three-dimensional pose of the fish eye and body with high temporal and spatial resolution. Our method, being generic, can be applied to studying the behavior of marine animals in general. We demonstrate how data collected by our method may be used to show that the hunting behavior of the archer fish is composed of surfacing concomitant with rotating the body around the direction of the fish's fixed gaze towards the target, until the snout reaches the correct shooting position at water level.
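    The air-glass-water correction mentioned above rests on Snell's law: a ray crossing an interface bends, so triangulation through the tank wall must trace the bent ray rather than a straight line. A minimal sketch of the underlying relation (our illustration, not the authors' full triangulation code):

```python
# Sketch of the refraction step behind air-glass-water triangulation:
# Snell's law, n1 * sin(theta1) = n2 * sin(theta2), gives the ray's angle
# from the interface normal after crossing from medium n1 into medium n2.
import math

def refract_angle(theta1, n1, n2):
    """Refracted angle from the normal (radians) for a ray crossing n1 -> n2."""
    s = n1 * math.sin(theta1) / n2
    if abs(s) > 1.0:
        raise ValueError("total internal reflection: no transmitted ray")
    return math.asin(s)

# a ray 30 degrees off normal in air (n = 1.00) entering water (n = 1.33)
theta_air = math.radians(30.0)
theta_water = refract_angle(theta_air, 1.00, 1.33)
# the ray bends toward the normal (about 22 degrees), so a target seen
# through the surface is not where a straight-line back-projection says

# for parallel interfaces, n * sin(theta) is conserved across the stack,
# so air -> glass -> water lands at the same angle as air -> water directly
theta_via_glass = refract_angle(refract_angle(theta_air, 1.00, 1.50), 1.50, 1.33)
```

    In a full triangulation, each camera ray is refracted at every interface and the target is taken as the closest point between the two bent rays.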

  11. Aging effect on step adjustments and stability control in visually perturbed gait initiation.

    PubMed

    Sun, Ruopeng; Cui, Chuyi; Shea, John B

    2017-10-01

    Gait adaptability is essential for fall avoidance during locomotion. It requires the ability to rapidly inhibit the original motor plan, select and execute alternative motor commands, and maintain the stability of locomotion. This study investigated the aging effect on gait adaptability and dynamic stability control during a visually perturbed gait initiation task. A novel approach was used such that the anticipatory postural adjustment (APA) during gait initiation was used to trigger the unpredictable relocation of a foot-size stepping target. Participants (10 young adults and 10 older adults) completed visually perturbed gait initiation in three adjustment timing conditions (early, intermediate, late; all extracted from the stereotypical APA pattern) and two adjustment direction conditions (medial, lateral). Stepping accuracy, foot rotation at landing, and Margin of Dynamic Stability (MDS) were analyzed and compared across test conditions and groups using a linear mixed model. Stepping accuracy decreased as a function of adjustment timing as well as stepping direction, with older subjects exhibiting a significantly greater undershoot in foot placement during late lateral stepping. Late adjustment also elicited a reaching-like movement (i.e. foot rotation prior to landing in order to step on the target), regardless of stepping direction. MDS measures in the medial-lateral and anterior-posterior directions revealed that both young and older adults exhibited reduced stability in the adjustment step and subsequent steps. However, young adults returned to stable gait faster than older adults. These findings could be useful for future studies screening for deficits in gait adaptability and preventing falls. Copyright © 2017 Elsevier B.V. All rights reserved.
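    Stability margins of this kind are commonly computed from the extrapolated center of mass (Hof's margin of stability); we assume here that the paper's MDS measure follows this formulation, and the numbers below are invented:

```python
# Hof-style margin of stability: the extrapolated center of mass
#   XcoM = x + v / omega0,  omega0 = sqrt(g / l)
# for an inverted pendulum of effective length l.  The margin is the
# distance from XcoM to the relevant edge of the base of support (BoS);
# a negative margin means a corrective step is needed.
import math

def margin_of_stability(com_pos, com_vel, bos_edge, leg_length, g=9.81):
    """Positive margin: extrapolated CoM lies inside the base of support."""
    omega0 = math.sqrt(g / leg_length)
    xcom = com_pos + com_vel / omega0
    return bos_edge - xcom

# CoM 0.05 m behind the forward BoS edge (at 0.10 m), moving forward
# at 0.3 m/s: velocity pushes XcoM past the edge, so the margin is negative
m = margin_of_stability(com_pos=0.05, com_vel=0.3, bos_edge=0.10, leg_length=0.9)
```

    Computed separately along the medial-lateral and anterior-posterior axes, this yields the two directional margins compared across steps in the abstract.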

  12. Effects of body lean and visual information on the equilibrium maintenance during stance.

    PubMed

    Duarte, Marcos; Zatsiorsky, Vladimir M

    2002-09-01

    Maintenance of equilibrium was tested in conditions when humans assume different leaning postures during upright standing. Subjects (n=11) stood in 13 different body postures specified by visual center of pressure (COP) targets within their base of support (BOS). Different types of visual information were tested: continuous presentation of the visual target, no vision after target presentation, and simultaneous visual feedback of the COP. The following variables were used to describe the equilibrium maintenance: the mean of the COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectral density of the COP displacement. The variability of the COP displacement, quantified by the COP area variable, increased when subjects occupied leaning postures, irrespective of the kind of visual information provided. This variability also increased when vision was removed in relation to when vision was present. Without vision, drifts in the COP data were observed which were larger for COP targets farther away from the neutral position. When COP feedback was given in addition to the visual target, the postural control system did not control stance better than in the condition with only visual information. These results indicate that the visual information is used by the postural control system at both short and long time scales.

  13. The course of visual searching to a target in a fixed location: electrophysiological evidence from an emotional flanker task.

    PubMed

    Dong, Guangheng; Yang, Lizhu; Shen, Yue

    2009-08-21

    The present study investigated the course of visual searching to a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual searching was analyzed through the emotional effects arising from these emotion-eliciting stimuli. The flanker stimuli showed effects at about 150-250 ms following the stimulus onset, while the effect of target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task moved from a whole overview to a specific target, even if the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.

  14. Cortical metabolic activity matches the pattern of visual suppression in strabismus.

    PubMed

    Adams, Daniel L; Economides, John R; Sincich, Lawrence C; Horton, Jonathan C

    2013-02-27

    When an eye becomes deviated in early childhood, a person does not experience double vision, although the globes are aimed at different targets. The extra image is prevented from reaching perception in subjects with alternating exotropia by suppression of each eye's peripheral temporal retina. To test the impact of visual suppression on neuronal activity in primary (striate) visual cortex, the pattern of cytochrome oxidase (CO) staining was examined in four macaques raised with exotropia by disinserting the medial rectus muscles shortly following birth. No ocular dominance columns were visible in opercular cortex, where the central visual field is represented, indicating that signals coming from the central retina in each eye were perceived. However, the border strips at the edges of ocular dominance columns appeared pale, reflecting a loss of activity in binocular cells from disruption of fusion. In calcarine cortex, where the peripheral visual field is represented, there were alternating pale and dark bands resembling ocular dominance columns. To interpret the CO staining pattern, [(3)H]proline was injected into the right eye in two monkeys. In the right calcarine cortex, the pale CO columns matched the labeled proline columns of the right eye. In the left calcarine cortex, the pale CO columns overlapped the unlabeled columns of the left eye in the autoradiograph. Therefore, metabolic activity was reduced in the ipsilateral eye's ocular dominance columns which serve peripheral temporal retina, in a fashion consistent with the topographic organization of suppression scotomas in humans with exotropia.

  15. Robot-assisted, ultrasound-guided minimally invasive navigation tool for brachytherapy and ablation therapy: initial assessment

    NASA Astrophysics Data System (ADS)

    Bhattad, Srikanth; Escoto, Abelardo; Malthaner, Richard; Patel, Rajni

    2015-03-01

    Brachytherapy and thermal ablation are relatively new approaches in robot-assisted minimally invasive interventions for treating malignant tumors. Ultrasound remains the most favored choice for imaging feedback, its benefits being low cost, freedom from ionizing radiation, and easy access in the OR. However, it does not generally provide high-contrast, noise-free images. Distortion occurs when the sound waves pass through a medium that contains air and/or when the target organ is deep within the body. The distorted images make it quite difficult to recognize and localize tumors and surgical tools. A tool such as a bevel-tipped needle often deflects from its path during insertion, making it difficult to detect the needle tip using a single perspective view. The shifting of the target due to cardiac and/or respiratory motion can add further errors in reaching the target. This paper describes a comprehensive system that uses robot dexterity to capture 2D ultrasound images in various pre-determined modes for generating 3D ultrasound images and assists in maneuvering a surgical tool. An interactive 3D virtual reality environment is developed that visualizes various artifacts present in the surgical site in real time. The system helps to avoid image distortion by grabbing images from multiple positions and orientations to provide a 3D view. Using the methods developed for this application, an accuracy of 1.3 mm was achieved in target attainment in an in-vivo experiment subject to tissue motion. Accuracies of 1.36 mm and 0.93 mm, respectively, were achieved for the ex-vivo experiments with and without externally induced motion. An ablation monitor widget that visualizes the changes during the complete ablation process and enables evaluation of the process in its entirety is integrated.

  16. Misperception of exocentric directions in auditory space

    PubMed Central

    Arthur, Joeanna C.; Philbeck, John W.; Sargent, Jesse; Dopkins, Stephen

    2008-01-01

    Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space. PMID:18555205

  17. The contents of visual working memory reduce uncertainty during visual search.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2011-05-01

    Information held in visual working memory (VWM) influences the allocation of attention during visual search, with targets matching the contents of VWM receiving processing benefits over those that do not. Such an effect could arise from multiple mechanisms: First, it is possible that the contents of working memory enhance the perceptual representation of the target. Alternatively, it is possible that when a target is presented among distractor items, the contents of working memory operate postperceptually to reduce uncertainty about the location of the target. In both cases, a match between the contents of VWM and the target should lead to facilitated processing. However, each effect makes distinct predictions regarding set-size manipulations; whereas perceptual enhancement accounts predict processing benefits regardless of set size, uncertainty reduction accounts predict benefits only with set sizes larger than 1, when there is uncertainty regarding the target location. In the present study, in which briefly presented, masked targets were presented in isolation, there was a negligible effect of the information held in VWM on target discrimination. However, in displays containing multiple masked items, information held in VWM strongly affected target discrimination. These results argue that working memory representations act at a postperceptual level to reduce uncertainty during visual search.

  18. Modulation of auditory spatial attention by visual emotional cues: differential effects of attentional engagement and disengagement for pleasant and unpleasant cues.

    PubMed

    Harrison, Neil R; Woodhouse, Rob

    2016-05-01

    Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.

  19. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  20. Visual and linguistic determinants of the eyes' initial fixation position in reading development.

    PubMed

    Ducrot, Stéphanie; Pynte, Joël; Ghio, Alain; Lété, Bernard

    2013-03-01

    Two eye-movement experiments with one hundred and seven first- through fifth-grade children were conducted to examine the effects of visuomotor and linguistic factors on the recognition of words and pseudowords presented in central vision (using a variable-viewing-position technique) and in parafoveal vision (shifted to the left or right of a central fixation point). For all groups of children, we found a strong effect of stimulus location, in both central and parafoveal vision. This effect corresponds to the children's apparent tendency, for peripherally located targets, to reach a position located halfway between the middle and the left edge of the stimulus (preferred viewing location, PVL), whether saccading to the right or left. For centrally presented targets, refixation probability and lexical-decision time were the lowest near the word's center, suggesting an optimal viewing position (OVP). The viewing-position effects found here were modulated (1) by print exposure, both in central and parafoveal vision; and (2) by the intrinsic qualities of the stimulus (lexicality and word frequency) for targets in central vision but not for parafoveally presented targets. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Measurement of drug-target engagement in live cells by two-photon fluorescence anisotropy imaging.

    PubMed

    Vinegoni, Claudio; Fumene Feruglio, Paolo; Brand, Christian; Lee, Sungon; Nibbs, Antoinette E; Stapleton, Shawn; Shah, Sunil; Gryczynski, Ignacy; Reiner, Thomas; Mazitschek, Ralph; Weissleder, Ralph

    2017-07-01

    The ability to directly image and quantify drug-target engagement and drug distribution with subcellular resolution in live cells and whole organisms is a prerequisite to establishing accurate models of the kinetics and dynamics of drug action. Such methods would thus have far-reaching applications in drug development and molecular pharmacology. We recently presented one such technique based on fluorescence anisotropy, a spectroscopic method based on polarized-light analysis and capable of measuring the binding interaction between molecules. Our technique allows the direct characterization of target engagement of fluorescently labeled drugs, using fluorophores with a fluorescence lifetime larger than the rotational correlation time of the bound complex. Here we describe an optimized protocol for simultaneous dual-channel two-photon fluorescence anisotropy microscopy acquisition to perform drug-target measurements. We also provide the necessary software to implement stream processing to visualize images and to calculate quantitative parameters. The assembly and characterization part of the protocol can be implemented in 1 d. Sample preparation, characterization and imaging of drug binding can be completed in 2 d. Although currently adapted to an Olympus FV1000MPE microscope, the protocol can be extended to other commercial or custom-built microscopes.
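    The quantity behind the dual-channel measurement is the standard fluorescence anisotropy, computed per pixel from the parallel and perpendicular emission channels. A minimal sketch using the textbook formula (not code from the published protocol; the intensity values are invented):

```python
# Fluorescence anisotropy from the two polarization channels:
#   r = (I_par - G * I_perp) / (I_par + 2 * G * I_perp)
# where G corrects for the relative sensitivity of the two detection
# channels.  Bound (slowly rotating) fluorophores keep the emission
# polarized -> high r; free, fast-tumbling fluorophores depolarize -> r ~ 0.

def anisotropy(i_par, i_perp, g_factor=1.0):
    """Steady-state anisotropy for one pixel; 0.0 where both channels are dark."""
    num = i_par - g_factor * i_perp
    den = i_par + 2.0 * g_factor * i_perp
    return num / den if den else 0.0

# invented example intensities:
r_bound = anisotropy(300.0, 100.0)  # polarized emission -> r = 0.4
r_free = anisotropy(120.0, 110.0)   # nearly depolarized -> r close to 0
```

    Mapping r across an image then separates pixels dominated by bound drug from those dominated by free probe, which is the basis of the target-engagement readout.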

  2. Coherent Amplification of Ultrafast Molecular Dynamics in an Optical Oscillator

    NASA Astrophysics Data System (ADS)

    Aharonovich, Igal; Pe'er, Avi

    2016-02-01

    Optical oscillators present a powerful optimization mechanism. The inherent competition for the gain resources between possible modes of oscillation entails the prevalence of the most efficient single mode. We harness this "ultrafast" coherent feedback to optimize an optical field in time, and show that, when an optical oscillator based on a molecular gain medium is synchronously pumped by ultrashort pulses, a temporally coherent multimode field can develop that optimally dumps a general, dynamically evolving vibrational wave packet, into a single vibrational target state. Measuring the emitted field opens a new window to visualization and control of fast molecular dynamics. The realization of such a coherent oscillator with hot alkali dimers appears within experimental reach.

  3. Photoacoustic imaging velocimetry for flow-field measurement.

    PubMed

    Ma, Songbo; Yang, Sihua; Xing, Da

    2010-05-10

    We present the photoacoustic imaging velocimetry (PAIV) method for flow-field measurement based on a linear transducer array. The PAIV method is realized by using a Q-switched pulsed laser, a linear transducer array, parallel data-acquisition equipment and dynamic focusing reconstruction. Tracers used to track the liquid flow field were detected in real time, two-dimensional (2-D) flow visualization was successfully achieved, and flow parameters were acquired by measuring the movement of the tracers. Experimental results revealed that the PAIV method could be developed into 3-D imaging velocimetry for flow-field measurement, and potentially applied to studying the safety and targeting efficiency of optical nano-material probes. (c) 2010 Optical Society of America.

  4. Voluntary orienting among children and adolescents with Down syndrome and MA-matched typically developing children.

    PubMed

    Goldman, Karen J; Flanagan, Tara; Shulman, Cory; Enns, James T; Burack, Jacob A

    2005-05-01

    A forced-choice reaction-time (RT) task was used to examine voluntary visual orienting among children and adolescents with Down syndrome (trisomy 21) and typically developing children matched at an MA of approximately 5.6 years, an age when the development of orienting abilities reaches optimal adult-like efficiency. Both groups displayed faster RTs when the target location was cued correctly than when cued incorrectly under both short and long SOA conditions, indicating intact orienting among children with Down syndrome. This finding is further evidence that the efficiency of many of the primary components of attention among persons with Down syndrome is consistent with their developmental level.

  5. Poor shape perception is the reason reaches-to-grasp are visually guided online.

    PubMed

    Lee, Young-Lim; Crabtree, Charles E; Norman, J Farley; Bingham, Geoffrey P

    2008-08-01

    Both judgment studies and studies of feedforward reaching have shown that the visual perception of object distance, size, and shape is inaccurate. However, feedback has been shown to calibrate feedforward reaches-to-grasp to make them accurate with respect to object distance and size. We now investigate whether shape perception (in particular, the aspect ratio of object depth to width) can be calibrated in the context of reaches-to-grasp. We used cylindrical objects with elliptical cross-sections of varying eccentricity. Our participants reached to grasp the width or the depth of these objects with the index finger and thumb. The maximum grasp aperture and the terminal grasp aperture were used to evaluate perception. Both occur before the hand has contacted an object. In Experiments 1 and 2, we investigated whether perceived shape is recalibrated by distorted haptic feedback. Although somewhat equivocal, the results suggest that it is not. In Experiment 3, we tested the accuracy of feedforward grasping with respect to shape with haptic feedback to allow calibration. Grasping was inaccurate in ways comparable to findings in shape perception judgment studies. In Experiment 4, we hypothesized that online guidance is needed for accurate grasping. Participants reached to grasp either with or without vision of the hand. The result was that the former was accurate, whereas the latter was not. We conclude that shape perception is not calibrated by feedback from reaches-to-grasp and that online visual guidance is required for accurate grasping because shape perception is poor.

  6. Sounds activate visual cortex and improve visual discrimination.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2014-07-16

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. Copyright © 2014 the authors 0270-6474/14/349817-08$15.00/0.

  7. Imagined Actions Aren't Just Weak Actions: Task Variability Promotes Skill Learning in Physical Practice but Not in Mental Practice

    ERIC Educational Resources Information Center

    Coelho, Chase J.; Nusbaum, Howard C.; Rosenbaum, David A.; Fenn, Kimberly M.

    2012-01-01

    Early research on visual imagery led investigators to suggest that mental visual images are just weak versions of visual percepts. Later research helped investigators understand that mental visual images differ in deeper and more subtle ways from visual percepts. Research on motor imagery has yet to reach this mature state, however. Many authors…

  8. Search time critically depends on irrelevant subset size in visual search.

    PubMed

    Benjamins, Jeroen S; Hooge, Ignace T C; van Elst, Jacco C; Wertheim, Alexander H; Verstraten, Frans A J

    2009-02-01

    In order for our visual system to deal with the massive amount of sensory input, some of this input is discarded, while other parts are processed [Wolfe, J. M. (1994). Guided search 2.0: a revised model of visual search. Psychonomic Bulletin and Review, 1, 202-238]. From the visual search literature it is unclear how well one set of items that differs in only one feature from the target (a 1F set) can be selected, while another set of items that differs in two features from the target (a 2F set) is ignored. We systematically varied the percentage of 2F non-targets to determine the contribution of these non-targets to search behaviour. Increasing the percentage of 2F non-targets, which have to be ignored, was expected to result in increasingly faster search, since it decreases the size of the 1F set that has to be searched. Observers searched large displays for a target in the 1F set with a variable percentage of 2F non-targets. Interestingly, when the search displays contained 5% 2F non-targets, the search time was longer than in the other conditions. This effect of 2F non-targets on performance was independent of set size. An inspection of the saccades revealed that saccade target selection did not contribute to the longer search times in displays with 5% 2F non-targets. The longer search times in displays containing 5% 2F non-targets might instead be attributed to covert processes related to visual analysis of the fixated part of the display. Apparently, visual search performance critically depends on the percentage of irrelevant 2F non-targets.

  9. Error amplification to promote motor learning and motivation in therapy robotics.

    PubMed

    Shirzad, Navid; Van der Loos, H F Machiel

    2012-01-01

    To study the effects of different feedback error amplification methods on a subject's upper-limb motor learning and affect during a point-to-point reaching exercise, we developed a real-time controller for a robotic manipulandum. The reaching environment was visually distorted by implementing a thirty-degree rotation between the coordinate systems of the robot's end-effector and the visual display. Feedback error amplification was provided to subjects as they trained to learn reaching within the visually rotated environment. Error amplification was provided either visually or through both haptic and visual means, each method with two different amplification gains. Subjects' performance (i.e., trajectory error) and self-reports on a questionnaire were used to study the speed and amount of adaptation promoted by each error amplification method, as well as subjects' emotional changes. We found that providing haptic and visual feedback promotes faster adaptation to the distortion and increases subjects' satisfaction with the task, leading to a higher level of attentiveness during the exercise. This finding can be used to design a novel exercise regimen in which alternating between error amplification methods is used both to increase a subject's motor learning and to maintain a minimum level of motivational engagement in the exercise. In future experiments, we will test whether such exercise methods lead to a faster learning time and greater motivation to pursue a therapy exercise regimen.
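    The thirty-degree visual distortion described in this abstract is, in essence, a planar rotation between hand and cursor coordinates. A minimal sketch of that geometry (our own illustration, not the authors' controller code):

```python
import math

def rotate_display(x, y, angle_deg=30.0):
    """Map robot end-effector coordinates (x, y) to visually rotated
    display coordinates via a planar rotation of angle_deg degrees."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# A purely rightward hand movement (1, 0) appears on screen as a
# movement rotated 30 degrees counterclockwise: (cos 30, sin 30).
dx, dy = rotate_display(1.0, 0.0)
```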

  10. REACH: Real-Time Data Awareness in Multi-Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Maks, Lori; Coleman, Jason; Obenschain, Arthur F. (Technical Monitor)

    2002-01-01

    Missions have been proposed that will use multiple spacecraft to perform scientific or commercial tasks. Indeed, in the commercial world, some spacecraft constellations already exist. Aside from the technical challenges of constructing and flying these missions, there is also the financial challenge presented by the traditional model of the flight operations team (FOT) when it is applied to a constellation mission. Proposed constellation missions range in size from three spacecraft to more than 50. If the current ratio of three-to-five FOT personnel per spacecraft is maintained, the size of the FOT becomes cost prohibitive. The Advanced Architectures and Automation Branch at the Goddard Space Flight Center (GSFC Code 588) saw the potential to reduce the cost of these missions by creating new user interfaces to the ground system health-and-safety data. The goal is to enable a smaller FOT to remain aware of and responsive to the increased amount of ground system information in a multi-spacecraft environment. Rather than abandon the tried and true, these interfaces were developed to run alongside existing ground system software to provide additional support to the FOT. These new user interfaces have been combined in a tool called REACH. REACH, the Real-time Evaluation and Analysis of Consolidated Health, is a software product that uses advanced visualization techniques to make spacecraft anomalies easy to spot, no matter how many spacecraft are in the constellation. REACH reads a real-time stream of data from the ground system and displays it to the FOT such that anomalies are easy to pick out and investigate. Data visualization has been used in ground system operations for many years. To provide a unique visualization tool, we developed a unique source of data to visualize: the REACH Health Model Engine.
The Health Model Engine is rule-based software that receives real-time telemetry information and outputs "health" information related to the subsystems and spacecraft that the telemetry belongs to. The Health Engine can run out of the box or can be tailored with a scripting language. Out of the box, it uses limit violations to determine the health of subsystems and spacecraft; when tailored, it determines health using equations combining the values and limits of any telemetry in the spacecraft. The REACH visualizations then "roll up" the information from the Health Engine into high-level summary displays. These summary visualizations can be "zoomed" into for increasing levels of detail. Currently REACH is installed in the Small Explorer (SMEX) lab at GSFC, and is monitoring three of their five spacecraft. We are scheduled to install REACH in the Mid-sized Explorer (MIDEX) lab, which will allow us to monitor up to six more spacecraft. The process of installing and using our "research" software in an operational environment has provided many insights into which parts of REACH are a step forward and which of our ideas are missteps. Our paper explores the new concepts in spacecraft health-and-safety visualization, the difficulties of deploying such systems in the operational environment, and the cost and safety issues of multi-spacecraft missions.
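    The out-of-the-box behavior the abstract describes — deriving health from telemetry limit violations and rolling it up into subsystem and spacecraft summaries — can be sketched as follows. This is an illustrative model only; the data structures, limit values, and color scheme are hypothetical, not REACH's actual engine:

```python
# Hypothetical sketch of a limit-violation health rollup in the style
# described in the abstract. Telemetry names and limits are invented.

LIMITS = {  # telemetry point -> (low limit, high limit)
    "battery_voltage": (24.0, 32.0),
    "bus_temperature": (-10.0, 45.0),
}

def point_health(name, value):
    """A telemetry point is 'red' if it violates its limits, else 'green'."""
    lo, hi = LIMITS[name]
    return "green" if lo <= value <= hi else "red"

def rollup(healths):
    """Summary health is the worst health among the inputs, so a single
    violation anywhere surfaces in the top-level display."""
    return "red" if "red" in healths else "green"

# One point out of limits rolls the whole subsystem up to 'red'.
subsystem = rollup([point_health("battery_voltage", 22.5),
                    point_health("bus_temperature", 30.0)])
```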

  11. The effects of visual control whole body vibration exercise on balance and gait function of stroke patients.

    PubMed

    Choi, Eon-Tak; Kim, Yong-Nam; Cho, Woon-Soo; Lee, Dong-Kyu

    2016-11-01

    [Purpose] This study aims to verify the effects of visual control whole body vibration exercise on balance and gait function of stroke patients. [Subjects and Methods] A total of 22 stroke patients were randomly assigned to two groups; 11 to the experimental group and 11 to the control group. Both groups received 30 minutes of Neuro-developmental treatment 5 times per week for 4 weeks. The experimental group additionally performed 10 minutes of visual control whole body vibration exercise 5 times per week during the 4 weeks. Balance was measured using the Functional Reach Test. Gait was measured using the Timed Up and Go Test. [Results] An in-group comparison in the experimental group showed significant differences in the Functional Reach Test and Timed Up and Go Test. In comparing the groups, the Functional Reach Test and Timed Up and Go Test of the experimental group were more significantly different compared to the control group. [Conclusion] These results suggest that visual control whole body vibration exercise has a positive effect on the balance and gait function of stroke patients.

  12. Examining perceptual and conceptual set biases in multiple-target visual search.

    PubMed

    Biggs, Adam T; Adamo, Stephen H; Dowd, Emma Wu; Mitroff, Stephen R

    2015-04-01

    Visual search is a common practice conducted countless times every day, and one important aspect of visual search is that multiple targets can appear in a single search array. For example, an X-ray image of airport luggage could contain both a water bottle and a gun. Searchers are more likely to miss additional targets after locating a first target in multiple-target searches, which presents a potential problem: If airport security officers were to find a water bottle, would they then be more likely to miss a gun? One hypothetical cause of multiple-target search errors is that searchers become biased to detect additional targets that are similar to a found target, and therefore become less likely to find additional targets that are dissimilar to the first target. This particular hypothesis has received theoretical, but little empirical, support. In the present study, we tested the bounds of this idea by utilizing "big data" obtained from the mobile application Airport Scanner. Multiple-target search errors were substantially reduced when the two targets were identical, suggesting that the first-found target did indeed create biases during subsequent search. Further analyses delineated the nature of the biases, revealing both a perceptual set bias (i.e., a bias to find additional targets with features similar to those of the first-found target) and a conceptual set bias (i.e., a bias to find additional targets with a conceptual relationship to the first-found target). These biases are discussed in terms of the implications for visual-search theories and applications for professional visual searchers.

  13. Visual gravitational motion and the vestibular system in humans

    PubMed Central

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-01-01

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity. PMID:24421761

  14. Visual gravitational motion and the vestibular system in humans.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-12-26

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  15. Low Target Prevalence Is a Stubborn Source of Errors in Visual Search Tasks

    ERIC Educational Resources Information Center

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2007-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1%-2%) than at high prevalence (50%). Unfortunately, low prevalence is…

  16. Memory for found targets interferes with subsequent performance in multiple-target visual search.

    PubMed

    Cain, Matthew S; Mitroff, Stephen R

    2013-10-01

    Multiple-target visual searches--when more than 1 target can appear in a given search display--are commonplace in radiology, airport security screening, and the military. Whereas 1 target is often found accurately, additional targets are more likely to be missed in multiple-target searches. To better understand this decrement in 2nd-target detection, here we examined 2 potential forms of interference that can arise from finding a 1st target: interference from the perceptual salience of the 1st target (a now highly relevant distractor in a known location) and interference from a newly created memory representation for the 1st target. Here, we found that removing found targets from the display or making them salient and easily segregated color singletons improved subsequent search accuracy. However, replacing found targets with random distractor items did not improve subsequent search accuracy. Removing and highlighting found targets likely reduced both a target's visual salience and its memory load, whereas replacing a target removed its visual salience but not its representation in memory. Collectively, the current experiments suggest that the working memory load of a found target has a larger effect on subsequent search accuracy than does its perceptual salience. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  17. Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Krauzlis, Rich; Stone, Leland; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    When viewing objects, primates use a combination of saccadic and pursuit eye movements to stabilize the retinal image of the object of regard within the high-acuity region near the fovea. Although these movements involve widespread regions of the nervous system, they mix seamlessly in normal behavior. Saccades are discrete movements that quickly direct the eyes toward a visual target, thereby translating the image of the target from an eccentric retinal location to the fovea. In contrast, pursuit is a continuous movement that slowly rotates the eyes to compensate for the motion of the visual target, minimizing the blur that can compromise visual acuity. While other mammalian species can generate smooth optokinetic eye movements - which track the motion of the entire visual surround - only primates can smoothly pursue a single small element within a complex visual scene, regardless of the motion elsewhere on the retina. This ability likely reflects the greater ability of primates to segment the visual scene, to identify individual visual objects, and to select a target of interest.

  18. Optical images of visible and invisible percepts in the primary visual cortex of primates

    PubMed Central

    Macknik, Stephen L.; Haglund, Michael M.

    1999-01-01

    We optically imaged a visual masking illusion in primary visual cortex (area V-1) of rhesus monkeys to ask whether activity in the early visual system more closely reflects the physical stimulus or the generated percept. Visual illusions can be a powerful way to address this question because they have the benefit of dissociating the stimulus from perception. We used an illusion in which a flickering target (a bar oriented in visual space) is rendered invisible by two counter-phase flickering bars, called masks, which flank and abut the target. The target and masks, when shown separately, each generated correlated activity on the surface of the cortex. During the illusory condition, however, optical signals generated in the cortex by the target disappeared although the image of the masks persisted. The optical image thus was correlated with perception but not with the physical stimulus. PMID:10611363

  19. Dynamic and predictive links between touch and vision.

    PubMed

    Gray, Rob; Tan, Hong Z

    2002-07-01

    We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.

  20. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    NASA Astrophysics Data System (ADS)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    A prediction mechanism is necessary in human visual-motor control to compensate for delays in the sensory-motor system. In a previous study, "proactive control" was discussed as one example of a predictive function in human beings, in which the motion of the hand preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible regions and target-invisible regions. The main results of this research were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli is shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by the environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

  1. Implicit Object Naming in Visual Search: Evidence from Phonological Competition

    PubMed Central

    Walenchok, Stephen C.; Hout, Michael C.; Goldinger, Stephen D.

    2016-01-01

    During visual search, people are distracted by objects that visually resemble search targets; search is impaired when targets and distractors share overlapping features. In this study, we examined whether a nonvisual form of similarity, overlapping object names, can also affect search performance. In three experiments, people searched for images of real-world objects (e.g., a beetle) among items whose names either all shared the same phonological onset (/bi/), or were phonologically varied. Participants either searched for one or three potential targets per trial, with search targets designated either visually or verbally. We examined standard visual search (Experiments 1 and 3) and a self-paced serial search task wherein participants manually rejected each distractor (Experiment 2). We hypothesized that people would maintain visual templates when searching for single targets, but would rely more on object names when searching for multiple items and when targets were verbally cued. This reliance on target names would make performance susceptible to interference from similar-sounding distractors. Experiments 1 and 2 showed the predicted interference effect in conditions with high memory load and verbal cues. In Experiment 3, eye-movement results showed that phonological interference resulted from small increases in dwell time to all distractors. The results suggest that distractor names are implicitly activated during search, slowing attention disengagement when targets and distractors share similar names. PMID:27531018

  2. The iconic memory skills of brain injury survivors and non-brain injured controls after visual scanning training.

    PubMed

    McClure, J T; Browning, R T; Vantrease, C M; Bittle, S T

    1994-01-01

    Previous research suggests that traumatic brain injury (TBI) results in impairment of iconic memory abilities. This raises serious implications for brain injury rehabilitation. Most cognitive rehabilitation programs do not include iconic memory training. Instead, it is common for cognitive rehabilitation programs to focus on attention and concentration skills, memory skills, and visual scanning skills. This study compared the iconic memory skills of brain-injury survivors and control subjects who all reached criterion levels of visual scanning skills. This involved previous training for the brain-injury survivors using popular visual scanning programs that allowed them to visually scan with response time and accuracy within normal limits. Control subjects required only minimal training to reach normal-limits criteria. This comparison allows for the dissociation of visual scanning skills and iconic memory skills. The results are discussed in terms of their implications for cognitive rehabilitation and the relationship between visual scanning training and iconic memory skills. (The authors acknowledge the contribution of Jeffrey D. Vantrease, who wrote the software program for the iconic memory procedure and measurement.)

  3. Responses to Targets in the Visual Periphery in Deaf and Normal-Hearing Adults

    ERIC Educational Resources Information Center

    Rothpletz, Ann M.; Ashmead, Daniel H.; Tharpe, Anne Marie

    2003-01-01

    The purpose of this study was to compare the response times of deaf and normal-hearing individuals to the onset of target events in the visual periphery in distracting and nondistracting conditions. Visual reaction times to peripheral targets placed at 3 eccentricities to the left and right of a center fixation point were measured in prelingually…

  4. Will the European Union reach the United Nations Millennium declaration target of a 50% reduction of tuberculosis mortality between 1990 and 2015?

    PubMed

    van der Werf, Marieke J; Bonfigli, Sandro; Hruba, Frantiska

    2017-07-06

    The Millennium Development Goals (MDG) provide targets for 2015. MDG 6 includes a target to reduce the tuberculosis (TB) death rate by 50% compared with 1990. We aimed to assess whether this target was reached by the European Union (EU) and European Economic Area countries. We used Eurostat causes of death data to assess whether the target was reached in the EU. We calculated the reduction in reported and adjusted death rates and the annual average percentage decline based on the available data. Between 1999 and 2014, the TB death rate decreased by 50%, the adjusted death rate by 56% and the annual average percentage decline was 5.43% (95% confidence interval 4.94-6.74) for the EU. Twenty of 26 countries reporting >5 TB deaths in the first reporting year reached the target of 50% reduction in adjusted death rate. The EU reached the MDG target of a 50% reduction of the TB death rate and also the annual average percentage decline was larger than the 2.73% needed to reach the target. The World Health Organization 'End TB Strategy' requires a further reduction of the number of TB deaths of 35% by 2020 compared to 2015, which will challenge TB prevention and care services in the EU.
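    The 2.73% threshold quoted in this abstract follows from simple compound-decline arithmetic: halving a rate over the 25 years from 1990 to 2015 requires an annual decline of 1 − 0.5^(1/25). A quick check (our own arithmetic, not taken from the paper):

```python
# Annual average percentage decline needed to halve the TB death rate
# over the 25 years between 1990 and 2015: 1 - 0.5**(1/25).
required_decline = (1 - 0.5 ** (1 / 25)) * 100  # percent per year

# This works out to approximately 2.73% per year; the EU's observed
# average decline of 5.43% per year comfortably exceeds it.
print(round(required_decline, 2))
```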

  5. Top-down contextual knowledge guides visual attention in infancy.

    PubMed

    Tummeltshammer, Kristen; Amso, Dima

    2017-10-26

    The visual context in which an object or face resides can provide useful top-down information for guiding attention orienting, object recognition, and visual search. Although infants have demonstrated sensitivity to covariation in spatial arrays, it is presently unclear whether they can use rapidly acquired contextual knowledge to guide attention during visual search. In this eye-tracking experiment, 6- and 10-month-old infants searched for a target face hidden among colorful distracter shapes. Targets appeared in Old or New visual contexts, depending on whether the visual search arrays (defined by the spatial configuration, shape and color of component items in the search display) were repeated or newly generated throughout the experiment. Targets in Old contexts appeared in the same location within the same configuration, such that context covaried with target location. Both 6- and 10-month-olds successfully distinguished between Old and New contexts, exhibiting faster search times, fewer looks at distracters, and more anticipation of targets when contexts repeated. This initial demonstration of contextual cueing effects in infants indicates that they can use top-down information to facilitate orienting during memory-guided visual search. © 2017 John Wiley & Sons Ltd.

  6. A framework for small infrared target real-time visual enhancement

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoliang; Long, Gucan; Shang, Yang; Liu, Xiaolin

    2015-03-01

    This paper proposes a framework for real-time visual enhancement of small infrared targets. The framework consists of three parts: energy accumulation for small infrared target enhancement, noise suppression and weighted fusion. A dynamic-programming-based track-before-detect algorithm is adopted in the energy accumulation to detect the target accurately and enhance the target's intensity notably. In the noise suppression, the target region is weighted by a Gaussian mask according to the target's Gaussian shape. In order to fuse the processed target region and the unprocessed background smoothly, the intensity in the target region is treated as the weight in the fusion. Experiments on real small-infrared-target images indicate that the proposed framework enhances the small infrared target markedly and improves the image's visual quality notably. The proposed framework outperforms traditional algorithms in enhancing small infrared targets, especially for images in which the target is hardly visible.
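    The noise-suppression step described above — weighting the target region with a Gaussian mask matched to the target's Gaussian-like intensity profile — can be sketched as follows. The mask size and sigma are illustrative parameters, not the authors' settings:

```python
import numpy as np

def gaussian_mask(size, sigma):
    """2-D Gaussian weighting mask with peak value 1.0 at the center."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

def suppress_noise(target_region, sigma=2.0):
    """Attenuate pixels far from the target center, where a
    Gaussian-shaped point target contributes little energy."""
    mask = gaussian_mask(target_region.shape[0], sigma)
    return target_region * mask

# Toy uniform 7x7 patch: the center pixel keeps its full intensity,
# while the corners are strongly attenuated by the mask.
region = np.full((7, 7), 100.0)
out = suppress_noise(region)
```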

  7. Asymmetries in visual search for conjunctive targets.

    PubMed

    Cohen, A

    1993-08-01

    An asymmetry is demonstrated between conjunctive targets in visual search, with no detectable asymmetries between the individual features that compose these targets. Experiment 1 demonstrated this phenomenon for targets composed of color and shape. Experiments 2 and 4 demonstrate this asymmetry for targets composed of size and orientation and for targets composed of contrast level and orientation, respectively. Experiment 3 demonstrates that the search rate for individual features cannot predict the search rate for conjunctive targets. These results demonstrate the need for two levels of representation: one of features and one of conjunctions of features. A model related to the modified feature integration theory is proposed to account for these results. The proposed model and other models of visual search are discussed.

  8. Visuo-vestibular interaction: predicting the position of a visual target during passive body rotation.

    PubMed

    Mackrous, I; Simoneau, M

    2011-11-10

    Following body rotation, optimal updating of the position of a memorized target is attained when retinal error is perceived and a corrective saccade is performed. Thus, it appears that these processes may enable the calibration of the vestibular system by facilitating the sharing of information between both reference frames. Here, we assessed whether having sensory information about body rotation in the target reference frame could enhance an individual's learning rate in predicting the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target, and they received knowledge of results. During practice blocks, for two groups, visual cues were displayed in the same reference frame as the target, whereas a third group relied on vestibular information alone (vestibular-only group) to predict the location of the target. Participants unaware of the role of the visual cues (visual cues group) learned to predict the location of the target, and spatial error decreased from 16.2 to 2.0°, reflecting a learning rate of 34.08 trials (determined by fitting a falling exponential model). In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (i.e., 2.66 trials) but a similar final spatial error (2.9°). For the vestibular-only group, similar accuracy was achieved (final spatial error of 2.3°), but the learning rate was much slower (i.e., 43.29 trials). Transfer to the Post-test (no visual cues and no knowledge of results) increased the spatial error of the explicit visual cues group (9.5°) but did not change the performance of the vestibular group (1.2°). Overall, these results imply that cognition assists the brain in processing sensory information within the target reference frame. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
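
    The learning rates reported above come from fitting a falling exponential to spatial error as a function of practice trial. A minimal sketch of such a fit with SciPy, using synthetic noiseless data shaped like the visual-cues group's reported numbers (roughly 16° initial error decaying to a 2.0° plateau at a ~34-trial rate); the data and starting guesses are illustrative, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit

def falling_exponential(trial, amplitude, rate, plateau):
    # Spatial error on a given trial: plateau + amplitude * exp(-trial / rate)
    return plateau + amplitude * np.exp(-trial / rate)

# Synthetic learning curve (illustrative values, not the study's data)
trials = np.arange(1, 121, dtype=float)
errors = falling_exponential(trials, amplitude=14.2, rate=34.0, plateau=2.0)

# Recover the parameters; `rate` is the learning rate expressed in trials
(amplitude, rate, plateau), _ = curve_fit(
    falling_exponential, trials, errors, p0=(10.0, 20.0, 1.0)
)
```

    With real, noisy data the fitted `rate` is what distinguishes the groups: a smaller value means the error curve flattens after fewer practice trials.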

  9. Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory-Motor Transformation.

    PubMed

    Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2016-01-01

    The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T-G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T-G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T-G delay codes to a "pure" G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory-memory-motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.

  10. The challenge of reducing scientific complexity for different target groups (without losing the essence) - experiences from interdisciplinary audio-visual media production

    NASA Astrophysics Data System (ADS)

    Hezel, Bernd; Broschkowski, Ephraim; Kropp, Jürgen

    2013-04-01

    The Climate Media Factory originates from an interdisciplinary media lab run by the Film and Television University "Konrad Wolf" Potsdam-Babelsberg (HFF) and the Potsdam Institute for Climate Impact Research (PIK). Climate scientists, authors, producers and media scholars work together to develop media products on climate change and sustainability. We strive towards communicating scientific content via different media platforms, reconciling the communication needs of scientists and the audience's need to understand the complexity of topics that are relevant in their everyday life. By presenting four audio-visual examples that have been designed for very different target groups, we show (i) the interdisciplinary challenges during the production process and the lessons learnt and (ii) possibilities for reaching the required degree of simplification without dumbing down the content. "We know enough about climate change" is a short animated film produced for the German Agency for International Cooperation (GIZ) for training programs and conferences on adaptation in target countries including Indonesia, Tunisia and Mexico. "Earthbook" is a short animation produced for "The Year of Science" to raise awareness of sustainability among digital natives. "What is Climate Engineering?", produced for the Institute for Advanced Sustainability Studies (IASS), is meant for an informed and interested public. "Wimmelwelt Energie!" is a prototype of an iPad application for children aged 4-6, designed to help them learn about different forms of energy and related greenhouse gas emissions.

  11. Postdictive modulation of visual orientation.

    PubMed

    Kawabe, Takahiro

    2012-01-01

    The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation is still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process.

  12. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli

    PubMed Central

    Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.

    2009-01-01

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778

  13. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli.

    PubMed

    Störmer, Viola S; McDonald, John J; Hillyard, Steven A

    2009-12-29

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex.

  14. Capturing Visual Metaphors and Tales: Innovative or Elusive?

    ERIC Educational Resources Information Center

    Elliot, Dely Lazarte; Reid, Kate; Baumfield, Vivienne

    2017-01-01

    Despite the exponential growth of visual research in the social sciences in the last three decades, continuing empirical enquiries are arguably more relevant than ever. Earlier research employed visual methods primarily to investigate distinct cultural practices, often seeking the views of marginalized, challenging or hard-to-reach participants.…

  15. Visual Search in Typically Developing Toddlers and Toddlers with Fragile X or Williams Syndrome

    ERIC Educational Resources Information Center

    Scerif, Gaia; Cornish, Kim; Wilding, John; Driver, Jon; Karmiloff-Smith, Annette

    2004-01-01

    Visual selective attention is the ability to attend to relevant visual information and ignore irrelevant stimuli. Little is known about its typical and atypical development in early childhood. Experiment 1 investigates typically developing toddlers' visual search for multiple targets on a touch-screen. Time to hit a target, distance between…

  16. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    PubMed Central

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
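
    The Bayesian inference model referred to above is described here only at the level of a prior expectation that the targets share a location. Its core behavior can be sketched as a standard causal-inference computation; the Gaussian noise widths and default prior below are illustrative guesses, not the paper's fitted parameters:

```python
import math

def posterior_common_source(disparity_deg, prior_common=0.5,
                            sigma_a=8.0, sigma_v=2.0, sigma_prior=20.0):
    """Posterior probability that auditory and visual cues share one source.

    Under a common source, the observed audio-visual disparity is Gaussian
    with combined sensory noise; under independent sources, its spread is
    further widened by the spatial prior. All sigmas are in degrees and
    purely illustrative.
    """
    var_common = sigma_a ** 2 + sigma_v ** 2
    var_indep = sigma_a ** 2 + sigma_v ** 2 + 2 * sigma_prior ** 2
    like_common = (math.exp(-disparity_deg ** 2 / (2 * var_common))
                   / math.sqrt(var_common))
    like_indep = (math.exp(-disparity_deg ** 2 / (2 * var_indep))
                  / math.sqrt(var_indep))
    joint_common = prior_common * like_common
    return joint_common / (joint_common + (1 - prior_common) * like_indep)
```

    In this sketch, capture becomes unlikely as disparity grows, and lowering `prior_common`, as the model suggests subjects did in the auditory localization task, narrows the range of disparities over which capture remains likely.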

  17. Discrimination of curvature from motion during smooth pursuit eye movements and fixation.

    PubMed

    Ross, Nicholas M; Goettker, Alexander; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R

    2017-09-01

    Smooth pursuit and motion perception have mainly been investigated with stimuli moving along linear trajectories. Here we studied the quality of pursuit movements to curved motion trajectories in human observers and examined whether the pursuit responses would be sensitive enough to discriminate various degrees of curvature. In a two-interval forced-choice task subjects pursued a Gaussian blob moving along a curved trajectory and then indicated in which interval the curve was flatter. We also measured discrimination thresholds for the same curvatures during fixation. Motion curvature had some specific effects on smooth pursuit properties: trajectories with larger amounts of curvature elicited lower open-loop acceleration, lower pursuit gain, and larger catch-up saccades compared with less curved trajectories. Initially, target motion curvatures were underestimated; however, ∼300 ms after pursuit onset pursuit responses closely matched the actual curved trajectory. We calculated perceptual thresholds for curvature discrimination, which were on the order of 1.5 degrees of visual angle (°) for a 7.9° curvature standard. Oculometric sensitivity to curvature discrimination based on the whole pursuit trajectory was quite similar to perceptual performance. Oculometric thresholds based on smaller time windows were higher. Thus smooth pursuit can quite accurately follow moving targets with curved trajectories, but temporal integration over longer periods is necessary to reach perceptual thresholds for curvature discrimination. NEW & NOTEWORTHY Even though motion trajectories in the real world are frequently curved, most studies of smooth pursuit and motion perception have investigated linear motion. We show that pursuit initially underestimates the curvature of target motion and is able to reproduce the target curvature ∼300 ms after pursuit onset. Temporal integration of target motion over longer periods is necessary for pursuit to reach the level of precision found in perceptual discrimination of curvature. Copyright © 2017 the American Physiological Society.

  18. High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search

    ERIC Educational Resources Information Center

    Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

    2010-01-01

    Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

  19. Air-To-Air Visual Target Acquisition Pilot Interview Survey.

    DTIC Science & Technology

    1979-01-01

    ’top’ 5 pilots in air-to-air visual target acquisition in your squadron," would/could you do it? yes no Comment: 2. Is the term "acquisition" as...meaningful as "spotting" and "seeing" in the context of visually detecting a "bogey" or another aircraft? yes no Comment: 3. Would/could you rank all...squadron pilots on the basis of their visual target acquisition capability? yes no Comment: 4. Is there a minimum number of observations required for

  20. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search.

    PubMed

    Hout, Michael C; Goldinger, Stephen D

    2015-01-01

    When people look for things in the environment, they use target templates-mental representations of the objects they are attempting to locate-to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.

  1. Effect of visual target blurring on accommodation under distance viewing

    NASA Astrophysics Data System (ADS)

    Iwata, Yo; Handa, Tomoya; Ishikawa, Hitoshi

    2018-04-01

    Purpose: To examine the effect of visual target blurring on accommodation. Methods: We evaluated objective refraction values when the visual target (an asterisk; 8°) was changed from a state without Gaussian blur (15 s) to a state with Gaussian blur [0 (without blur) → 10, 0 → 50, 0 → 100; 15 s each]. Results: With Gaussian blur 10, the refraction value did not change significantly when blurring of the target occurred. With Gaussian blur 50 and 100, the refraction value became significantly myopic when blurring of the target occurred. Conclusion: Blurring of a distant visual target causes accommodation to intervene.

  2. Light Video Game Play is Associated with Enhanced Visual Processing of Rapid Serial Visual Presentation Targets.

    PubMed

    Howard, Christina J; Wilding, Robert; Guest, Duncan

    2017-02-01

    There is mixed evidence that video game players (VGPs) may demonstrate better performance in perceptual and attentional tasks than non-VGPs (NVGPs). The rapid serial visual presentation task is one such case, where observers respond to two successive targets embedded within a stream of serially presented items. We tested light VGPs (LVGPs) and NVGPs on this task. LVGPs were better at correct identification of second targets whether or not they were also attempting to respond to the first target. This performance benefit for LVGPs suggests enhanced visual processing of briefly presented stimuli even with only very moderate game play. Observers were less accurate at discriminating the orientation of a second target within the stream if it occurred shortly after presentation of the first target; that is to say, they were subject to the attentional blink (AB). We found no evidence for any reduction of the AB in LVGPs compared with NVGPs.

  3. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search

    PubMed Central

    Hout, Michael C.; Goldinger, Stephen D.

    2014-01-01

    When people look for things in the environment, they use target templates—mental representations of the objects they are attempting to locate—to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers’ templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search. PMID:25214306

  4. Teleoperation of steerable flexible needles by combining kinesthetic and vibratory feedback.

    PubMed

    Pacchierotti, Claudio; Abayazid, Momen; Misra, Sarthak; Prattichizzo, Domenico

    2014-01-01

    Needle insertion in soft tissue is a minimally invasive surgical procedure that demands high accuracy. In this respect, robotic systems with autonomous control algorithms have been exploited as the main tool to achieve high accuracy and reliability. However, for reasons of safety and responsibility, autonomous robotic control is often not desirable. It is therefore necessary to focus also on techniques enabling clinicians to directly control the motion of the surgical tools. In this work, we address that challenge and present a novel teleoperated robotic system able to steer flexible needles. The proposed system tracks the position of the needle using an ultrasound imaging system and computes the needle's ideal position and orientation to reach a given target. The master haptic interface then provides the clinician with mixed kinesthetic-vibratory navigation cues to guide the needle toward the computed ideal position and orientation. Twenty participants carried out an experiment of teleoperated needle insertion into a soft-tissue phantom under four different experimental conditions. Participants were provided with either mixed kinesthetic-vibratory feedback or mixed kinesthetic-visual feedback. Moreover, we considered two different ways of computing the ideal position and orientation of the needle: with or without set-points. Vibratory feedback was found to be more effective than visual feedback in conveying navigation cues, with a mean targeting error of 0.72 mm when using set-points and 1.10 mm without set-points.

  5. A stereotaxic method of recording from single neurons in the intact in vivo eye of the cat.

    PubMed

    Molenaar, J; Van de Grind, W A

    1980-04-01

    A method is described for recording stereotaxically from single retinal neurons in the optically intact in vivo eye of the cat. The method is implemented with the help of a new type of stereotaxic instrument and a specially developed stereotaxic atlas of the cat's eye and retina. The instrument is extremely stable and facilitates intracellular recording from retinal neurons. The microelectrode can be rotated about two mutually perpendicular axes, which intersect in the freely positionable pivot point of the electrode manipulation system. When the pivot point is made to coincide with a small electrode-entrance hole in the sclera of the eye, a large retinal region can be reached through this fixed hole in the immobilized eye. The stereotaxic method makes it possible to choose a target point on the presented eye atlas and predict the settings of the instrument necessary to reach this target. This method also includes the prediction of the corresponding light stimulus position on a tangent screen and the calculation of the projection of the recording electrode on this screen. The sources of error in the method were studied experimentally and a numerical perturbation analysis was carried out to study the influence of each of the sources of error on the final result. The overall accuracy of the method is of the order of 5 degrees of visual angle, which will be sufficient for most purposes.

  6. Faster but Less Careful Prehension in Presence of High, Rather than Low, Social Status Attendees

    PubMed Central

    Rigutti, Sara; Piccoli, Valentina; Sommacal, Elena; Carnaghi, Andrea

    2016-01-01

    Ample evidence attests that social intention, elicited through gestures explicitly signaling a request of communicative intention, affects the patterning of hand movement kinematics. The current study goes beyond the effect of social intention and addresses whether the same action of reaching to grasp an object for placing it in an end target position, within or without a monitoring attendee’s peripersonal space, can be moulded by pure social factors in general, and by social facilitation in particular. A motion tracking system (Optotrak Certus) was used to record motor acts. We carefully avoided the usage of communicative intention by keeping constant both the visual information and the positional uncertainty of the end target position, while we systematically varied the social status of the attendee (high or low) in separated blocks. Only thirty acts performed in the presence of a different social status attendee revealed a significant change of kinematic parameterization of hand movement, independently of the attendee's distance. The amplitude of peak velocity reached by the hand during the reach-to-grasp and the lift-to-place phases of the movement was larger in the high than in the low social status condition. By contrast, the deceleration time of the reach-to-grasp phase and the maximum grasp aperture were smaller in the high than in the low social status condition. These results indicated that the hand movement was faster but less carefully shaped in the presence of a high, but not of a low, social status attendee. This kinematic patterning suggests that being monitored by a high rather than a low social status attendee might lead participants to experience evaluation apprehension that informs the control of motor execution. Motor execution would rely more on feedforward motor control in the presence of a high social status human attendee vs. feedback motor control in the presence of a low social status attendee. PMID:27351978

  7. Faster but Less Careful Prehension in Presence of High, Rather than Low, Social Status Attendees.

    PubMed

    Fantoni, Carlo; Rigutti, Sara; Piccoli, Valentina; Sommacal, Elena; Carnaghi, Andrea

    2016-01-01

    Ample evidence attests that social intention, elicited through gestures explicitly signaling a request of communicative intention, affects the patterning of hand movement kinematics. The current study goes beyond the effect of social intention and addresses whether the same action of reaching to grasp an object for placing it in an end target position, within or without a monitoring attendee's peripersonal space, can be moulded by pure social factors in general, and by social facilitation in particular. A motion tracking system (Optotrak Certus) was used to record motor acts. We carefully avoided the usage of communicative intention by keeping constant both the visual information and the positional uncertainty of the end target position, while we systematically varied the social status of the attendee (high or low) in separated blocks. Only thirty acts performed in the presence of a different social status attendee revealed a significant change of kinematic parameterization of hand movement, independently of the attendee's distance. The amplitude of peak velocity reached by the hand during the reach-to-grasp and the lift-to-place phases of the movement was larger in the high than in the low social status condition. By contrast, the deceleration time of the reach-to-grasp phase and the maximum grasp aperture were smaller in the high than in the low social status condition. These results indicated that the hand movement was faster but less carefully shaped in the presence of a high, but not of a low, social status attendee. This kinematic patterning suggests that being monitored by a high rather than a low social status attendee might lead participants to experience evaluation apprehension that informs the control of motor execution. Motor execution would rely more on feedforward motor control in the presence of a high social status human attendee vs. feedback motor control in the presence of a low social status attendee.

  8. [Social marketing to increase the rate of cataract surgery in the Sava region of Madagascar].

    PubMed

    Nkumbe, H E; Razafinimpanana, N; Rakotondrajoa, L P

    2013-01-01

    Lack of information is one of the main reasons why people who are visually impaired or blind as a result of cataracts do not visit eye care centers for surgery that can restore their sight. This study was conducted to determine the best ways to inform the main target groups about the possibility of restoring sight to those whose visual impairment and blindness is due to cataracts and about outreach visits by the mobile eye clinic of FLM SALFA, Sambava, in the Sava region of Madagascar from November 2008 through October 2009. Two community eye health workers conducted awareness campaigns and delivered posters to radio stations, religious leaders, and administrative authorities of the 17 most populated municipalities in the region of Sava, two weeks before these visits. All participants who visited the mobile clinic were interviewed, and the ophthalmologist's diagnosis was noted on the questionnaire. Women accounted for 51.5% of the 955 participants. Radio was the most effective means of communication in the region overall, and specifically for reaching men (P=0.044); churches were more successful for reaching women (P = 0.000). Cataract was diagnosed in 16.2% of men and 8.1% of women (p = 0.0001). To significantly increase the number of people, especially women, having cataract surgery in the Sava region, it is essential to work closely with the leaders of all religious groups, as well as with radio stations.

  9. Perceptual learning improves contrast sensitivity, visual acuity, and foveal crowding in amblyopia.

    PubMed

    Barollo, Michele; Contemori, Giulio; Battaglini, Luca; Pavan, Andrea; Casco, Clara

    2017-01-01

    Amblyopic observers present abnormal spatial interactions between a low-contrast sinusoidal target and high-contrast collinear flankers. It has been demonstrated that perceptual learning (PL) can modulate these low-level lateral interactions, resulting in improved visual acuity and contrast sensitivity. We measured the extent and duration of generalization effects to various spatial tasks (i.e., visual acuity, Vernier acuity, and foveal crowding) through PL on the target's contrast detection. Amblyopic observers were trained on a contrast-detection task for a central target (i.e., a Gabor patch) flanked above and below by two high-contrast Gabor patches. The pre- and post-learning tasks included lateral interactions at different target-to-flankers separations (i.e., 2, 3, 4, 8λ) and included a range of spatial frequencies and stimulus durations as well as visual acuity, Vernier acuity, contrast-sensitivity function, and foveal crowding. The results showed that perceptual training reduced the target's contrast-detection thresholds more for the longest target-to-flanker separation (i.e., 8λ). We also found generalization of PL to different stimuli and tasks: contrast sensitivity for both trained and untrained spatial frequencies, visual acuity for Sloan letters, and foveal crowding, and partially for Vernier acuity. Follow-ups after 5-7 months showed not only complete maintenance of PL effects on visual acuity and contrast sensitivity function but also further improvement in these tasks. These results suggest that PL improves facilitatory lateral interactions in amblyopic observers, which usually extend over larger separations than in typical foveal vision. The improvement in these basic visual spatial operations leads to a more efficient capability of performing spatial tasks involving high levels of visual processing, possibly due to the refinement of bottom-up and top-down networks of visual areas.

  10. The Effects of Spatial Endogenous Pre-cueing across Eccentricities

    PubMed Central

    Feng, Jing; Spence, Ian

    2017-01-01

    Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored beyond an eccentricity of 20°. Given that the visual field has functional subdivisions, and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information about targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants’ ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing when trials with the same size cue were blocked rather than mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. We summarize our findings on endogenous pre-cueing effects across a large visual area using a simple model that conceptualizes the modifications of the distribution of attention over the visual field.
We discuss our findings in light of the cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field. PMID:28638353

  11. The Effects of Spatial Endogenous Pre-cueing across Eccentricities.

    PubMed

    Feng, Jing; Spence, Ian

    2017-01-01

    Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, pre-cueing effects on attention, particularly endogenous pre-cueing effects, have been relatively little explored beyond an eccentricity of 20°. Given that the visual field has functional subdivisions, and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information about targets influence our allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the location of the target shapes the distribution of attention across eccentricities in the visual field. We measured participants' ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing when trials with the same size cue were blocked rather than mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. We summarize our findings on endogenous pre-cueing effects across a large visual area using a simple model that conceptualizes the modifications of the distribution of attention over the visual field.
We discuss our findings in light of the cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field.

  12. MutScan: fast detection and visualization of target mutations by scanning FASTQ data.

    PubMed

    Chen, Shifu; Huang, Tanxiao; Wen, Tiexiang; Li, Hong; Xu, Mingyan; Gu, Jia

    2018-01-22

    Some types of clinical genetic tests, such as cancer testing using circulating tumor DNA (ctDNA), require sensitive detection of known target mutations. However, conventional next-generation sequencing (NGS) data analysis pipelines typically involve different steps of filtering, which may cause missed detection of low-frequency key mutations. Variant validation is also indicated for key mutations detected by bioinformatics pipelines. Typically, this process can be executed using alignment visualization tools such as IGV or GenomeBrowse. However, these tools are too heavyweight and therefore unsuitable for validating mutations in ultra-deep sequencing data. We developed MutScan to address the problems of sensitive detection and efficient validation of target mutations. MutScan involves highly optimized string-searching algorithms, which can scan input FASTQ files to grab all reads that support target mutations. The collected supporting reads for each target mutation are piled up and visualized using web technologies such as HTML and JavaScript. Algorithms such as rolling hash and bloom filter are applied to accelerate scanning, making MutScan very fast at detecting and visualizing target mutations. MutScan is a tool for the detection and visualization of target mutations by directly scanning FASTQ raw data. Compared to conventional pipelines, this offers very high performance, executing about 20 times faster, and maximal sensitivity, since it can grab mutations with even a single supporting read. MutScan visualizes detected mutations by generating interactive pile-ups using web technologies. These can serve to validate target mutations, thus avoiding false positives. Furthermore, MutScan can visualize all mutation records in a VCF file as HTML pages for cloud-friendly VCF validation. MutScan is an open source tool available at GitHub: https://github.com/OpenGene/MutScan.
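
    The rolling-hash scan described above can be sketched with a toy Rabin-Karp search for a target mutation sequence within a read. This is purely illustrative: MutScan's real implementation is in C++ and also uses a bloom filter to skip non-matching reads cheaply, and all names and sequences below are invented.

```python
from typing import List

BASE = 4
ENC = {"A": 0, "C": 1, "G": 2, "T": 3}
MOD = (1 << 61) - 1  # large Mersenne prime modulus


def _h(seq: str) -> int:
    """Polynomial hash of a DNA string."""
    v = 0
    for c in seq:
        v = (v * BASE + ENC.get(c, 0)) % MOD
    return v


def rolling_hash_hits(read: str, target: str) -> List[int]:
    """Start positions where `target` occurs in `read`; every hash match
    is verified by direct string comparison to rule out collisions."""
    k = len(target)
    if k == 0 or len(read) < k:
        return []
    high = pow(BASE, k - 1, MOD)  # weight of the base leaving the window
    target_h = _h(target)
    win_h = _h(read[:k])
    hits = []
    for i in range(len(read) - k + 1):
        if win_h == target_h and read[i:i + k] == target:
            hits.append(i)
        if i + k < len(read):
            # slide the window: drop read[i], append read[i + k]
            win_h = ((win_h - ENC.get(read[i], 0) * high) * BASE
                     + ENC.get(read[i + k], 0)) % MOD
    return hits
```

    Updating the window hash in O(1) per position, rather than rehashing each k-mer, is what makes scanning whole FASTQ files tractable.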

  13. Effects of Alzheimer’s Disease on Visual Target Detection: A “Peripheral Bias”

    PubMed Central

    Vallejo, Vanessa; Cazzoli, Dario; Rampa, Luca; Zito, Giuseppe A.; Feuerstein, Flurin; Gruber, Nicole; Müri, René M.; Mosimann, Urs P.; Nef, Tobias

    2016-01-01

    Visual exploration is an omnipresent activity in everyday life, and might represent an important determinant of visual attention deficits in patients with Alzheimer’s Disease (AD). The present study aimed at investigating visual search performance in AD patients, in particular target detection in the far periphery, in daily living scenes. Eighteen AD patients and 20 healthy controls participated in the study. They were asked to freely explore a hemispherical screen, covering ±90°, and to respond to targets presented at 10°, 30°, and 50° eccentricity, while their eye movements were recorded. Compared to healthy controls, AD patients recognized fewer targets appearing in the center. No difference was found in target detection in the periphery. This pattern was confirmed by the fixation distribution analysis. These results show a neglect of the central part of the visual field in AD patients and provide new insights by means of a search task involving a larger field of view. PMID:27582704

  14. Effects of Alzheimer's Disease on Visual Target Detection: A "Peripheral Bias".

    PubMed

    Vallejo, Vanessa; Cazzoli, Dario; Rampa, Luca; Zito, Giuseppe A; Feuerstein, Flurin; Gruber, Nicole; Müri, René M; Mosimann, Urs P; Nef, Tobias

    2016-01-01

    Visual exploration is an omnipresent activity in everyday life, and might represent an important determinant of visual attention deficits in patients with Alzheimer's Disease (AD). The present study aimed at investigating visual search performance in AD patients, in particular target detection in the far periphery, in daily living scenes. Eighteen AD patients and 20 healthy controls participated in the study. They were asked to freely explore a hemispherical screen, covering ±90°, and to respond to targets presented at 10°, 30°, and 50° eccentricity, while their eye movements were recorded. Compared to healthy controls, AD patients recognized fewer targets appearing in the center. No difference was found in target detection in the periphery. This pattern was confirmed by the fixation distribution analysis. These results show a neglect of the central part of the visual field in AD patients and provide new insights by means of a search task involving a larger field of view.

  15. Visualizing Trumps Vision in Training Attention.

    PubMed

    Reinhart, Robert M G; McClenahan, Laura J; Woodman, Geoffrey F

    2015-07-01

    Mental imagery can have powerful training effects on behavior, but how this occurs is not well understood. Here we show that even a single instance of mental imagery can improve attentional selection of a target more effectively than actually practicing visual search. By recording subjects' brain activity, we found that these imagery-induced training effects were due to perceptual attention being more effectively focused on targets following imagined training. Next, we examined the downside of this potent training by changing the target after several trials of training attention with imagery and found that imagined search resulted in more potent interference than actual practice following these target changes. Finally, we found that proactive interference from task-irrelevant elements in the visual displays appears to underlie the superiority of imagined training relative to actual practice. Our findings demonstrate that visual attention mechanisms can be effectively trained to select target objects in the absence of visual input, and this results in more effective control of attention than practicing the task itself. © The Author(s) 2015.

  16. Progress in the Visualization and Mining of Chemical and Target Spaces.

    PubMed

    Medina-Franco, José L; Aguayo-Ortiz, Rodrigo

    2013-12-01

    Chemogenomics is a growing field that aims to integrate the chemical and target spaces. As part of a multi-disciplinary effort to achieve this goal, computational methods initially developed to visualize the chemical space of compound collections and mine single-target structure-activity relationships are being adapted to visualize and mine complex relationships in chemogenomics data sets. Similarly, the growing evidence that clinical effects are often due to the interaction of single or multiple drugs with multiple targets is encouraging the development of novel methodologies that are integrated in multi-target drug discovery endeavors. Herein we review advances in the development and application of approaches to generate visual representations of chemical space, with particular emphasis on methods that aim to explore and uncover relationships between chemical and target spaces. Also, progress in the data mining of the structure-activity relationships of sets of compounds screened across multiple targets is discussed in light of the concept of activity landscape modeling. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
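
    The core computation, target-modulated drive divided by pooled local activity, can be sketched in a few lines. This is a toy illustration of divisive normalization, not the authors' model; the array shapes and values are invented.

```python
import numpy as np


def priority_map(feature_map: np.ndarray, target: np.ndarray,
                 eps: float = 1e-6) -> np.ndarray:
    """Toy priority map: top-down, target-specific drive divided by a
    local divisive pool. feature_map has shape (H, W, F); target (F,)."""
    drive = feature_map @ target            # correlation with target features
    pool = feature_map.sum(axis=-1) + eps   # pooled local activity
    return drive / pool


def next_fixation(pmap: np.ndarray):
    """Locus of attention = maximum of the priority map."""
    return np.unravel_index(np.argmax(pmap), pmap.shape)


# A highly salient but target-unlike location loses to a weaker,
# target-matching location once responses are normalized.
fm = np.zeros((2, 2, 2))
fm[0, 1] = [10.0, 10.0]   # salient, but a poor match to the target
fm[1, 0] = [5.0, 0.0]     # weaker, but matches the target template
tgt = np.array([1.0, 0.0])
```

    Without the division by the pool, the salient location at (0, 1) would win; with it, attention goes to the target-like location, which is the role the abstract assigns to normalization.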

  18. Social Beliefs and Visual Attention: How the Social Relevance of a Cue Influences Spatial Orienting.

    PubMed

    Gobel, Matthias S; Tufft, Miles R A; Richardson, Daniel C

    2018-05-01

    We are highly tuned to each other's visual attention. Perceiving the eye or hand movements of another person can influence the timing of a saccade or the reach of our own. However, the explanation for such spatial orienting in interpersonal contexts remains disputed. Is it due to the social appearance of the cue (a hand or an eye), or due to its social relevance (a cue that is connected to another person with attentional and intentional states)? We developed an interpersonal version of the Posner spatial cueing paradigm. Participants saw a cue and detected a target at the same or a different location, while interacting with an unseen partner. Participants were led to believe that the cue was either connected to the gaze location of their partner or was generated randomly by a computer (Experiment 1), and that their partner had higher or lower social rank while engaged in the same or a different task (Experiment 2). We found that spatial cue-target compatibility effects were greater when the cue related to a partner's gaze. This effect was amplified by the partner's social rank, but only when participants believed their partner was engaged in the same task. Taken together, this is strong evidence in support of the idea that spatial orienting is interpersonally attuned to the social relevance of the cue (whether the cue is connected to another person, who this person is, and what this person is doing) and does not exclusively rely on the social appearance of the cue. Visual attention is not only guided by the physical salience of one's environment but also by the mental representation of its social relevance. © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
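
    In Posner-style paradigms, the cue-target compatibility (validity) effect is typically quantified as the mean reaction-time difference between invalid and valid trials. A minimal sketch, with an invented trial format and invented RT values:

```python
from statistics import mean


def cueing_effect(trials):
    """Validity effect: mean RT (ms) on invalid trials (cued side differs
    from target side) minus mean RT on valid trials. A positive value
    indicates a cueing benefit. Each trial is a hypothetical
    (cued_side, target_side, rt_ms) tuple."""
    valid = [rt for cue, tgt, rt in trials if cue == tgt]
    invalid = [rt for cue, tgt, rt in trials if cue != tgt]
    return mean(invalid) - mean(valid)


trials = [("L", "L", 420), ("R", "R", 430),   # valid: cue matched target
          ("L", "R", 500), ("R", "L", 510)]   # invalid: cue mismatched
```

    The study's finding that compatibility effects were "greater" under the gaze belief corresponds to a larger value of this difference score in that condition.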

  19. Spatiotemporal gait changes with use of an arm swing cueing device in people with Parkinson's disease.

    PubMed

    Thompson, Elizabeth; Agada, Peter; Wright, W Geoffrey; Reimann, Hendrik; Jeka, John

    2017-10-01

    Impaired arm swing is a common motor symptom of Parkinson's disease (PD), and correlates with other gait impairments and increased risk of falls. Studies suggest that arm swing is not merely a passive consequence of trunk rotation during walking, but an active component of gait. Thus, techniques to enhance arm swing may improve gait characteristics. There is currently no portable device to measure arm swing and deliver immediate cues for larger movement. Here we report pilot testing of such a device, ArmSense (patented), using a crossover repeated-measures design. Twelve people with PD walked in a video-recorded gym space at self-selected comfortable and fast speeds. After baseline, cues were given either visually using taped targets on the floor to increase step length or through vibrations at the wrist using ArmSense to increase arm swing amplitude. Uncued walking then followed, to assess retention. Subjects successfully reached cueing targets on >95% of steps. At a comfortable pace, step length increased during both visual cueing and ArmSense cueing. However, we observed increased medial-lateral trunk sway with visual cueing, possibly suggesting decreased gait stability. In contrast, no statistically significant changes in trunk sway were observed with ArmSense cues compared to baseline walking. At a fast pace, changes in gait parameters were less systematic. Even though ArmSense cues only specified changes in arm swing amplitude, we observed changes in multiple gait parameters, reflecting the active role arm swing plays in gait and suggesting a new therapeutic path to improve mobility in people with PD. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Can Visual Illusions Be Used to Facilitate Sport Skill Learning?

    PubMed

    Cañal-Bruland, Rouwen; van der Meer, Yor; Moerman, Jelle

    2016-01-01

    Recently it has been reported that practicing putting with visual illusions that make the hole appear larger than it actually is leads to longer-lasting performance improvements. Interestingly, from a motor control and learning perspective, one might actually predict the opposite, as facing a smaller-appearing target should compel performers to be more precise. To test this idea the authors invited participants to practice an aiming task (i.e., a marble-shooting task) with either a visual illusion that made the target appear larger or a visual illusion that made the target appear smaller. They applied a pre-post test design, included a control group training without any illusory effects, and increased the amount of practice to 450 trials. In contrast to earlier reports, the results revealed that the group that trained with the visual illusion that made the target look smaller improved performance from pre- to posttest, whereas the group practicing with visual illusions that made the target appear larger did not show any improvements. Notably, the control group also improved from pre- to posttest. The authors conclude that more research is needed to improve our understanding of whether and how visual illusions may be useful training tools for sport skill learning.

  1. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  2. Towards a high sensitivity small animal PET system based on CZT detectors (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Abbaszadeh, Shiva; Levin, Craig

    2017-03-01

    Small animal positron emission tomography (PET) is a biological imaging technology that allows non-invasive interrogation of internal molecular and cellular processes and mechanisms of disease. New PET molecular probes with high specificity are under development to target, detect, visualize, and quantify subtle molecular and cellular processes associated with cancer, heart disease, and neurological disorders. However, the limited uptake of these targeted probes leads to significant reduction in signal. There is a need to advance the performance of small animal PET system technology to reach its full potential for molecular imaging. Our goal is to assemble a small animal PET system based on CZT detectors and to explore methods to enhance its photon sensitivity. In this work, we reconstruct an image from a phantom using a two-panel subsystem consisting of six CZT crystals in each panel. For image reconstruction, coincidence events with energy between 450 and 570 keV were included. We are developing an algorithm to improve sensitivity of the system by including multiple interaction events.

  3. Insect Detection of Small Targets Moving in Visual Clutter

    PubMed Central

    Barnett, Paul D; O'Carroll, David C

    2006-01-01

    Detection of targets that move within visual clutter is a common task for animals searching for prey or conspecifics, a task made even more difficult when a moving pursuer needs to analyze targets against the motion of background texture (clutter). Despite the limited optical acuity of the compound eye of insects, this challenging task seems to have been solved by their tiny visual system. Here we describe neurons found in the male hoverfly, Eristalis tenax, that respond selectively to small moving targets. Although many of these target neurons are inhibited by the motion of a background pattern, others respond to target motion within the receptive field under a surprisingly large range of background motion stimuli. Some neurons respond whether or not there is a speed differential between target and background. Analysis of responses to very small targets (smaller than the size of the visual field of single photoreceptors) or those targets with reduced contrast shows that these neurons have extraordinarily high contrast sensitivity. Our data suggest that rejection of background motion may result from extreme selectivity for small targets contrasting against local patches of the background, combined with this high sensitivity, such that background patterns rarely contain features that satisfactorily drive the neuron. PMID:16448249

  4. Dynamic dominance varies with handedness: reduced interlimb asymmetries in left-handers

    PubMed Central

    Przybyla, Andrzej; Good, David C.; Sainburg, Robert L.

    2013-01-01

    Our previous studies of interlimb asymmetries during reaching movements have given rise to the dynamic-dominance hypothesis of motor lateralization. This hypothesis proposes that dominant arm control has become optimized for efficient intersegmental coordination, which is often associated with straight and smooth hand-paths, while non-dominant arm control has become optimized for controlling steady-state posture, which has been associated with greater final position accuracy when movements are mechanically perturbed, and often during movements made in the absence of visual feedback. The basis for this model of motor lateralization was derived from studies conducted in right-handed subjects. We now ask whether left-handers show similar proficiencies in coordinating reaching movements. We recruited right- and left-handers (20 per group) to perform reaching movements to three targets, in which intersegmental coordination requirements varied systematically. Our results showed that the dominant arm of both left- and right-handers were well coordinated, as reflected by fairly straight hand-paths and low errors in initial direction. Consistent with our previous studies, the non-dominant arm of right-handers showed substantially greater curvature and large errors in initial direction, most notably to targets that elicited higher intersegmental interactions. While the right, non-dominant, hand-paths of left-handers were slightly more curved than those of the dominant arm, they were also substantially more accurate and better coordinated than the non-dominant arm of right-handers. Our results indicate a similar pattern, but reduced lateralization for intersegmental coordination in left-handers. These findings suggest that left-handers develop more coordinated control of their non-dominant arms than right-handers, possibly due to environmental pressure for right-handed manipulations. PMID:22113487

  5. Dynamic dominance varies with handedness: reduced interlimb asymmetries in left-handers.

    PubMed

    Przybyla, Andrzej; Good, David C; Sainburg, Robert L

    2012-02-01

    Our previous studies of interlimb asymmetries during reaching movements have given rise to the dynamic-dominance hypothesis of motor lateralization. This hypothesis proposes that dominant arm control has become optimized for efficient intersegmental coordination, which is often associated with straight and smooth hand-paths, while non-dominant arm control has become optimized for controlling steady-state posture, which has been associated with greater final position accuracy when movements are mechanically perturbed, and often during movements made in the absence of visual feedback. The basis for this model of motor lateralization was derived from studies conducted in right-handed subjects. We now ask whether left-handers show similar proficiencies in coordinating reaching movements. We recruited right- and left-handers (20 per group) to perform reaching movements to three targets, in which intersegmental coordination requirements varied systematically. Our results showed that the dominant arm of both left- and right-handers were well coordinated, as reflected by fairly straight hand-paths and low errors in initial direction. Consistent with our previous studies, the non-dominant arm of right-handers showed substantially greater curvature and large errors in initial direction, most notably to targets that elicited higher intersegmental interactions. While the right, non-dominant, hand-paths of left-handers were slightly more curved than those of the dominant arm, they were also substantially more accurate and better coordinated than the non-dominant arm of right-handers. Our results indicate a similar pattern, but reduced lateralization for intersegmental coordination in left-handers. These findings suggest that left-handers develop more coordinated control of their non-dominant arms than right-handers, possibly due to environmental pressure for right-handed manipulations.

  6. Linking express saccade occurrence to stimulus properties and sensorimotor integration in the superior colliculus.

    PubMed

    Marino, Robert A; Levy, Ron; Munoz, Douglas P

    2015-08-01

    Express saccades represent the fastest possible eye movements to visual targets with reaction times that approach minimum sensory-motor conduction delays. Previous work in monkeys has identified two specific neural signals in the superior colliculus (SC: a midbrain sensorimotor integration structure involved in gaze control) that are required to execute express saccades: 1) previsual activity consisting of a low-frequency increase in action potentials in sensory-motor neurons immediately before the arrival of a visual response; and 2) a transient visual-sensory response consisting of a high-frequency burst of action potentials in visually responsive neurons resulting from the appearance of a visual target stimulus. To better understand how these two neural signals interact to produce express saccades, we manipulated the arrival time and magnitude of visual responses in the SC by altering target luminance and we examined the corresponding influences on SC activity and express saccade generation. We recorded from saccade neurons with visual-, motor-, and previsual-related activity in the SC of monkeys performing the gap saccade task while target luminance was systematically varied between 0.001 and 42.5 cd/m² against a black background (∼0.0001 cd/m²). Our results demonstrated that 1) express saccade latencies were linked directly to the arrival time in the SC of visual responses produced by abruptly appearing visual stimuli; 2) express saccades were generated toward both dim and bright targets whenever sufficient previsual activity was present; and 3) target luminance altered the likelihood of producing an express saccade. When an express saccade was generated, visuomotor neurons increased their activity immediately before the arrival of the visual response in the SC and saccade initiation. 
Furthermore, the visual and motor responses of visuomotor neurons merged into a single burst of action potentials, while the visual response of visual-only neurons was unaffected. A linear combination model was used to test which SC signals best predicted the likelihood of producing an express saccade. In addition to visual response magnitude and previsual activity of saccade neurons, the model identified presaccadic activity (activity occurring during the 30-ms epoch immediately before saccade initiation) as a third important signal for predicting express saccades. We conclude that express saccades can be predicted by visual, previsual, and presaccadic signals recorded from visuomotor neurons in the intermediate layers of the SC. Copyright © 2015 the American Physiological Society.
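
    One plausible form for such a linear combination model is a weighted sum of the three SC signals passed through a logistic link. The weights and intercept below are invented for illustration and are not the fitted values from the study:

```python
import math


def express_saccade_prob(visual, previsual, presaccadic,
                         w=(0.04, 0.03, 0.05), b=-6.0):
    """Hypothetical linear-combination model: probability of an express
    saccade as a logistic function of a weighted sum of the visual,
    previsual, and presaccadic signals (all in spikes/s)."""
    z = w[0] * visual + w[1] * previsual + w[2] * presaccadic + b
    return 1.0 / (1.0 + math.exp(-z))
```

    Under any positive weights, stronger visual, previsual, and presaccadic activity each push the predicted probability upward, which is the qualitative pattern the abstract reports.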

  7. Linking express saccade occurrence to stimulus properties and sensorimotor integration in the superior colliculus

    PubMed Central

    Marino, Robert A.; Levy, Ron; Munoz, Douglas P.

    2015-01-01

    Express saccades represent the fastest possible eye movements to visual targets with reaction times that approach minimum sensory-motor conduction delays. Previous work in monkeys has identified two specific neural signals in the superior colliculus (SC: a midbrain sensorimotor integration structure involved in gaze control) that are required to execute express saccades: 1) previsual activity consisting of a low-frequency increase in action potentials in sensory-motor neurons immediately before the arrival of a visual response; and 2) a transient visual-sensory response consisting of a high-frequency burst of action potentials in visually responsive neurons resulting from the appearance of a visual target stimulus. To better understand how these two neural signals interact to produce express saccades, we manipulated the arrival time and magnitude of visual responses in the SC by altering target luminance and we examined the corresponding influences on SC activity and express saccade generation. We recorded from saccade neurons with visual-, motor-, and previsual-related activity in the SC of monkeys performing the gap saccade task while target luminance was systematically varied between 0.001 and 42.5 cd/m2 against a black background (∼0.0001 cd/m2). Our results demonstrated that 1) express saccade latencies were linked directly to the arrival time in the SC of visual responses produced by abruptly appearing visual stimuli; 2) express saccades were generated toward both dim and bright targets whenever sufficient previsual activity was present; and 3) target luminance altered the likelihood of producing an express saccade. When an express saccade was generated, visuomotor neurons increased their activity immediately before the arrival of the visual response in the SC and saccade initiation. 
Furthermore, the visual and motor responses of visuomotor neurons merged into a single burst of action potentials, while the visual response of visual-only neurons was unaffected. A linear combination model was used to test which SC signals best predicted the likelihood of producing an express saccade. In addition to visual response magnitude and previsual activity of saccade neurons, the model identified presaccadic activity (activity occurring during the 30-ms epoch immediately before saccade initiation) as a third important signal for predicting express saccades. We conclude that express saccades can be predicted by visual, previsual, and presaccadic signals recorded from visuomotor neurons in the intermediate layers of the SC. PMID:26063770
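    The linear combination model mentioned above can be illustrated with a minimal sketch. The function name, the logistic link, and the weight and bias values below are hypothetical placeholders for illustration only; the study fit its own model to recorded neural data.

```python
import math

def express_prob(visual, previsual, presaccadic,
                 weights=(0.8, 0.6, 0.5), bias=-2.0):
    """Probability of an express saccade from a weighted combination of
    three SC signals (visual response magnitude, previsual activity, and
    presaccadic activity), squashed through a logistic link.
    Weights and bias are illustrative, not fitted values."""
    z = bias + sum(w * x for w, x in zip(weights,
                                         (visual, previsual, presaccadic)))
    return 1.0 / (1.0 + math.exp(-z))
```

    Under this sketch, stronger activity on any of the three signals monotonically raises the predicted likelihood of an express saccade, which is the qualitative behavior the abstract describes.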

  8. Ricin A chain reaches the endoplasmic reticulum after endocytosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Qiong; Department of Biochemistry and Molecular Biology, Ningbo University Medical School, Ningbo 315211; Zhan Jinbiao

    Ricin is a potent ribosome-inactivating protein that is now widely used for the synthesis of immunotoxins. To reach ribosomes in the mammalian cytosol, ricin must first undergo retrograde transport through the endomembrane system to the endoplasmic reticulum (ER), where the ricin A chain (RTA) is recognized by ER components that facilitate its membrane translocation to the cytosol. In this study, a fusion gene of enhanced green fluorescent protein (EGFP) and RTA was expressed from the pET-28a(+) vector in Escherichia coli under the control of a T7 promoter. The fusion protein exhibited green fluorescence and was purified by metal-chelate affinity chromatography on an NTA column. A rabbit anti-GFP antibody recognized the EGFP-RTA fusion protein just as it did EGFP alone. The cytotoxicity of EGFP-RTA and RTA was evaluated by MTT assay in HeLa and HEP-G2 cells following fluid-phase endocytosis; the fusion protein was similarly cytotoxic to RTA. After endocytosis, the subcellular location of the fusion protein was observed by laser scanning confocal microscopy and immunogold-labeling electron microscopy. This study provides direct visual evidence that RTA does reach the endoplasmic reticulum.

  9. Control and prediction components of movement planning in stuttering vs. nonstuttering adults

    PubMed Central

    Daliri, Ayoub; Prokopenko, Roman A.; Flanagan, J. Randall; Max, Ludo

    2014-01-01

    Purpose: Stuttering individuals show speech and nonspeech sensorimotor deficiencies. To perform accurate movements, the sensorimotor system needs to generate appropriate control signals and correctly predict their sensory consequences. Using a reaching task, we examined the integrity of these control and prediction components, separately, for movements unrelated to the speech motor system. Method: Nine stuttering and nine nonstuttering adults made fast reaching movements to visual targets while sliding an object under the index finger. To quantify control, we determined initial direction error and end-point error. To quantify prediction, we calculated the correlation between vertical and horizontal forces applied to the object, an index of how well vertical force (preventing slip) anticipated direction-dependent variations in horizontal force (moving the object). Results: Directional and end-point error were significantly larger for the stuttering group. Both groups performed similarly in scaling vertical force with horizontal force. Conclusions: The stuttering group's reduced reaching accuracy suggests limitations in generating control signals for voluntary movements, even for non-orofacial effectors. Typical scaling of vertical force with horizontal force suggests an intact ability to predict the consequences of planned control signals. Stuttering may be associated with generalized deficiencies in planning control signals rather than predicting the consequences of those signals. PMID:25203459
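    The prediction index described above, the correlation between vertical and horizontal force profiles, amounts to a plain Pearson correlation. The sketch below is illustrative; the function name and sample values are assumptions, not data from the study.

```python
import math

def prediction_index(vertical, horizontal):
    """Pearson correlation between vertical (grip) and horizontal (load)
    force samples; values near 1 indicate that vertical force closely
    anticipates variations in horizontal force."""
    n = len(vertical)
    mv = sum(vertical) / n
    mh = sum(horizontal) / n
    cov = sum((v - mv) * (h - mh) for v, h in zip(vertical, horizontal))
    sv = math.sqrt(sum((v - mv) ** 2 for v in vertical))
    sh = math.sqrt(sum((h - mh) ** 2 for h in horizontal))
    return cov / (sv * sh)

# A vertical force that scales linearly with horizontal force correlates
# perfectly, i.e. the index approaches 1.
load = [0.5, 1.0, 1.5, 2.0, 2.5]
grip = [0.4 * f + 0.2 for f in load]
```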

  10. The research and application of visual saliency and adaptive support vector machine in target tracking field.

    PubMed

    Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    Efficient target tracking algorithms have become a focus of research in intelligent robotics. Target tracking by mobile robots must contend with environmental uncertainty: target states are difficult to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion, and other factors all degrade tracking robustness. To improve tracking accuracy and reliability, we present a novel target tracking algorithm based on visual saliency and an adaptive support vector machine (ASVM). The algorithm builds on a mixture of salient image features, including color, brightness, and motion; during execution, these features are combined to express the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm in video sequences where the target objects undergo large changes in pose, scale, and illumination.

  11. Effects of microgravity on vestibular development and function in rats: genetics and environment

    NASA Technical Reports Server (NTRS)

    Ronca, A. E.; Fritzsch, B.; Alberts, J. R.; Bruce, L. L.

    2000-01-01

    Our anatomical and behavioral studies of embryonic rats that developed in microgravity suggest that the vestibular sensory system, like the visual system, has genetically mediated processes of development that establish crude connections between the periphery and the brain. Environmental stimuli also regulate connection formation including terminal branch formation and fine-tuning of synaptic contacts. Axons of vestibular sensory neurons from gravistatic as well as linear acceleration receptors reach their targets in both microgravity and normal gravity, suggesting that this is a genetically regulated component of development. However, microgravity exposure delays the development of terminal branches and synapses in gravistatic but not linear acceleration-sensitive neurons and also produces behavioral changes. These latter changes reflect environmentally controlled processes of development.

  12. Rhythmic Oscillations of Visual Contrast Sensitivity Synchronized with Action

    PubMed Central

    Tomassini, Alice; Spinelli, Donatella; Jacono, Marco; Sandini, Giulio; Morrone, Maria Concetta

    2016-01-01

    It is well known that the motor and the sensory systems structure sensory data collection and cooperate to achieve an efficient integration and exchange of information. Increasing evidence suggests that both motor and sensory functions are regulated by rhythmic processes reflecting alternating states of neuronal excitability, and these may be involved in mediating sensory-motor interactions. Here we show an oscillatory fluctuation in early visual processing time locked with the execution of voluntary action, and, crucially, even for visual stimuli irrelevant to the motor task. Human participants were asked to perform a reaching movement toward a display and judge the orientation of a Gabor patch, near contrast threshold, briefly presented at random times before and during the reaching movement. When the data are temporally aligned to the onset of movement, visual contrast sensitivity oscillates with periodicity within the theta band. Importantly, the oscillations emerge during the motor planning stage, ~500 ms before movement onset. We suggest that brain oscillatory dynamics may mediate an automatic coupling between early motor planning and early visual processing, possibly instrumental in linking and closing up the visual-motor control loop. PMID:25948254

  13. Categorically Defined Targets Trigger Spatiotemporal Visual Attention

    ERIC Educational Resources Information Center

    Wyble, Brad; Bowman, Howard; Potter, Mary C.

    2009-01-01

    Transient attention to a visually salient cue enhances processing of a subsequent target in the same spatial location from 50 to 150 ms after cue onset (K. Nakayama & M. Mackeben, 1989). Do stimuli from a categorically defined target set, such as letters or digits, also generate transient attention? Participants reported digit targets among…

  14. Visually-guided attention enhances target identification in a complex auditory scene.

    PubMed

    Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G

    2007-06-01

    In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.

  15. Visually-guided Attention Enhances Target Identification in a Complex Auditory Scene

    PubMed Central

    Ozmeral, Erol J.; Shinn-Cunningham, Barbara G.

    2007-01-01

    In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors. PMID:17453308

  16. Perceiver as polar planimeter: Direct perception of jumping, reaching, and jump-reaching affordances for the self and others.

    PubMed

    Thomas, Brandon J; Hawkins, Matthew M; Nalepka, Patrick

    2017-03-30

    Runeson (Scandinavian Journal of Psychology 18:172-179, 1977) suggested that the polar planimeter might serve as an informative model system of perceptual mechanism. The key aspect of the polar planimeter is that it registers a higher order property of the environment without computational mediation on the basis of lower order properties, detecting task-specific information only. This aspect was posited as a hypothesis for the perception of jumping and reaching affordances for the self and another person. The findings supported this hypothesis. The perception of reaching while jumping significantly differed from an additive combination of jump-without-reaching and reach-without-jumping perception. The results are consistent with Gibson's theory of information (The senses considered as perceptual systems, Houghton Mifflin, Boston, MA, 1966; The ecological approach to visual perception, Houghton Mifflin, Boston, MA, 1979): that aspects of the environment are specified by patterns in energetic media.

  17. Testing the distinctiveness of visual imagery and motor imagery in a reach paradigm.

    PubMed

    Gabbard, Carl; Ammar, Diala; Cordova, Alberto

    2009-01-01

    We examined the distinctiveness of motor imagery (MI) and visual imagery (VI) in the context of perceived reachability. The aim was to explore the notion that the two visual modes have distinctive processing properties tied to the two-visual-system hypothesis. The experiment included an interference tactic whereby participants completed two tasks at the same time: a visual or motor-interference task combined with a MI or VI-reaching task. We expected increased error would occur when the imaged task and the interference task were matched (e.g., MI with the motor task), suggesting an association based on the assumption that the two tasks were in competition for space on the same processing pathway. Alternatively, if there were no differences, dissociation could be inferred. Significant increases in the number of errors were found when the modalities for the imaged (both MI and VI) task and the interference task were matched. Therefore, it appears that MI and VI in the context of perceived reachability recruit different processing mechanisms.

  18. Visual Masking During Pursuit Eye Movements

    ERIC Educational Resources Information Center

    White, Charles W.

    1976-01-01

    Visual masking occurs when one stimulus interferes with the perception of another stimulus. Investigates which matters more for visual masking--that the target and masking stimuli are flashed on the same part of the retina, or, that the target and mask appear in the same place. (Author/RK)

  19. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  20. Memory-guided saccade processing in visual form agnosia (patient DF).

    PubMed

    Rossit, Stéphanie; Szymanek, Larissa; Butler, Stephen H; Harvey, Monika

    2010-01-01

    According to Milner and Goodale's model (The visual brain in action, Oxford University Press, Oxford, 2006), areas in the ventral visual stream mediate visual perception and off-line actions, whilst regions in the dorsal visual stream mediate the on-line visual control of action. Strong evidence for this model comes from a patient (DF), who suffers from visual form agnosia after bilateral damage to the ventro-lateral occipital region, sparing V1. It has been reported that she is normal in immediate reaching and grasping, yet severely impaired when asked to perform delayed actions. Here we investigated whether this dissociation would extend to saccade execution. Neurophysiological studies and TMS work in humans have shown that the posterior parietal cortex (PPC), on the right in particular (supposedly spared in DF), is involved in the control of memory-guided saccades. Surprisingly though, we found that, just as reported for reaching and grasping, DF's saccadic accuracy was much reduced in the memory compared to the stimulus-guided condition. These data support the idea of a tight coupling of eye and hand movements and further suggest that dorsal stream structures may not be sufficient to drive memory-guided saccadic performance.

  1. Visual and skill effects on soccer passing performance, kinematics, and outcome estimations

    PubMed Central

    Basevitch, Itay; Tenenbaum, Gershon; Land, William M.; Ward, Paul

    2015-01-01

    The role of visual information and action representations in executing a motor task was examined from a mental representations approach. High-skill (n = 20) and low-skill (n = 20) soccer players performed a passing task to two targets at distances of 9.14 and 18.29 m, under three visual conditions: normal, occluded, and distorted vision (i.e., +4.0 corrective lenses, a visual acuity of approximately 6/75) without knowledge of results. Following each pass, participants estimated the relative horizontal distance from the target as the ball crossed the target plane. Kinematic data during each pass were also recorded for the shorter distance. Results revealed that performance on the motor task decreased as a function of visual information and task complexity (i.e., distance from target) regardless of skill level. High-skill players performed significantly better than low-skill players on both the actual passing and estimation tasks, at each target distance and visual condition. In addition, kinematic data indicated that high-skill participants were more consistent and had different kinematic movement patterns than low-skill participants. Findings contribute to the understanding of the underlying mechanisms required for successful performance in a self-paced, discrete and closed motor task. PMID:25784886

  2. Putative pyramidal neurons and interneurons in the monkey parietal cortex make different contributions to the performance of a visual grouping task.

    PubMed

    Yokoi, Isao; Komatsu, Hidehiko

    2010-09-01

    Visual grouping of discrete elements is an important function for object recognition. We recently conducted an experiment to study neural correlates of visual grouping. We recorded neuronal activities while monkeys performed a grouping detection task in which they discriminated visual patterns composed of discrete dots arranged in a cross and detected targets in which dots with the same contrast were aligned horizontally or vertically. We found that some neurons in the lateral bank of the intraparietal sulcus exhibit activity related to visual grouping. In the present study, we analyzed how different types of neurons contribute to visual grouping. We classified the recorded neurons as putative pyramidal neurons or putative interneurons, depending on the duration of their action potentials. We found that putative pyramidal neurons exhibited selectivity for the orientation of the target, and this selectivity was enhanced by attention to a particular target orientation. By contrast, putative interneurons responded more strongly to the target stimuli than to the nontargets, regardless of the orientation of the target. These results suggest that different classes of parietal neurons contribute differently to the grouping of discrete elements.

  3. Visual search in divided areas: dividers initially interfere with and later facilitate visual search.

    PubMed

    Nakashima, Ryoichi; Yokosawa, Kazuhiko

    2013-02-01

    A common search paradigm requires observers to search for a target among undivided spatial arrays of many items. Yet our visual environment is populated with items that are typically arranged within smaller (subdivided) spatial areas outlined by dividers (e.g., frames). It remains unclear how dividers impact visual search performance. In this study, we manipulated the presence and absence of frames and the number of frames subdividing search displays. Observers searched for a target O among Cs, a typically inefficient search task, and for a target C among Os, a typically efficient search. The results indicated that the presence of divider frames in a search display initially interferes with visual search tasks when targets are quickly detected (i.e., efficient search), leading to early interference; conversely, frames later facilitate visual search in tasks in which targets take longer to detect (i.e., inefficient search), leading to late facilitation. Such interference and facilitation appear only for conditions with a specific number of frames. Relative to previous studies of grouping (due to item proximity or similarity), these findings suggest that frame enclosures of multiple items may induce a grouping effect that influences search performance.

  4. Driver landmark and traffic sign identification in early Alzheimer's disease.

    PubMed

    Uc, E Y; Rizzo, M; Anderson, S W; Shi, Q; Dawson, J D

    2005-06-01

    To assess visual search and recognition of roadside targets and safety errors during a landmark and traffic sign identification task in drivers with Alzheimer's disease. 33 drivers with probable Alzheimer's disease of mild severity and 137 neurologically normal older adults underwent a battery of visual and cognitive tests and were asked to report detection of specific landmarks and traffic signs along a segment of an experimental drive. The drivers with mild Alzheimer's disease identified significantly fewer landmarks and traffic signs and made more at-fault safety errors during the task than control subjects. Roadside target identification performance and safety errors were predicted by scores on standardised tests of visual and cognitive function. Drivers with Alzheimer's disease are impaired in a task of visual search and recognition of roadside targets; the demands of these targets on visual perception, attention, executive functions, and memory probably increase the cognitive load, worsening driving safety.

  5. Abnormalities of fixation, saccade and pursuit in posterior cortical atrophy

    PubMed Central

    Kaski, Diego; Yong, Keir X. X.; Paterson, Ross W.; Slattery, Catherine F.; Ryan, Natalie S.; Schott, Jonathan M.; Crutch, Sebastian J.

    2015-01-01

    The clinico-neuroradiological syndrome posterior cortical atrophy is the cardinal ‘visual dementia’ and most common atypical Alzheimer’s disease phenotype, offering insights into mechanisms underlying clinical heterogeneity, pathological propagation and basic visual phenomena (e.g. visual crowding). Given the extensive attention paid to patients’ (higher order) perceptual function, it is surprising that there have been no systematic analyses of basic oculomotor function in this population. Here 20 patients with posterior cortical atrophy, 17 patients with typical Alzheimer’s disease and 22 healthy controls completed tests of fixation, saccade (including fixation/target gap and overlap conditions) and smooth pursuit eye movements using an infrared pupil-tracking system. Participants underwent detailed neuropsychological and neurological examinations, with a proportion also undertaking brain imaging and analysis of molecular pathology. In contrast to informal clinical evaluations of oculomotor dysfunction frequency (previous studies: 38%, current clinical examination: 33%), detailed eyetracking investigations revealed eye movement abnormalities in 80% of patients with posterior cortical atrophy (compared to 17% typical Alzheimer’s disease, 5% controls). The greatest differences between posterior cortical atrophy and typical Alzheimer’s disease were seen in saccadic performance. Patients with posterior cortical atrophy made significantly shorter saccades especially for distant targets. They also exhibited a significant exacerbation of the normal gap/overlap effect, consistent with ‘sticky fixation’. Time to reach saccadic targets was significantly associated with parietal and occipital cortical thickness measures. 
On fixation stability tasks, patients with typical Alzheimer’s disease showed more square wave jerks whose frequency was associated with lower cerebellar grey matter volume, while patients with posterior cortical atrophy showed large saccadic intrusions whose frequency correlated significantly with generalized reductions in cortical thickness. Patients with both posterior cortical atrophy and typical Alzheimer’s disease showed lower gain in smooth pursuit compared to controls. The current study establishes that eye movement abnormalities are near-ubiquitous in posterior cortical atrophy, and highlights multiple aspects of saccadic performance which distinguish posterior cortical atrophy from typical Alzheimer’s disease. We suggest the posterior cortical atrophy oculomotor profile (e.g. exacerbation of the saccadic gap/overlap effect, preserved saccadic velocity) reflects weak input from degraded occipito-parietal spatial representations of stimulus location into a superior collicular spatial map for eye movement regulation. This may indicate greater impairment of identification of oculomotor targets rather than generation of oculomotor movements. The results highlight the critical role of spatial attention and object identification but also precise stimulus localization in explaining the complex real world perception deficits observed in posterior cortical atrophy and many other patients with dementia-related visual impairment. PMID:25895507

  6. Cumulative sum analysis score and phacoemulsification competency learning curve.

    PubMed

    Vedana, Gustavo; Cardoso, Filipe G; Marcon, Alexandre S; Araújo, Licio E K; Zanon, Matheus; Birriel, Daniella C; Watte, Guilherme; Jun, Albert S

    2017-01-01

    To objectively construct the learning curve for phacoemulsification competency using the cumulative sum analysis score (CUSUM). Three second-year residents and an experienced consultant were monitored for a series of 70 phacoemulsification cases each and had their series analysed by CUSUM with regard to posterior capsule rupture (PCR) and best-corrected visual acuity. The acceptable rate for PCR was <5% (lower limit h) and the unacceptable rate was >10% (upper limit h). The acceptable rate for best-corrected visual acuity worse than 20/40 was <10% (lower limit h) and the unacceptable rate was >20% (upper limit h). The area between the lower limit h and the upper limit h is called the decision interval. There was no statistically significant difference in mean age, sex, or cataract grades between groups. The first trainee achieved PCR CUSUM competency at his 22nd case; his best-corrected visual acuity CUSUM entered the decision interval at his third case and stayed there until the end, never reaching competency. The second trainee achieved PCR CUSUM competency at his 39th case and best-corrected visual acuity CUSUM competency at his 22nd case. The third trainee achieved PCR CUSUM competency at his 41st case and best-corrected visual acuity CUSUM competency at his 14th case. The learning curve for phacoemulsification competency can be constructed by CUSUM; on average it took 38 cases for each trainee to achieve competency.
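    As a rough illustration of how such a CUSUM chart is built (a simplified sketch, not the exact scoring scheme used in the study): each case shifts a running score by the difference between the observed outcome and the acceptable complication rate, and the resulting curve is then compared against the decision-interval limits.

```python
def cusum_curve(outcomes, p_acceptable=0.05):
    """Simplified CUSUM for a binary outcome such as posterior capsule
    rupture. outcomes: sequence of 0 (no complication) / 1 (complication).
    Each case shifts the score by (outcome - p_acceptable); a curve that
    trends downward indicates performance better than the acceptable rate."""
    score, curve = 0.0, []
    for x in outcomes:
        score += x - p_acceptable
        curve.append(score)
    return curve

# Forty complication-free cases drive the curve steadily downward,
# toward the competency (lower limit h) boundary.
curve = cusum_curve([0] * 40)
```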

  7. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    PubMed

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  8. The development of organized visual search

    PubMed Central

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve they become more accurate at locating targets with conjunction of features amongst distractors, but not for targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  9. Color priming in pop-out search depends on the relative color of the target

    PubMed Central

    Becker, Stefanie I.; Valuch, Christian; Ansorge, Ulrich

    2014-01-01

    In visual search for pop-out targets, search times are shorter when the target and non-target colors from the previous trial are repeated than when they change. This priming effect was originally attributed to a feature weighting mechanism that biases attention toward the target features, and away from the non-target features. However, more recent studies have shown that visual selection is strongly context-dependent: according to a relational account of feature priming, the target color is always encoded relative to the non-target color (e.g., as redder or greener). The present study provides a critical test of this hypothesis, by varying the colors of the search items such that either the relative color or the absolute color of the target always remained constant (or both). The results clearly show that color priming depends on the relative color of a target with respect to the non-targets but not on its absolute color value. Moreover, the observed priming effects did not change over the course of the experiment, suggesting that the visual system encodes colors in a relative manner from the start of the experiment. Taken together, these results strongly support a relational account of feature priming in visual search, and are inconsistent with the dominant feature-based views. PMID:24782795

  10. Proposed New Vision Standards for the 1980’s and Beyond: Contrast Sensitivity

    DTIC Science & Technology

    1981-09-01

    spatial frequency, visual acuity, target acquisition, visual filters, spatial filtering, target detection, recognition, identification, eye charts, workload...visual standards, as well as other performance criteria, are required to be shown relevant to "real-world" performance before acceptance. On the surface

  11. Haptic over visual information in the distribution of visual attention after tool-use in near and far space.

    PubMed

    Park, George D; Reed, Catherine L

    2015-10-01

    Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50 go/no-go target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice, for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: stick missing the tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end attentional bias (faster RTs toward the tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. The absence of effects in the short-tool group suggested that the hidden-tool group's results were specific to haptic input. In conclusion, (1) allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when it is visually obscured, suggesting haptic input provides sufficient information for directing attention along the tool.
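    The sensitivity measure reported above, d-prime, is the z-transformed hit rate minus the z-transformed false-alarm rate from the go/no-go responses. A minimal sketch of the standard computation (not the authors' code; the log-linear correction for extreme rates is an assumption):

    ```python
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

        A log-linear correction (+0.5 per cell) keeps the z-transform
        finite when an observed rate would otherwise be 0 or 1.
        """
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf  # inverse of the standard normal CDF
        return z(hit_rate) - z(fa_rate)

    # Equal hit and false-alarm rates give d' = 0 (no sensitivity)
    print(d_prime(25, 25, 25, 25))
    ```

    Larger d-prime at a target location then indicates better discrimination there, independently of response bias.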

  12. Neural images of pursuit targets in the photoreceptor arrays of male and female houseflies Musca domestica.

    PubMed

    Burton, Brian G; Laughlin, Simon B

    2003-11-01

    Male houseflies use a sex-specific frontal eye region, the lovespot, to detect and pursue mates. We recorded the electrical responses of photoreceptors to optical stimuli that simulate the signals received by a male or female photoreceptor as a conspecific passes through its field of view. We analysed the ability of male and female frontal photoreceptors to code conspecifics over the range of speeds and distances encountered during pursuit, and reconstructed the neural images of these targets in photoreceptor arrays. A male's lovespot photoreceptor detects a conspecific at twice the distance of a female photoreceptor, largely through better optics. This detection distance greatly exceeds those reported in previous behavioural studies. Lovespot photoreceptors respond more strongly than female photoreceptors to targets tracked during pursuit, with amplitudes reaching 25 mV. The male photoreceptor also has a faster response, exhibits a unique preference for stimuli of 20-30 ms duration that selects for conspecifics and deblurs moving images with response transients. White-noise analysis substantially underestimates these improvements. We conclude that in the lovespot, both optics and phototransduction are specialised to enhance and deblur the neural images of moving targets, and propose that analogous mechanisms may sharpen the neural image still further as it is transferred to visual interneurones.

  13. Highly sensitive and specific colorimetric detection of cancer cells via dual-aptamer target binding strategy.

    PubMed

    Wang, Kun; Fan, Daoqing; Liu, Yaqing; Wang, Erkang

    2015-11-15

    Simple, rapid, sensitive and specific detection of cancer cells is of great importance for early and accurate cancer diagnostics and therapy. By coupling nanotechnology with a dual-aptamer target binding strategy, we developed a colorimetric assay for visually detecting cancer cells with high sensitivity and specificity. The nanotechnology, including the high catalytic activity of PtAuNP and magnetic separation and concentration, plays a vital role in signal amplification and improved detection sensitivity. The color change caused by a small number of target cancer cells (10 cells/mL) can be clearly distinguished by the naked eye. The dual-aptamer target binding strategy guarantees detection specificity: large numbers of non-cancer cells and of other cancer cells (10(4) cells/mL) cause no obvious color change. A detection limit as low as 10 cells/mL, with a linear range from 10 to 10(5) cells/mL, was reached in experimental detections in phosphate buffer solution as well as in a serum sample. The developed enzyme-free, cost-effective colorimetric assay is simple and requires no instrumentation while still providing excellent sensitivity, specificity and repeatability, and has potential application in point-of-care cancer diagnosis. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Direction of Auditory Pitch-Change Influences Visual Search for Slope From Graphs.

    PubMed

    Parrott, Stacey; Guzman-Martinez, Emmanuel; Orte, Laura; Grabowecky, Marcia; Huntington, Mark D; Suzuki, Satoru

    2015-01-01

    Linear trend (slope) is important information conveyed by graphs. We investigated how sounds influenced slope detection in a visual search paradigm. Four bar graphs or scatter plots were presented on each trial. Participants looked for a positive-slope or a negative-slope target (in blocked trials), and responded to targets in a go or no-go fashion. For example, in a positive-slope-target block, the target graph displayed a positive slope while other graphs displayed negative slopes (a go trial), or all graphs displayed negative slopes (a no-go trial). When an ascending or descending sound was presented concurrently, ascending sounds slowed detection of negative-slope targets whereas descending sounds slowed detection of positive-slope targets. The sounds had no effect when they immediately preceded the visual search displays, suggesting that the results were due to crossmodal interaction rather than priming. The sounds also had no effect when targets were words describing slopes, such as "positive," "negative," "increasing," or "decreasing," suggesting that the results were unlikely due to semantic-level interactions. Manipulations of spatiotemporal similarity between sounds and graphs had little effect. These results suggest that ascending and descending sounds influence visual search for slope based on a general association between the direction of auditory pitch-change and visual linear trend.

  15. Factors associated with reaching or not reaching target HbA1c after initiation of basal or premixed insulin in patients with type 2 diabetes.

    PubMed

    Scheen, A J; Schmitt, H; Jiang, H H; Ivanyi, T

    2017-02-01

    To evaluate factors associated with reaching or not reaching target glycated haemoglobin (HbA1c) levels by analysing the respective contributions of fasting hyperglycaemia (FHG), also referred to as basal hyperglycaemia, vs postprandial hyperglycaemia (PHG) before and after initiation of a basal or premixed insulin regimen in patients with type 2 diabetes. This post-hoc analysis of insulin-naïve patients in the DURABLE study randomised to receive either insulin glargine or insulin lispro mix 25 evaluated the percentages of patients achieving a target HbA1c of <7.0% (<53 mmol/mol) per baseline HbA1c quartile, and the effect of each insulin regimen on the relative contributions of PHG and FHG to overall hyperglycaemia. Patients had comparable demographic characteristics and similar HbA1c and FHG values at baseline in each HbA1c quartile regardless of whether they reached the target HbA1c. The higher the HbA1c quartile, the greater was the decrease in HbA1c, but also the smaller the percentage of patients achieving the target HbA1c. HbA1c and FHG decreased more in patients reaching the target, resulting in significantly lower values at endpoint in all baseline HbA1c quartiles with either insulin treatment. Patients not achieving the target HbA1c had slightly higher insulin doses, but lower total hypoglycaemia rates. Smaller decreases in FHG were associated with not reaching the target HbA1c, suggesting a need to increase basal or premixed insulin doses to achieve targeted fasting plasma glucose and improve patient response before introducing more intensive prandial insulin regimens. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  16. CDKL5 protein substitution therapy rescues neurological phenotypes of a mouse model of CDKL5 disorder.

    PubMed

    Trazzi, Stefania; De Franceschi, Marianna; Fuchs, Claudia; Bastianini, Stefano; Viggiano, Rocchina; Lupori, Leonardo; Mazziotti, Raffaele; Medici, Giorgio; Lo Martire, Viviana; Ren, Elisa; Rimondini, Roberto; Zoccoli, Giovanna; Bartesaghi, Renata; Pizzorusso, Tommaso; Ciani, Elisabetta

    2018-05-01

    Cyclin-dependent kinase like-5 (CDKL5) disorder is a rare neurodevelopmental disease caused by mutations in the CDKL5 gene. The consequent misexpression of the CDKL5 protein in the nervous system leads to a severe phenotype characterized by intellectual disability, motor impairment, visual deficits and early-onset epilepsy. No therapy is available for CDKL5 disorder. It has been reported that a protein transduction domain (TAT) is able to deliver macromolecules into cells and even into the brain when fused to a given protein. We demonstrate that TAT-CDKL5 fusion protein is efficiently internalized by target cells and retains CDKL5 activity. Intracerebroventricular infusion of TAT-CDKL5 restored hippocampal development, hippocampus-dependent memory and breathing pattern in Cdkl5-null mice. Notably, systemically administered TAT-CDKL5 protein passed the blood-brain-barrier, reached the CNS, and rescued various neuroanatomical and behavioral defects, including breathing pattern and visual responses. Our results suggest that CDKL5 protein therapy may be an effective clinical tool for the treatment of CDKL5 disorder.

  17. Hand effects on mentally simulated reaching.

    PubMed

    Gabbard, Carl; Ammar, Diala; Rodrigues, Luis

    2005-08-01

    Within the area of simulated (imagined) versus actual movement research, investigators have discovered that mentally simulated movements, like real actions, are controlled primarily by the hemisphere contralateral to the simulated limb. Furthermore, evidence points to a left-brain advantage for accuracy of simulated movements. With this information, it could be suggested that most right-handers would have an advantage compared to left-handers. To test this hypothesis, strong right- and left-handers were compared on judgments of perceived reachability to visual targets presented for 150 ms at multiple locations in the midline and the right and left visual fields (RVF/LVF). With regard to within-group responses, we found no hemispheric or hand-use advantage for right-handers. Although left-handers revealed no hemispheric advantage, there was a significant hand effect, favoring the non-dominant limb, most notably in the LVF. This finding is explained in regard to a possible interference effect for left-handers, not shown for right-handers. Overall, left-handers displayed significantly more errors across hemispace. Therefore, it appears that when comparing hand groups, a left-hemisphere advantage favoring right-handers is plausible.

  18. Transient visual pathway critical for normal development of primate grasping behavior.

    PubMed

    Mundinano, Inaki-Carril; Fox, Dylan M; Kwan, William C; Vidaurre, Diego; Teo, Leon; Homman-Ludiye, Jihane; Goodale, Melvyn A; Leopold, David A; Bourne, James A

    2018-02-06

    An evolutionary hallmark of anthropoid primates, including humans, is the use of vision to guide precise manual movements. These behaviors are reliant on a specialized visual input to the posterior parietal cortex. Here, we show that normal primate reaching-and-grasping behavior depends critically on a visual pathway through the thalamic pulvinar, which is thought to relay information to the middle temporal (MT) area during early life and then swiftly withdraws. Small MRI-guided lesions to a subdivision of the inferior pulvinar subnucleus (PIm) in the infant marmoset monkey led to permanent deficits in reaching-and-grasping behavior in the adult. This functional loss coincided with the abnormal anatomical development of multiple cortical areas responsible for the guidance of actions. Our study reveals that the transient retino-pulvinar-MT pathway underpins the development of visually guided manual behaviors in primates that are crucial for interacting with complex features in the environment.

  19. Cortex Inspired Model for Inverse Kinematics Computation for a Humanoid Robotic Finger

    PubMed Central

    Gentili, Rodolphe J.; Oh, Hyuk; Molina, Javier; Reggia, James A.; Contreras-Vidal, José L.

    2013-01-01

    In order to approach human hand performance levels, artificial anthropomorphic hands/fingers have increasingly incorporated human biomechanical features. However, the performance of finger reaching movements to visual targets involving the complex kinematics of multi-jointed, anthropomorphic actuators is a difficult problem. This is because the relationship between sensory and motor coordinates is highly nonlinear, and also often includes mechanical coupling of the two last joints. Recently, we developed a cortical model that learns the inverse kinematics of a simulated anthropomorphic finger. Here, we expand this previous work by assessing if this cortical model is able to learn the inverse kinematics for an actual anthropomorphic humanoid finger having its two last joints coupled and controlled by pneumatic muscles. The findings revealed that single 3D reaching movements, as well as more complex patterns of motion of the humanoid finger, were accurately and robustly performed by this cortical model while producing kinematics comparable to those of humans. This work contributes to the development of a bioinspired controller providing adaptive, robust and flexible control of dexterous robotic and prosthetic hands. PMID:23366569
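    The inverse-kinematics problem such a model must solve can be stated concretely for a planar two-link finger. The sketch below is a generic Jacobian-transpose iteration under assumed link lengths and gains, not the cortical model itself; it recovers joint angles that place the fingertip on a reachable target:

    ```python
    import math

    def fk(theta1, theta2, l1=0.04, l2=0.03):
        """Forward kinematics of a planar 2-link finger (assumed link lengths, m)."""
        x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
        y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
        return x, y

    def ik(target, theta=(0.3, 0.3), l1=0.04, l2=0.03, lr=20.0, iters=2000):
        """Inverse kinematics by gradient descent on squared fingertip error,
        using the Jacobian transpose to map task-space error to joint space."""
        t1, t2 = theta
        for _ in range(iters):
            x, y = fk(t1, t2, l1, l2)
            ex, ey = target[0] - x, target[1] - y
            # Jacobian of (x, y) with respect to (theta1, theta2)
            j11 = -l1 * math.sin(t1) - l2 * math.sin(t1 + t2)
            j12 = -l2 * math.sin(t1 + t2)
            j21 = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
            j22 = l2 * math.cos(t1 + t2)
            # theta += lr * J^T * error (descent direction on the error norm)
            t1 += lr * (j11 * ex + j21 * ey)
            t2 += lr * (j12 * ex + j22 * ey)
        return t1, t2
    ```

    A learned model like the one described replaces this explicit Jacobian with an adaptive sensory-to-motor mapping, which is what makes coupled, hard-to-model joints tractable.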

  20. Visual sensitivity for luminance and chromatic stimuli during the execution of smooth pursuit and saccadic eye movements.

    PubMed

    Braun, Doris I; Schütz, Alexander C; Gegenfurtner, Karl R

    2017-07-01

    Visual sensitivity is dynamically modulated by eye movements. During saccadic eye movements, sensitivity is reduced selectively for low-spatial frequency luminance stimuli and largely unaffected for high-spatial frequency luminance and chromatic stimuli (Nature 371 (1994), 511-513). During smooth pursuit eye movements, sensitivity for low-spatial frequency luminance stimuli is moderately reduced while sensitivity for chromatic and high-spatial frequency luminance stimuli is even increased (Nature Neuroscience, 11 (2008), 1211-1216). Since these effects are at least partly of different polarity, we investigated the combined effects of saccades and smooth pursuit on visual sensitivity. For the time course of chromatic sensitivity, we found that detection rates increased slightly around pursuit onset. During saccades to static and moving targets, detection rates dropped briefly before the saccade and reached a minimum at saccade onset. This reduction of chromatic sensitivity was present whenever a saccade was executed and it was not modified by subsequent pursuit. We also measured contrast sensitivity for flashed high- and low-spatial frequency luminance and chromatic stimuli during saccades and pursuit. During saccades, the reduction of contrast sensitivity was strongest for low-spatial frequency luminance stimuli (about 90%). However, a significant reduction was also present for chromatic stimuli (about 58%). Chromatic sensitivity was increased during smooth pursuit (about 12%). These results suggest that the modulation of visual sensitivity during saccades and smooth pursuit is more complex than previously assumed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A perceptual learning deficit in Chinese developmental dyslexia as revealed by visual texture discrimination training.

    PubMed

    Wang, Zhengke; Cheng-Lai, Alice; Song, Yan; Cutting, Laurie; Jiang, Yuzheng; Lin, Ou; Meng, Xiangzhi; Zhou, Xiaolin

    2014-08-01

    Learning to read involves discriminating between different written forms and establishing connections with phonology and semantics. This process may be partially built upon visual perceptual learning, during which the ability to process the attributes of visual stimuli progressively improves with practice. The present study investigated to what extent Chinese children with developmental dyslexia have deficits in perceptual learning by using a texture discrimination task, in which participants were asked to discriminate the orientation of target bars. Experiment 1 demonstrated that, when all of the participants started with the same initial stimulus-to-mask onset asynchrony (SOA) of 300 ms, the threshold SOA, adjusted according to response accuracy to reach 80% correct, did not show a decrement over 5 days of training for children with dyslexia, whereas this threshold SOA steadily decreased over the training for the control group. Experiment 2 used an adaptive procedure to determine the threshold SOA for each participant during training. Results showed that both the dyslexia group and the control group attained perceptual learning over the sessions in 5 days, although the threshold SOAs were significantly higher for the dyslexia group than for the control group; moreover, across individual participants, the threshold SOA negatively correlated with performance in Chinese character recognition. These findings suggest that deficits in visual perceptual processing and learning might, in part, underpin difficulty in reading Chinese. Copyright © 2014 John Wiley & Sons, Ltd.
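    An adaptive threshold-SOA procedure of the kind used in Experiment 2 can be illustrated with a standard 3-down/1-up staircase, which converges near the ~79%-correct SOA (close to the 80% criterion above). The logistic observer and all parameters below are assumptions for illustration, not the study's actual procedure:

    ```python
    import math
    import random

    def simulate_staircase(threshold=120.0, start_soa=300.0, step=20.0,
                           n_trials=300, seed=0):
        """3-down/1-up staircase on stimulus-to-mask SOA (ms).

        SOA drops after 3 consecutive correct responses and rises after
        each error, so it converges near the ~79%-correct point. The
        logistic observer below stands in for a participant (assumed).
        """
        rng = random.Random(seed)
        soa, streak, history = start_soa, 0, []
        for _ in range(n_trials):
            p_correct = 1.0 / (1.0 + math.exp(-(soa - threshold) / 15.0))
            if rng.random() < p_correct:
                streak += 1
                if streak == 3:          # 3-down: make the task harder
                    soa = max(20.0, soa - step)
                    streak = 0
            else:                        # 1-up: make the task easier
                soa += step
                streak = 0
            history.append(soa)
        return sum(history[-50:]) / 50.0  # estimate: mean of the last 50 SOAs
    ```

    An observer with a higher true threshold yields a higher staircase estimate, which is the property the group comparison in the study relies on.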

  2. Vision Drives Correlated Activity without Patterned Spontaneous Activity in Developing Xenopus Retina

    PubMed Central

    Demas, James A.; Payne, Hannah; Cline, Hollis T.

    2011-01-01

    Developing amphibians need vision to avoid predators and locate food before visual system circuits fully mature. Xenopus tadpoles can respond to visual stimuli as soon as retinal ganglion cells (RGCs) innervate the brain; in mammals, chicks and turtles, however, RGCs reach their central targets many days, or even weeks, before their retinas are capable of vision. In the absence of vision, activity-dependent refinement in these amniote species is mediated by waves of spontaneous activity that periodically spread across the retina, correlating the firing of action potentials in neighboring RGCs. Theory suggests that retinorecipient neurons in the brain use patterned RGC activity to sharpen the retinotopy first established by genetic cues. We find that in both wild type and albino Xenopus tadpoles, RGCs are spontaneously active at all stages of tadpole development studied, but their population activity never coalesces into waves. Even at the earliest stages recorded, visual stimulation dominates over spontaneous activity and can generate patterns of RGC activity similar to the locally correlated spontaneous activity observed in amniotes. In addition, we show that blocking AMPA and NMDA type glutamate receptors significantly decreases spontaneous activity in young Xenopus retina, but that blocking GABAA receptors does not. Our findings indicate that vision drives the correlated activity required for topographic map formation. They further suggest that developing retinal circuits in the two major subdivisions of tetrapods, amphibians and amniotes, evolved different strategies to supply appropriately patterned RGC activity to drive visual circuit refinement. PMID:21312343

  3. Deployment of spatial attention to words in central and peripheral vision.

    PubMed

    Ducrot, Stéphanie; Grainger, Jonathan

    2007-05-01

    Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.

  4. Signals in inferotemporal and perirhinal cortex suggest an “untangling” of visual target information

    PubMed Central

    Pagan, Marino; Urban, Luke S.; Wohl, Margot P.; Rust, Nicole C.

    2013-01-01

    Finding sought visual targets requires our brains to flexibly combine working memory information about what we are looking for with visual information about what we are looking at. To investigate the neural computations involved in finding visual targets, we recorded neural responses in inferotemporal (IT) and perirhinal (PRH) cortex as macaque monkeys performed a task that required them to find targets within sequences of distractors. We found similar amounts of total task-specific information in both areas; however, information about whether a target was in view was more accessible using a linear read-out (i.e. was more “untangled”) in PRH. Consistent with the flow of information from IT to PRH, we also found that task-relevant information arrived earlier in IT. PRH responses were well-described by a functional model in which “untangling” computations in PRH reformat input from IT by combining neurons with asymmetric tuning correlations for target matches and distractors. PMID:23792943

  5. Running the figure to the ground: figure-ground segmentation during visual search.

    PubMed

    Ralph, Brandon C W; Seli, Paul; Cheng, Vivian O Y; Solman, Grayden J F; Smilek, Daniel

    2014-04-01

    We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. The role of visual attention in multiple object tracking: evidence from ERPs.

    PubMed

    Doran, Matthew M; Hoffman, James E

    2010-01-01

    We examined the role of visual attention in the multiple object tracking (MOT) task by measuring the amplitude of the N1 component of the event-related potential (ERP) to probe flashes presented on targets, distractors, or empty background areas. We found evidence that visual attention enhances targets and suppresses distractors (Experiments 1 and 3). However, we also found that when tracking load was light (two targets and two distractors), accurate tracking could be carried out without any apparent contribution from the visual attention system (Experiment 2). Our results suggest that attentional selection during MOT is flexibly determined by task demands as well as tracking load and that visual attention may not always be necessary for accurate tracking.

  7. Contingent capture of involuntary visual spatial attention does not differ between normally hearing children and proficient cochlear implant users.

    PubMed

    Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill

    2014-01-01

    Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.

  8. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients

    PubMed Central

    Lledó, Luis D.; Díez, Jorge A.; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J.; Sabater-Navarro, José M.; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to create a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects in the task. In contrast, 2D virtual environments represent the tasks with a lower degree of realism, using bidimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in kinematic movement patterns when post-stroke patients performed a reaching task while viewing a virtual therapeutic game with two different types of visualization of the virtual environment: 2D and 3D. Nine post-stroke patients participated in the study, receiving virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aims of the tasks, which consisted of reaching peripheral or perspective targets depending on the virtual environment shown. Parameters such as maximum speed, reaction time, path length, and initial movement were analyzed from data acquired objectively by the robotic device to evaluate the influence of task visualization. At the end of the study, a usability survey was provided to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories improved as they completed the therapy, suggesting increased motor recovery. Despite the similarity of most kinematic parameters, differences in reaction time and path length were greater in the 3D task, while success rates were very similar. In conclusion, the use of 2D environments in virtual therapy may be a more appropriate and comfortable way to perform upper limb rehabilitation tasks for post-stroke patients, in terms of the accuracy needed to produce optimal kinematic trajectories. PMID:27616992

  9. Toward Optimal Target Placement for Neural Prosthetic Devices

    PubMed Central

    Cunningham, John P.; Yu, Byron M.; Gilja, Vikash; Ryu, Stephen I.; Shenoy, Krishna V.

    2008-01-01

    Neural prosthetic systems have been designed to estimate continuous reach trajectories (motor prostheses) and to predict discrete reach targets (communication prostheses). In the latter case, reach targets are typically decoded from neural spiking activity during an instructed delay period before the reach begins. Such systems use targets placed in radially symmetric geometries independent of the tuning properties of the neurons available. Here we seek to automate the target placement process and increase decode accuracy in communication prostheses by selecting target locations based on the neural population at hand. Motor prostheses that incorporate intended target information could also benefit from this consideration. We present an optimal target placement algorithm that approximately maximizes decode accuracy with respect to target locations. In simulated neural spiking data fit from two monkeys, the optimal target placement algorithm yielded statistically significant improvements up to 8 and 9% for two and sixteen targets, respectively. For four and eight targets, gains were more modest, as the target layouts found by the algorithm closely resembled the canonical layouts. We trained a monkey in this paradigm and tested the algorithm with experimental neural data to confirm some of the results found in simulation. In all, the algorithm can serve not only to create new target layouts that outperform canonical layouts, but it can also confirm or help select among multiple canonical layouts. The optimal target placement algorithm developed here is the first algorithm of its kind, and it should both improve decode accuracy and help automate target placement for neural prostheses. PMID:18829845
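    The core idea above (place targets so the evoked population responses are maximally discriminable) can be sketched with a toy model: cosine-tuned neurons and a brute-force search over candidate layouts. This is a crude proxy for the paper's decode-accuracy objective, with the tuning model and all parameters assumed for illustration:

    ```python
    import math
    from itertools import combinations

    def population_rates(theta, preferred_dirs):
        """Firing-rate vector for a reach at angle theta (assumed cosine tuning)."""
        return [1.0 + math.cos(theta - pd) for pd in preferred_dirs]

    def worst_pair_separation(targets, preferred_dirs):
        """Minimum distance between any two targets' population responses;
        larger values make the targets easier to decode apart."""
        vecs = [population_rates(t, preferred_dirs) for t in targets]
        return min(math.dist(a, b) for a, b in combinations(vecs, 2))

    def best_layout(candidate_layouts, preferred_dirs):
        """Pick the layout whose hardest-to-separate target pair is best."""
        return max(candidate_layouts,
                   key=lambda layout: worst_pair_separation(layout, preferred_dirs))
    ```

    With preferred directions clustered in one region of space, a layout spread across the well-tuned region beats a crowded one, which mirrors why tailoring target placement to the recorded population can outperform a fixed radially symmetric layout.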

  10. Progress in sensor performance testing, modeling and range prediction using the TOD method: an overview

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander

    2017-05-01

    The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.

  11. Visually guided locomotion and computation of time-to-collision in the Mongolian gerbil (Meriones unguiculatus): the effects of frontal and visual cortical lesions.

    PubMed

    Shankar, S; Ellard, C

    2000-02-01

    Past research has indicated that many species use the time-to-collision variable, but little is known about its neural underpinnings in rodents. In a set of three experiments we set out to replicate and extend the findings of Sun et al. (Sun H-J, Carey DP, Goodale MA. Exp Brain Res 1992;91:171-175) in a visually guided task in Mongolian gerbils, and then investigated the effects of lesions to different cortical areas. We trained Mongolian gerbils to run in the dark toward a target on a computer screen. In some trials the target changed in size as the animal ran toward it in such a way as to produce 'virtual targets' if the animals were using time-to-collision or contact information. In experiment 1 we confirmed that gerbils use time-to-contact information to modulate their speed of running toward a target. In experiment 2 we established that visual cortex lesions attenuate the ability of lesioned animals to use information from the visual target to guide their run, while frontal cortex lesioned animals are not as severely affected. In experiment 3 we found that small radio-frequency lesions of either area V1 or the lateral extrastriate regions of the visual cortex also affected the use of information from the target to modulate locomotion.
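The time-to-collision variable referred to above is recoverable from optics alone, as the ratio of a target's angular size to its rate of expansion; a minimal sketch (with made-up geometry) is:

```python
def time_to_collision(angular_size, expansion_rate):
    """Optical tau: angular size divided by its rate of expansion."""
    if expansion_rate <= 0:
        raise ValueError("target must be expanding (i.e., being approached)")
    return angular_size / expansion_rate

# Consistency check against geometry: an object of size s at distance d,
# approached at speed v, has theta ~ s/d and dtheta/dt ~ s*v/d**2,
# so tau ~ d/v independent of object size.
s, d, v = 0.05, 2.0, 0.5      # meters, meters, m/s (illustrative)
tau = time_to_collision(s / d, s * v / d ** 2)
print(tau)  # d/v = 4.0 seconds
```

This size-independence is what makes tau a plausible control variable for an animal that cannot directly measure distance.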

  12. The contributions of vision and haptics to reaching and grasping

    PubMed Central

    Stone, Kayla D.; Gonzalez, Claudia L. R.

    2015-01-01

    This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, normal and neuropsychological populations, and sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to using the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shapes hand preference. PMID:26441777

  13. Carpe Diem: Seizing the Common Core with Visual Thinking Strategies in the Visual Arts Classroom

    ERIC Educational Resources Information Center

    Franco, Mary; Unrath, Kathleen

    2014-01-01

    This article demonstrates how Visual Thinking Strategies (VTS) art discussions and subsequent, inspired artmaking can help reach the goals of the Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, & Technical Subjects (CCSS-ELA). The authors describe how this was achieved in a remedial…

  14. Reaching Hard-to-Reach Individuals: Nonselective Versus Targeted Outbreak Response Vaccination for Measles

    PubMed Central

    Minetti, Andrea; Hurtado, Northan; Grais, Rebecca F.; Ferrari, Matthew

    2014-01-01

    Current mass vaccination campaigns in measles outbreak response are nonselective with respect to the immune status of individuals. However, the heterogeneity in immunity, due to previous vaccination coverage or infection, may lead to potential bias of such campaigns toward those with previous high access to vaccination and may result in a lower-than-expected effective impact. During the 2010 measles outbreak in Malawi, only 3 of the 8 districts where vaccination occurred achieved a measurable effective campaign impact (i.e., a reduction in measles cases in the targeted age groups greater than that observed in nonvaccinated districts). Simulation models suggest that selective campaigns targeting hard-to-reach individuals are of greater benefit, particularly in highly vaccinated populations, even for low target coverage and with late implementation. However, the choice between targeted and nonselective campaigns should be context specific, achieving a reasonable balance of feasibility, cost, and expected impact. In addition, it is critical to develop operational strategies to identify and target hard-to-reach individuals. PMID:24131555

  15. Time-compressed spoken word primes crossmodally enhance processing of semantically congruent visual targets.

    PubMed

    Mahr, Angela; Wentura, Dirk

    2014-02-01

    Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30% and 10% of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30% compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory-visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.
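The detection-sensitivity measure d' used in Experiment 3 is the difference of the normal quantiles of the hit and false-alarm rates; a small sketch with illustrative rates (not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative rates: congruent priming raises the hit rate
# relative to incongruent priming at a fixed false-alarm rate.
congruent = d_prime(0.80, 0.20)
incongruent = d_prime(0.65, 0.20)
print(round(congruent, 2), round(incongruent, 2))
```

Because d' separates sensitivity from response bias, an increase under congruent priming supports a genuine salience effect rather than a shifted decision criterion.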

  16. Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator

    NASA Astrophysics Data System (ADS)

    Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi

    Human motor control is achieved by appropriate motor commands generated by the central nervous system. Visual target tracking is an effective method for analyzing human motor function. In a previous simulation study, we examined the possibility of improving hand movement on visual target tracking with an additional assistant force. In this study, we propose a method for compensating human hand movement on visual target tracking by adding an assistant force, and evaluate its effectiveness in an experiment with four healthy adults. The proposed compensator improved the reaction time, position error, and velocity variability of the hand. The model-based compensator is constructed from measurement data on visual target tracking for each subject, so the properties of each subject's hand movement are reflected in the structure of the compensator. The method can therefore potentially be adapted to the individual characteristics of patients with movement disorders caused by brain dysfunction.
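The compensation idea can be caricatured with a point-mass simulation (all dynamics and gains below are assumed for illustration; they are not the authors' subject-specific model): a sluggish "voluntary" controller tracks a step target against a constant load, and a model-based assistant force adds a corrective term that shrinks the residual tracking error.

```python
M, DT, STEPS = 1.0, 0.01, 500          # hand mass (kg), time step (s), 5 s of tracking
TARGET, LOAD = 0.1, -1.0               # step target (m), constant load force (N)

def simulate(assist_gain):
    """Euler simulation of hand position under voluntary control plus assist."""
    x = v = 0.0
    for _ in range(STEPS):
        human = 20.0 * (TARGET - x) - 8.0 * v                        # sluggish voluntary control
        assist = assist_gain * (TARGET - x) - 0.2 * assist_gain * v  # assistant force
        v += (human + assist + LOAD) / M * DT
        x += v * DT
    return abs(TARGET - x)             # residual position error after 5 s

err_plain = simulate(0.0)
err_assist = simulate(40.0)
print(f"error without assist: {err_plain:.3f} m, with assist: {err_assist:.3f} m")
```

In the paper's scheme the assist term would come from the per-subject tracking model rather than the fixed gains used here.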

  17. Why are there eccentricity effects in visual search? Visual and attentional hypotheses.

    PubMed

    Wolfe, J M; O'Neill, P; Bennett, S C

    1998-01-01

    In standard visual search experiments, observers search for a target item among distracting items. The locations of target items are generally random within the display and ignored as a factor in data analysis. Previous work has shown that targets presented near fixation are, in fact, found more efficiently than are targets presented at more peripheral locations. This paper proposes that the primary cause of this "eccentricity effect" (Carrasco, Evert, Chang, & Katz, 1995) is an attentional bias that allocates attention preferentially to central items. The first four experiments dealt with the possibility that visual, and not attentional, factors underlie the eccentricity effect. They showed that the eccentricity effect cannot be accounted for by the peripheral reduction in visual sensitivity, peripheral crowding, or cortical magnification. Experiment 5 tested the attention allocation model and also showed that RT × set size effects can be independent of eccentricity effects. Experiment 6 showed that the effective set size in a search task depends, in part, on the eccentricity of the target because observers search from fixation outward.

  18. Infantile nystagmus adapts to visual demand.

    PubMed

    Wiggins, Debbie; Woodhouse, J Margaret; Margrain, Tom H; Harris, Christopher M; Erichsen, Jonathan T

    2007-05-01

    To determine the effect of visual demand on the nystagmus waveform. Individuals with infantile nystagmus syndrome (INS) commonly report that making an effort to see can intensify their nystagmus and adversely affect vision. However, such an effect has never been confirmed experimentally. The eye movement behavior of 11 subjects with INS was recorded at different gaze angles while the subjects viewed visual targets under two conditions: above and then at resolution threshold. Eye movements were recorded by infrared oculography, and visual acuity (VA) was measured using Landolt C targets and a two-alternative forced-choice (2AFC) staircase procedure. Eye movement data were analyzed at the null zone for changes in amplitude, frequency, intensity, and foveation characteristics. Waveform type was also noted under the two conditions. Data from the 11 subjects revealed a significant reduction in nystagmus amplitude (P < 0.05), frequency (P < 0.05), and intensity (P < 0.01) when target size was at visual threshold. The percentage of time the eye spent within the low-velocity window (i.e., foveation) significantly increased when target size was at visual threshold (P < 0.05). Furthermore, a change in waveform type with increased visual demand was exhibited by two subjects. The results indicate that increased visual demand modifies the nystagmus waveform favorably (and possibly adaptively), producing a significant reduction in nystagmus intensity and prolonged foveation. These findings contradict previous anecdotal reports that visual effort intensifies the nystagmus eye movement at the cost of visual performance. This discrepancy may be attributable to the lack of psychological stress involved in the visual task reported here, consistent with the suggestion that it is the visual importance of the task to the individual, rather than visual demand per se, that exacerbates INS. Further studies are needed to investigate quantitatively the effects of stress and psychological factors on INS waveforms.

  19. Visual impairment at baseline is associated with future poor physical functioning among middle-aged women: The Study of Women's Health Across the Nation, Michigan Site.

    PubMed

    Chandrasekaran, Navasuja; Harlow, Sioban; Moroi, Sayoko; Musch, David; Peng, Qing; Karvonen-Gutierrez, Carrie

    2017-02-01

    Emerging evidence suggests that the prevalence rates of poor functioning and of disability are increasing among middle-aged individuals. Visual impairment is associated with poor functioning among older adults but little is known about the impact of vision on functioning during midlife. The objective of this study was to assess the impact of visual impairment on future physical functioning among middle-aged women. In this longitudinal study, the sample consisted of 483 women aged 42 to 56 years, from the Michigan site of the Study of Women's Health Across the Nation. At baseline, distance and near vision were measured using a Titmus vision screener. Visual impairment was defined as visual acuity worse than 20/40. Physical functioning was measured up to 10 years later using performance-based measures, including a 40-foot timed walk, timed stair climb and forward reach. Women with impaired distance vision at baseline had 2.81 centimeters less forward reach distance (95% confidence interval (CI): -4.19, -1.42) and a 4.26-second longer stair climb time (95% CI: 2.73, 5.79) at follow-up than women without impaired distance vision. Women with impaired near vision also had less forward reach distance (2.26 centimeters, 95% CI: -3.30, -1.21) than those without impaired near vision. Among middle-aged women, visual impairment is a marker of poor physical functioning. Routine eye testing and vision correction may help improve physical functioning among midlife individuals.

  20. Visual Impairment at Baseline is Associated with Future Poor Physical Functioning Among Middle-Aged Women: The Study of Women's Health Across the Nation, Michigan site

    PubMed Central

    Chandrasekaran, Navasuja; Harlow, Sioban; Moroi, Sayoko; Musch, David; Peng, Qing; Karvonen-Gutierrez, Carrie

    2016-01-01

    Objectives: Emerging evidence suggests that the prevalence rates of poor functioning and of disability are increasing among middle-aged individuals. Visual impairment is associated with poor functioning among older adults but little is known about the impact of vision on functioning during midlife. The objective of this study was to assess the impact of visual impairment on future physical functioning among middle-aged women. Study design: In this longitudinal study, the sample consisted of 483 women aged 42 to 56 years, from the Michigan site of the Study of Women's Health Across the Nation. Main Outcome Measures: At baseline, distance and near vision were measured using a Titmus vision screener. Visual impairment was defined as visual acuity worse than 20/40. Physical functioning was measured up to 10 years later using performance-based measures, including a 40-foot timed walk, timed stair climb and forward reach. Results: Women with impaired distance vision at baseline had 2.81 centimeters less forward reach distance (95% confidence interval (CI): −4.19, −1.42) and 4.26 seconds longer stair climb time (95% CI: 2.73, 5.79) at follow-up than women without impaired distance vision. Women with impaired near vision also had less forward reach distance (2.26 centimeters, 95% CI: −3.30, −1.21) than those without impaired near vision. Conclusion: Among middle-aged women, visual impairment is a marker of poor physical functioning. Routine eye testing and vision correction may help improve physical functioning among midlife individuals. PMID:28041592

  1. Processing reafferent and exafferent visual information for action and perception.

    PubMed

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously to the reaching tasks, such that the perceptual processing stream could also profit from the specialized processing of reafferent information in the latter case. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  2. A vision-based system for fast and accurate laser scanning in robot-assisted phonomicrosurgery.

    PubMed

    Dagnino, Giulio; Mattos, Leonardo S; Caldwell, Darwin G

    2015-02-01

    Surgical quality in phonomicrosurgery can be improved by combining open-loop laser control (e.g., high-speed scanning capabilities) with a robust and accurate closed-loop visual servoing system. A new vision-based system for laser scanning control during robot-assisted phonomicrosurgery was developed and tested. Laser scanning was accomplished with a dual control strategy, which adds a vision-based trajectory correction phase to a fast open-loop laser controller. The system is designed to eliminate open-loop aiming errors caused by system calibration limitations and by the unpredictable topology of real targets. Evaluation of the new system was performed using CO(2) laser cutting trials on artificial targets and ex-vivo tissue. The system produced accuracy values corresponding to pixel resolution even when smoke created by the laser-target interaction cluttered the camera view. In realistic test scenarios, trajectory-following RMS errors were reduced by almost 80% with respect to open-loop performance, reaching mean error values around 30 μm and maximum observed errors on the order of 60 μm. The new vision-based laser microsurgical control system was shown to be effective and promising, with significant potential impact on the safety and quality of laser microsurgery.
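The headline metric here is trajectory-following RMS error; a small sketch of how such a figure is computed, using synthetic trajectories whose error magnitudes merely mimic the reported ~80% reduction (they are not the paper's data):

```python
import numpy as np

def rms_error(planned, executed):
    """RMS of the pointwise Euclidean distance between two 2-D trajectories."""
    d = np.linalg.norm(planned - executed, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

t = np.linspace(0, 2 * np.pi, 200)
planned = np.column_stack([np.cos(t), np.sin(t)])                    # mm, circular cut
open_loop = planned + 0.15 * np.column_stack([np.sin(3 * t), np.cos(3 * t)])
corrected = planned + 0.03 * np.column_stack([np.sin(3 * t), np.cos(3 * t)])

e_open, e_corr = rms_error(planned, open_loop), rms_error(planned, corrected)
print(f"open-loop RMS {e_open:.3f} mm, corrected {e_corr:.3f} mm "
      f"({100 * (1 - e_corr / e_open):.0f}% reduction)")
```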

  3. Visual classification of medical data using MLP mapping.

    PubMed

    Cağatay Güler, E; Sankur, B; Kahya, Y P; Raudys, S

    1998-05-01

    In this work we discuss the design of a novel non-linear mapping method for visual classification based on multilayer perceptrons (MLP) and assigned class target values. In training the perceptron, one or more target output values for each class in a 2-dimensional space are used. In other words, class membership information is interpreted visually as closeness to target values in a 2D feature space. This mapping is obtained by training the multilayer perceptron (MLP) using class membership information, input data and judiciously chosen target values. Weights are estimated in such a way that each training feature of the corresponding class is forced to be mapped onto the corresponding 2-dimensional target value.
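A minimal numpy sketch of the mapping idea, with toy data and assumed 2-D target values rather than the authors' medical data: a small MLP is trained to map features onto per-class 2-D targets, and samples are then classified by their nearest target in the mapped space.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two well-separated Gaussian classes in 4-D feature space
X = np.vstack([rng.normal(0.0, 0.5, (50, 4)), rng.normal(1.5, 0.5, (50, 4))])
labels = np.repeat([0, 1], 50)
T2D = np.array([[-1.0, 0.0], [1.0, 0.0]])  # assigned 2-D target per class (assumed)
Y = T2D[labels]

# One-hidden-layer MLP trained by gradient descent on squared error to the targets
W1 = rng.normal(0, 0.1, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(500):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2) - Y
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    gH = err @ W2.T * (1 - H ** 2)          # backprop through tanh
    gW1 = X.T @ gH / len(X); gb1 = gH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Visual classification: nearest assigned target in the learned 2-D map
mapped = np.tanh(X @ W1 + b1) @ W2 + b2
pred = np.argmin(((mapped[:, None, :] - T2D[None]) ** 2).sum(-1), axis=1)
acc = float((pred == labels).mean())
print(f"training accuracy: {acc:.2f}")
```

The 2-D `mapped` coordinates are what would be plotted for visual inspection; class membership is read off as closeness to a class target, as the abstract describes.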

  4. Explaining the Colavita visual dominance effect.

    PubMed

    Spence, Charles

    2009-01-01

    The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years in order to try and explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.

  5. Continuous movement decoding using a target-dependent model with EMG inputs.

    PubMed

    Sachs, Nicholas A; Corbett, Elaine A; Miller, Lee E; Perreault, Eric J

    2011-01-01

    Trajectory-based models that incorporate target position information have been shown to accurately decode reaching movements from bio-control signals, such as muscle (EMG) and cortical activity (neural spikes). One major hurdle in implementing such models for neuroprosthetic control is that they are inherently designed to decode single reaches from a position of origin to a specific target. Gaze direction can be used to identify appropriate targets; however, information regarding movement intent is needed to determine when a reach is meant to begin and when it has been completed. We used linear discriminant analysis to classify limb states into movement classes based on recorded EMG from a sparse set of shoulder muscles. We then used the detected state transitions to update target information in a mixture of Kalman filters that incorporated target position explicitly in the state, and used EMG activity to decode arm movements. Updating the target position initiated movement along new trajectories, allowing a sequence of appropriately timed single reaches to be decoded in series and enabling highly accurate continuous control.
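The "target in the state" idea can be sketched in one dimension (the dynamics, noise levels, and single-filter simplification below are assumptions; the paper uses a mixture of such filters driven by EMG): the state vector carries position, velocity, and target, the prior pulls velocity toward the target, and rewriting the target entry at a detected state transition re-aims the decoded trajectory.

```python
import numpy as np

DT, K = 0.05, 2.0
# State [position, velocity, target]: velocity is drawn toward the target.
A = np.array([[1.0,     DT,             0.0],
              [-K * DT, 1.0 - 0.5 * DT, K * DT],
              [0.0,     0.0,            1.0]])
H = np.array([[1.0, 0.0, 0.0]])        # observe noisy position only
Q = np.diag([1e-5, 1e-3, 0.0])         # target constant between explicit updates
R = np.array([[1e-2]])

def kf_step(x, P, z):
    x = A @ x                           # predict
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R                 # update
    G = P @ H.T @ np.linalg.inv(S)
    x = x + (G @ (z - H @ x)).ravel()
    P = (np.eye(3) - G @ H) @ P
    return x, P

rng = np.random.default_rng(2)
x, P = np.array([0.0, 0.0, 1.0]), np.eye(3) * 0.1   # target 1.0 set by the classifier
pos = vel = 0.0
for _ in range(100):                    # simulate a reach toward 1.0
    vel += K * DT * (1.0 - pos) - 0.5 * DT * vel
    pos += DT * vel
    x, P = kf_step(x, P, np.array([pos + rng.normal(0, 0.1)]))
print(f"decoded position {x[0]:.2f}, true position {pos:.2f}")
```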

  6. [How do Prevention Projects Reach their Target Groups? Results of a Survey with Prevention Projects].

    PubMed

    Brand, T; Böttcher, S; Jahn, I

    2015-12-01

    The aim of this study was to assess the methods used to reach target groups in prevention projects funded within the prevention research framework of the German Federal Ministry of Education and Research. A survey of prevention projects was conducted, exploring access strategies, communication channels, incentives, programme reach, and successful practical recruitment strategies. 38 out of 60 projects took part in the survey. Most projects accessed their target group within structured settings (e.g., child day-care centers, schools, workplaces). Multiple communication channels and incentives were used, with written information and monetary incentives being used most frequently. Only a few projects were able to report their programme reach adequately; programme reach was highest for programmes accessing the target groups in structured settings. The respondents viewed active recruitment via personal communication with the target group and key persons in the settings as the most successful strategy. The paper provides an overview of recruitment strategies used in current prevention projects. More systematic research on programme reach is necessary.

  7. [Eccentricity-dependent influence of amodal completion on visual search].

    PubMed

    Shirama, Aya; Ishiguchi, Akira

    2009-06-01

    Does amodal completion occur homogeneously across the visual field? Rensink and Enns (1998) found that visual search for efficiently-detected fragments became inefficient when observers perceived the fragments as a partially-occluded version of a distractor due to a rapid completion process. We examined the effect of target eccentricity in Rensink and Enns's tasks and a few additional tasks by magnifying the stimuli in the peripheral visual field to compensate for the loss of spatial resolution (M-scaling; Rovamo & Virsu, 1979). We found that amodal completion disrupted the efficient search for the salient fragments (i.e., target) even when the target was presented at high eccentricity (within 17 deg). In addition, the configuration effect of the fragments, which produced amodal completion, increased with eccentricity while the same target was detected efficiently at the lowest eccentricity. This eccentricity effect is different from a previously-reported eccentricity effect where M-scaling was effective (Carrasco & Frieder, 1997). These findings indicate that the visual system has a basis for rapid completion across the visual field, but the stimulus representations constructed through amodal completion have eccentricity-dependent properties.

  8. Scene Configuration and Object Reliability Affect the Use of Allocentric Information for Memory-Guided Reaching

    PubMed Central

    Klinghammer, Mathias; Blohm, Gunnar; Fiehler, Katja

    2017-01-01

    Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. Especially, task-relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability as a function of task-relevance on allocentric coding for memory-guided reaching. For that purpose, we presented participants images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach-targets (= task-relevant objects). Participants explored the scene and after a short delay, a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects left- or rightwards either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. In order to examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene leading to a larger target-cue distance compared to a grouped configuration. 
No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability irrespective of task-relevancy. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and the reliability of task-relevant allocentric information. PMID:28450826
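The size of such allocentric effects is commonly summarized as the slope of endpoint deviation against object shift; a sketch with synthetic trials (the weight of 0.4 and the noise level are assumptions, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trials: objects shifted by `shifts` mm; endpoints deviate by
# w * shift plus motor noise, where w is the allocentric weight.
shifts = np.array([-15.0, -10.0, -5.0, 5.0, 10.0, 15.0] * 10)
TRUE_W = 0.4
dev = TRUE_W * shifts + rng.normal(0, 2.0, shifts.size)

# Least-squares slope through the origin estimates the allocentric weight
w_hat = float(shifts @ dev / (shifts @ shifts))
print(f"estimated allocentric weight: {w_hat:.2f}")
```

A weight near 0 would indicate purely egocentric coding; a weight near 1, complete reliance on the shifted objects. The reduced deviations for distributed layouts and unreliable objects correspond to a smaller fitted weight.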

  9. Scene Configuration and Object Reliability Affect the Use of Allocentric Information for Memory-Guided Reaching.

    PubMed

    Klinghammer, Mathias; Blohm, Gunnar; Fiehler, Katja

    2017-01-01

    Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. Especially, task-relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability as a function of task-relevance on allocentric coding for memory-guided reaching. For that purpose, we presented participants images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach-targets (= task-relevant objects). Participants explored the scene and after a short delay, a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects left- or rightwards either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. In order to examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene leading to a larger target-cue distance compared to a grouped configuration. 
No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability irrespective of task-relevancy. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and the reliability of task-relevant allocentric information.

  10. Auditory and visual spatial impression: Recent studies of three auditoria

    NASA Astrophysics Data System (ADS)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  11. Dynamic interactions between visual working memory and saccade target selection

    PubMed Central

    Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew

    2014-01-01

    Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628
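The model class used here can be illustrated with a one-dimensional Amari-style dynamic neural field (all parameters below are assumed for illustration): local excitation plus broader inhibition lets a stimulus-induced activation peak sustain itself after the stimulus is removed, which is how such fields hold an item in working memory.

```python
import numpy as np

N = 100
x = np.arange(N)
h = -2.0                                      # resting level
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, N - d)                      # circular distance on the field
w = np.exp(-d ** 2 / (2 * 5.0 ** 2)) - 0.25   # local excitation, global inhibition

def f(u):
    return 1.0 / (1.0 + np.exp(-4.0 * u))     # sigmoid output nonlinearity

u = np.full(N, h)
stim = 5.0 * np.exp(-(x - 50) ** 2 / (2 * 3.0 ** 2))
dt_over_tau = 0.05
for step in range(400):
    inp = stim if step < 200 else 0.0         # stimulus removed halfway through
    u += dt_over_tau * (-u + h + w @ f(u) + inp)

# A self-sustained activation peak remains at the cued location after the
# stimulus is gone: the field's working-memory property.
print(int(np.argmax(u)), round(float(u.max()), 2))
```

The full model couples several such fields (sensory, spatial, and feature pathways); this sketch shows only the memory mechanism a single field provides.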

  12. Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory–Motor Transformation

    PubMed Central

    Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying

    2016-01-01

    Abstract The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation. PMID:27092335

  13. Dynamics of different-sized solid-state nanocrystals as tracers for a drug-delivery system in the interstitium of a human tumor xenograft

    PubMed Central

    Kawai, Masaaki; Higuchi, Hideo; Takeda, Motohiro; Kobayashi, Yoshio; Ohuchi, Noriaki

    2009-01-01

    Introduction: Recent anticancer drugs have been made larger so that they pass selectively through tumor vessels and remain in the interstitium. Understanding drug movement as a function of size at the single-molecule level, and estimating the time needed to reach the targeted organ, is indispensable for optimizing drug delivery now that single-cell-targeted therapy is the prevailing paradigm. This report describes the tracking of single solid nanoparticles in tumor xenografts and the estimation of their arrival time. Methods: Nanoparticles measuring 20, 40, and 100 nm were injected into the tail vein of female Balb/c nu/nu mice bearing human breast cancer xenografts on their backs. The movements of the nanoparticles were visualized through a dorsal skin-fold chamber with a custom-built high-speed confocal microscope. Results: Analysis of the particle trajectories revealed that diffusion was inversely related to particle size and to position in the tumor, whereas the velocity of directed movement was related only to position. The velocity difference was greatest for 40-nm particles between the perivascular and intercellular regions (difference = 5.8 nm/s). The arrival time of individual nanoparticles at tumor cells was simulated: the estimated times for the 20-, 40-, and 100-nm particles to reach tumor cells after extravasation were 158.0, 218.5, and 389.4 minutes, respectively. Conclusions: These results suggest that particle size can be designed individually for each therapeutic goal. The data and methods are also important for understanding drug pharmacokinetics. Although the method may be subject to interference from surface molecules attached to the particles, it has the potential to elucidate the pharmacokinetics involved in constructing novel cell-targeted drug-delivery systems. PMID:19575785
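The inverse relationship between particle size and diffusion reported above can be illustrated with the free-solution Stokes-Einstein relation, D = kB·T / (6πηr). This is an idealization and an assumption on my part: the study measured hindered diffusion in tumor interstitium, so the viscosity value and the formula itself serve only to show the size dependence, not to reproduce the authors' measurements.

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0           # body temperature, K
eta = 0.7e-3        # approximate viscosity of water at ~37 C, Pa*s (assumed)

def diffusion_coeff(diameter_nm):
    """Stokes-Einstein diffusion coefficient (m^2/s) for a sphere in free solution."""
    r = diameter_nm * 1e-9 / 2.0   # radius in meters
    return kB * T / (6.0 * math.pi * eta * r)

# The three particle sizes used in the study
D = {d: diffusion_coeff(d) for d in (20, 40, 100)}
```

Because D scales as 1/r, the 20-nm particles diffuse exactly five times faster than the 100-nm particles under this idealization, qualitatively matching the ordering of the reported arrival times.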

  14. Fitts' Law in the Control of Isometric Grip Force With Naturalistic Targets.

    PubMed

    Thumser, Zachary C; Slifkin, Andrew B; Beckler, Dylan T; Marasco, Paul D

    2018-01-01

    Fitts' law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts' law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts' law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts' law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback, the relation between task difficulty and the time to produce the target grip force was predicted by Fitts' law (average r² = 0.82). Without vision, average grip force scaled accurately, although force variability was insensitive to the target presented. In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well described by Fitts' law for explicit targets with vision (r² = 0.96) and implicit targets (r² = 0.89), but less well described for explicit targets without vision (r² = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts' law to quantify the relative speed-accuracy relationship of any given grasper.
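The speed-accuracy trade-off described above can be sketched numerically. The Shannon-style formulation of Fitts' law (ID = log2(2A/W)), the least-squares fit, and the force/tolerance numbers below are illustrative assumptions, not the authors' data or their exact formulation:

```python
import math

def index_of_difficulty(amplitude, width):
    # Classic Shannon-style Fitts index of difficulty, log2(2A/W).
    # For a grip-force task, "amplitude" would be the target force and
    # "width" the tolerated force band (an assumed mapping).
    return math.log2(2.0 * amplitude / width)

def fit_fitts(ids, mts):
    # Ordinary least-squares fit of MT = a + b * ID (stdlib only).
    n = len(ids)
    mx, my = sum(ids) / n, sum(mts) / n
    b = sum((x - mx) * (y - my) for x, y in zip(ids, mts)) / \
        sum((x - mx) ** 2 for x in ids)
    a = my - b * mx
    return a, b

# Hypothetical target-force (N) / tolerance (N) pairs and response times (s)
conditions = [(10, 5), (20, 5), (40, 5), (40, 2)]
ids = [index_of_difficulty(A, W) for A, W in conditions]
mts = [0.35, 0.45, 0.55, 0.68]
a, b = fit_fitts(ids, mts)   # b > 0: harder targets take longer
```

A positive slope b, together with a high r², is what "well described by Fitts' law" means operationally in abstracts like the one above.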

  15. Reduced Performance of Prey Targeting in Pit Vipers with Contralaterally Occluded Infrared and Visual Senses

    PubMed Central

    Chen, Qin; Deng, Huanhuan; Brauth, Steven E.; Ding, Li; Tang, Yezhong

    2012-01-01

    Both visual and infrared (IR) senses are utilized in prey targeting by pit vipers. Visual and IR inputs project to the contralateral optic tectum, where they activate both multimodal and bimodal neurons. A series of ocular and pit organ occlusion experiments using the short-tailed pit viper (Gloydius brevicaudus) were conducted to investigate the role of visual and IR information during prey targeting. Compared with unoccluded controls, snakes with either both eyes or both pit organs occluded performed more poorly in hunting prey, although such subjects still captured prey on 75% of trials. Subjects with one eye and one pit occluded on the same side of the face performed as well as those with bilateral occlusion, although these subjects showed a significant targeting angle bias toward the unoccluded side. Performance was significantly poorer when only a single eye or pit was available. Interestingly, when one eye and one pit organ were occluded on opposite sides of the face, performance was poorest, the snakes striking prey on no more than half the trials. These results indicate that visual and infrared information are both effective in prey targeting in this species, although interference between the two modalities occurs if visual and IR information is restricted to opposite sides of the brain. PMID:22606229

  16. The forward masking effects of low-level laser glare on target location performance in a visual search task

    NASA Astrophysics Data System (ADS)

    Reddix, M. D.; Dandrea, J. A.; Collyer, P. D.

    1992-01-01

    The present study examined the effects of low-intensity laser glare, far below a level that would cause ocular damage or flashblindness, on the visually guided performance of aviators. With a forward-masking paradigm, this study showed that the time at which laser glare is experienced, relative to initial acquisition of visual information, differentially affects the speed and accuracy of target-location performance. Brief exposure (300 ms) to laser glare, terminating with a visual scene's onset, produced significant decrements in target-location performance relative to a no-glare control, whereas 150- and 300-ms delays of display onset (DDO) had very little effect. The intensity of the light entering the eye and producing these effects was far below the Maximum Permissible Exposure (MPE) limit for safe viewing of coherent light produced by an argon laser. In addition, these effects were modulated by the distance of the target from the center of the visual display. This study demonstrated that the presence of laser glare is not sufficient, in and of itself, to diminish target-location performance. The time at which laser glare is experienced is an important factor in determining the probability and extent of visually mediated performance decrements.

  17. Separate visual representations for perception and for visually guided behavior

    NASA Technical Reports Server (NTRS)

    Bridgeman, Bruce

    1989-01-01

    Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background, or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This effect also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.

  18. The effects of bilateral presentations on lateralized lexical decision.

    PubMed

    Fernandino, Leonardo; Iacoboni, Marco; Zaidel, Eran

    2007-06-01

    We investigated how lateralized lexical decision is affected by the presence of distractors in the visual hemifield contralateral to the target. The study had three goals: first, to determine how the presence of a distractor (either a word or a pseudoword) affects visual field differences in the processing of the target; second, to identify the stage of the process in which the distractor is affecting the decision about the target; and third, to determine whether the interaction between the lexicality of the target and the lexicality of the distractor ("lexical redundancy effect") is due to facilitation or inhibition of lexical processing. Unilateral and bilateral trials were presented in separate blocks. Target stimuli were always underlined. Regarding our first goal, we found that bilateral presentations (a) increased the effect of visual hemifield of presentation (right visual field advantage) for words by slowing down the processing of word targets presented to the left visual field, and (b) produced an interaction between visual hemifield of presentation (VF) and target lexicality (TLex), which implies the use of different strategies by the two hemispheres in lexical processing. For our second goal of determining the processing stage that is affected by the distractor, we introduced a third condition in which targets were always accompanied by "perceptual" distractors consisting of sequences of the letter "x" (e.g., xxxx). Performance on these trials indicated that most of the interaction occurs during lexical access (after basic perceptual analysis but before response programming). Finally, a comparison between performance patterns on the trials containing perceptual and lexical distractors indicated that the lexical redundancy effect is mainly due to inhibition of word processing by pseudoword distractors.

  19. Neural Pathways Conveying Nonvisual Information to the Visual Cortex

    PubMed Central

    2013-01-01

    The visual cortex has been traditionally considered a stimulus-driven, unimodal system with a hierarchical organization. However, recent animal and human studies have shown that the visual cortex responds to nonvisual stimuli, especially in congenitally visually deprived individuals, indicating the supramodal nature of functional representation in the visual cortex. To understand the neural substrates of the cross-modal processing of nonvisual signals in the visual cortex, we first describe the supramodal nature of the visual cortex. We then review how nonvisual signals reach the visual cortex and discuss whether these nonvisual pathways are reshaped by early visual deprivation. Finally, the open question of the nature (stimulus-driven or top-down) of nonvisual signals is also discussed. PMID:23840972

  20. Modulation of neuronal responses during covert search for visual feature conjunctions

    PubMed Central

    Buracas, Giedrius T.; Albright, Thomas D.

    2009-01-01

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions. PMID:19805385

  2. Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.

    PubMed

    Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli

    2018-06-08

    Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Are condom-promotion interventions reaching internal migrants in China? Integrated evidence from two cross-sectional surveys.

    PubMed

    Liu, Xiaona; Erasmus, Vicki; van Genugten, Lenneke; Sun, Xinying; Tan, Jingguang; Richardus, Jan Hendrik

    2016-09-01

    Behavioral interventions containing behavior change techniques (BCTs) that do not reach the target populations sufficiently will fail to accomplish their desired outcome. To guide sexually transmitted infection prevention policy for internal migrants in China, this study examines the extent to which BCTs aiming at increasing condom use reach the migrants and investigates the preference of the target population for these techniques among 364 migrants and 44 healthcare workers (HCWs) in Shenzhen, China. The results show that condom-promotion techniques that had been offered by HCWs to internal migrants reached a limited proportion of the population (range of reach ratio: 17.6-55.0%), although there appears to be a good match between what is offered and what is preferred by Chinese internal migrants regarding condom-promotion techniques (rank difference ≤ 1). Our findings highlight the need to increase the reach of condom-promotion techniques among Chinese internal migrants, and suggest techniques that are likely to reach the target population and match their preferred health education approaches.

  4. Timing of target discrimination in human frontal eye fields.

    PubMed

    O'Shea, Jacinta; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent

    2004-01-01

    Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.

  5. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-11-06

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. © 2014 ARVO.

  6. How visual working memory contents influence priming of visual attention.

    PubMed

    Carlisle, Nancy B; Kristjánsson, Árni

    2017-04-12

    Recent evidence shows that when the contents of visual working memory overlap with targets and distractors in a pop-out search task, intertrial priming is inhibited (Kristjánsson, Sævarsson & Driver, Psychon Bull Rev 20(3):514-521, 2013, Experiment 2, Psychonomic Bulletin and Review). This may reflect an interesting interaction between implicit short-term memory-thought to underlie intertrial priming-and explicit visual working memory. Evidence from a non-pop-out search task suggests that it may specifically be holding distractors in visual working memory that disrupts intertrial priming (Cunningham & Egeth, Psychol Sci 27(4):476-485, 2016, Experiment 2, Psychological Science). We examined whether the inhibition of priming depends on whether feature values in visual working memory overlap with targets or distractors in the pop-out search, and we found that the inhibition of priming resulted from holding distractors in visual working memory. These results are consistent with separate mechanisms of target and distractor effects in intertrial priming, and support the notion that the impact of implicit short-term memory and explicit visual working memory can interact when each provides conflicting attentional signals.

  7. Effects of Length of Retention Interval on Proactive Interference in Short-Term Visual Memory

    ERIC Educational Resources Information Center

    Meudell, Peter R.

    1977-01-01

    These experiments show two things: (a) In visual memory, long-term interference on a current item from items previously stored only seems to occur when the current item's retention interval is relatively long, and (b) the visual code appears to decay rapidly, reaching asymptote within 3 seconds of input in the presence of an interpolated task.…

  8. Global attention facilitates the planning, but not execution of goal-directed reaches.

    PubMed

    McCarthy, J Daniel; Song, Joo-Hyun

    2016-07-01

    In daily life, humans interact with multiple objects in complex environments. A large body of literature demonstrates that target selection is biased toward recently attended features, such that reaches are faster and trajectory curvature is reduced when target features (i.e., color) are repeated (priming of pop-out). In the real world, however, objects comprise several features, some of which may be more suitable for action than others. When fetching a mug from the cupboard, for example, attention not only has to be allocated to the object, but also to the handle. To date, no study has investigated the impact of hierarchical feature organization on target selection for action. Here, we employed a color-oddity search task in which targets were Pac-men (i.e., circles with a triangle cut out) oriented to be either consistent or inconsistent with the percept of a global Kanizsa triangle. We found that reaches were initiated faster when a task-irrelevant illusory figure was present, independent of color repetition. Additionally, consistent with priming of pop-out, both reach planning and execution were facilitated when local target colors were repeated, regardless of whether a global figure was present. We also demonstrated that figures defined by illusory, but not real, contours afforded an early target selection benefit. In sum, these findings suggest that when local targets are perceptually grouped to form an illusory surface, attention quickly spreads across the global figure and facilitates the early stage of reach planning, but not execution. In contrast, local color priming is evident throughout goal-directed reaching.

  9. A Comparison of Basinwide and Representative Reach Habitat Survey Techniques in Three Southern Appalachian Watersheds

    Treesearch

    C. Andrew Dolloff; Holly E. Jennings

    1997-01-01

    We compared estimates of stream habitat at the watershed scale using the basinwide visual estimation technique (BVET) and the representative reach extrapolation technique (RRET) in three small watersheds in the Appalachian Mountains. Within each watershed, all habitat units were sampled with the BVET; in contrast, three or four 100-m reaches were sampled with the RRET....

  10. Braking reaching movements: a test of the constant tau-dot strategy under different viewing conditions.

    PubMed

    Hopkins, Brian; Churchill, Andrew; Vogt, Stefan; Rönnqvist, Louise

    2004-03-01

    Following F. Zaal and R. J. Bootsma (1995), the authors studied whether the decelerative phase of a reaching movement could be modeled as a constant tau-dot strategy resulting in a soft collision with the object. Specifically, they investigated whether that strategy is sustained over different viewing conditions. Participants (N = 11) were required to reach for 15- and 50-mm objects at 2 different distances under 3 conditions in which visual availability of the immediate environment and of the reaching hand were varied. Tau-dot estimates and goodness-of-fit were highly similar across the 3 conditions. Only within-participant variability of tau-dot estimates was increased when environmental cues were removed. That finding suggests that the motor system uses a tau-dot strategy involving the intermodal (i.e., visual, proprioceptive, or both) specification of information to regulate the decelerative phase of reaching under restricted viewing conditions. The authors provide recommendations for improving the derivation of tau(x) estimates and stress the need for further research on how time-to-contact information is used in the regulation of the dynamics of actions such as reaching.
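The constant tau-dot idea above can be made concrete numerically: tau is the remaining gap divided by closing speed, and a reach that decelerates at a constant rate, stopping exactly at the object, holds tau-dot at -0.5 throughout. The deceleration value, contact time, and step size below are illustrative assumptions, not values from the study:

```python
# Numeric check that constant deceleration ending exactly at the object
# corresponds to a constant tau-dot of -0.5 (a "soft collision").
decel = 2.0   # deceleration, m/s^2 (assumed)
T = 1.0       # time of contact, s (assumed)
dt = 1e-4     # sampling step, s

def gap(t):
    # distance remaining to the object under constant deceleration
    return 0.5 * decel * (T - t) ** 2

def speed(t):
    # closing speed (positive while approaching)
    return decel * (T - t)

taus = []
t = 0.1
while t < 0.9:
    taus.append(gap(t) / speed(t))   # tau = x / x-dot
    t += dt

# finite-difference estimate of tau-dot over the sampled interval
tau_dots = [(taus[i + 1] - taus[i]) / dt for i in range(len(taus) - 1)]
```

Here tau(t) = 0.5·(T − t), so every finite-difference estimate of tau-dot comes out at -0.5; an empirical tau-dot held between -1 and 0 in this way is what the constant tau-dot strategy predicts for controlled braking.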

  11. The Crossmodal Facilitation of Visual Object Representations by Sound: Evidence from the Backward Masking Paradigm

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We report a series of experiments designed to demonstrate that the presentation of a sound can facilitate the identification of a concomitantly presented visual target letter in the backward masking paradigm. Two visual letters, serving as the target and its mask, were presented successively at various interstimulus intervals (ISIs). The results…

  12. "Looking-at-nothing" during sequential sensorimotor actions: Long-term memory-based eye scanning of remembered target locations.

    PubMed

    Foerster, Rebecca M

    2018-03-01

    Before acting, humans saccade to a target object to extract relevant visual information. Even when acting on remembered objects, locations previously occupied by relevant objects are fixated during imagery and memory tasks - a phenomenon called "looking-at-nothing". While looking-at-nothing was robustly found in tasks encouraging declarative memory build-up, results are mixed in the case of procedural sensorimotor tasks. Eye guidance to manual targets in complete darkness was observed in a task practiced for days beforehand, while investigations using only a single session did not find fixations to remembered action targets. Here, it is asked whether looking-at-nothing can be found in a single sensorimotor session, and thus independent of sleep consolidation, and how it progresses when visual information is repeatedly unavailable. Eye movements were investigated in a computerized version of the trail making test. Participants clicked on numbered circles in ascending sequence. Fifty trials were performed with the same spatial arrangement of 9 visual targets to enable long-term memory consolidation. During 50 consecutive trials, participants had to click the remembered target sequence on an empty screen. Participants scanned the visual targets and also the empty target locations sequentially with their eyes, though less precisely for the empty locations than for the visual targets. Over the course of the memory trials, manual and oculomotor sequential target scanning became more similar to the visual trials. Results argue for robust looking-at-nothing during procedural sensorimotor tasks provided that long-term memory information is sufficient. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Hard-to-Reach? Using Health Access Status as a Way to More Effectively Target Segments of the Latino Audience

    ERIC Educational Resources Information Center

    Wilkin, Holley A.; Ball-Rokeach, Sandra J.

    2011-01-01

    Health issues disproportionately affect Latinos, but variations within this ethnic group may mean that some Latinos are harder to reach with health messages than others. This paper introduces a methodology grounded in communication infrastructure theory to better target "hard-to-reach" audiences. A random digit dialing telephone survey…

  14. The Role of Motor Learning in Spatial Adaptation near a Tool

    PubMed Central

    Brown, Liana E.; Doole, Robert; Malfait, Nicole

    2011-01-01

    Some visual-tactile (bimodal) cells have visual receptive fields (vRFs) that overlap and extend moderately beyond the skin of the hand. Neurophysiological evidence suggests, however, that a vRF will grow to encompass a hand-held tool following active tool use but not after passive holding. Why does active tool use, and not passive holding, lead to spatial adaptation near a tool? We asked whether spatial adaptation could be the result of motor or visual experience with the tool, and we distinguished between these alternatives by isolating motor from visual experience with the tool. Participants learned to use a novel, weighted tool. The active training group received both motor and visual experience with the tool, the passive training group received visual experience with the tool, but no motor experience, and finally, a no-training control group received neither visual nor motor experience using the tool. After training, we used a cueing paradigm to measure how quickly participants detected targets, varying whether the tool was placed near or far from the target display. Only the active training group detected targets more quickly when the tool was placed near, rather than far, from the target display. This effect of tool location was not present for either the passive-training or control groups. These results suggest that motor learning influences how visual space around the tool is represented. PMID:22174944

  15. Evidence for an attentional component of inhibition of return in visual search.

    PubMed

    Pierce, Allison M; Crouse, Monique D; Green, Jessica J

    2017-11-01

    Inhibition of return (IOR) is typically described as an inhibitory bias against returning attention to a recently attended location as a means of promoting efficient visual search. Most studies examining IOR, however, either do not use visual search paradigms or do not effectively isolate attentional processes, making it difficult to conclusively link IOR to a bias in attention. Here, we recorded ERPs during a simple visual search task designed to isolate the attentional component of IOR to examine whether an inhibitory bias of attention is observed and, if so, how it influences visual search behavior. Across successive visual search displays, we found evidence of both a broad, hemisphere-wide inhibitory bias of attention along with a focal, target location-specific facilitation. When the target appeared in the same visual hemifield in successive searches, responses were slower and the N2pc component was reduced, reflecting a bias of attention away from the previously attended side of space. When the target occurred at the same location in successive searches, responses were facilitated and the P1 component was enhanced, likely reflecting spatial priming of the target. These two effects are combined in the response times, leading to a reduction in the IOR effect for repeated target locations. Using ERPs, however, these two opposing effects can be isolated in time, demonstrating that the inhibitory biasing of attention still occurs even when response-time slowing is ameliorated by spatial priming. © 2017 Society for Psychophysiological Research.

  16. The Generalization of Visuomotor Learning to Untrained Movements and Movement Sequences Based on Movement Vector and Goal Location Remapping

    PubMed Central

    Wu, Howard G.

    2013-01-01

    The planning of goal-directed movements is highly adaptable; however, the basic mechanisms underlying this adaptability are not well understood. Even the features of movement that drive adaptation are hotly debated, with some studies suggesting remapping of goal locations and others suggesting remapping of the movement vectors leading to goal locations. However, several previous motor learning studies and the multiplicity of the neural coding underlying visually guided reaching movements stand in contrast to this either/or debate on the modes of motor planning and adaptation. Here we hypothesize that, during visuomotor learning, the target location and movement vector of trained movements are separately remapped, and we propose a novel computational model for how motor plans based on these remappings are combined during the control of visually guided reaching in humans. To test this hypothesis, we designed a set of experimental manipulations that effectively dissociated the effects of remapping goal location and movement vector by examining the transfer of visuomotor adaptation to untrained movements and movement sequences throughout the workspace. The results reveal that (1) motor adaptation differentially remaps goal locations and movement vectors, and (2) separate motor plans based on these features are effectively averaged during motor execution. We then show that, without any free parameters, the computational model we developed for combining movement-vector-based and goal-location-based planning predicts nearly 90% of the variance in novel movement sequences, even when multiple attributes are simultaneously adapted, demonstrating for the first time the ability to predict how motor adaptation affects movement sequence planning. PMID:23804099
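
    The combination scheme described above - separate motor plans derived from a remapped goal location and a remapped movement vector, averaged at execution - can be sketched as a toy model. Everything here (function names, the rotation-style remappings, the equal weighting) is an illustrative assumption, not taken from the paper:

```python
import numpy as np

# Toy sketch of combining goal-location-based and movement-vector-based plans.
# All names, remapping forms, and the equal weighting are illustrative.

def remapped_goal_plan(start, goal, goal_remap):
    """Plan aimed at the remapped (adapted) goal location."""
    return (goal + goal_remap) - start

def remapped_vector_plan(start, goal, rotation_deg):
    """Plan that applies the adaptation to the movement vector itself."""
    th = np.deg2rad(rotation_deg)
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    return rot @ (goal - start)

def combined_plan(start, goal, goal_remap, rotation_deg, w=0.5):
    """Executed movement: weighted average of the two plans."""
    return (w * remapped_vector_plan(start, goal, rotation_deg)
            + (1 - w) * remapped_goal_plan(start, goal, goal_remap))
```

    With a 90-degree vector remapping and a small goal-location shift, the executed movement falls between the two component plans, which is the averaging behavior the abstract reports.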

  17. Interaction between gaze and visual and proprioceptive position judgements.

    PubMed

    Fiehler, Katja; Rösler, Frank; Henriques, Denise Y P

    2010-06-01

    There is considerable evidence that targets for action are represented in a dynamic gaze-centered frame of reference, such that each gaze shift requires an internal updating of the target. Here, we investigated the effect of eye movements on the spatial representation of targets used for position judgements. Participants had their hand passively placed to a location, and then judged whether this location was left or right of a remembered visual or remembered proprioceptive target, while gaze direction was varied. Estimates of position of the remembered targets relative to the unseen position of the hand were assessed with an adaptive psychophysical procedure. These positional judgements significantly varied relative to gaze for both remembered visual and remembered proprioceptive targets. Our results suggest that relative target positions may also be represented in eye-centered coordinates. This implies similar spatial reference frames for action control and space perception when positions are coded relative to the hand.

  18. Proprioceptive loss and the perception, control and learning of arm movements in humans: evidence from sensory neuronopathy.

    PubMed

    Miall, R Chris; Kitchen, Nick M; Nam, Se-Ho; Lefumat, Hannah; Renault, Alix G; Ørstavik, Kristin; Cole, Jonathan D; Sarlegna, Fabrice R

    2018-05-19

    It is uncertain how vision and proprioception contribute to adaptation of voluntary arm movements. In normal participants, adaptation to imposed forces is possible with or without vision, suggesting that proprioception is sufficient; in participants with proprioceptive loss (PL), adaptation is possible with visual feedback, suggesting that proprioception is unnecessary. In experiment 1 adaptation to, and retention of, perturbing forces were evaluated in three chronically deafferented participants. They made rapid reaching movements to move a cursor toward a visual target, and a planar robot arm applied orthogonal velocity-dependent forces. Trial-by-trial error correction was observed in all participants. Such adaptation has been characterized with a dual-rate model: a fast process that learns quickly, but retains poorly and a slow process that learns slowly and retains well. Experiment 2 showed that the PL participants had large individual differences in learning and retention rates compared to normal controls. Experiment 3 tested participants' perception of applied forces. With visual feedback, the PL participants could report the perturbation's direction as well as controls; without visual feedback, thresholds were elevated. Experiment 4 showed, in healthy participants, that force direction could be estimated from head motion, at levels close to the no-vision threshold for the PL participants. Our results show that proprioceptive loss influences perception, motor control and adaptation but that proprioception from the moving limb is not essential for adaptation to, or detection of, force fields. The differences in learning and retention seen between the three deafferented participants suggest that they achieve these tasks in idiosyncratic ways after proprioceptive loss, possibly integrating visual and vestibular information with individual cognitive strategies.
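
    The dual-rate model invoked above is usually written as a two-state linear system. A minimal simulation might look like the following sketch; the parameter values are illustrative defaults, not estimates fitted to these participants:

```python
# Minimal dual-rate adaptation model: a fast process that learns quickly but
# retains poorly, plus a slow process that learns slowly but retains well.
# Parameter values are illustrative, not fitted to any data.

def simulate_dual_rate(perturbation, Af=0.59, Bf=0.21, As=0.992, Bs=0.02):
    xf = xs = 0.0          # fast and slow adaptation states
    net = []
    for f in perturbation:
        x = xf + xs        # net adaptation expressed on this trial
        e = f - x          # error experienced on this trial
        xf = Af * xf + Bf * e   # fast: high learning rate, low retention
        xs = As * xs + Bs * e   # slow: low learning rate, high retention
        net.append(x)
    return net

# 100 trials of a constant unit perturbation: adaptation rises gradually
adapt = simulate_dual_rate([1.0] * 100)
```

    Individual differences in learning and retention of the kind reported for the deafferented participants would correspond to different Af/As (retention) and Bf/Bs (learning rate) values.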

  19. Alteration of a motor learning rule under mirror-reversal transformation does not depend on the amplitude of visual error.

    PubMed

    Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi

    2015-05-01

    Humans' sophisticated motor learning system paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because normal movement error correction further aggravates the error. This error-increasing mechanism makes even a simple reaching task difficult to perform, but is overcome by alterations in the error correction rule during the trials. To isolate the factors that trigger learners to change the error correction rule, we manipulated the gain of visual angular errors while participants made arm-reaching movements with mirror-reversed visual feedback, and compared the timing of the rule alteration between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error were tracked to explain the timing of the change in the error correction rule. Under both gain conditions, visual angular errors increased under the MR transformation and suddenly decreased after 3-5 trials of increase. The increase became degressive at different amplitudes between the two groups, nearly proportional to the visual gain. The findings suggest that the alteration of the error correction rule does not depend on the amplitude of visual angular errors, and is possibly determined by the number of trials over which the errors increased or by the statistical properties of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  20. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    PubMed Central

    2014-01-01

    Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Motor information is coded in egocentric coordinates obtained from the allocentric representation of space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system composed of two vergent cameras; a module for 2D binocular disparity estimation based on local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator that performs the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
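
    The phase-difference disparity computation mentioned above can be illustrated with a 1-D toy version. This is a sketch only: the kernel parameters and the test pattern are invented, and the paper's actual module is 2-D and uses a multi-channel filter bank:

```python
import numpy as np

# 1-D toy of phase-based disparity: a complex Gabor filter gives a local
# phase at each position; disparity is estimated as the left/right phase
# difference divided by the filter's radian frequency.

def gabor_phase(signal, freq, sigma=8.0):
    """Local phase from convolution with a complex Gabor kernel."""
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)
    return np.angle(np.convolve(signal, kernel, mode="same"))

def disparity_estimate(left, right, freq):
    """Disparity = wrapped phase difference over radian frequency."""
    dphi = gabor_phase(left, freq) - gabor_phase(right, freq)
    return np.angle(np.exp(1j * dphi)) / (2 * np.pi * freq)

# Sinusoidal pattern shifted by 3 pixels between the two views
x = np.arange(256)
left = np.sin(2 * np.pi * x / 16)
right = np.sin(2 * np.pi * (x - 3) / 16)
d = disparity_estimate(left, right, freq=1 / 16)
```

    Away from the image borders the estimate recovers the 3-pixel shift; a real system pools several frequency channels to extend the unambiguous disparity range beyond half a period.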

  1. Multiple Motor Learning Strategies in Visuomotor Rotation

    PubMed Central

    Saijo, Naoki; Gomi, Hiroaki

    2010-01-01

    Background When exposed to a continuous directional discrepancy between movements of a visible hand cursor and the actual hand (visuomotor rotation), subjects adapt their reaching movements so that the cursor is brought to the target. Abrupt removal of the discrepancy after training induces reaching error in the direction opposite to the original discrepancy, which is called an aftereffect. Previous studies have shown that training with gradually increasing visuomotor rotation results in a larger aftereffect than with a suddenly increasing one. Although the aftereffect difference implies a difference in the learning process, it is still unclear whether the learned visuomotor transformations are qualitatively different between the training conditions. Methodology/Principal Findings We examined the qualitative changes in the visuomotor transformation after the learning of the sudden and gradual visuomotor rotations. The learning of the sudden rotation led to a significant increase of the reaction time for arm movement initiation and then the reaching error decreased, indicating that the learning is associated with an increase of computational load in motor preparation (planning). In contrast, the learning of the gradual rotation did not change the reaction time but resulted in an increase of the gain of feedback control, suggesting that the online adjustment of the reaching contributes to the learning of the gradual rotation. When the online cursor feedback was eliminated during the learning of the gradual rotation, the reaction time increased, indicating that additional computations are involved in the learning of the gradual rotation. Conclusions/Significance The results suggest that the change in the motor planning and online feedback adjustment of the movement are involved in the learning of the visuomotor rotation. The contributions of those computations to the learning are flexibly modulated according to the visual environment. Such multiple learning strategies would be required for reaching adaptation within a short training period. PMID:20195373

  2. Image guided neuroendoscopy for third ventriculostomy.

    PubMed

    Broggi, G; Dones, I; Ferroli, P; Franzini, A; Servello, D; Duca, S

    2000-01-01

    Third ventriculostomy has become an increasingly popular procedure for the treatment of hydrocephalus of different aetiologies. Between October 1997 and October 1998, 17 patients (12 females, 5 males; aged 12-82 years; mean age 43) underwent image-assisted endoscopic third ventriculostomy for hydrocephalus at the Istituto Nazionale Neurologico "C. Besta" in Milan. There was no mortality and no long-term morbidity. Neuronavigation was found useful in selecting the safest trajectory to the target, avoiding any traction on structures related to the foramen of Monro and allowing the mobility necessary for fine adjustments under visual and "tactile" control when choosing the safest point at which to perform the stoma. In our experience, neuroendoscopy and neuronavigation seem to be complementary in reaching easy, safe and successful results in the treatment of hydrocephalus of different origins.

  3. Dissociating error-based and reinforcement-based loss functions during sensorimotor learning

    PubMed Central

    McGregor, Heather R.; Mohatarem, Ayman

    2017-01-01

    It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback. PMID:28753634
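
    The mean/mode dissociation at the heart of this design can be made concrete with a toy simulation. The gamma distribution and all numbers here are illustrative assumptions, not the shift distribution used in the study:

```python
import numpy as np

# Lateral cursor shifts drawn from a right-skewed distribution whose mean
# and mode differ, as in the design described above (values illustrative).
rng = np.random.default_rng(0)
shifts = rng.gamma(shape=2.0, scale=1.0, size=100_000)  # mean = 2, mode = 1

mean_shift = shifts.mean()                    # optimum under an error-based loss
counts, edges = np.histogram(shifts, bins=200)
mode_shift = edges[counts.argmax()]           # optimum under a reinforcement loss

# Predicted aim points: compensate by cancelling the relevant statistic
aim_error_based = -mean_shift
aim_reinforcement_based = -mode_shift
```

    Because the mean exceeds the mode for a right-skewed distribution, the two loss functions predict measurably different aim points, which is what lets the study dissociate them behaviorally.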

  5. When canary primes yellow: effects of semantic memory on overt attention.

    PubMed

    Léger, Laure; Chauvet, Elodie

    2015-02-01

    This study explored how overt attention is influenced by the colour that is primed when a target word is read during a lexical visual search task. Prior studies have shown that attention can be influenced by conceptual or perceptual overlap between a target word and distractor pictures: attention is attracted to pictures that have the same form (rope--snake) or colour (green--frog) as the spoken target word or is drawn to an object from the same category as the spoken target word (trumpet--piano). The hypothesis for this study was that attention should be attracted to words displayed in the colour that is primed by reading a target word (for example, yellow for canary). An experiment was conducted in which participants' eye movements were recorded whilst they completed a lexical visual search task. The primary finding was that participants' eye movements were mainly directed towards words displayed in the colour primed by reading the target word, even though this colour was not relevant to completing the visual search task. This result is discussed in terms of top-down guidance of overt attention in visual search for words.

  6. Evidence for Deficits in the Temporal Attention Span of Poor Readers

    PubMed Central

    Visser, Troy A. W.

    2014-01-01

    Background While poor reading is often associated with phonological deficits, many studies suggest that visual processing might also be impaired. In particular, recent research has indicated that poor readers show impaired spatial visual attention spans in partial and whole report tasks. Given the similarities between competition-based accounts for reduced visual attention span and similar explanations for impairments in sequential object processing, the present work examined whether poor readers show deficits in their “temporal attention span” – that is, their ability to rapidly and accurately process sequences of consecutive target items. Methodology/Principal Findings Poor and normal readers monitored a sequential stream of visual items for two (TT condition) or three (TTT condition) consecutive target digits. Target identification was examined using both unconditional and conditional measures of accuracy in order to gauge the overall likelihood of identifying a target and the likelihood of identifying a target given successful identification of previous items. Compared to normal readers, poor readers showed small but consistent deficits in identification across targets whether unconditional or conditional accuracy was used. Additionally, in the TTT condition, final-target conditional accuracy was poorer than unconditional accuracy, particularly for poor readers, suggesting a substantial cost arising from processing the previous two targets that was not present in normal readers. Conclusions/Significance Mirroring the differences found between poor and normal readers in spatial visual attention span, the present findings suggest two principal differences between the temporal attention spans of poor and normal readers. First, the consistent pattern of reduced performance across targets suggests increased competition amongst items within the same span for poor readers. Second, the steeper decline in final target performance amongst poor readers in the TTT condition suggests a reduction in the extent of their temporal attention span. PMID:24651313

  8. Internal models of target motion: expected dynamics overrides measured kinematics in timing manual interceptions.

    PubMed

    Zago, Myrka; Bosco, Gianfranco; Maffei, Vincenzo; Iosa, Marco; Ivanenko, Yuri P; Lacquaniti, Francesco

    2004-04-01

    Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. Here we present evidence in favor of a different view: the brain makes the best estimate about target motion based on measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from expected dynamics (kinetics). We projected a virtual target moving vertically downward on a wide screen with different randomized laws of motion. In the first series of experiments, subjects were asked to intercept this target by punching a real ball that fell hidden behind the screen and arrived in synchrony with the visual target. Subjects systematically timed their motor responses consistent with the assumption of gravity effects on an object's mass, even when the visual target did not accelerate. With training, the gravity model was not switched off but adapted to nonaccelerating targets by shifting the time of motor activation. In the second series of experiments, there was no real ball falling behind the screen. Instead the subjects were required to intercept the visual target by clicking a mouse button. In this case, subjects timed their responses consistent with the assumption of uniform motion in the absence of forces, even when the target actually accelerated. Overall, the results are in accord with the theory that motor responses evoked by visual kinematics are modulated by a prior of the target dynamics. The prior appears surprisingly resistant to modifications based on performance errors.
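
    The kinematic-versus-dynamic contrast in the timing predictions can be worked through with a small numeric example (the height and speed are invented for illustration):

```python
import math

# A target first seen h = 1.5 m above the interception point, moving down
# at v = 2 m/s (numbers illustrative).
h, v, g = 1.5, 2.0, 9.81

# Measured kinematics (constant-velocity extrapolation): t = h / v
ttc_kinematic = h / v                                    # 0.75 s

# Expected dynamics (gravity prior): solve h = v*t + 0.5*g*t**2 for t > 0
ttc_gravity = (-v + math.sqrt(v**2 + 2 * g * h)) / g     # ~0.39 s

# A gravity prior therefore triggers the interceptive response earlier,
# even when the displayed target never actually accelerates.
```

    The gap between the two estimates is what makes the punching responses diagnostic of which internal model is in use.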

  9. A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning.

    PubMed

    Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark

    2015-10-01

    Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach using an articulatory target presented in real time to facilitate L2 pronunciation learning. This approach trains learners to adjust articulatory positions to match targets for an L2 vowel estimated from productions of vowels that overlap in both L1 and L2. For Japanese learners of the American English vowel /æ/, training that included visual feedback improved pronunciation regardless of whether audio training was also included. Articulatory visual feedback is thus shown to be an effective method for facilitating L2 pronunciation learning.

  10. Infection with an acanthocephalan manipulates an amphipod's reaction to a fish predator's odours.

    PubMed

    Baldauf, Sebastian A; Thünken, Timo; Frommen, Joachim G; Bakker, Theo C M; Heupel, Oliver; Kullmann, Harald

    2007-01-01

    Many parasites with complex life cycles increase the chances of reaching a final host by adapting strategies to manipulate their intermediate host's appearance, condition or behaviour. The acanthocephalan parasite Pomphorhynchus laevis uses freshwater amphipods as intermediate hosts before reaching sexual maturity in predatory fish. We performed a series of choice experiments with infected and uninfected Gammarus pulex in order to distinguish between the effects of visual and olfactory predator cues on parasite-induced changes in host behaviour. When both visual and olfactory cues, as well as only olfactory cues were offered, infected and uninfected G. pulex showed significantly different preferences for the predator or the non-predator side. Uninfected individuals significantly avoided predator odours while infected individuals significantly preferred the side with predator odours. When only visual contact with a predator was allowed, infected and uninfected gammarids behaved similarly and had no significant preference. Thus, we believe we show for the first time that P. laevis increases its chance to reach a final host by olfactory-triggered manipulation of the anti-predator behaviour of its intermediate host.

  11. Monitoring Processes in Visual Search Enhanced by Professional Experience: The Case of Orange Quality-Control Workers

    PubMed Central

    Visalli, Antonino; Vallesi, Antonino

    2018-01-01

    Visual search tasks have often been used to investigate how cognitive processes change with expertise. Several studies have shown visual experts' advantages in detecting objects related to their expertise. Here, we tried to extend these findings by investigating whether professional search experience could boost top-down monitoring processes involved in visual search, independently of advantages specific to objects of expertise. To this aim, we recruited a group of quality-control workers employed in citrus farms. Given the specific features of this type of job, we expected that the extensive employment of monitoring mechanisms during orange selection could enhance these mechanisms even in search situations in which orange-related expertise does not apply. To test this hypothesis, we compared the performance of our experimental group and of a well-matched control group on a computerized visual search task. In one block the target was an orange (expertise target), while in the other block the target was a Smurfette doll (neutral target). The a priori hypothesis was that quality-controllers would show an advantage in those situations in which monitoring was especially involved, that is, when deciding the presence or absence of the target required a more extensive inspection of the search array. Results were consistent with our hypothesis. Quality-controllers were faster in those conditions that extensively required monitoring processes, specifically, the Smurfette-present and both target-absent conditions. No differences emerged in the orange-present condition, which turned out to rely mainly on bottom-up processes. These results suggest that top-down processes in visual search can be enhanced through immersive real-life experience, beyond visual expertise advantages. PMID:29497392

  12. The frontal eye fields limit the capacity of visual short-term memory in rhesus monkeys.

    PubMed

    Lee, Kyoung-Min; Ahn, Kyung-Ha

    2013-01-01

    The frontal eye fields (FEF) in rhesus monkeys have been implicated in visual short-term memory (VSTM) as well as in the control of visual attention. Here we examined the area's importance for VSTM capacity and the relationship between VSTM and attention, using a chemical inactivation technique and multi-target saccade tasks with or without the need for target-location memory. During FEF inactivation, serial saccades to targets defined by color contrast were unaffected, but saccades relying on short-term memory were impaired when the target count was at the capacity limit of VSTM. The memory impairment was specific to the FEF-coded retinotopic locations and subject to competition among targets distributed across the visual fields. These results together suggest that the FEF plays a crucial role in the entry of information into VSTM, by enabling attention deployment on the targets to be remembered. In this view, memory capacity results from the limited availability of attentional resources provided by the FEF: the FEF can concurrently maintain only a limited number of activations to register targets into memory. When lesions render part of the area unavailable for activation, this number decreases, further reducing the capacity of VSTM.

  13. History effects in visual search for monsters: search times, choice biases, and liking.

    PubMed

    Chetverikov, Andrey; Kristjansson, Árni

    2015-02-01

    Repeating targets and distractors on consecutive visual search trials facilitates search performance, whereas switching targets and distractors harms search. In addition, search repetition leads to biases in free choice tasks, in that previously attended targets are more likely to be chosen than distractors. Another line of research has shown that attended items receive high liking ratings, whereas ignored distractors are rated negatively. Potential relations between the three effects are unclear, however. Here we simultaneously measured repetition benefits and switching costs for search times, choice biases, and liking ratings in color singleton visual search for "monster" shapes. We showed that if expectations from search repetition are violated, targets are liked less than when those expectations are met. Choice biases were, on the other hand, affected by distractor repetition, but not by target/distractor switches. Target repetition speeded search times but had little influence on choice or liking. Our findings suggest that choice biases reflect distractor inhibition, and liking reflects the conflict associated with attending to previously inhibited stimuli, while speeded search follows both target and distractor repetition. Our results support the newly proposed affective-feedback-of-hypothesis-testing account of cognition, and additionally, shed new light on the priming of visual search.

  14. Nonlinear dynamics support a linear population code in a retinal target-tracking circuit.

    PubMed

    Leonardo, Anthony; Meister, Markus

    2013-10-23

    A basic task faced by the visual system of many organisms is to accurately track the position of moving prey. The retina is the first stage in the processing of such stimuli; the nature of the transformation here, from photons to spike trains, constrains not only the ultimate fidelity of the tracking signal but also the ease with which it can be extracted by other brain regions. Here we demonstrate that a population of fast-OFF ganglion cells in the salamander retina, whose dynamics are governed by a nonlinear circuit, serve to compute the future position of the target over hundreds of milliseconds. The extrapolated position of the target is not found by stimulus reconstruction but is instead computed by a weighted sum of ganglion cell outputs, the population vector average (PVA). The magnitude of PVA extrapolation varies systematically with target size, speed, and acceleration, such that large targets are tracked most accurately at high speeds, and small targets at low speeds, just as is seen in the motion of real prey. Tracking precision reaches the resolution of single photoreceptors, and the PVA algorithm performs more robustly than several alternative algorithms. If the salamander brain uses the fast-OFF cell circuit for target extrapolation as we suggest, the circuit dynamics should leave a microstructure on the behavior that may be measured in future experiments. Our analysis highlights the utility of simple computations that, while not globally optimal, are efficiently implemented and have close to optimal performance over a limited but ethologically relevant range of stimuli.
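    The population vector average described above is a firing-rate-weighted sum of the cells' preferred positions. A minimal sketch of that computation (the receptive-field positions and firing rates below are hypothetical illustrations, not data from the study):

    ```python
    import numpy as np

    def population_vector_average(preferred_positions, firing_rates):
        """Estimate target position as the firing-rate-weighted mean
        of the ganglion cells' preferred (receptive-field) positions."""
        rates = np.asarray(firing_rates, dtype=float)
        positions = np.asarray(preferred_positions, dtype=float)
        return (rates @ positions) / rates.sum()

    # Hypothetical example: five cells tiling a 1-D strip of retina.
    positions = [0.0, 1.0, 2.0, 3.0, 4.0]   # receptive-field centers (arbitrary units)
    rates = [1.0, 4.0, 10.0, 4.0, 1.0]      # spikes/s, activity peaked near position 2
    print(population_vector_average(positions, rates))  # → 2.0
    ```

    Because the readout is a simple weighted sum, the decoded position can be shifted ahead of the stimulus whenever the circuit's nonlinear dynamics bias the population response forward, which is how the extrapolation described above arises without explicit stimulus reconstruction.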

  15. The Role of Color in Search Templates for Real-world Target Objects.

    PubMed

    Nako, Rebecca; Smith, Tim J; Eimer, Martin

    2016-11-01

    During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., "apple") was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. 
Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.

  16. Contextual cueing: implicit learning and memory of visual context guides spatial attention.

    PubMed

    Chun, M M; Jiang, Y

    1998-06-01

    Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.

  17. Reaching High-Need Youth Populations With Evidence-Based Sexual Health Education in California.

    PubMed

    Campa, Mary I; Leff, Sarah Z; Tufts, Margaret

    2018-02-01

    To explore the programmatic reach and experience of high-need adolescents who received sexual health education in 3 distinct implementation settings (targeted-prevention settings, traditional schools, and alternative schools) through a statewide sexual health education program. Data are from youth surveys collected between September 2013 and December 2014 in the California Personal Responsibility Education Program. A sample of high-need participants (n = 747) provided data to examine the impact of implementation setting on reach and program experience. Implementation in targeted-prevention settings was equal to or more effective at providing a positive program experience for high-need participants. More than 5 times as many high-need participants were served in targeted-prevention settings compared with traditional schools. Reaching the same number of high-need participants served in targeted-prevention settings over 15 months would take nearly 7 years of programming in traditional schools. To maximize the reach and experience of high-need youth populations receiving sexual health education, state and local agencies should consider the importance of implementation setting. Targeted resources and efforts should be directed toward high-need young people by expanding beyond traditional school settings.

  18. The Visual System's Intrinsic Bias and Knowledge of Size Mediate Perceived Size and Location in the Dark

    ERIC Educational Resources Information Center

    Zhou, Liu; He, Zijiang J.; Ooi, Teng Leng

    2013-01-01

    Dimly lit targets in the dark are perceived as located about an implicit slanted surface that delineates the visual system's intrinsic bias (Ooi, Wu, & He, 2001). If the intrinsic bias reflects the internal model of visual space--as proposed here--its influence should extend beyond target localization. Our first 2 experiments demonstrated that…

  19. Development of Visual Selection in 3- to 9-Month-Olds: Evidence from Saccades to Previously Ignored Locations

    ERIC Educational Resources Information Center

    Amso, Dima; Johnson, Scott P.

    2008-01-01

    We examined changes in the efficiency of visual selection over the first postnatal year with an adapted version of a "spatial negative priming" paradigm. In this task, when a previously ignored location becomes the target to be selected, responses to it are impaired, providing a measure of visual selection. Oculomotor latencies to target selection…

  20. Interventions for visual field defects in patients with stroke.

    PubMed

    Pollock, Alex; Hazelton, Christine; Henderson, Clair A; Angilley, Jayne; Dhillon, Baljean; Langhorne, Peter; Livingstone, Katrina; Munro, Frank A; Orr, Heather; Rowe, Fiona J; Shahani, Uma

    2011-10-05

    Visual field defects are estimated to affect 20% to 57% of people who have had a stroke. Visual field defects can affect functional ability in activities of daily living (commonly affecting mobility, reading and driving), quality of life, ability to participate in rehabilitation, and depression, anxiety and social isolation following stroke. There are many interventions for visual field defects, which are proposed to work by restoring the visual field (restitution); compensating for the visual field defect by changing behaviour or activity (compensation); substituting for the visual field defect by using a device or extraneous modification (substitution); or ensuring appropriate diagnosis, referral and treatment prescription through standardised assessment or screening, or both. To determine the effects of interventions for people with visual field defects after stroke. We searched the Cochrane Stroke Group Trials Register (February 2011), the Cochrane Eyes and Vision Group Trials Register (December 2009) and nine electronic bibliographic databases including CENTRAL (The Cochrane Library 2009, Issue 4), MEDLINE (1950 to December 2009), EMBASE (1980 to December 2009), CINAHL (1982 to December 2009), AMED (1985 to December 2009), and PsycINFO (1967 to December 2009). We also searched reference lists and trials registers, handsearched journals and conference proceedings and contacted experts. Randomised trials in adults after stroke, where the intervention was specifically targeted at improving the visual field defect or improving the ability of the participant to cope with the visual field loss. The primary outcome was functional ability in activities of daily living and secondary outcomes included functional ability in extended activities of daily living, reading ability, visual field measures, balance, falls, depression and anxiety, discharge destination or residence after stroke, quality of life and social isolation, visual scanning, adverse events and death. 
Two review authors independently screened abstracts, extracted data and appraised trials. We undertook an assessment of methodological quality for allocation concealment, blinding of outcome assessors, method of dealing with missing data, and other potential sources of bias. Thirteen studies (344 randomised participants, 285 of whom were participants with stroke) met the inclusion criteria for this review. However, only six of these studies compared the effect of an intervention with a placebo, control or no treatment group and were included in comparisons within this review. Four studies compared the effect of scanning (compensatory) training with a control or placebo intervention. Meta-analysis demonstrated that scanning training is more effective than control or placebo at improving reading ability (three studies, 129 participants; mean difference (MD) 3.24, 95% confidence interval (CI) 0.84 to 5.59) and visual scanning (three studies, 129 participants; MD 18.84, 95% CI 12.01 to 25.66) but that scanning may not improve visual field outcomes (two studies, 110 participants; MD -0.70, 95% CI -2.28 to 0.88). There were insufficient data to enable generalised conclusions to be made about the effectiveness of scanning training relative to control or placebo for the primary outcome of activities of daily living (one study, 33 participants). Only one study (19 participants) compared the effect of a restitutive intervention with a control or placebo intervention and only one study (39 participants) compared the effect of a substitutive intervention with a control or placebo intervention. There is limited evidence which supports the use of compensatory scanning training for patients with visual field defects (and possibly co-existing visual neglect) to improve scanning and reading outcomes. There is insufficient evidence to reach a conclusion about the impact of compensatory scanning training on functional activities of daily living. 
There is insufficient evidence to reach generalised conclusions about the benefits of visual restitution training (VRT) (restitutive intervention) or prisms (substitutive intervention) for patients with visual field defects after stroke.
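    The pooled mean differences with 95% confidence intervals reported above follow the standard inverse-variance meta-analytic form. A minimal fixed-effect sketch (the per-study values below are hypothetical placeholders, not the review's data):

    ```python
    import math

    def fixed_effect_md(study_mds, study_ses):
        """Inverse-variance fixed-effect pooled mean difference with a 95% CI.

        study_mds: per-study mean differences
        study_ses: per-study standard errors of those differences
        """
        weights = [1.0 / se ** 2 for se in study_ses]
        pooled = sum(w * md for w, md in zip(weights, study_mds)) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))
        return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

    # Hypothetical three-study example
    md, lo, hi = fixed_effect_md([2.5, 3.8, 3.1], [1.2, 1.5, 1.0])
    print(f"MD {md:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
    ```

    Each study is weighted by the inverse of its variance, so larger, more precise studies dominate the pooled estimate; a CI excluding zero (as for reading ability and visual scanning above) indicates a statistically significant pooled effect.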

  1. What Top-Down Task Sets Do for Us: An ERP Study on the Benefits of Advance Preparation in Visual Search

    ERIC Educational Resources Information Center

    Eimer, Martin; Kiss, Monika; Nicholas, Susan

    2011-01-01

    When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features.…

  2. Effects of Speed and Visual-Target Distance on Toe Trajectory During the Swing Phase of Treadmill Walking

    NASA Technical Reports Server (NTRS)

    Miller, Christopher A.; Feiveson, Al; Bloomberg, Jacob J.

    2007-01-01

    Toe trajectory during swing phase is a precise motor control task that can provide insights into the sensorimotor control of the legs. The purpose of this study was to determine changes in vertical toe trajectory during treadmill walking due to changes in walking speed and target distance. For each trial, subjects walked on a treadmill at one of five speeds while performing a dynamic visual acuity task at either a far or near target distance (five speeds × two target distances = ten trials). Toe clearance decreased with increasing speed, and the vertical toe peak just before heel strike increased with increasing speed, regardless of target distance. The vertical toe peak just after toe-off was lower during near-target visual acuity tasks than during far-target tasks, but was not affected by speed. The ankle of the swing leg appeared to be the main joint angle that significantly affected all three toe trajectory events. The foot angle of the swing leg significantly affected toe clearance and the toe peak just before heel strike. These results will be used to enhance the analysis of lower limb kinematics during sensorimotor treadmill testing, where differing speeds and/or visual target distances may be used.

  3. Reaching and Teaching: A Study in Audience Targeting.

    ERIC Educational Resources Information Center

    Ritter, Ellen M.; Welch, Diane T.

    1988-01-01

    Describes a project conducted by the Texas Agricultural Extension Service to market the Family Day Home Care Providers Program to an unknown clientele. Discusses the problems involved in identifying and reaching the target audience. (JOW)

  4. Decreased visual detection during subliminal stimulation.

    PubMed

    Bareither, Isabelle; Villringer, Arno; Busch, Niko A

    2014-10-17

    What is the perceptual fate of invisible stimuli? Are they processed at all, and does their processing have consequences for the perception of other stimuli? As has been shown previously in the somatosensory system, even stimuli that are too weak to be consciously detected can influence our perception: Subliminal stimulation impairs perception of near-threshold stimuli and causes a functional deactivation in the somatosensory cortex. In a recent study, we showed that subliminal visual stimuli lead to similar responses, indicated by an increase in alpha-band power as measured with electroencephalography (EEG). In the current study, we investigated whether a behavioral inhibitory mechanism also exists within the visual system. We tested the detection of peripheral visual target stimuli under three different conditions: Target stimuli were presented alone or embedded in a concurrent train of subliminal stimuli either at the same location as the target or in the opposite hemifield. Subliminal stimuli were invisible due to their low contrast, not due to a masking procedure. We demonstrate that target detection was impaired by the subliminal stimuli, but only when they were presented at the same location as the target. This finding indicates that subliminal, low-intensity stimuli induce a similar inhibitory effect in the visual system as has been observed in the somatosensory system. In line with previous reports, we propose that the function underlying this effect is the inhibition of spurious noise by the visual system. © 2014 ARVO.

  5. The use of head/eye-centered, hand-centered and allocentric representations for visually guided hand movements and perceptual judgments.

    PubMed

    Thaler, Lore; Todd, James T

    2009-04-01

    Two experiments are reported that were designed to measure the accuracy and reliability of both visually guided hand movements (Exp. 1) and perceptual matching judgments (Exp. 2). The specific procedure for informing subjects of the required response on each trial was manipulated so that some tasks could only be performed using an allocentric representation of the visual target; others could be performed using either an allocentric or hand-centered representation; still others could be performed based on an allocentric, hand-centered or head/eye-centered representation. Both head/eye and hand centered representations are egocentric because they specify visual coordinates with respect to the subject. The results reveal that accuracy and reliability of both motor and perceptual responses are highest when subjects direct their response towards a visible target location, which allows them to rely on a representation of the target in head/eye-centered coordinates. Systematic changes in averages and standard deviations of responses are observed when subjects cannot direct their response towards a visible target location, but have to represent target distance and direction in either hand-centered or allocentric visual coordinates instead. Subjects' motor and perceptual performance agree quantitatively well. These results strongly suggest that subjects process head/eye-centered representations differently from hand-centered or allocentric representations, but that they process visual information for motor actions and perceptual judgments together.

  6. Fine Motor Skill Mediates Visual Memory Ability with Microstructural Neuro-correlates in Cerebellar Peduncles in Prematurely Born Adolescents.

    PubMed

    Thomas, Alyssa R; Lacadie, Cheryl; Vohr, Betty; Ment, Laura R; Scheinost, Dustin

    2017-01-01

    Adolescents born preterm (PT) with no evidence of neonatal brain injury are at risk of deficits in visual memory and fine motor skills that diminish academic performance. The association between these deficits and white matter microstructure is relatively unexplored. We studied 190 PTs with no brain injury and 92 term controls at age 16 years. The Rey-Osterrieth Complex Figure Test (ROCF), the Beery visual-motor integration (VMI), and the Grooved Pegboard Test (GPT) were collected for all participants, while a subset (40 PTs and 40 terms) underwent diffusion-weighted magnetic resonance imaging. PTs performed more poorly than terms on ROCF, VMI, and GPT (all P < 0.01). Mediation analysis showed fine motor skill (GPT score) significantly mediates group difference in ROCF and VMI (all P < 0.001). PTs showed a negative correlation (P < 0.05, corrected) between fractional anisotropy (FA) in the bilateral middle cerebellar peduncles and GPT score, with higher FA correlating to lower (faster task completion) GPT scores, and between FA in the right superior cerebellar peduncle and ROCF scores. PTs also had a positive correlation (P < 0.05, corrected) between VMI and left middle cerebellar peduncle FA. Novel strategies to target fine motor skills and the cerebellum may help PTs reach their full academic potential. © The Author 2017. Published by Oxford University Press.

  7. Reaching nearby sources: comparison between real and virtual sound and visual targets

    PubMed Central

    Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.

    2014-01-01

    Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant difference in distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855

  8. Spontaneous resolution of pituitary apoplexy in a giant boy under 10 years old.

    PubMed

    Chentli, Farida; Bey, Abderrahim; Belhimer, Faiza; Azzoug, Said

    2012-01-01

    Pituitary gigantism is a very rare condition; the occurrence of pituitary apoplexy in children younger than 10 years old is even rarer. The aim of our study is to report this exceptional association. A boy aged 9 years and 6 months was hospitalized for the first time in November 2011 for symptoms suggesting pituitary apoplexy. The onset of his disease was difficult to determine as his health record had been poorly maintained. On October 10, 2011, he presented to an emergency department with a sudden drop of visual acuity with diplopia and retro-orbital headaches. An ophthalmological exam found very low visual acuity (1/20) with papillary edema. An MRI of the patient's brain revealed a hemorrhagic pituitary process reaching the chiasma, which was compressed, especially on the right side. Thereafter, the patient's vision improved spontaneously. Clinical examination was normal except for gigantism (+5 SD compared to the target stature). Hormonal assessment argued for mixed secretion [growth hormone (GH) = 39 ng/mL, n ≤ 5; prolactin (PRL) = 470 ng/mL, n < 15]. Other pituitary functions were normal. Visual acuity normalized after 2 months, and an MRI showed a spontaneous reduction of the pituitary tumor. This unusual observation is a model of symptomatic pituitary apoplexy with spontaneous resolution in a boy with pituitary gigantism, a phenomenon that is quite exceptional and worth reporting.

  9. Visually directed vs. software-based targeted biopsy compared to transperineal template mapping biopsy in the detection of clinically significant prostate cancer.

    PubMed

    Valerio, Massimo; McCartan, Neil; Freeman, Alex; Punwani, Shonit; Emberton, Mark; Ahmed, Hashim U

    2015-10-01

    Targeted biopsy based on cognitive or software magnetic resonance imaging (MRI) to transrectal ultrasound registration seems to increase the detection rate of clinically significant prostate cancer as compared with standard biopsy. However, these strategies have not been directly compared against an accurate test yet. The aim of this study was to obtain pilot data on the diagnostic ability of visually directed targeted biopsy vs. software-based targeted biopsy, considering transperineal template mapping (TPM) biopsy as the reference test. Prospective paired cohort study included 50 consecutive men undergoing TPM with one or more visible targets detected on preoperative multiparametric MRI. Targets were contoured on the Biojet software. Patients initially underwent software-based targeted biopsies, then visually directed targeted biopsies, and finally systematic TPM. The detection rate of clinically significant disease (Gleason score ≥3+4 and/or maximum cancer core length ≥4 mm) of one strategy against another was compared by 3×3 contingency tables. Secondary analyses were performed using a less stringent threshold of significance (Gleason score ≥4+3 and/or maximum cancer core length ≥6 mm). Median age was 68 (interquartile range: 63-73); median prostate-specific antigen level was 7.9 ng/mL (6.4-10.2). A total of 79 targets were detected with a mean of 1.6 targets per patient. Of these, 27 (34%), 28 (35%), and 24 (31%) were scored 3, 4, and 5, respectively. At a patient level, the detection rate was 32 (64%), 34 (68%), and 38 (76%) for visually directed targeted, software-based biopsy, and TPM, respectively. Combining the 2 targeted strategies would have led to a detection rate of 39 (78%). At a patient level and at a target level, software-based targeted biopsy found more clinically significant diseases than did visually directed targeted biopsy, although this was not statistically significant (22% vs. 14%, P = 0.48; 51.9% vs. 44.3%, P = 0.24). 
Secondary analysis showed similar results. Based on these findings, a paired cohort study enrolling at least 257 men would verify whether this difference is statistically significant. The diagnostic ability of software-based targeted biopsy and visually directed targeted biopsy seems almost comparable, although utility and efficiency both seem to be slightly in favor of the software-based strategy. Ongoing trials are sufficiently powered to prove or disprove these findings. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Involuntary orienting of attention to a sound desynchronizes the occipital alpha rhythm and improves visual perception.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2017-04-15

    Directing attention voluntarily to the location of a visual target results in an amplitude reduction (desynchronization) of the occipital alpha rhythm (8-14 Hz), which is predictive of improved perceptual processing of the target. Here we investigated whether modulations of the occipital alpha rhythm triggered by the involuntary orienting of attention to a salient but spatially non-predictive sound would similarly influence perception of a subsequent visual target. Target discrimination was more accurate when a sound preceded the target at the same location (validly cued trials) than when the sound was on the side opposite to the target (invalidly cued trials). This behavioral effect was accompanied by a sound-induced desynchronization of the alpha rhythm over the lateral occipital scalp. The magnitude of alpha desynchronization over the hemisphere contralateral to the sound predicted correct discriminations of validly cued targets but not of invalidly cued targets. These results support the conclusion that cue-induced alpha desynchronization over the occipital cortex is a manifestation of a general priming mechanism that improves visual processing and that this mechanism can be activated either by the voluntary or involuntary orienting of attention. Further, the observed pattern of alpha modulations preceding correct and incorrect discriminations of valid and invalid targets suggests that involuntary orienting to the non-predictive sound has a rapid and purely facilitatory influence on processing targets on the cued side, with no inhibitory influence on targets on the opposite side. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Coordinated Flexibility: How Initial Gaze Position Modulates Eye-Hand Coordination and Reaching

    ERIC Educational Resources Information Center

    Adam, Jos J.; Buetti, Simona; Kerzel, Dirk

    2012-01-01

    Reaching to targets in space requires the coordination of eye and hand movements. In two experiments, we recorded eye and hand kinematics to examine the role of gaze position at target onset on eye-hand coordination and reaching performance. Experiment 1 showed that with eyes and hand aligned on the same peripheral start location, time lags…

  12. A unique role of endogenous visual-spatial attention in rapid processing of multiple targets

    PubMed Central

    Guzman, Emmanuel; Grabowecky, Marcia; Palafox, German; Suzuki, Satoru

    2012-01-01

    Visual spatial attention can be exogenously captured by a salient stimulus or can be endogenously allocated by voluntary effort. Whether these two attention modes serve distinctive functions is debated, but for processing of single targets the literature suggests superiority of exogenous attention (it is faster acting and serves more functions). We report that endogenous attention uniquely contributes to processing of multiple targets. For speeded visual discrimination, response times are faster for multiple redundant targets than for single targets due to probability summation and/or signal integration. This redundancy gain was unaffected when attention was exogenously diverted from the targets, but was completely eliminated when attention was endogenously diverted. This was not due to weaker manipulation of exogenous attention because our exogenous and endogenous cues similarly affected overall response times. Thus, whereas exogenous attention is superior for processing single targets, endogenous attention plays a unique role in allocating resources crucial for rapid concurrent processing of multiple targets. PMID:21517209
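    The redundancy gain attributed above to probability summation follows the standard independent-race formula: with several redundant targets, a response is triggered as soon as any one of them is detected. A minimal sketch (the single-target detection probability is a hypothetical illustration, not a value from the study):

    ```python
    def prob_summation(p_single, n_targets):
        """Probability that at least one of n independent detections succeeds,
        assuming each target is detected with probability p_single."""
        return 1.0 - (1.0 - p_single) ** n_targets

    # With a hypothetical 0.6 chance of detecting a single target,
    # two redundant targets yield a higher hit probability:
    print(round(prob_summation(0.6, 1), 2))  # → 0.6
    print(round(prob_summation(0.6, 2), 2))  # → 0.84
    ```

    Because the fastest of several racing detections also finishes sooner on average, the same logic predicts the speeded responses to redundant targets; the study's finding is that endogenously diverting attention abolishes this gain while exogenously diverting it does not.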

  13. Effects of Background Lighting Color and Movement Distance on Reaching Times Among Participants With Low Vision, Myopia, and Normal Vision.

    PubMed

    Chen, Chun-Fu; Huang, Kuo-Chen

    2016-04-01

    This study investigated the effects of target distance (30, 35, and 40 cm) and the color of background lighting (red, green, blue, and yellow) on the duration of movements made by participants with low vision, myopia, and normal vision while performing a reaching task; 48 students (21 women, 27 men; M age = 21.8 years, SD = 2.4) participated in the study. Participants reached for a target (a white LED light) whose vertical position varied randomly across trials, ranging in distance from 30 to 40 cm. Movement time was analyzed using a 3 (participant group) × [4 (color of background lighting) × 3 (movement distance)] mixed-design ANOVA model. Results indicated longer times for completing a reaching movement when: participants belonged to the low vision group; the distance between the starting position and the target position was longer (40 cm); and the reaching movement occurred in the red-background lighting condition. These results are particularly relevant for situations in which a user is required to respond to a signal by reaching toward a button or an icon. © The Author(s) 2016.

  14. Trajectories of attentional development: an exploration with the master activation map model.

    PubMed

    Michael, George A; Lété, Bernard; Ducrot, Stéphanie

    2013-04-01

    The developmental trajectories of several attention components, such as orienting, inhibition, and the guidance of selection by relevance (i.e., advance knowledge relevant to the task) were investigated in 498 participants (ages 7, 8, 9, 10, 11, and 20). The paradigm was based on Michael et al.'s (2006) master activation map model and consisted of 3 visual search tasks presented in an intrasubject Latin square design and differing in terms of the probability with which a salient signal was associated with the target or a distractor. The results suggest that, whereas computations of salience were already proficient at age 7, and the use of advance knowledge was efficient throughout childhood, albeit without reaching adult levels, the integration of salience and relevance reached its asymptotic level at age 8. Although moving and engaging attention was proficient at age 7, disengaging attention started to improve at age 9, reaching its adult level at age 11. As regards inhibition of salient distractors, the authors found no developmental pattern before adulthood, regardless of whether advance knowledge was available about the distractor or not, although all participants were able to use such knowledge to reduce overall interference. Finally, some results suggest that the control of resources for strengthening inhibition becomes efficient between ages 9 and 10. The developmental trajectories were compared with the existing literature and discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  15. Investigating the role of the superior colliculus in active vision with the visual search paradigm.

    PubMed

    Shen, Kelly; Valero, Jerome; Day, Gregory S; Paré, Martin

    2011-06-01

    We review here both the evidence that the functional visuomotor organization of the optic tectum is conserved in the primate superior colliculus (SC) and the evidence for the linking proposition that SC discriminating activity instantiates saccade target selection. We also present new data in response to questions that arose from recent SC visual search studies. First, we observed that SC discriminating activity predicts saccade initiation when monkeys perform an unconstrained search for a target defined by either a single visual feature or a conjunction of two features. Quantitative differences between the results in these two search tasks suggest, however, that SC discriminating activity does not only reflect saccade programming. This finding concurs with visual search studies conducted in posterior parietal cortex and the idea that, during natural active vision, visual attention is shifted concomitantly with saccade programming. Second, the analysis of a large neuronal sample recorded during feature search revealed that visual neurons in the superficial layers do possess discriminating activity. In addition, the hypotheses that there are distinct types of SC neurons in the deeper layers and that they are differently involved in saccade target selection were not substantiated. Third, we found that the discriminating quality of single-neuron activity substantially surpasses the ability of the monkeys to discriminate the target from distracters, raising the possibility that saccade target selection is a noisy process. We discuss these new findings in light of the visual search literature and the view that the SC is a visual salience map for orienting eye movements. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  16. Comparison of onboard low-field magnetic resonance imaging versus onboard computed tomography for anatomy visualization in radiotherapy.

    PubMed

    Noel, Camille E; Parikh, Parag J; Spencer, Christopher R; Green, Olga L; Hu, Yanle; Mutic, Sasa; Olsen, Jeffrey R

    2015-01-01

    Onboard magnetic resonance imaging (OB-MRI) for daily localization and adaptive radiotherapy has been under development by several groups. However, no clinical studies have evaluated whether OB-MRI improves visualization of the target and organs at risk (OARs) compared to standard onboard computed tomography (OB-CT). This study compared visualization of patient anatomy on images acquired on the MRI-(60)Co ViewRay system to those acquired with OB-CT. Fourteen patients enrolled on a protocol approved by the Institutional Review Board (IRB) and undergoing image-guided radiotherapy for cancer in the thorax (n = 2), pelvis (n = 6), abdomen (n = 3) or head and neck (n = 3) were imaged with OB-MRI and OB-CT. For each of the 14 patients, the OB-MRI and OB-CT datasets were displayed side-by-side and independently reviewed by three radiation oncologists. Each physician was asked to evaluate which dataset offered better visualization of the target and OARs. A quantitative contouring study was performed on two abdominal patients to assess if OB-MRI could offer improved inter-observer segmentation agreement for adaptive planning. In total, 221 OARs and 10 targets were compared for visualization on OB-MRI and OB-CT by each of the three physicians. The majority of physicians (two or more) evaluated visualization on MRI as better for 71% of structures, worse for 10%, and equivalent for 14%; 5% of structures were not visible on either modality. Physicians agreed unanimously for 74% and in majority for > 99% of structures. Targets were better visualized on MRI in 4/10 cases, and never on OB-CT. Low-field MRI provides better anatomic visualization of many radiotherapy targets and most OARs as compared to OB-CT. Further studies with OB-MRI should be pursued.

  17. Visual Temporal Filtering and Intermittent Visual Displays.

    DTIC Science & Technology

    1986-08-08

    support Ehud Kaplan, Associate Professor, 20% time and effort Michelangelo Rossetto, Research Associate, 20% time and support Margo Greene, Research...reached and are described as follows. The variable raster rate display was designed and built by Michelangelo Rossetto and Norman Milkman, Research

  18. Systematic distortions of perceptual stability investigated using immersive virtual reality

    PubMed Central

    Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew

    2010-01-01

    Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248

  19. Prediction in a visual language: real-time sentence processing in American Sign Language across development.

    PubMed

    Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I

    2018-01-01

    Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.

  20. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task.

    PubMed

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-16

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time.
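
The ball-elasticity manipulation in this interception task can be captured by a toy physics model in which the coefficient of restitution scales the rebound velocity; an internal model that knows this coefficient can predict the post-bounce trajectory before the bounce occurs. The sketch below uses assumed, illustrative parameters.

```python
# Toy model of a bouncing ball: an observer's internal model that knows the
# coefficient of restitution e can predict the post-bounce trajectory.
G = 9.81  # gravitational acceleration, m/s^2

def rebound_height(drop_height, e):
    """Peak height after one bounce for a ball dropped from rest.

    Impact speed is v = sqrt(2 g h); the bounce reverses the vertical
    velocity and scales it by e, so the rebound peak is (e*v)^2 / (2g),
    which simplifies to e^2 * h.
    """
    impact_speed = (2 * G * drop_height) ** 0.5
    rebound_speed = e * impact_speed
    return rebound_speed ** 2 / (2 * G)

# A change in elasticity (e.g. 0.8 -> 0.6) changes the predicted peak,
# which is the kind of prebounce prediction subjects adjusted to.
for e in (0.8, 0.6):
    print(f"e = {e}: drop from 2.0 m rebounds to {rebound_height(2.0, e):.2f} m")
```

A single parameter change thus alters the entire predicted post-bounce trajectory, consistent with the finding that subjects rapidly recalibrated their prebounce saccades after the elasticity change.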

  1. Response-dependent dynamics of cell-specific inhibition in cortical networks in vivo

    PubMed Central

    El-Boustani, Sami; Sur, Mriganka

    2014-01-01

    In the visual cortex, inhibitory neurons alter the computations performed by target cells via combination of two fundamental operations, division and subtraction. The origins of these operations have been variously ascribed to differences in neuron classes, synapse location or receptor conductances. Here, by utilizing specific visual stimuli and single optogenetic probe pulses, we show that the function of parvalbumin-expressing and somatostatin-expressing neurons in mice in vivo is governed by the overlap of response timing between these neurons and their targets. In particular, somatostatin-expressing neurons respond at longer latencies to small visual stimuli compared with their target neurons and provide subtractive inhibition. With large visual stimuli, however, they respond at short latencies coincident with their target cells and switch to provide divisive inhibition. These results indicate that inhibition mediated by these neurons is a dynamic property of cortical circuits rather than an immutable property of neuronal classes. PMID:25504329

  2. Saccades to future ball location reveal memory-based prediction in a virtual-reality interception task

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Rothkopf, Constantin; Hayhoe, Mary

    2013-01-01

    Despite general agreement that prediction is a central aspect of perception, there is relatively little evidence concerning the basis on which visual predictions are made. Although both saccadic and pursuit eye-movements reveal knowledge of the future position of a moving visual target, in many of these studies targets move along simple trajectories through a fronto-parallel plane. Here, using a naturalistic and racquet-based interception task in a virtual environment, we demonstrate that subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. Furthermore, we found that, following a change in ball elasticity, subjects were able to accurately adjust their prebounce predictions of the ball's post-bounce trajectory. This suggests that prediction is guided by experience-based models of how information in the visual image will change over time. PMID:23325347

  3. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning

    PubMed Central

    Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigated EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge this is the first study in which the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor behaviours. PMID:26963919
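
The per-participant median split used in this study to define high- and low-error conditions can be sketched as follows. The angular errors here are synthetic, hypothetical numbers, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-trial angular reaching errors (degrees) for one participant.
angular_error = np.abs(rng.normal(0, 8, size=200))

# Partition epochs by the participant's own median error, as in the study,
# so each participant contributes equal numbers of high- and low-error epochs.
median_err = np.median(angular_error)
high_error_trials = angular_error > median_err
low_error_trials = ~high_error_trials

print(f"median error: {median_err:.1f} deg")
print(f"high-error epochs: {high_error_trials.sum()}, "
      f"low-error epochs: {low_error_trials.sum()}")
```

A within-participant split of this kind controls for individual differences in overall error magnitude before comparing ERSP between conditions.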

  4. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning.

    PubMed

    Arrighi, Pieranna; Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno; Andre, Paolo

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigated EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge this is the first study in which the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor behaviours.

  5. Recruitment of Foveal Retinotopic Cortex During Haptic Exploration of Shapes and Actions in the Dark.

    PubMed

    Monaco, Simona; Gallivan, Jason P; Figley, Teresa D; Singhal, Anthony; Culham, Jody C

    2017-11-29

    The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay. 
SIGNIFICANCE STATEMENT Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it. Copyright © 2017 the authors 0270-6474/17/3711572-20$15.00/0.

  6. Effects of Temporal Integration on the Shape of Visual Backward Masking Functions

    ERIC Educational Resources Information Center

    Francis, Gregory; Cho, Yang Seok

    2008-01-01

    Many studies of cognition and perception use a visual mask to explore the dynamics of information processing of a target. Especially important in these applications is the time between the target and mask stimuli. A plot of some measure of target visibility against stimulus onset asynchrony is called a masking function, which can sometimes be…

  7. The effect of saccade metrics on the corollary discharge contribution to perceived eye location

    PubMed Central

    Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.

    2015-01-01

    Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955

  8. Cross-modal links among vision, audition, and touch in complex environments.

    PubMed

    Ferris, Thomas K; Sarter, Nadine B

    2008-02-01

    This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.

  9. Remote sensing image ship target detection method based on visual attention model

    NASA Astrophysics Data System (ADS)

    Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong

    2017-11-01

    Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. Considering this, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, and improve the detection efficiency of ship targets in remote sensing images.
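
A bottom-up saliency map of the general kind this record builds on can be approximated with a center-surround (difference-of-Gaussians) operation. The sketch below is not the paper's method; it is a minimal illustration on a synthetic "sea" image with one bright, ship-like blob, with all sizes and parameters chosen arbitrarily.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

# Synthetic grayscale "sea": mild noise plus one small bright target.
image = rng.normal(0.0, 0.05, size=(100, 100))
image[30:33, 70:73] += 1.0  # ship-like blob

# Center-surround saliency: difference of a fine and a coarse Gaussian blur.
center = gaussian_filter(image, sigma=2)
surround = gaussian_filter(image, sigma=10)
saliency = np.abs(center - surround)

# The most salient location coincides with the target, so a detector can
# inspect a few salient peaks instead of sliding a window over every pixel.
peak = np.unravel_index(np.argmax(saliency), saliency.shape)
print("most salient pixel:", peak)
```

Restricting expensive classification to a handful of salient candidate regions is what gives attention-based detectors their efficiency advantage over exhaustive sliding-window search.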

  10. When do I quit? The search termination problem in visual search.

    PubMed

    Wolfe, Jeremy M

    2012-01-01

    In visual search tasks, observers look for targets in displays or scenes containing distracting, non-target items. Most of the research on this topic has concerned the finding of those targets. Search termination is a less thoroughly studied topic. When is it time to abandon the current search? The answer is fairly straightforward when the one and only target has been found (There are my keys.). The problem is more vexed if nothing has been found (When is it time to stop looking for a weapon at the airport checkpoint?) or when the number of targets is unknown (Have we found all the tumors?). This chapter reviews the development of ideas about quitting time in visual search and offers an outline of our current theory.

  11. The role of object categories in hybrid visual and memory search

    PubMed Central

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

    In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real world tasks involve more broadly defined sets of stimuli (e.g. any "apple" or, perhaps, "fruit"). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
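
The reported pattern (RT linear in visual set size but logarithmic in memory set size) can be written as a simple descriptive model. The coefficients below are made up for illustration; they are not fitted values from Wolfe (2012) or this paper.

```python
import math

# Descriptive hybrid-search model: RT grows linearly with the number of
# items on the screen (V) and logarithmically with memory set size (M).
# a, b, c are hypothetical coefficients, not values from the literature.
def predicted_rt(v_items, m_items, a=400.0, b=40.0, c=80.0):
    return a + b * v_items + c * math.log2(m_items)

# The signature of logarithmic memory search: doubling the memory set adds
# a constant c ms, regardless of how large the set already is.
for m in (1, 2, 4, 8):
    print(f"V=8, M={m}: {predicted_rt(8, m):.0f} ms")
```

Under this form, going from 1 to 2 memorized targets costs the same as going from 4 to 8, whereas each added display item costs a fixed amount per item.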

  12. Group for Research and Assessment of Psoriasis and Psoriatic Arthritis/Outcome Measures in Rheumatology Consensus-Based Recommendations and Research Agenda for Use of Composite Measures and Treatment Targets in Psoriatic Arthritis.

    PubMed

    Coates, Laura C; FitzGerald, Oliver; Merola, Joseph F; Smolen, Josef; van Mens, Leonieke J J; Bertheussen, Heidi; Boehncke, Wolf-Henning; Callis Duffin, Kristina; Campbell, Willemina; de Wit, Maarten; Gladman, Dafna; Gottlieb, Alice; James, Jana; Kavanaugh, Arthur; Kristensen, Lars Erik; Kvien, Tore K; Luger, Thomas; McHugh, Neil; Mease, Philip; Nash, Peter; Ogdie, Alexis; Rosen, Cheryl F; Strand, Vibeke; Tillett, William; Veale, Douglas J; Helliwell, Philip S

    2018-03-01

    A meeting was convened by the Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA) and Outcome Measures in Rheumatology (OMERACT) to further the development of consensus among physicians and patients regarding composite disease activity measures and targets in psoriatic arthritis (PsA). Prior to the meeting, physicians and patients completed surveys on outcome measures. A consensus meeting of 26 rheumatologists, dermatologists, and patient research partners reviewed evidence on composite measures and potential treatment targets plus results of the surveys. The meeting consisted of plenary presentations, breakout sessions, and group discussions. International experts including members of GRAPPA and OMERACT were invited to the meeting, including the developers of all of the measures discussed. After discussions, participants voted on proposals for use, and consensus was established in a second survey. Survey results from 128 health care professionals and 139 patients were analyzed alongside a systematic literature review summarizing evidence. A weighted vote was cast for composite measures. For randomized controlled trials, the most popular measures were the PsA disease activity score (40 votes) and the GRAPPA composite index (28 votes). For clinical practice, the most popular measures were an average of scores on 3 visual analog scales (45 votes) and the disease activity in PsA score (26 votes). After discussion, there was no consensus on a composite measure. The group agreed that several composite measures could be used and that future studies should allow further validation and comparison. The group unanimously agreed that remission should be the ideal target, with minimal disease activity (MDA)/low disease activity as a feasible alternative. The target should include assessment of musculoskeletal disease, skin disease, and health-related quality of life. 
The group recommended a treatment target of very low disease activity (VLDA) or MDA. Consensus was not reached on a continuous measure of disease activity. In the interim, the group recommended several composites. Consensus was reached on a treatment target of VLDA/MDA. An extensive research agenda was composed and recommends that data on all PsA clinical domains be collected in ongoing studies. © 2017, American College of Rheumatology.

  13. Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.

    PubMed

    Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H

    2013-07-01

    Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.

  14. Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance.

    PubMed

    Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K

    2013-03-20

Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were localized to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.

  15. Neural activity in superior parietal cortex during rule-based visual-motor transformations.

    PubMed

    Hawkins, Kara M; Sayegh, Patricia; Yan, Xiaogang; Crawford, J Douglas; Sergio, Lauren E

    2013-03-01

Cognition allows for the use of different rule-based sensorimotor strategies, but the neural underpinnings of such strategies are poorly understood. The purpose of this study was to compare neural activity in the superior parietal lobule during a standard (direct interaction) reaching task, with two nonstandard (gaze and reach spatially incongruent) reaching tasks requiring the integration of rule-based information. Specifically, these nonstandard tasks involved dissociating the planes of reach and vision or rotating visual feedback by 180°. Single unit activity, gaze, and reach trajectories were recorded from two female rhesus monkeys (Macaca mulatta). In all three conditions, we observed a temporal discharge pattern at the population level reflecting early reach planning and on-line reach monitoring. In the plane-dissociated task, we found a significant overall attenuation in the discharge rate of cells from deep recording sites, relative to standard reaching. We also found that cells modulated by reach direction tended to be significantly tuned either during the standard or the plane-dissociated task but rarely during both. In the standard versus feedback reversal comparison, we observed some cells that shifted their preferred direction by 180° between conditions, reflecting maintenance of directional tuning with respect to the reach goal. Our findings suggest that the superior parietal lobule plays an important role in processing information about the nonstandard nature of a task, which, through reciprocal connections with precentral motor areas, contributes to the accurate transformation of incongruent sensory inputs into an appropriate motor output. Such processing is crucial for the integration of rule-based information into a motor act.

  16. Fitts’ Law in the Control of Isometric Grip Force With Naturalistic Targets

    PubMed Central

    Thumser, Zachary C.; Slifkin, Andrew B.; Beckler, Dylan T.; Marasco, Paul D.

    2018-01-01

    Fitts’ law models the relationship between amplitude, precision, and speed of rapid movements. It is widely used to quantify performance in pointing tasks, study human-computer interaction, and generally to understand perceptual-motor information processes, including research to model performance in isometric force production tasks. Applying Fitts’ law to an isometric grip force task would allow for quantifying grasp performance in rehabilitative medicine and may aid research on prosthetic control and design. We examined whether Fitts’ law would hold when participants attempted to accurately produce their intended force output while grasping a manipulandum when presented with images of various everyday objects (we termed this the implicit task). Although our main interest was the implicit task, to benchmark it and establish validity, we examined performance against a more standard visual feedback condition via a digital force-feedback meter on a video monitor (explicit task). Next, we progressed from visual force feedback with force meter targets to the same targets without visual force feedback (operating largely on feedforward control with tactile feedback). This provided an opportunity to see if Fitts’ law would hold without vision, and allowed us to progress toward the more naturalistic implicit task (which does not include visual feedback). Finally, we changed the nature of the targets from requiring explicit force values presented as arrows on a force-feedback meter (explicit targets) to the more naturalistic and intuitive target forces implied by images of objects (implicit targets). With visual force feedback the relation between task difficulty and the time to produce the target grip force was predicted by Fitts’ law (average r2 = 0.82). Without vision, average grip force scaled accurately although force variability was insensitive to the target presented. 
In contrast, images of everyday objects generated more reliable grip forces without the visualized force meter. In sum, population means were well-described by Fitts’ law for explicit targets with vision (r2 = 0.96) and implicit targets (r2 = 0.89), but not as well-described for explicit targets without vision (r2 = 0.54). Implicit targets should provide a realistic see-object-squeeze-object test using Fitts’ law to quantify the relative speed-accuracy relationship of any given grasper. PMID:29773999
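Fitts' law, as applied in studies like the one above, models the time to acquire a target as a linear function of an index of difficulty (ID) that grows with target amplitude and shrinks with target tolerance. The Python sketch below illustrates the commonly used Shannon formulation, ID = log2(A/W + 1); the coefficients a and b and the example force values are hypothetical placeholders for illustration, not values reported in the study.

```python
import math

def index_of_difficulty(amplitude, width):
    """Fitts' index of difficulty in bits (Shannon formulation):
    ID = log2(A/W + 1), where A is the distance (or force amplitude)
    to the target and W is the target's tolerance (width)."""
    return math.log2(amplitude / width + 1)

def predicted_time(amplitude, width, a=0.1, b=0.15):
    """Predicted acquisition time MT = a + b * ID.
    a and b are empirically fitted constants; the defaults here are
    illustrative placeholders, not fitted values from any dataset."""
    return a + b * index_of_difficulty(amplitude, width)

# Example: a grip-force target of 7 N with a 1 N tolerance band
# has ID = log2(7/1 + 1) = 3 bits.
difficulty = index_of_difficulty(7, 1)
```

In practice a and b are estimated by linear regression of measured acquisition times against ID, and the quality of that fit is what the r2 values in the abstract summarize.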

  17. An Assessment of EU 2020 Strategy: Too Far to Reach?

    ERIC Educational Resources Information Center

    Colak, Mehmet Selman; Ege, Aylin

    2013-01-01

    In 2010, EU adopted a new growth strategy which includes three growth priorities and five headline targets to be reached by 2020. The aim of this paper is to investigate the current performance of the EU member and candidate states in achieving these growth priorities and the overall strategy target by allocating the headline targets into the…

  18. Active listening impairs visual perception and selectivity: an ERP study of auditory dual-task costs on visual attention.

    PubMed

    Gherri, Elena; Eimer, Martin

    2011-04-01

    The ability to drive safely is disrupted by cell phone conversations, and this has been attributed to a diversion of attention from the visual environment. We employed behavioral and ERP measures to study whether the attentive processing of spoken messages is, in itself, sufficient to produce visual-attentional deficits. Participants searched for visual targets defined by a unique feature (Experiment 1) or feature conjunction (Experiment 2), and simultaneously listened to narrated text passages that had to be recalled later (encoding condition), or heard backward-played speech sounds that could be ignored (control condition). Responses to targets were slower in the encoding condition, and ERPs revealed that the visual processing of search arrays and the attentional selection of target stimuli were less efficient in the encoding relative to the control condition. Results demonstrate that the attentional processing of visual information is impaired when concurrent spoken messages are encoded and maintained, in line with cross-modal links in selective attention, but inconsistent with the view that attentional resources are modality-specific. The distraction of visual attention by active listening could contribute to the adverse effects of cell phone use on driving performance.

  19. Rapid feedback responses correlate with reach adaptation and properties of novel upper limb loads.

    PubMed

    Cluff, Tyler; Scott, Stephen H

    2013-10-02

A hallmark of voluntary motor control is the ability to adjust motor patterns for novel mechanical or visuomotor contexts. Recent work has also highlighted the importance of feedback for voluntary control, leading to the hypothesis that feedback responses should adapt when we learn new motor skills. We tested this prediction with a novel paradigm requiring that human subjects adapt to a viscous elbow load while reaching to three targets. Target 1 required combined shoulder and elbow motion, target 2 required only elbow motion, and target 3 (probe target) required shoulder but no elbow motion. This simple approach controlled muscle activity at the probe target before, during, and after the application of novel elbow loads. Our paradigm allowed us to perturb the elbow during reaching movements to the probe target and identify several key properties of adapted stretch responses. Adapted long-latency responses expressed (de-) adaptation similar to reaching errors observed when we introduced (removed) the elbow load. Moreover, reaching errors during learning correlated with changes in the long-latency response, showing that subjects who adapted more to the elbow load displayed greater modulation of their stretch responses. These adapted responses were sensitive to the size and direction of the viscous training load. Our results highlight an important link between the adaptation of feedforward and feedback control and suggest a key part of motor adaptation is to adjust feedback responses to the requirements of novel motor skills.

  20. I can see what you are saying: Auditory labels reduce visual search times.

    PubMed

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes. Copyright © 2016 Elsevier B.V. All rights reserved.
