Beyond Control Panels: Direct Manipulation for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Bradel, Lauren; North, Chris
2013-07-19
Information Visualization strives to provide visual representations through which users can think about and gain insight into information. By leveraging the visual and cognitive systems of humans, complex relationships and phenomena occurring within datasets can be uncovered by exploring information visually. Interaction metaphors for such visualizations are designed to give users direct control over the filters, queries, and other parameters controlling how the data are visually represented. Through the evolution of information visualization, more complex mathematical and data analytic models are being used to visualize relationships and patterns in data – creating the field of Visual Analytics. However, the expectations for how users interact with these visualizations have remained largely unchanged – focused primarily on the direct manipulation of parameters of the underlying mathematical models. In this article we present an opportunity to evolve the methodology for user interaction from the direct manipulation of parameters through visual control panels, to interactions designed specifically for visual analytic systems. Instead of focusing on traditional direct manipulation of mathematical parameters, the evolution of the field can be realized through direct manipulation within the visual representation – where users can not only gain insight, but also interact. This article describes future directions and research challenges that fundamentally change the meaning of direct manipulation with regard to visual analytics, advancing the Science of Interaction.
ERIC Educational Resources Information Center
Hendrickson, Homer
1988-01-01
Spelling problems arise from difficulties with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…
Thaler, Lore; Goodale, Melvyn A.
2011-01-01
Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements. PMID:21941474
Stuart, Samuel; Galna, Brook; Delicato, Louise S; Lord, Sue; Rochester, Lynn
2017-07-01
Gait impairment is a core feature of Parkinson's disease (PD) which has been linked to cognitive and visual deficits, but interactions between these features are poorly understood. Monitoring saccades allows investigation of real-time cognitive and visual processes and their impact on gait when walking. This study explored: (i) saccade frequency when walking under different attentional manipulations of turning and dual-task; and (ii) direct and indirect relationships between saccades, gait impairment, vision and attention. Saccade frequency (number of fast eye movements per second) was measured during gait in 60 PD and 40 age-matched control participants using a mobile eye-tracker. Saccade frequency was significantly reduced in PD compared to controls during all conditions. However, saccade frequency increased with a turn and decreased under dual-task for both groups. Poorer attention directly related to saccade frequency, visual function and gait impairment in PD, but not controls. Saccade frequency did not directly relate to gait in PD, but did in controls. Instead, saccade frequency and visual function deficits indirectly impacted gait impairment in PD, which was underpinned by their relationship with attention. In conclusion, our results suggest a vital role for attention, with direct and indirect influences on gait impairment in PD. Attention directly impacted saccade frequency, visual function and gait impairment in PD, with implications for falls. It also underpinned the indirect impact of visual and saccadic impairment on gait. Attention therefore represents a key therapeutic target that should be considered in future research. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Kumru, Hatice; Pelayo, Raul; Vidal, Joan; Tormos, Josep Maria; Fregni, Felipe; Navarro, Xavier; Pascual-Leone, Alvaro
2010-01-01
The aim of this study was to evaluate the analgesic effect of transcranial direct current stimulation of the motor cortex and techniques of visual illusion, applied in isolation or in combination, in patients with neuropathic pain following spinal cord injury. In a sham-controlled, double-blind, parallel group design, 39 patients were randomized into four groups receiving transcranial direct current stimulation with walking visual illusion or with control illusion and sham stimulation with visual illusion or with control illusion. For transcranial direct current stimulation, the anode was placed over the primary motor cortex. Each patient received ten treatment sessions during two consecutive weeks. Clinical assessment was performed before treatment, after the last day of treatment, at 2 and 4 weeks of follow-up, and after 12 weeks. Clinical assessment included overall pain intensity perception, the Neuropathic Pain Symptom Inventory and the Brief Pain Inventory. The combination of transcranial direct current stimulation and visual illusion reduced the intensity of neuropathic pain significantly more than any of the single interventions. Patients receiving transcranial direct current stimulation and visual illusion experienced a significant improvement in all pain subtypes, while patients in the transcranial direct current stimulation group showed improvement in continuous and paroxysmal pain, and those in the visual illusion group improved only in continuous pain and dysaesthesias. At 12 weeks after treatment, the combined treatment group still presented significant improvement in overall pain intensity perception, whereas no improvements were reported in the other three groups. Our results demonstrate that transcranial direct current stimulation and visual illusion can be effective in the management of neuropathic pain following spinal cord injury, with minimal side effects and good tolerability. PMID:20685806
Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V
2014-09-01
Intra-operative imaging is widely used to provide visual feedback to clinicians during a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. A detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.
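The control loop sketched in the survey above (track image features, compare them with their desired positions, convert the error into robot motion) can be illustrated with a minimal image-based visual servoing step. The sketch below uses the textbook point-feature interaction matrix and an assumed gain; it is a generic illustration, not an implementation from any of the surveyed systems.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Textbook image Jacobian for a normalized image point (x, y) at estimated depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_step(points, desired, depths, gain=0.5):
    """One image-based visual servoing step.

    points, desired: (N, 2) current and desired normalized image coordinates of tracked features.
    depths: (N,) estimated feature depths.
    Returns a 6-vector camera velocity command (vx, vy, vz, wx, wy, wz).
    """
    error = (points - desired).reshape(-1)                    # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    return -gain * np.linalg.pinv(L) @ error                  # classic IBVS control law

# Hypothetical example: two tracked instrument markers slightly off their target positions.
current = np.array([[0.10, 0.05], [0.12, -0.04]])
target = np.array([[0.00, 0.00], [0.02, -0.02]])
print(ibvs_step(current, target, depths=np.array([0.15, 0.15])))
```

In a real system the resulting velocity command would be sent to the manipulator controller and the loop repeated at the imaging frame rate.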
Effect of visuomotor-map uncertainty on visuomotor adaptation.
Saijo, Naoki; Gomi, Hiroaki
2012-03-01
Vision and proprioception contribute to generating hand movement. When a conflict between the visual and proprioceptive feedback of hand position is introduced, reaching movements are initially disturbed but recover after training. Although previous studies have predominantly investigated the adaptive change in the motor output, it is unclear whether the contributions of visual and proprioceptive feedback controls to the reaching movement are modified by visuomotor adaptation. To investigate this, we focused on the change in proprioceptive feedback control associated with visuomotor adaptation. After adaptation to a gradually introduced visuomotor rotation, the hand reached the shifted position of the visual target so as to move the cursor to the visual target correctly. When the cursor feedback was occasionally eliminated (probe trial), the end point of the hand movement was biased in the visual-target direction, while the movement was initiated in the adapted direction, suggesting incomplete adaptation of proprioceptive feedback control. Moreover, after learning of an uncertain visuomotor rotation, in which the rotation angle fluctuated randomly from trial to trial, the end-point bias in the probe trial increased, but the initial movement direction was not affected, suggesting a reduction in the adaptation level of proprioceptive feedback control. These results suggest that a change in the relative contribution of visual and proprioceptive feedback controls to the reaching movement in response to visuomotor-map uncertainty is involved in visuomotor adaptation, whereas feedforward control might adapt in a manner different from that of the feedback control.
Role of peripheral vision in saccade planning: Learning from people with tunnel vision
Luo, Gang; Vargas-Martin, Fernando; Peli, Eli
2008-01-01
Both visually salient and top-down information are important in eye movement control, but their relative roles in the planning of daily saccades are unclear. We investigated the effect of peripheral vision loss on saccadic behaviors in patients with tunnel vision (visual field diameters 7°–16°) in visual search and real-world walking experiments. The patients made up to two saccades per second to their pre-saccadic blind areas, about half of which had no overlap between the post- and pre-saccadic views. In the visual search experiment, visual field size and the background (blank or picture) did not affect the saccade sizes and direction of patients (n=9). In the walking experiment, the patients (n=5) and normal controls (n=3) had similar distributions of saccade sizes and directions. These findings might provide a clue about the extent of the top-down mechanism influence on eye movement control. PMID:19146326
Gagliardo, A.; Odetti, F.; Ioalè, P.
2001-01-01
Whether pigeons use visual landmarks for orientation from familiar locations has been a subject of debate. By recording the directional choices of both anosmic and control pigeons while exiting from a circular arena we were able to assess the relevance of olfactory and visual cues for orientation from familiar sites. When the birds could see the surroundings, both anosmic and control pigeons were homeward oriented. When the view of the landscape was prevented by screens that surrounded the arena, the control pigeons exited from the arena approximately in the home direction, while the anosmic pigeons' distribution was not different from random. Our data suggest that olfactory and visual cues play a critical, but interchangeable, role for orientation at familiar sites. PMID:11571054
Matsunaka, Kumiko; Shibata, Yuki; Yamamoto, Toshikazu
2008-08-01
Study 1 investigated individual differences in spatial cognition amongst visually impaired students and sighted controls, as well as the extent to which visual status contributes to these individual differences. Fifty-eight visually impaired and 255 sighted university students evaluated their sense of direction via self-ratings. Visual impairment contributed to the factors associated with the use and understanding of maps, confirming that maps are generally unfamiliar to visually impaired people. The relationship between psychological stress associated with mobility and individual differences in sense of direction was investigated in Study 2. A stress checklist was administered to the 51 visually impaired students who participated in Study 1. Psychological stress level was related to understanding and use of maps, as well as to orientation and renewal, that is, course correction after getting lost. Central visual field deficits were associated with greater mobility-related stress levels than peripheral visual field deficits.
Rogerson, Mike; Barton, Jo
2015-01-01
Green exercise research often reports psychological health outcomes without rigorously controlling exercise. This study examines effects of visual exercise environments on directed attention, perceived exertion and time to exhaustion, whilst measuring and controlling the exercise component. Participants completed three experimental conditions in a randomized counterbalanced order. Conditions varied by video content viewed (nature; built; control) during two consistently-ordered exercise bouts (Exercise 1: 60% VO2peakInt for 15 min; Exercise 2: 85% VO2peakInt to voluntary exhaustion). In each condition, participants completed modified Backwards Digit Span tests (a measure of directed attention) pre- and post-Exercise 1. Energy expenditure, respiratory exchange ratio and perceived exertion were measured during both exercise bouts. Time to exhaustion in Exercise 2 was also recorded. There was a significant time by condition interaction for Backwards Digit Span scores (F(2,22) = 6.267, p = 0.007). Scores significantly improved in the nature condition (p < 0.001) but did not in the built or control conditions. There were no significant differences between conditions for either perceived exertion or physiological measures during either Exercise 1 or Exercise 2, or for time to exhaustion in Exercise 2. This was the first study to demonstrate effects of controlled exercise conducted in different visual environments on post-exercise directed attention. Via psychological mechanisms alone, visual nature facilitates attention restoration during moderate-intensity exercise. PMID:26133125
Causal evidence for retina dependent and independent visual motion computations in mouse cortex
Hillier, Daniel; Fiscella, Michele; Drinnenberg, Antonia; Trenholm, Stuart; Rompani, Santiago B.; Raics, Zoltan; Katona, Gergely; Juettner, Josephine; Hierlemann, Andreas; Rozsa, Balazs; Roska, Botond
2017-01-01
How neuronal computations in the sensory periphery contribute to computations in the cortex is not well understood. We examined this question in the context of visual-motion processing in the retina and primary visual cortex (V1) of mice. We disrupted retinal direction selectivity – either exclusively along the horizontal axis using FRMD7 mutants or along all directions by ablating starburst amacrine cells – and monitored neuronal activity in layer 2/3 of V1 during stimulation with visual motion. In control mice, we found an overrepresentation of cortical cells preferring posterior visual motion, the dominant motion direction an animal experiences when it moves forward. In mice with disrupted retinal direction selectivity, the overrepresentation of posterior-motion-preferring cortical cells disappeared, and their response at higher stimulus speeds was reduced. This work reveals the existence of two functionally distinct, sensory-periphery-dependent and -independent computations of visual motion in the cortex. PMID:28530661
Telgen, Sebastian; Parvin, Darius; Diedrichsen, Jörn
2014-10-08
Motor learning tasks are often classified into adaptation tasks, which involve the recalibration of an existing control policy (the mapping that determines both feedforward and feedback commands), and skill-learning tasks, requiring the acquisition of new control policies. We show here that this distinction also applies to two different visuomotor transformations during reaching in humans: mirror-reversal (left-right reversal over a mid-sagittal axis) of visual feedback versus rotation of visual feedback around the movement origin. During mirror-reversal learning, correct movement initiation (feedforward commands) and online corrections (feedback responses) were only generated at longer latencies. The earliest responses were directed into a nonmirrored direction, even after two training sessions. In contrast, for visual rotation learning, no dependency of directional error on reaction time emerged, and fast feedback responses to visual displacements of the cursor were immediately adapted. These results suggest that the motor system acquires a new control policy for mirror reversal, which initially requires extra processing time, while it recalibrates an existing control policy for visual rotations, exploiting established fast computational processes. Importantly, memory for visual rotation decayed between sessions, whereas memory for mirror reversals showed offline gains, leading to better performance at the beginning of the second session than at the end of the first. With shifts in the time-accuracy tradeoff and offline gains, mirror-reversal learning shares common features with other skill-learning tasks. We suggest that different neuronal mechanisms underlie the recalibration of an existing control policy versus the acquisition of a new one, and that offline gains between sessions are a characteristic of the latter. Copyright © 2014 the authors.
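The two feedback transformations contrasted above have a simple mathematical form: the rotation remaps cursor feedback by a fixed angle about the movement origin, whereas the mirror reversal flips the left-right component about the mid-sagittal axis. The short sketch below makes that structural difference explicit; the angle, axis convention and example values are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def rotated_cursor(hand_xy, angle_deg=30.0):
    """Cursor position under a visuomotor rotation of angle_deg about the movement origin."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return R @ np.asarray(hand_xy)

def mirrored_cursor(hand_xy):
    """Cursor position under left-right mirror reversal about the mid-sagittal (y) axis."""
    x, y = hand_xy
    return np.array([-x, y])

hand = np.array([0.10, 0.05])        # hypothetical hand displacement in metres
print(rotated_cursor(hand))          # movement direction shifted by a fixed angle
print(mirrored_cursor(hand))         # sign of the lateral component flipped
```

A rotation can be undone by re-aiming every movement by the same fixed angle (recalibration), whereas the mirror reversal changes the sign of the required correction depending on target side, which is consistent with the need for a new control policy.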
Effects of visual motion consistent or inconsistent with gravity on postural sway.
Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo
2017-07-01
Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintain upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravitational acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static, but also dynamic visual cues about the direction and magnitude of the gravitational field are relevant for balance control during upright stance.
Inferring the direction of implied motion depends on visual awareness
Faivre, Nathan; Koch, Christof
2014-01-01
Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951
Priming and the guidance by visual and categorical templates in visual search.
Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L
2014-01-01
Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.
Influence of gymnastics training on the development of postural control.
Garcia, Claudia; Barela, José Angelo; Viana, André Rocha; Barela, Ana Maria Forti
2011-03-29
This study investigated the influence of gymnastics training on the postural control of children with and without the use of visual information. Two age groups, aged 5-7 and 9-11 years old, of gymnasts and nongymnasts were asked to maintain an upright and quiet stance on a force platform with eyes open (EO) and eyes closed (EC) for 30 s. Area of the stabilogram (AOS) and mean velocity of the center of pressure (COP) in the anterior-posterior (AP) and medial-lateral (ML) directions were calculated and used to investigate the effects of gymnastics training, age, and visual information. Younger gymnasts presented greater postural control compared to younger nongymnasts, while visual information did not improve postural control in younger nongymnasts. Younger gymnasts displayed improved postural control with EO compared to EC. The mean velocity of the COP in the ML direction was lower for younger gymnasts than younger nongymnasts with EO. These results suggest that gymnastics training promotes improvements in postural control of younger children only, which results from their use of visual information when available. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
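The sway measures named above, the area of the stabilogram and the mean COP velocity in the AP and ML directions, can be computed directly from force-platform COP traces. The sketch below shows one common formulation (mean path velocity per axis and a 95% confidence-ellipse area); it illustrates the type of measure rather than the exact pipeline used in the study.

```python
import numpy as np

def sway_measures(cop_ml, cop_ap, fs):
    """Basic stabilogram measures from ML and AP center-of-pressure traces (cm) sampled at fs Hz."""
    dt = 1.0 / fs
    vel_ml = np.mean(np.abs(np.diff(cop_ml))) / dt            # mean ML velocity (cm/s)
    vel_ap = np.mean(np.abs(np.diff(cop_ap))) / dt            # mean AP velocity (cm/s)
    cov = np.cov(cop_ml, cop_ap)                               # covariance of the two components
    area = np.pi * 5.991 * np.sqrt(np.linalg.det(cov))         # 95% ellipse area; 5.991 = chi2(2) quantile
    return vel_ml, vel_ap, area

# Synthetic 30-s quiet-stance example sampled at 100 Hz.
fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
rng = np.random.default_rng(0)
ml = 0.3 * np.sin(2 * np.pi * 0.4 * t) + 0.05 * rng.standard_normal(t.size)
ap = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.standard_normal(t.size)
print(sway_measures(ml, ap, fs))
```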
Visual Network Asymmetry and Default Mode Network Function in ADHD: An fMRI Study
Hale, T. Sigi; Kane, Andrea M.; Kaminsky, Olivia; Tung, Kelly L.; Wiley, Joshua F.; McGough, James J.; Loo, Sandra K.; Kaplan, Jonas T.
2014-01-01
Background: A growing body of research has identified abnormal visual information processing in attention-deficit hyperactivity disorder (ADHD). In particular, slow processing speed and increased reliance on visuo-perceptual strategies have become evident. Objective: The current study used recently developed fMRI methods to replicate and further examine abnormal rightward-biased visual information processing in ADHD and to further characterize the nature of this effect; we tested its association with several large-scale distributed network systems. Method: We examined fMRI BOLD response during letter and location judgment tasks, and directly assessed visual network asymmetry (VNA) and its association with large-scale networks using both a voxelwise and an averaged signal approach. Results: Initial within-group analyses revealed a pattern of left-lateralized visual cortical activity in controls but right-lateralized visual cortical activity in ADHD children. Direct analyses of visual network asymmetry confirmed atypical rightward bias in ADHD children compared to controls. This ADHD characteristic was atypically associated with reduced activation across several extra-visual networks, including the default mode network (DMN). We also found atypical associations between DMN activation and ADHD subjects’ inattentive symptoms and task performance. Conclusion: The current study demonstrated rightward VNA in ADHD during a simple letter discrimination task. This result adds an important novel consideration to the growing literature identifying abnormal visual processing in ADHD. We postulate that this characteristic reflects greater perceptual engagement of task-extraneous content, and that it may be a basic feature of less efficient top-down task-directed control over visual processing. We additionally argue that abnormal DMN function may contribute to this characteristic. PMID:25076915
Effects of Peripheral Visual Field Loss on Eye Movements During Visual Search
Wiecek, Emily; Pasquale, Louis R.; Fiser, Jozsef; Dakin, Steven; Bex, Peter J.
2012-01-01
Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affects eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A χ2 test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed patients with PVFL show a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search. PMID:23162511
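The group comparison of saccade-direction distributions described above amounts to a χ² test on binned saccade directions. A toy version of that comparison, assuming eight 45° bins and made-up counts, is sketched below.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical saccade counts in eight 45-degree direction bins (0, 45, ..., 315 degrees).
pvfl_counts = np.array([52, 30, 41, 25, 48, 27, 39, 22])   # patient with peripheral field loss
fvf_counts  = np.array([40, 35, 38, 33, 42, 30, 37, 31])   # normally sighted control

chi2, p, dof, _ = chi2_contingency(np.vstack([pvfl_counts, fvf_counts]))
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```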
The use of visual cues for vehicle control and navigation
NASA Technical Reports Server (NTRS)
Hart, Sandra G.; Battiste, Vernol
1991-01-01
At least three levels of control are required to operate most vehicles: (1) inner-loop control to counteract the momentary effects of disturbances on vehicle position; (2) intermittent maneuvers to avoid obstacles; and (3) outer-loop control to maintain a planned route. Operators monitor dynamic optical relationships in their immediate surroundings to estimate momentary changes in forward, lateral, and vertical position, rates of change in speed and direction of motion, and distance from obstacles. The process of searching the external scene to find landmarks (for navigation) is intermittent and deliberate, while monitoring and responding to subtle changes in the visual scene (for vehicle control) is relatively continuous and 'automatic'. However, since operators may perform both tasks simultaneously, the dynamic optical cues available for a vehicle control task may be determined by the operator's direction of gaze for wayfinding. An attempt to relate the visual processes involved in vehicle control and wayfinding is presented. The frames of reference and information used by different operators (e.g., automobile drivers, airline pilots, and helicopter pilots) are reviewed, with particular emphasis on the special problems encountered by helicopter pilots flying nap-of-the-earth (NOE). The goal of this overview is to describe the context within which different vehicle control tasks are performed and to suggest ways in which the use of visual cues for geographical orientation might influence visually guided control activities.
Transcranial direct current stimulation enhances recovery of stereopsis in adults with amblyopia.
Spiegel, Daniel P; Li, Jinrong; Hess, Robert F; Byblow, Winston D; Deng, Daming; Yu, Minbin; Thompson, Benjamin
2013-10-01
Amblyopia is a neurodevelopmental disorder of vision caused by abnormal visual experience during early childhood that is often considered to be untreatable in adulthood. Recently, it has been shown that a novel dichoptic videogame-based treatment for amblyopia can improve visual function in adult patients, at least in part, by reducing inhibition of inputs from the amblyopic eye to the visual cortex. Non-invasive anodal transcranial direct current stimulation has been shown to reduce the activity of inhibitory cortical interneurons when applied to the primary motor or visual cortex. In this double-blind, sham-controlled cross-over study we tested the hypothesis that anodal transcranial direct current stimulation of the visual cortex would enhance the therapeutic effects of dichoptic videogame-based treatment. A homogeneous group of 16 young adults (mean age 22.1 ± 1.1 years) with amblyopia were studied to compare the effect of dichoptic treatment alone and dichoptic treatment combined with visual cortex direct current stimulation on measures of binocular (stereopsis) and monocular (visual acuity) visual function. The combined treatment led to greater improvements in stereoacuity than dichoptic treatment alone, indicating that direct current stimulation of the visual cortex boosts the efficacy of dichoptic videogame-based treatment. This intervention warrants further evaluation as a novel therapeutic approach for adults with amblyopia.
Effect of travel speed on the visual control of steering toward a goal.
Chen, Rongrong; Niehorster, Diederick C; Li, Li
2018-03-01
Previous studies have proposed that people can use visual cues such as the instantaneous direction (i.e., heading) or future path trajectory of travel specified by optic flow or target visual direction in egocentric space to steer or walk toward a goal. In the current study, we examined what visual cues people use to guide their goal-oriented locomotion and whether their reliance on such visual cues changes as travel speed increases. We presented participants with optic flow displays that simulated their self-motion toward a target at various travel speeds under two viewing conditions in which we made target egocentric direction available or unavailable for steering. We found that for both viewing conditions, participants did not steer along a curved path toward the target such that the actual and the required path curvature to reach the target would converge when approaching the target. At higher travel speeds, participants showed a faster and larger reduction in target-heading angle and more accurate and precise steady-state control of aligning their heading specified by optic flow with the target. These findings support the claim that people use heading and target egocentric direction but not path for goal-oriented locomotion control, and their reliance on heading increases at higher travel speeds. The increased reliance on heading for goal-oriented locomotion control could be due to an increased reliability in perceiving heading from optic flow as the magnitude of flow increases with travel speed.
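The key variable in the account above is the target-heading angle, the angle between the instantaneous heading specified by optic flow and the egocentric direction of the goal; a steering strategy that nulls this angle produces the speed-dependent reduction described. A small sketch of that computation, with assumed world coordinates, follows.

```python
import numpy as np

def target_heading_angle(position, heading_deg, target):
    """Signed angle (deg) between the current heading and the egocentric direction of the target.

    position, target: (x, y) positions in a world frame.
    heading_deg: instantaneous travel direction (e.g., as specified by optic flow).
    Positive values mean the target lies counterclockwise (to the left) of the heading.
    """
    dx, dy = np.asarray(target, dtype=float) - np.asarray(position, dtype=float)
    target_dir = np.degrees(np.arctan2(dy, dx))
    return (target_dir - heading_deg + 180.0) % 360.0 - 180.0

# Hypothetical example: traveller at the origin, goal 20 m straight ahead (90 deg),
# current heading 10 deg to the left of the goal direction.
print(target_heading_angle((0.0, 0.0), 100.0, (0.0, 20.0)))   # -> -10.0
```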
Ding, Zhaofeng; Li, Jinrong; Spiegel, Daniel P.; Chen, Zidong; Chan, Lily; Luo, Guangwei; Yuan, Junpeng; Deng, Daming; Yu, Minbin; Thompson, Benjamin
2016-01-01
Amblyopia is a neurodevelopmental disorder of vision that occurs when the visual cortex receives decorrelated inputs from the two eyes during an early critical period of development. Amblyopic eyes are subject to suppression from the fellow eye, generate weaker visual evoked potentials (VEPs) than fellow eyes and have multiple visual deficits including impairments in visual acuity and contrast sensitivity. Primate models and human psychophysics indicate that stronger suppression is associated with greater deficits in amblyopic eye contrast sensitivity and visual acuity. We tested whether transcranial direct current stimulation (tDCS) of the visual cortex would modulate VEP amplitude and contrast sensitivity in adults with amblyopia. tDCS can transiently alter cortical excitability and may influence suppressive neural interactions. Twenty-one patients with amblyopia and twenty-seven controls completed separate sessions of anodal (a-), cathodal (c-) and sham (s-) visual cortex tDCS. A-tDCS transiently and significantly increased VEP amplitudes for amblyopic, fellow and control eyes and contrast sensitivity for amblyopic and control eyes. C-tDCS decreased VEP amplitude and contrast sensitivity and s-tDCS had no effect. These results suggest that tDCS can modulate visual cortex responses to information from adult amblyopic eyes and provide a foundation for future clinical studies of tDCS in adults with amblyopia. PMID:26763954
Lawton, Teri
2016-01-01
There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 dyslexics in second grade (7 years old on average): two targeting the temporal dynamics (timing) of either the auditory or visual pathways, and a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early, but also to treat it successfully, so that reading problems do not prevent children from readily learning.
Rosenblatt, Steven David; Crane, Benjamin Thomas
2015-01-01
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing set visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8 s viewing interval with the inertial stimulus occurring over the final 1 s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1 s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001), and arrows (p = 0.02). For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p > 0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.
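The shift in self-motion perception reported above was indexed by the inertial velocity at which subjects were equally likely to report motion in either direction, a point of subjective equality (PSE). One simple way to estimate such a point from binary direction reports is a logistic fit, sketched below with simulated data; the fitting method and parameter values are assumptions for illustration, not the procedure used in the study.

```python
import numpy as np

def fit_pse(velocities, responses, iters=5000, lr=0.1):
    """Fit P(report 'rightward') = sigmoid(a * v + b) by gradient descent on the logistic loss
    and return the PSE, i.e. the velocity where P = 0.5 (which equals -b / a)."""
    v = np.asarray(velocities, dtype=float)
    r = np.asarray(responses, dtype=float)        # 1 = 'rightward', 0 = 'leftward'
    a, b = 1.0, 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(a * v + b)))
        a -= lr * np.mean((p - r) * v)            # gradient w.r.t. the slope
        b -= lr * np.mean(p - r)                  # gradient w.r.t. the bias
    return -b / a

# Simulated observer whose PSE is shifted to about -0.4 cm/s by a visual stimulus.
rng = np.random.default_rng(1)
vels = np.tile(np.linspace(-2.0, 2.0, 9), 20)
p_right = 1.0 / (1.0 + np.exp(-3.0 * (vels + 0.4)))
resp = (rng.random(vels.size) < p_right).astype(float)
print(f"estimated PSE: {fit_pse(vels, resp):.2f} cm/s")
```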
Atypical Visual Orienting to Gaze- and Arrow-Cues in Adults with High Functioning Autism
ERIC Educational Resources Information Center
Vlamings, Petra H. J. M.; Stauder, Johannes E. A.; van Son, Ilona A. M.; Mottron, Laurent
2005-01-01
The present study investigates visual orienting to directional cues (arrow or eyes) in adults with high functioning autism (n = 19) and age matched controls (n = 19). A choice reaction time paradigm is used in which eye-or arrow direction correctly (congruent) or incorrectly (incongruent) cues target location. In typically developing participants,…
Bandeira, Igor Dórea; Guimarães, Rachel Silvany Quadros; Jagersbacher, João Gabriel; Barretto, Thiago Lima; de Jesus-Silva, Jéssica Regina; Santos, Samantha Nunes; Argollo, Nayara; Lucena, Rita
2016-06-01
Studies investigating the possible benefits of transcranial direct current stimulation on left dorsolateral prefrontal cortex in children and adolescents with attention-deficit hyperactivity disorder (ADHD) have not been performed. This study assesses the effect of transcranial direct current stimulation in children and adolescents with ADHD on neuropsychological tests of visual attention, visual and verbal working memory, and inhibitory control. An auto-matched clinical trial was performed involving transcranial direct current stimulation in children and adolescents with ADHD, using SNAP-IV and subtests Vocabulary and Cubes of the Wechsler Intelligence Scale for Children III (WISC-III). Subjects were assessed before and after transcranial direct current stimulation sessions with the Digit Span subtest of the WISC-III, inhibitory control subtest of the NEPSY-II, Corsi cubes, and the Visual Attention Test (TAVIS-3). There were 9 individuals with ADHD according to Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition) criteria. There was a statistically significant difference in some aspects of the TAVIS-3 tests and the inhibitory control subtest of the NEPSY-II. Transcranial direct current stimulation may be related to more efficient processing speed, improved detection of stimuli, and an improved ability to switch between an ongoing activity and a new one. © The Author(s) 2016.
Simple control-theoretic models of human steering activity in visually guided vehicle control
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1991-01-01
A simple control-theoretic model of human steering or control activity in the lateral-directional control of vehicles such as automobiles and rotorcraft is discussed. The term 'control-theoretic' is used to emphasize the fact that the model is derived from a consideration of well-known control system design principles as opposed to psychological theories regarding egomotion, etc. The model is employed to emphasize the 'closed-loop' nature of tasks involving the visually guided control of vehicles upon, or in close proximity to, the earth and to hypothesize how changes in vehicle dynamics can significantly alter the nature of the visual cues which a human might use in such tasks.
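A model of the kind referred to above can be illustrated by a driver who turns the vehicle in proportion to a weighted sum of heading error and lateral-position error, acting through a short perceptual-motor delay. The sketch below is a generic closed-loop illustration of that structure; the gains, delay and simplified vehicle kinematics are assumptions, not values from the cited report.

```python
import numpy as np

def simulate_lane_keeping(k_heading=2.0, k_lateral=1.0, delay_s=0.2,
                          speed=20.0, duration=10.0, dt=0.01, y0=1.0):
    """Closed-loop lane keeping with delayed visual feedback.

    State: lateral offset y (m) and heading angle psi (rad) relative to the lane centerline.
    The driver commands a heading rate r = -(k_heading * psi + k_lateral * y / speed),
    which reaches the vehicle after delay_s seconds.
    """
    delay_steps = int(delay_s / dt)
    y, psi = y0, 0.0
    commands = [0.0] * delay_steps                 # buffer implementing the response delay
    trace = []
    for _ in range(int(duration / dt)):
        commands.append(-(k_heading * psi + k_lateral * y / speed))
        r = commands.pop(0)                        # delayed command applied now
        psi += r * dt
        y += speed * np.sin(psi) * dt
        trace.append(y)
    return np.array(trace)

trace = simulate_lane_keeping()
print(f"initial offset 1.00 m, offset after 10 s: {trace[-1]:.4f} m")
```

Changing the assumed vehicle dynamics (for example, commanding heading acceleration rather than heading rate) changes which visual error signals the simulated driver must weight to keep the loop stable, which is the point the abstract makes about vehicle dynamics altering the useful visual cues.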
A comparative study of visual reaction time in table tennis players and healthy controls.
Bhabhor, Mahesh K; Vidja, Kalpesh; Bhanderi, Priti; Dodhia, Shital; Kathrotia, Rajesh; Joshi, Varsha
2013-01-01
Visual reaction time is the time required to respond to visual stimuli. The present study was conducted to measure visual reaction time in 209 subjects: 50 table tennis (TT) players and 159 healthy controls. Visual reaction time was measured with the DirectRT computerized software in healthy controls and table tennis players. Simple visual reaction time was measured. During reaction time testing, visual stimuli were presented eighteen times and the average reaction time was taken as the final reaction time. The study shows that table tennis players had faster reaction times than healthy controls. On multivariate analysis, TT players had reaction times 74.121 ms faster (95% CI 49.4 to 98.8 ms) than non-TT players of the same age and BMI. Playing TT also had a more profound influence on visual reaction time than BMI did. Our study concluded that persons involved in sports have faster reaction times than controls. These results support the view that playing table tennis benefits eye-hand reaction time and improves concentration and alertness.
Visual exploration during locomotion limited by fear of heights.
Kugler, Günter; Huppert, Doreen; Eckl, Maria; Schneider, Erich; Brandt, Thomas
2014-01-01
Visual exploration of the surroundings during locomotion at heights has not yet been investigated in subjects suffering from fear of heights. Eye and head movements were recorded separately in 16 subjects susceptible to fear of heights and in 16 non-susceptible controls while walking on an emergency escape balcony 20 meters above ground level. Participants wore mobile infrared eye-tracking goggles with a head-fixed scene camera and integrated 6-degrees-of-freedom inertial sensors for recording head movements. Video recordings of the subjects were made simultaneously to correlate gaze and gait behavior. Susceptibles exhibited limited visual exploration of the surroundings, particularly in depth. Head movements were significantly reduced in all three planes (yaw, pitch, and roll), with fewer vertical head oscillations, whereas total eye movements (saccade amplitudes, frequencies, fixation durations) did not differ from those of controls. However, there was an anisotropy, with a preference for the vertical as opposed to the horizontal direction of saccades. Comparison of eye and head movement histograms and the resulting gaze-in-space revealed a smaller total area of visual exploration, which was mainly directed straight ahead and covered vertically an area from the horizon to the ground in front of the feet. This gaze behavior was associated with a slow, cautious gait. The visual exploration of the surroundings by susceptibles to fear of heights during locomotion at heights differs from the earlier investigated behavior of standing still and looking from a balcony. During locomotion, the anisotropy of gaze-in-space shows a preference for the vertical direction, as opposed to the horizontal preference during stance. Avoiding looking into the abyss may reduce anxiety in both conditions; exploration of the "vertical strip" in the heading direction is beneficial for visual control of balance and avoidance of obstacles during locomotion.
Visual Neuroscience: Unique Neural System for Flight Stabilization in Hummingbirds.
Ibbotson, M R
2017-01-23
The pretectal visual motion processing area in the hummingbird brain is unlike that in other birds: instead of emphasizing detection of horizontal movements, it codes for motion in all directions through 360°, possibly offering precise visual stability control during hovering. Copyright © 2017 Elsevier Ltd. All rights reserved.
Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite
2016-09-01
This record's abstract is truncated; only fragments were captured: "…aerial platform for subsequent visual sensor integration." Subject terms: autonomous system, quadrotors, direct method, inverse dynamics in the virtual domain. The remaining captured text is table-of-contents and abbreviation-list residue (controller architecture; inverse dynamics in the virtual domain; …control station; GPS, Global-Positioning System; IDVD, inverse dynamics in the virtual domain; ILP, integer linear program; INS, inertial-navigation system).
Mechanisms for Rapid Adaptive Control of Motion Processing in Macaque Visual Cortex.
McLelland, Douglas; Baker, Pamela M; Ahmed, Bashir; Kohn, Adam; Bair, Wyeth
2015-07-15
A key feature of neural networks is their ability to rapidly adjust their function, including signal gain and temporal dynamics, in response to changes in sensory inputs. These adjustments are thought to be important for optimizing the sensitivity of the system, yet their mechanisms remain poorly understood. We studied adaptive changes in temporal integration in direction-selective cells in macaque primary visual cortex, where specific hypotheses have been proposed to account for rapid adaptation. By independently stimulating direction-specific channels, we found that the control of temporal integration of motion at one direction was independent of motion signals driven at the orthogonal direction. We also found that individual neurons can simultaneously support two different profiles of temporal integration for motion in orthogonal directions. These findings rule out a broad range of adaptive mechanisms as being key to the control of temporal integration, including untuned normalization and nonlinearities of spike generation and somatic adaptation in the recorded direction-selective cells. Such mechanisms are too broadly tuned, or occur too far downstream, to explain the channel-specific and multiplexed temporal integration that we observe in single neurons. Instead, we are compelled to conclude that parallel processing pathways are involved, and we demonstrate one such circuit using a computer model. This solution allows processing in different direction/orientation channels to be separately optimized and is sensible given that, under typical motion conditions (e.g., translation or looming), speed on the retina is a function of the orientation of image components. Many neurons in visual cortex are understood in terms of their spatial and temporal receptive fields. It is now known that the spatiotemporal integration underlying visual responses is not fixed but depends on the visual input. For example, neurons that respond selectively to motion direction integrate signals over a shorter time window when visual motion is fast and a longer window when motion is slow. We investigated the mechanisms underlying this useful adaptation by recording from neurons as they responded to stimuli moving in two different directions at different speeds. Computer simulations of our results enabled us to rule out several candidate theories in favor of a model that integrates across multiple parallel channels that operate at different time scales. Copyright © 2015 the authors.
The role of visual and direct force feedback in robotics-assisted mitral valve annuloplasty.
Currie, Maria E; Talasaz, Ali; Rayman, Reiza; Chu, Michael W A; Kiaii, Bob; Peters, Terry; Trejos, Ana Luisa; Patel, Rajni
2017-09-01
The objective of this work was to determine the effect of both direct force feedback and visual force feedback on the amount of force applied to mitral valve tissue during ex vivo robotics-assisted mitral valve annuloplasty. A force feedback-enabled master-slave surgical system was developed to provide both visual and direct force feedback during robotics-assisted cardiac surgery. This system measured the amount of force applied by novice and expert surgeons to cardiac tissue during ex vivo mitral valve annuloplasty repair. The addition of visual (2.16 ± 1.67 N), direct (1.62 ± 0.86 N), or both visual and direct force feedback (2.15 ± 1.08 N) resulted in a lower mean maximum force applied to mitral valve tissue while suturing compared with no force feedback (3.34 ± 1.93 N; P < 0.05). To achieve better control of interaction forces on cardiac tissue during robotics-assisted mitral valve annuloplasty suturing, force feedback may be required. Copyright © 2016 John Wiley & Sons, Ltd.
Dementia alters standing postural adaptation during a visual search task in older adult men.
Jor'dan, Azizah J; McCarten, J Riley; Rottunda, Susan; Stoffregen, Thomas A; Manor, Brad; Wade, Michael G
2015-04-23
This study investigated the effects of dementia on standing postural adaptation during performance of a visual search task. We recruited 16 older adults with dementia and 15 without dementia. Postural sway was assessed by recording medial-lateral (ML) and anterior-posterior (AP) center-of-pressure when standing with and without a visual search task; i.e., counting target letter frequency within a block of displayed randomized letters. ML sway variability was significantly higher in those with dementia during visual search as compared to those without dementia and compared to both groups during the control condition. AP sway variability was significantly greater in those with dementia as compared to those without dementia, irrespective of task condition. In the ML direction, the absolute and percent change in sway variability between the control condition and visual search (i.e., postural adaptation) was greater in those with dementia as compared to those without. In contrast, postural adaptation to visual search was similar between groups in the AP direction. As compared to those without dementia, those with dementia identified fewer letters on the visual task. In the non-dementia group only, greater increases in postural adaptation in both the ML and AP directions correlated with lower performance on the visual task. The observed relationship between postural adaptation during the visual search task and visual search task performance, seen in the non-dementia group only, suggests a critical link between perception and action. Dementia reduces the capacity to perform a visual-based task while standing and thus appears to disrupt this perception-action synergy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Executive control of stimulus-driven and goal-directed attention in visual working memory.
Hu, Yanmei; Allen, Richard J; Baddeley, Alan D; Hitch, Graham J
2016-10-01
We examined the role of executive control in stimulus-driven and goal-directed attention in visual working memory using probed recall of a series of objects, a task that allows study of the dynamics of storage through analysis of serial position data. Experiment 1 examined whether executive control underlies goal-directed prioritization of certain items within the sequence. Instructing participants to prioritize either the first or final item resulted in improved recall for these items, and an increase in concurrent task difficulty reduced or abolished these gains, consistent with their dependence on executive control. Experiment 2 examined whether executive control is also involved in the disruption caused by a post-series visual distractor (suffix). A demanding concurrent task disrupted memory for all items except the most recent, whereas a suffix disrupted only the most recent items. There was no interaction when concurrent load and suffix were combined, suggesting that deploying selective attention to ignore the distractor did not draw upon executive resources. A final experiment replicated the independent interfering effects of suffix and concurrent load while ruling out possible artifacts. We discuss the results in terms of a domain-general episodic buffer in which information is retained in a transient, limited capacity privileged state, influenced by both stimulus-driven and goal-directed processes. The privileged state contains the most recent environmental input together with goal-relevant representations being actively maintained using executive resources.
Lawton, Teri
2016-01-01
There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 dyslexics in second grade (7 years on average): two targeting the temporal dynamics (timing) of either the auditory or visual pathways, and a third reading intervention (the control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency, both speed and comprehension, phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early, but also to treat it successfully, so that reading problems do not prevent children from readily learning. PMID:27551263
Stronger Neural Modulation by Visual Motion Intensity in Autism Spectrum Disorders
Peiker, Ina; Schneider, Till R.; Milne, Elizabeth; Schöttle, Daniel; Vogeley, Kai; Münchau, Alexander; Schunke, Odette; Siegel, Markus; Engel, Andreas K.; David, Nicole
2015-01-01
Theories of autism spectrum disorders (ASD) have focused on altered perceptual integration of sensory features as a possible core deficit. Yet, there is little understanding of the neuronal processing of elementary sensory features in ASD. For typically developed individuals, we previously established a direct link between frequency-specific neural activity and the intensity of a specific sensory feature: Gamma-band activity in the visual cortex increased approximately linearly with the strength of visual motion. Using magnetoencephalography (MEG), we investigated whether, in individuals with ASD, neural activity reflects the coherence, and thus intensity, of visual motion in a similar fashion. Thirteen adult participants with ASD and 14 control participants performed a motion direction discrimination task with increasing levels of motion coherence. A polynomial regression analysis revealed that gamma-band power increased significantly more strongly with motion coherence in ASD than in controls, suggesting excessive visual activation with increasing stimulus intensity originating from motion-responsive visual areas V3, V6 and hMT/V5. Enhanced neural responses with increasing stimulus intensity suggest an enhanced response gain in ASD. Response gain is controlled by excitatory-inhibitory interactions, which also drive high-frequency oscillations in the gamma-band. Thus, our data suggest that a disturbed excitatory-inhibitory balance underlies enhanced neural responses to coherent motion in ASD. PMID:26147342
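For readers unfamiliar with this kind of analysis, a toy version of the slope comparison could look like the sketch below; the numbers are synthetic placeholders and the study's actual polynomial regression on source-localized MEG power is not reproduced.

    # Illustrative only: fit gamma-band power as a linear function of motion
    # coherence and compare slopes between groups. Values are synthetic.
    import numpy as np

    coherence  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    gamma_ctrl = np.array([1.00, 1.10, 1.25, 1.35, 1.50])   # placeholder values
    gamma_asd  = np.array([1.00, 1.20, 1.45, 1.70, 1.90])   # placeholder values

    slope_ctrl, _ = np.polyfit(coherence, gamma_ctrl, 1)
    slope_asd, _  = np.polyfit(coherence, gamma_asd, 1)
    print(f"slope (control) = {slope_ctrl:.2f}, slope (ASD) = {slope_asd:.2f}")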
Localized direction selective responses in the dendrites of visual interneurons of the fly
2010-01-01
Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has so far received little attention. In order to visualize the motion tuning of these inputs, we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983
Occupational Health and the Visual Arts: An Introduction.
Hinkamp, David; McCann, Michael; Babin, Angela R
2017-09-01
Occupational hazards in the visual arts often involve hazardous materials, though hazardous equipment and hazardous work conditions can also be found. Occupational health professionals are familiar with most of these hazards and are particularly qualified to contribute clinical and preventive expertise to these issues. Articles illustrating visual arts health issues were sought and reviewed. Literature sources included medical databases, unindexed art-health publications, and popular press articles. Few medical articles examine health issues in the visual arts directly, but exposures to pigments, solvents, and other hazards found in the visual arts are well described. The hierarchy of controls is an appropriate model for controlling hazards and promoting safer visual art workplaces. The health and safety of those working in the visual arts can benefit from the occupational health approach. Sources of further information are available.
Synthetic perspective optical flow: Influence on pilot control tasks
NASA Technical Reports Server (NTRS)
Bennett, C. Thomas; Johnson, Walter W.; Perrone, John A.; Phatak, Anil V.
1989-01-01
One approach used to better understand the impact of visual flow on control tasks has been to use synthetic perspective flow patterns. Such patterns are the result of apparent motion across a grid or random dot display. Unfortunately, the optical flow so generated is based on a subset of the flow information that exists in the real world. The danger is that the resulting optical motions may not generate the visual flow patterns useful for actual flight control. Researchers conducted a series of studies directed at understanding the characteristics of synthetic perspective flow that support various pilot tasks. In the first of these, they examined the control of altitude over various perspective grid textures (Johnson et al., 1987). Another set of studies was directed at studying the head tracking of targets moving in a 3-D coordinate system. These studies, parametric in nature, utilized both impoverished and complex virtual worlds represented by simple perspective grids at one extreme, and computer-generated terrain at the other. These studies are part of an applied visual research program directed at understanding the design principles required for the development of instruments displaying spatial orientation information. The experiments also highlight the need for modeling the impact of spatial displays on pilot control tasks.
Pilot vision considerations : the effect of age on binocular fusion time.
DOT National Transportation Integrated Search
1966-10-01
The study provides data regarding the relationship between vision performance and age of the individual. It has direct application to pilot visual tasks with respect to instrument panel displays, and to controller visual tasks in association with rad...
Visual scan paths are abnormal in deluded schizophrenics.
Phillips, M L; David, A S
1997-01-01
One explanation for delusion formation is that delusions result from distorted appreciation of complex stimuli. The study investigated delusions in schizophrenia using a physiological marker of visual attention and information processing, the visual scan path: a map tracing the direction and duration of gaze when an individual views a stimulus. The aim was to demonstrate the presence of a specific deficit in processing meaningful stimuli (e.g. human faces) in deluded schizophrenics (DS) by relating this to abnormal viewing strategies. Visual scan paths were measured in acutely-deluded (n = 7) and non-deluded (n = 7) schizophrenics matched for medication, illness duration and negative symptoms, plus 10 age-matched normal controls. DS employed abnormal strategies for viewing single faces and face pairs in a recognition task, staring at fewer points and fixating non-feature areas to a significantly greater extent than both control groups (P < 0.05). The results indicate that DS direct their attention to less salient visual information when viewing faces. Future paradigms employing more complex stimuli and testing DS when less-deluded will allow further clarification of the relationship between viewing strategies and delusions.
Rotary acceleration of a subject inhibits choice reaction time to motion in peripheral vision
NASA Technical Reports Server (NTRS)
Borkenhagen, J. M.
1974-01-01
Twelve pilots were tested in a rotation device with visual simulation, alone and in combination with rotary stimulation, in experiments with variable levels of acceleration and variable viewing angles, in a study of the effect of the subject's rotary acceleration on the choice reaction time for an accelerating target in peripheral vision. The pilots responded to the direction of the visual motion by moving a hand controller to the right or left. Visual-plus-rotary stimulation required a longer choice reaction time, which was inversely related to the level of acceleration and directly proportional to the viewing angle.
Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega
2015-04-14
This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work.
Attention biases visual activity in visual short-term memory.
Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina
2014-07-01
In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.
Vanbellingen, Tim; Schumacher, Rahel; Eggenberger, Noëmi; Hopfner, Simone; Cazzoli, Dario; Preisig, Basil C; Bertschi, Manuel; Nyffeler, Thomas; Gutbrod, Klemens; Bassetti, Claudio L; Bohlhalter, Stephan; Müri, René M
2015-05-01
According to the direct matching hypothesis, perceived movements automatically activate existing motor components through matching of the perceived gesture and its execution. The aim of the present study was to test the direct matching hypothesis by assessing whether visual exploration behavior correlates with deficits in gestural imitation in left hemisphere damaged (LHD) patients. Eighteen LHD patients and twenty healthy control subjects took part in the study. Gesture imitation performance was measured by the test for upper limb apraxia (TULIA). Visual exploration behavior was measured by an infrared eye-tracking system. Short videos including forty gestures (20 meaningless and 20 communicative gestures) were presented. Cumulative fixation duration was measured in different regions of interest (ROIs), namely the face, the gesturing hand, the body, and the surrounding environment. Compared to healthy subjects, patients fixated the ROIs comprising the face and the gesturing hand significantly less during the exploration of emblematic and tool-related gestures. Moreover, visual exploration of tool-related gestures significantly correlated with tool-related imitation as measured by TULIA in LHD patients. Patients and controls did not differ in the visual exploration of meaningless gestures, and no significant relationships were found between visual exploration behavior and the imitation of emblematic and meaningless gestures in TULIA. The present study thus suggests that altered visual exploration may lead to disturbed imitation of tool-related gestures, but not of emblematic and meaningless gestures. Consequently, our findings partially support the direct matching hypothesis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ochiai, Tetsuji; Mushiake, Hajime; Tanji, Jun
2005-07-01
The ventral premotor cortex (PMv) has been implicated in the visual guidance of movement. To examine whether neuronal activity in the PMv is involved in controlling the direction of motion of a visual image of the hand or the actual movement of the hand, we trained a monkey to capture a target that was presented on a video display using the same side of its hand as was displayed on the video display. We found that PMv neurons predominantly exhibited premovement activity that reflected the image motion to be controlled, rather than the physical motion of the hand. We also found that the activity of half of such direction-selective PMv neurons depended on which side (left versus right) of the video image of the hand was used to capture the target. Furthermore, this selectivity for a portion of the hand was not affected by changing the starting position of the hand movement. These findings suggest that PMv neurons play a crucial role in determining which part of the body moves in which direction, at least under conditions in which a visual image of a limb is used to guide limb movements.
1990-06-01
Garbled OCR excerpt from a training-evaluation checklist (Figure 22, sample engagement); the recoverable items include: uses visual communication; changes direction/formation quickly; crews transmit timely, accurate and concise messages; the network control station (NCS) effectively maintains net discipline; clarity and brevity of radio messages; use of transmission security equipment; and use of wire.
Selective Use of Optical Variables to Control Forward Speed
NASA Technical Reports Server (NTRS)
Johnson, Walter W.; Awe, Cynthia A.; Hart, Sandra G. (Technical Monitor)
1994-01-01
Previous work on the perception and control of simulated vehicle speed has examined the contributions of optical flow rate (angular visual speed) and texture, or edge rate (frequency of passing terrain objects or markings) on the perception and control of forward speed. However, these studies have not examined the ability to selectively use edge rate or flow rate. The two studies reported here show that subjects found it very difficult to arbitrarily direct attention to one or the other of these variables; but that the ability to selectively use these variables is linked to the visual contextual information about the relative validity (linkage with speed) of the two variables. The selectivity also resulted in different velocity adaptation levels for events in which flow rate and edge rate specified forward speed. Finally, the role of visual context in directing attention was further buttressed by the finding that the incorrect perception of changes in ground texture density tended to be coupled with incorrect perceptions of changes in forward speed.
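For context, the two optical variables contrasted in these studies are commonly formalized as follows (a simplified sketch using standard definitions from the self-motion literature, with made-up numbers): global optical flow rate scales with forward speed divided by eye height, whereas edge rate scales with forward speed divided by ground-texture spacing, so a change in texture density alters edge rate but leaves flow rate unchanged.

    # Sketch of the two optical variables; values are illustrative.
    def flow_rate(speed, eye_height):
        return speed / eye_height        # global optical flow rate (eye heights per second)

    def edge_rate(speed, texture_spacing):
        return speed / texture_spacing   # ground edges (texture markings) passed per second

    v, h, s = 20.0, 10.0, 5.0            # m/s, m, m (hypothetical)
    print(flow_rate(v, h), edge_rate(v, s))      # 2.0, 4.0
    print(flow_rate(v, h), edge_rate(v, s / 2))  # denser texture: edge rate doubles,
                                                 # flow rate is unchanged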
Scene perception and the visual control of travel direction in navigating wood ants
Collett, Thomas S.; Lent, David D.; Graham, Paul
2014-01-01
This review reflects a few of Mike Land's many and varied contributions to visual science. In it, we show for wood ants, as Mike has done for a variety of animals, including readers of this piece, what can be learnt from a detailed analysis of an animal's visually guided eye, head or body movements. In the case of wood ants, close examination of their body movements, as they follow visually guided routes, is starting to reveal how they perceive and respond to their visual world and negotiate a path within it. We describe first some of the mechanisms that underlie the visual control of their paths, emphasizing that vision is not the ant's only sense. In the second part, we discuss how remembered local shape-dependent and global shape-independent features of a visual scene may interact in guiding the ant's path. PMID:24395962
ERIC Educational Resources Information Center
Holmes, Scott A.; Heath, Matthew
2013-01-01
An issue of continued debate in the visuomotor control literature surrounds whether a 2D object serves as a representative proxy for a 3D object in understanding the nature of the visual information supporting grasping control. In an effort to reconcile this issue, we examined the extent to which aperture profiles for grasping 2D and 3D objects…
Backman, Chantal; Bruce, Natalie; Marck, Patricia; Vanderloo, Saskia
2016-01-01
The purpose of this quality improvement project was to determine the feasibility of using provider-led participatory visual methods to scrutinize 4 hospital units' infection prevention and control practices. Methods included provider-led photo walkabouts, photo elicitation sessions, and postimprovement photo walkabouts. Nurses readily engaged in using the methods to examine and improve their units' practices and reorganize their work environment.
Bock, Otmar; Bury, Nils
2018-03-01
Our perception of the vertical corresponds to the weighted sum of gravicentric, egocentric, and visual cues. Here we evaluate the interplay of those cues not for the perceived but rather for the motor vertical. Participants were asked to flip an omnidirectional switch down while their egocentric vertical was dissociated from their visual-gravicentric vertical. Responses were directed mid-between the two verticals; specifically, the data suggest that the relative weight of congruent visual-gravicentric cues averages 0.62, and correspondingly, the relative weight of egocentric cues averages 0.38. We conclude that the interplay of visual-gravicentric cues with egocentric cues is similar for the motor and for the perceived vertical. Unexpectedly, we observed a consistent dependence of the motor vertical on hand position, possibly mediated by hand orientation or by spatial selective attention.
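A minimal sketch of the weighted cue combination implied by these weights follows; the tilt angles are illustrative and the switch-flipping task itself is not modelled.

    # Weighted cue combination implied by the reported weights (0.62 / 0.38).
    w_visual_gravicentric = 0.62
    w_egocentric = 1.0 - w_visual_gravicentric

    def motor_vertical(gravicentric_deg, egocentric_deg):
        return (w_visual_gravicentric * gravicentric_deg
                + w_egocentric * egocentric_deg)

    # Example: body tilted 30 deg, so the egocentric vertical lies at 30 deg while
    # the visual-gravicentric vertical stays at 0 deg.
    print(motor_vertical(0.0, 30.0))   # 11.4 deg, i.e. a response mid-between the two cues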
Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M
2016-11-01
Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a pathway analysis to examine the direct and indirect paths between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on the more difficult level of reading comprehension but not on the easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd.
Figure-ground activity in V1 and guidance of saccadic eye movements.
Supèr, Hans
2006-01-01
Every day we shift our gaze about 150,000 times, mostly without noticing it. The direction of these gaze shifts is not random but is guided by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I will discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculo-motor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.
Optical projectors simulate human eyes to establish operator's field of view
NASA Technical Reports Server (NTRS)
Beam, R. A.
1966-01-01
Device projects a visual pattern marking the limits of an operator's field of view as his eyes are directed at a given point on a control panel. The device, which consists of two projectors, provides instant evaluation of visual ability at a point on a panel.
Do reference surfaces influence exocentric pointing?
Doumen, M J A; Kappers, A M L; Koenderink, J J
2008-06-01
All elements of the visual field are known to influence the perception of the egocentric distances of objects. Not only the ground surface of a scene, but also the surface at the back or other objects in the scene can affect an observer's egocentric distance estimation of an object. We tested whether this is also true for exocentric direction estimations. We used an exocentric pointing task to test whether the presence of poster-boards in the visual scene would influence the perception of the exocentric direction between two test-objects. In this task the observer has to direct a pointer, with a remote control, to a target. We placed the poster-boards at various positions in the visual field to test whether these boards would affect the settings of the observer. We found that they only affected the settings when they directly served as a reference for orienting the pointer to the target.
Acton, Jennifer H; Molik, Bablin; Binns, Alison; Court, Helen; Margrain, Tom H
2016-02-24
Visual Rehabilitation Officers help people with a visual impairment maintain their independence. This intervention adopts a flexible, goal-centred approach, which may include training in mobility, use of optical and non-optical aids, and performance of activities of daily living. Although Visual Rehabilitation Officers are an integral part of the low vision service in the United Kingdom, evidence that they are effective is lacking. The purpose of this exploratory trial is to estimate the impact of a Visual Rehabilitation Officer on self-reported visual function, psychosocial and quality-of-life outcomes in individuals with low vision. In this exploratory, assessor-masked, parallel group, randomised controlled trial, participants will be allocated either to receive home visits from a Visual Rehabilitation Officer (n = 30) or to a waiting list control group (n = 30) in a 1:1 ratio. Adult volunteers with a visual impairment, who have been identified as needing rehabilitation officer input by a social worker, will take part. Those with an urgent need for a Visual Rehabilitation Officer or who have a cognitive impairment will be excluded. The primary outcome measure will be self-reported visual function (48-item Veterans Affairs Low Vision Visual Functioning Questionnaire). Secondary outcome measures will include psychological and quality-of-life metrics: the Patient Health Questionnaire (PHQ-9), the Warwick-Edinburgh Mental Well-being Scale (WEMWBS), the Adjustment to Age-related Visual Loss Scale (AVL-12), the Standardised Health-related Quality of Life Questionnaire (EQ-5D) and the UCLA Loneliness Scale. The interviewer collecting the outcomes will be masked to the group allocations. The analysis will be undertaken on a complete case and intention-to-treat basis. Analysis of covariance (ANCOVA) will be applied to follow-up questionnaire scores, with the baseline score as a covariate. This trial is expected to provide robust effect size estimates of the intervention effect. The data will be used to design a large-scale randomised controlled trial to evaluate fully the Visual Rehabilitation Officer intervention. A rigorous evaluation of Rehabilitation Officer input is vital to direct a future low vision rehabilitation strategy and to help direct government resources. The trial was registered with the ISRCTN registry (ISRCTN44807874) on 9 March 2015.
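As a sketch of the planned baseline-adjusted analysis, the ANCOVA of follow-up questionnaire scores with baseline as a covariate could be run as below; the file name and column names are hypothetical, not taken from the trial protocol.

    # Hedged sketch of an ANCOVA on follow-up scores with baseline as covariate.
    # "followup", "baseline" and "group" are hypothetical column names.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("va_lv_vfq48_scores.csv")   # hypothetical data file, one row per participant
    model = smf.ols("followup ~ baseline + C(group)", data=df).fit()
    print(model.summary())                        # group effect adjusted for baseline score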
The 3D widgets for exploratory scientific visualization
NASA Technical Reports Server (NTRS)
Herndon, Kenneth P.; Meyer, Tom
1995-01-01
Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.
The Impact of Different Visual Feedbacks in User Training on Motor Imagery Control in BCI.
Zapała, Dariusz; Francuz, Piotr; Zapała, Ewelina; Kopiś, Natalia; Wierzgała, Piotr; Augustynowicz, Paweł; Majkowski, Andrzej; Kołodziej, Marcin
2018-03-01
The challenges of research into brain-computer interfaces (BCI) include significant individual differences in learning pace and in the effective operation of BCI devices. The use of neurofeedback training is a popular method of improving the effectiveness of BCI operation. The purpose of the present study was to determine to what extent it is possible to improve the effectiveness of operation of sensorimotor rhythm-based brain-computer interfaces (SMR-BCI) by supplementing user training with elements modifying the characteristics of visual feedback. Four experimental groups had training designed to reinforce BCI control by: visual feedback in the form of dummy faces expressing emotions (Group 1); flashing the principal elements of visual feedback (Group 2); and giving both forms of visual feedback in one condition (Group 3). The fourth group participated in training with no modifications (Group 4). Training consisted of a series of trials where the subjects directed a ball into a basket located to the right or left side of the screen. In Group 1, a schematic image of a face, placed on the controlled object, showed various emotions, depending on the accuracy of control. In Group 2, the cue and targets were flashed at a different frequency (4 Hz) from the remaining elements visible on the monitor. Both modifications were also used simultaneously in Group 3. SMR activity during the task was recorded before and after the training. In Group 3 there was a significant improvement in SMR control, compared to subjects in Groups 2 and 4 (control). Differences between subjects in Groups 1, 2 and 4 (control) were insignificant. This means that relatively small changes in the training procedure may significantly impact the effectiveness of BCI control. Analysis of behavioural data acquired from all participants during training showed greater effectiveness in directing the object towards the right side of the screen. Subjects with the greatest improvement in SMR control showed a significantly lower difference in the accuracy of rightward and leftward movement than others.
Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania
2014-12-01
A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.
Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi
2017-10-01
Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
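The equalization step mentioned above rests on Stevens' power law, S = k * I^a; the sketch below shows how two feedback signals could be matched in perceived magnitude, using textbook-style illustrative exponents rather than the values used in the study.

    # Matching perceived magnitude across modalities via Stevens' power law.
    def perceived(intensity, k, a):
        return k * intensity ** a

    def match_intensity(target_sensation, k, a):
        """Physical intensity that yields the requested perceived magnitude."""
        return (target_sensation / k) ** (1.0 / a)

    s = perceived(4.0, k=1.0, a=0.5)                 # visual cue of size 4 -> sensation 2.0
    audio_level = match_intensity(s, k=1.0, a=0.67)  # sound level giving the same sensation
    print(s, round(audio_level, 2))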
Minimum viewing angle for visually guided ground speed control in bumblebees.
Baird, Emily; Kornfeldt, Torill; Dacke, Marie
2010-05-01
To control flight, flying insects extract information from the pattern of visual motion generated during flight, known as optic flow. To regulate their ground speed, insects such as honeybees and Drosophila hold the rate of optic flow in the axial direction (front-to-back) constant. A consequence of this strategy is that its performance varies with the minimum viewing angle (the deviation from the frontal direction of the longitudinal axis of the insect) at which changes in axial optic flow are detected. The greater this angle, the later changes in the rate of optic flow, caused by changes in the density of the environment, will be detected. The aim of the present study is to examine the mechanisms of ground speed control in bumblebees and to identify the extent of the visual range over which optic flow for ground speed control is measured. Bumblebees were trained to fly through an experimental tunnel consisting of parallel vertical walls. Flights were recorded when (1) the distance between the tunnel walls was either 15 or 30 cm, (2) the visual texture on the tunnel walls provided either strong or weak optic flow cues and (3) the distance between the walls changed abruptly halfway along the tunnel's length. The results reveal that bumblebees regulate ground speed using optic flow cues and that changes in the rate of optic flow are detected at a minimum viewing angle of 23-30 deg., with a visual field that extends to approximately 155 deg. By measuring optic flow over a visual field that has a low minimum viewing angle, bumblebees are able to detect and respond to changes in the proximity of the environment well before they are encountered.
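To make the geometry concrete (a simplified sketch with illustrative numbers, not the authors' model): for pure translation at speed V along a tunnel whose wall lies at perpendicular distance d, a texture element seen at angle theta from the flight direction moves at angular rate V * sin(theta)^2 / d, so the flow available at the minimum viewing angle depends jointly on that angle and on the proximity of the wall.

    # Translational optic flow on a tunnel wall as a function of viewing angle.
    import math

    def wall_flow(speed, wall_distance, theta_deg):
        theta = math.radians(theta_deg)
        return speed * math.sin(theta) ** 2 / wall_distance   # rad/s

    for theta in (10, 23, 30, 90):     # degrees; 23-30 deg is the range reported above
        print(theta, round(wall_flow(speed=0.5, wall_distance=0.15, theta_deg=theta), 3))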
A magnetic tether system to investigate visual and olfactory mediated flight control in Drosophila.
Duistermars, Brian J; Frye, Mark
2008-11-21
It has been clear for many years that insects use visual cues to stabilize their heading in a wind stream. Many animals track odors carried in the wind. As such, visual stabilization of upwind tracking directly aids in odor tracking. But do olfactory signals directly influence visual tracking behavior independently from wind cues? Also, the recent deluge of research on the neurophysiology and neurobehavioral genetics of olfaction in Drosophila has motivated ever more technically sophisticated and quantitative behavioral assays. Here, we modified a magnetic tether system originally devised for vision experiments by equipping the arena with narrow laminar flow odor plumes. A fly is glued to a small steel pin and suspended in a magnetic field that enables it to yaw freely. Small diameter food odor plumes are directed downward over the fly's head, eliciting stable tracking by a hungry fly. Here we focus on the critical mechanics of tethering, aligning the magnets, devising the odor plume, and confirming stable odor tracking.
Direct visuomotor mapping for fast visually-evoked arm movements.
Reynolds, Raymond F; Day, Brian L
2012-12-01
In contrast to conventional reaction time (RT) tasks, saccadic RTs to visual targets are very fast and unaffected by the number of possible targets. This can be explained by the sub-cortical circuitry underlying eye movements, which involves direct mapping between retinal input and motor output in the superior colliculus. Here we asked if the choice-invariance established for the eyes also applies to a special class of fast visuomotor responses of the upper limb. Using a target-pointing paradigm, we observed very fast reaction times (<150 ms) which were completely unaffected as the number of possible target choices was increased from 1 to 4. When we introduced a condition of altered stimulus-response mapping, RT went up and a cost of choice was observed. These results can be explained by direct mapping between visual input and motor output, compatible with a sub-cortical pathway for visual control of the upper limb. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pomares, Jorge; Felicetti, Leonard; Pérez, Javier; Emami, M. Reza
2018-02-01
An image-based servo controller for the guidance of a spacecraft during non-cooperative rendezvous is presented in this paper. The controller directly utilizes the visual features from image frames of a target spacecraft for computing both attitude and orbital maneuvers concurrently. The utilization of adaptive optics, such as zooming cameras, is also addressed through developing an invariant-image servo controller. The controller allows for performing rendezvous maneuvers independently from the adjustments of the camera focal length, improving the performance and versatility of maneuvers. The stability of the proposed control scheme is proven analytically in the invariant space, and its viability is explored through numerical simulations.
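A generic image-based visual servoing step, in the spirit of the controller described above but not the authors' exact formulation, computes a velocity command from the image-feature error through the pseudo-inverse of an interaction matrix.

    # Classical IBVS control law: v = -gain * pinv(L) @ (s - s_star).
    import numpy as np

    def ibvs_velocity(s, s_star, L, gain=0.5):
        """Velocity screw (6,) driving the image-feature error toward zero."""
        return -gain * np.linalg.pinv(L) @ (s - s_star)

    s      = np.array([0.12, -0.05, 0.30, 0.08])            # current features (illustrative)
    s_star = np.array([0.00,  0.00, 0.25, 0.00])            # desired features
    L      = np.random.default_rng(0).normal(size=(4, 6))   # placeholder interaction matrix
    print(ibvs_velocity(s, s_star, L))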
Goodale, M A; Murison, R C
1975-05-02
The effects of bilateral removal of the superior colliculus or visual cortex on visually guided locomotor movements in rats performing a brightness discrimination task were investigated directly with the use of cine film. Rats with collicular lesions showed patterns of locomotion comparable to or more efficient than those of normal animals when approaching one of 5 small doors located at one end of a large open area. In contrast, animals with large but incomplete lesions of visual cortex were distinctly impaired in their visual control of approach responses to the same stimuli. On the other hand, rats with collicular damage showed no orienting reflex or evidence of distraction in the same task when novel visual or auditory stimuli were presented. However, both normal and visual-decorticate rats showed various components of the orienting reflex and disturbance in task performance when the same novel stimuli were presented. These results suggest that although the superior colliculus does not appear to be essential to the visual control of locomotor orientation, this midbrain structure might participate in the mediation of shifts in visual fixation and attention. Visual cortex, while contributing to visuospatial guidance of locomotor movements, might not play a significant role in the control and integration of the orienting reflex.
A conditioned visual orientation requires the ellipsoid body in Drosophila
Guo, Chao; Du, Yifei; Yuan, Deliang; Li, Meixia; Gong, Haiyun; Gong, Zhefeng
2015-01-01
Orientation, the spatial organization of animal behavior, is an essential faculty of animals. Bacteria and lower animals such as insects exhibit taxis, innate orientation behavior, directly toward or away from a directional cue. Organisms can also orient themselves at a specific angle relative to the cues. In this study, using Drosophila as a model system, we established a visual orientation conditioning paradigm based on a flight simulator in which a stationary flying fly could control the rotation of a visual object. By coupling aversive heat shocks to a fly's orientation toward one side of the visual object, we found that the fly could be conditioned to orientate toward the left or right side of the frontal visual object and retain this conditioned visual orientation. The lower and upper visual fields have different roles in conditioned visual orientation. Transfer experiments showed that conditioned visual orientation could generalize between visual targets of different sizes, compactness, or vertical positions, but not of contour orientation. Rut—Type I adenylyl cyclase and Dnc—phosphodiesterase were dispensable for visual orientation conditioning. Normal activity and scb signaling in R3/R4d neurons of the ellipsoid body were required for visual orientation conditioning. Our studies established a visual orientation conditioning paradigm and examined the behavioral properties and neural circuitry of visual orientation, an important component of the insect's spatial navigation. PMID:25512578
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Liston, Dorion B.
2011-01-01
Visual motion and other visual cues are used by tower controllers to provide important support for their control tasks at and near airports. These cues are particularly important for anticipated separation. Some of them, which we call visual features, have been identified from structured interviews and discussions with 24 active air traffic controllers or supervisors. The visual information that these features provide has been analyzed with respect to possible ways it could be presented at a remote tower that does not allow a direct view of the airport. Two types of remote towers are possible. One could be based on a plan-view, map-like computer-generated display of the airport and its immediate surroundings. An alternative would present a composite perspective view of the airport and its surroundings, possibly provided by an array of radially mounted cameras positioned at the airport in lieu of a tower. An initial, more detailed analysis of one of the specific landing cues identified by the controllers, landing deceleration, is provided as a basis for evaluating how controllers might detect and use it. Understanding other such cues will help identify the information that may be degraded or lost in a remote or virtual tower not located at the airport. Some initial suggestions as to how some of the lost visual information may be presented in displays are mentioned. Many of the cues considered involve visual motion, though some important static cues are also discussed.
Real-time decoding of the direction of covert visuospatial attention
NASA Astrophysics Data System (ADS)
Andersson, Patrik; Ramsey, Nick F.; Raemaekers, Mathijs; Viergever, Max A.; Pluim, Josien P. W.
2012-08-01
Brain-computer interfaces (BCIs) make it possible to translate a person’s intentions into actions without depending on the muscular system. Brain activity is measured and classified into commands, thereby creating a direct link between the mind and the environment, enabling, e.g., cursor control or navigation of a wheelchair or robot. Most BCI research is conducted with scalp EEG but recent developments move toward intracranial electrodes for paralyzed people. The vast majority of BCI studies focus on the motor system as the appropriate target for recording and decoding movement intentions. However, properties of the visual system may make the visual system an attractive and intuitive alternative. We report on a study investigating feasibility of decoding covert visuospatial attention in real time, exploiting the full potential of a 7 T MRI scanner to obtain the necessary signal quality, capitalizing on earlier fMRI studies indicating that covert visuospatial attention changes activity in the visual areas that respond to stimuli presented in the attended area of the visual field. Healthy volunteers were instructed to shift their attention from the center of the screen to one of four static targets in the periphery, without moving their eyes from the center. During the first part of the fMRI-run, the relevant brain regions were located using incremental statistical analysis. During the second part, the activity in these regions was extracted and classified, and the subject was given visual feedback of the result. Performance was assessed as the number of trials where the real-time classifier correctly identified the direction of attention. On average, 80% of trials were correctly classified (chance level <25%) based on a single image volume, indicating very high decoding performance. While we restricted the experiment to five attention target regions (four peripheral and one central), the number of directions can be higher provided the brain activity patterns can be distinguished. In summary, the visual system promises to be an effective target for BCI control.
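A toy version of the decoding step, using synthetic data rather than the study's incremental GLM-based localization and its actual classifier, would classify the attended direction from a vector of region-of-interest activations on each volume.

    # Synthetic illustration of direction-of-attention decoding from ROI patterns.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_trials, n_rois = 200, 8
    directions = rng.integers(0, 4, size=n_trials)         # four peripheral targets
    patterns = rng.normal(size=(n_trials, n_rois))
    patterns[np.arange(n_trials), directions * 2] += 1.5    # attended ROIs respond more

    clf = LogisticRegression(max_iter=1000).fit(patterns[:150], directions[:150])
    print("held-out accuracy:", clf.score(patterns[150:], directions[150:]))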
Effects of continuous visual feedback during sitting balance training in chronic stroke survivors.
Pellegrino, Laura; Giannoni, Psiche; Marinelli, Lucio; Casadio, Maura
2017-10-16
Postural control deficits are common in stroke survivors, and rehabilitation programs often include balance training based on visual feedback to improve the control of body position or of the voluntary shift of body weight in space. In the present work, a group of chronic stroke survivors, while sitting on a force plate, exercised the ability to control their Center of Pressure with a training based on continuous visual feedback. The goal of this study was to test if and to what extent chronic stroke survivors were able to learn the task and transfer the learned ability to a condition without visual feedback and to directions and displacement amplitudes different from those experienced during training. Eleven chronic stroke survivors (5 male, 6 female; age 59.72 ± 12.84 years) participated in this study. Subjects were seated on a stool positioned on top of a custom-built force platform. Their Center of Pressure positions were mapped to the coordinates of a cursor on a computer monitor. During training, the cursor position was always displayed and the subjects were to reach targets by shifting their Center of Pressure by moving their trunk. Pre- and post-training, subjects were required to reach the training targets, as well as other targets positioned at different directions and displacement amplitudes, without visual feedback of the cursor. During training, most stroke survivors were able to perform the required task and to improve their performance in terms of duration, smoothness, and movement extent, although not in terms of movement direction. However, when we removed the visual feedback, most of them had no improvement with respect to their pre-training performance. This study suggests that postural training based exclusively on continuous visual feedback can provide limited benefits for stroke survivors, if administered alone. However, the positive gains observed during training justify the integration of this technology-based protocol in a well-structured and personalized physiotherapy training, where the combination of the two approaches may lead to functional recovery.
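The feedback mapping described above can be sketched as a simple scaling of force-plate centre-of-pressure coordinates to screen pixels; the gain and screen size are illustrative assumptions, not the study's calibration.

    # Illustrative mapping from center of pressure (cm) to cursor position (pixels).
    def cop_to_cursor(cop_ml_cm, cop_ap_cm, gain=20.0, screen=(1920, 1080)):
        x = screen[0] / 2 + gain * cop_ml_cm   # medial-lateral sway -> horizontal pixels
        y = screen[1] / 2 - gain * cop_ap_cm   # anterior-posterior sway -> vertical pixels
        return x, y

    print(cop_to_cursor(2.5, -1.0))            # (1010.0, 560.0)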
A computer simulation experiment of supervisory control of remote manipulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Mccandlish, S. G.
1966-01-01
A computer simulation of a remote manipulation task and a rate-controlled manipulator is described. Some low-level automatic decision making ability which could be used at the operator's discretion to augment his direct continuous control was built into the manipulator. Experiments were made on the effect of transmission delay, dynamic lag, and intermittent vision on human manipulative ability. Delay does not make remote manipulation impossible. Intermittent visual feedback, and the absence of rate information in the display presented to the operator do not seem to impair the operator's performance. A small-capacity visual feedback channel may be sufficient for remote manipulation tasks, or one channel might be time-shared between several operators. In other experiments the operator called in sequence various on-site automatic control programs of the machine, and thereby acted as a supervisor. The supervisory mode of operation has some advantages when the task to be performed is difficult for a human controlling directly.
Zobor, Ditta; Strasser, Torsten; Zobor, Gergely; Schober, Franziska; Messias, Andre; Strauss, Olaf; Batra, Anil; Zrenner, Eberhart
2015-04-01
Cannabis is a psychotomimetic agent that induces impairment of sensory perception. We present detailed clinical and electrophysiological data of patients with hallucinogen persisting perception disorder (HPPD) after marijuana consumption. An HPPD patient and four heavy cannabis smokers with no visual disturbances (controls) underwent complete ophthalmological examination including psychophysical tests (visual acuity, color vision, visual field, and dark adaptation) and detailed electrophysiological examinations, including extended Ganzfeld ERG, multifocal ERG, and electrooculography (EOG). Furthermore, electrically evoked phosphene thresholds (EPTs) were measured to further evaluate retinal function. Ophthalmological and most electrophysiological examinations were within normal limits for the HPPD patient and for all control subjects. Interestingly, EOG results of the HPPD patient showed a slightly reduced fast oscillation ratio, diminished standing potentials of the slow oscillations, and a light peak within normal range resulting in higher Arden ratios. The EPTs of the patient were reduced, in particular for pulses with long durations (50 ms), causing visual sensations even at the lowest possible currents of the neurostimulator. The control subjects did not reveal such alterations. Our findings suggest a direct effect of cannabinoids on the retina and retinal pigment epithelium function, which may be involved in disturbances of the visual function experienced after drug consumption. The observations presented here may contribute to the elucidation of the detailed mechanism. Furthermore, EOG and EPT measurements may be useful tools to demonstrate long-term retinal alterations in cannabis-induced HPPD in patients.
Effects of Optical Pitch on Oculomotor Control and the Perception of Target Elevation
NASA Technical Reports Server (NTRS)
Cohen, Malcom M.; Ebenholtz, Sheldon M.; Linder, Barry J.
1995-01-01
In two experiments, we used an ISCAN infrared video system to examine the influence of a pitched visual array on gaze elevation and on judgments of visually perceived eye level. In Experiment 1, subjects attempted to direct their gaze to a relaxed or to a horizontal orientation while they were seated in a room whose walls were pitched at various angles with respect to gravity. Gaze elevation was biased in the direction in which the room was pitched. In Experiment 2, subjects looked into a small box that was pitched at various angles while they attempted simply to direct their gaze alone, or to direct their gaze and place a visual target at their apparent horizon. Both gaze elevation and target settings varied systematically with the pitch orientation of the box. Our results suggest that under these conditions, an optostatic response, of which the subject is unaware, is responsible for the changes in both gaze elevation and judgments of target elevation.
The impact of visual gaze direction on auditory object tracking.
Pomper, Ulrich; Chait, Maria
2017-07-05
Subjective experience suggests that we are able to direct our auditory attention independently of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
Wang, Hao; Crewther, Sheila G.; Liang, Minglong; Laycock, Robin; Yu, Tao; Alexander, Bonnie; Crewther, David P.; Wang, Jian; Yin, Zhengqin
2017-01-01
Strabismic amblyopia is now acknowledged to be more than a simple loss of acuity and to involve alterations in visually driven attention, though whether this applies to both stimulus-driven and goal-directed attention has not been explored. Hence we investigated monocular threshold performance during a motion salience-driven attention task involving detection of a coherent dot motion target in one of four quadrants in adult controls and those with strabismic amblyopia. Psychophysical motion thresholds were impaired for the strabismic amblyopic eye, requiring longer inspection time and consequently slower target speed for detection compared to the fellow eye or control eyes. We compared fMRI activation and functional connectivity between four ROIs of the occipital-parieto-frontal visual attention network [primary visual cortex (V1), motion sensitive area V5, intraparietal sulcus (IPS) and frontal eye fields (FEF)], during a suprathreshold version of the motion-driven attention task, and also a simple goal-directed task, requiring voluntary saccades to targets randomly appearing along a horizontal line. Activation was compared when viewed monocularly by controls and the amblyopic and its fellow eye in strabismics. BOLD activation was weaker in IPS, FEF and V5 for both tasks when viewing through the amblyopic eye compared to viewing through the fellow eye or control participants' non-dominant eye. No difference in V1 activation was seen between the amblyopic and fellow eye, nor between the two eyes of control participants during the motion salience task, though V1 activation was significantly less through the amblyopic eye than through the fellow eye and control group non-dominant eye viewing during the voluntary saccade task. Functional correlations of ROIs within the attention network were impaired through the amblyopic eye during the motion salience task, whereas this was not the case during the voluntary saccade task. Specifically, FEF showed reduced functional connectivity with visual cortical nodes during the motion salience task through the amblyopic eye, despite suprathreshold detection performance. This suggests that the reduced ability of the amblyopic eye to activate the frontal components of the attention networks may help explain the aberrant control of visual attention and eye movements in amblyopes. PMID:28484381
Top-down alpha oscillatory network interactions during visuospatial attention orienting.
Doesburg, Sam M; Bedo, Nicolas; Ward, Lawrence M
2016-05-15
Neuroimaging and lesion studies indicate that visual attention is controlled by a distributed network of brain areas. The covert control of visuospatial attention has also been associated with retinotopic modulation of alpha-band oscillations within early visual cortex, which are thought to underlie inhibition of ignored areas of visual space. The relation between distributed networks mediating attention control and more focal oscillatory mechanisms, however, remains unclear. The present study evaluated the hypothesis that alpha-band, directed, network interactions within the attention control network are systematically modulated by the locus of visuospatial attention. We localized brain areas involved in visuospatial attention orienting using magnetoencephalographic (MEG) imaging and investigated alpha-band Granger-causal interactions among activated regions using narrow-band transfer entropy. The deployment of attention to one side of visual space was indexed by lateralization of alpha power changes between about 400ms and 700ms post-cue onset. The changes in alpha power were associated, in the same time period, with lateralization of anterior-to-posterior information flow in the alpha-band from various brain areas involved in attention control, including the anterior cingulate cortex, left middle and inferior frontal gyri, left superior temporal gyrus, and right insula, and inferior parietal lobule, to early visual areas. We interpreted these results to indicate that distributed network interactions mediated by alpha oscillations exert top-down influences on early visual cortex to modulate inhibition of processing for ignored areas of visual space. Copyright © 2016. Published by Elsevier Inc.
Soto, David; Greene, Ciara M; Kiyonaga, Anastasia; Rosenthal, Clive R; Egner, Tobias
2012-12-05
The contents of working memory (WM) can both aid and disrupt the goal-directed allocation of visual attention. WM benefits attention when its contents overlap with goal-relevant stimulus features, but WM leads attention astray when its contents match features of currently irrelevant stimuli. Recent behavioral data have documented that WM biases of attention may be subject to strategic cognitive control processes whereby subjects are able to either enhance or inhibit the influence of WM contents on attention. However, the neural mechanisms supporting cognitive control over WM biases on attention are presently unknown. Here, we characterize these mechanisms by combining human functional magnetic resonance imaging with a task that independently manipulates the relationship between WM cues and attention targets during visual search (with WM contents matching either search targets or distracters), as well as the predictability of this relationship (100 vs 50% predictability) to assess participants' ability to strategically enhance or inhibit WM biases on attention when WM contents reliably matched targets or distracter stimuli, respectively. We show that cues signaling predictable (> unpredictable) WM-attention relations reliably enhanced search performance, and that this strategic modulation of the interplay between WM contents and visual attention was mediated by a neuroanatomical network involving the posterior parietal cortex, the posterior cingulate, and medial temporal lobe structures, with responses in the hippocampus proper correlating with behavioral measures of strategic control of WM biases. Thus, we delineate a novel parieto-medial temporal pathway implementing cognitive control over WM biases to optimize goal-directed selection.
Through the eyes of a bird: modelling visually guided obstacle flight
Lin, Huai-Ti; Ros, Ivo G.; Biewener, Andrew A.
2014-01-01
Various flight navigation strategies for birds have been identified at the large spatial scales of migratory and homing behaviours. However, relatively little is known about close-range obstacle negotiation through cluttered environments. To examine obstacle flight guidance, we tracked pigeons (Columba livia) flying through an artificial forest of vertical poles. Interestingly, pigeons adjusted their flight path only approximately 1.5 m from the forest entry, suggesting a reactive mode of path planning. Combining flight trajectories with obstacle pole positions, we reconstructed the visual experience of the pigeons throughout obstacle flights. Assuming proportional–derivative control with a constant delay, we searched the relevant parameter space of steering gains and visuomotor delays that best explained the observed steering. We found that a pigeon's steering resembles proportional control driven by the error angle between the flight direction and the desired opening, or gap, between obstacles. Using this pigeon steering controller, we simulated obstacle flights and showed that pigeons do not simply steer to the nearest opening in the direction of flight or destination. Pigeons bias their flight direction towards larger visual gaps when making fast steering decisions. The proposed behavioural modelling method converts the obstacle avoidance behaviour into a (piecewise) target-aiming behaviour, which is better defined and understood. This study demonstrates how such an approach decomposes open-loop free-flight behaviours into components that can be independently evaluated. PMID:24812052
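The steering model described above, proportional(-derivative) control on the error angle between the flight direction and the chosen gap with a fixed visuomotor delay, can be sketched as a small discrete-time simulation. The gains, delay, speed and gap position below are arbitrary illustrations, not the fitted pigeon parameters:

```python
# Sketch of a delayed proportional-derivative steering law driven by the error angle
# between the current flight direction and the desired gap between obstacles.
# Gains, delay and geometry are illustrative, not the fitted pigeon values.
import numpy as np

dt = 0.01          # s, simulation step
kp, kd = 4.0, 0.5  # assumed proportional and derivative gains
lag = 10           # steps of visuomotor delay (10 * dt = 0.1 s, assumed)
speed = 5.0        # m/s, forward flight speed
gap = np.array([2.0, 10.0])   # x, y position of the desired opening (arbitrary)

pos = np.array([0.0, 0.0])
heading = 0.0                 # rad, measured from the +x axis
errors = []                   # error-angle history, needed to apply the delay

for step in range(400):
    to_gap = gap - pos
    if np.linalg.norm(to_gap) < 0.2:          # stop once the gap is reached
        break
    desired = np.arctan2(to_gap[1], to_gap[0])
    err = (desired - heading + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    errors.append(err)

    if step > lag:                            # act on the error observed `lag` steps ago
        e = errors[step - lag]
        e_dot = (e - errors[step - lag - 1]) / dt
        turn_rate = kp * e + kd * e_dot       # rad/s steering command
    else:
        turn_rate = 0.0

    heading += turn_rate * dt
    pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])

print(pos)   # the trajectory ends near the chosen gap
```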
Top-Down Beta Enhances Bottom-Up Gamma
Thompson, William H.
2017-01-01
Several recent studies have demonstrated that the bottom-up signaling of a visual stimulus is subserved by interareal gamma-band synchronization, whereas top-down influences are mediated by alpha-beta band synchronization. These processes may implement top-down control of stimulus processing if top-down and bottom-up mediating rhythms are coupled via cross-frequency interaction. To test this possibility, we investigated Granger-causal influences among awake macaque primary visual area V1, higher visual area V4, and parietal control area 7a during attentional task performance. Top-down 7a-to-V1 beta-band influences enhanced visually driven V1-to-V4 gamma-band influences. This enhancement was spatially specific and largest when beta-band activity preceded gamma-band activity by ∼0.1 s, suggesting a causal effect of top-down processes on bottom-up processes. We propose that this cross-frequency interaction mechanistically subserves the attentional control of stimulus selection. SIGNIFICANCE STATEMENT Contemporary research indicates that the alpha-beta frequency band underlies top-down control, whereas the gamma-band mediates bottom-up stimulus processing. This arrangement inspires an attractive hypothesis, which posits that top-down beta-band influences directly modulate bottom-up gamma band influences via cross-frequency interaction. We evaluate this hypothesis determining that beta-band top-down influences from parietal area 7a to visual area V1 are correlated with bottom-up gamma frequency influences from V1 to area V4, in a spatially specific manner, and that this correlation is maximal when top-down activity precedes bottom-up activity. These results show that for top-down processes such as spatial attention, elevated top-down beta-band influences directly enhance feedforward stimulus-induced gamma-band processing, leading to enhancement of the selected stimulus. PMID:28592697
Miall, R Chris; Kitchen, Nick M; Nam, Se-Ho; Lefumat, Hannah; Renault, Alix G; Ørstavik, Kristin; Cole, Jonathan D; Sarlegna, Fabrice R
2018-05-19
It is uncertain how vision and proprioception contribute to adaptation of voluntary arm movements. In normal participants, adaptation to imposed forces is possible with or without vision, suggesting that proprioception is sufficient; in participants with proprioceptive loss (PL), adaptation is possible with visual feedback, suggesting that proprioception is unnecessary. In experiment 1 adaptation to, and retention of, perturbing forces were evaluated in three chronically deafferented participants. They made rapid reaching movements to move a cursor toward a visual target, and a planar robot arm applied orthogonal velocity-dependent forces. Trial-by-trial error correction was observed in all participants. Such adaptation has been characterized with a dual-rate model: a fast process that learns quickly, but retains poorly and a slow process that learns slowly and retains well. Experiment 2 showed that the PL participants had large individual differences in learning and retention rates compared to normal controls. Experiment 3 tested participants' perception of applied forces. With visual feedback, the PL participants could report the perturbation's direction as well as controls; without visual feedback, thresholds were elevated. Experiment 4 showed, in healthy participants, that force direction could be estimated from head motion, at levels close to the no-vision threshold for the PL participants. Our results show that proprioceptive loss influences perception, motor control and adaptation but that proprioception from the moving limb is not essential for adaptation to, or detection of, force fields. The differences in learning and retention seen between the three deafferented participants suggest that they achieve these tasks in idiosyncratic ways after proprioceptive loss, possibly integrating visual and vestibular information with individual cognitive strategies.
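The dual-rate characterization mentioned here is usually written as two parallel states driven by the same movement error, one learning quickly but retaining poorly and one learning slowly but retaining well, with their sum giving the net compensation. A minimal sketch with illustrative retention/learning rates (not the participants' fitted values):

```python
# Minimal dual-rate state-space model of trial-by-trial force-field adaptation.
# Retention (A) and learning (B) rates below are illustrative, not fitted values.
import numpy as np

A_fast, B_fast = 0.60, 0.40   # fast process: retains poorly, learns quickly
A_slow, B_slow = 0.99, 0.03   # slow process: retains well, learns slowly

n_trials = 100
perturbation = 1.0            # normalized force-field strength
x_fast = x_slow = 0.0
net_output = np.zeros(n_trials)

for n in range(n_trials):
    net = x_fast + x_slow             # total adaptive compensation on this trial
    error = perturbation - net        # residual movement error driving both processes
    x_fast = A_fast * x_fast + B_fast * error
    x_slow = A_slow * x_slow + B_slow * error
    net_output[n] = net

print(net_output[:5], net_output[-1])   # rapid early learning, near-complete late compensation
```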
Ogourtsova, Tatiana; Archambault, Philippe S; Lamontagne, Anouk
2018-01-01
Unilateral spatial neglect (USN), a highly prevalent and disabling post-stroke deficit, has been shown to affect the recovery of locomotion. However, our current understanding of the role of USN in goal-directed locomotion control, particularly under the different cognitive/perceptual conditions that tap into daily life demands, is limited. The aim was to examine goal-directed locomotion abilities in individuals with and without post-stroke USN vs. healthy controls. Participants (n = 45, n = 15 per group) performed goal-directed locomotion trials to actual, remembered and shifting targets located 7 m away at 0° and 15° right/left while immersed in a 3-D virtual environment. Greater end-point mediolateral displacement and heading errors (end-point accuracy measures) were found for the actual and the remembered left and right targets among those with post-stroke USN compared to the two other groups (p < 0.05). A delayed onset of reorientation to the left and right shifting targets was also observed in USN+ participants vs. the other two groups (p < 0.05). Results on clinical near-space USN assessment and walking speed explained only a third of the variance in goal-directed walking performance. Post-stroke USN was found to affect goal-directed locomotion in different perceptuo-cognitive conditions, both to contralesional and ipsilesional targets, demonstrating the presence of lateralized and non-lateralized deficits. Beyond neglect severity and walking capacity, other factors related to attention, executive functioning and higher-order visual perceptual abilities (e.g. optic flow perception) may account for the goal-directed walking deficits observed in post-stroke USN+. Goal-directed locomotion can be explored in the design of future VR-based evaluation and training tools for USN to improve the currently used conventional methods.
Takeda, Kenta; Mani, Hiroki; Hasegawa, Naoya; Sato, Yuki; Tanaka, Shintaro; Maejima, Hiroshi; Asaka, Tadayoshi
2017-07-19
The benefit of visual feedback of the center of pressure (COP) on quiet standing is still debatable. This study aimed to investigate the adaptation effects of visual feedback training using both the COP and center of gravity (COG) during quiet standing. Thirty-four healthy young adults were divided into three groups randomly (COP + COG, COP, and control groups). A force plate was used to calculate the coordinates of the COP in the anteroposterior (COP_AP) and mediolateral (COP_ML) directions. A motion analysis system was used to calculate the coordinates of the center of mass (COM) in both directions (COM_AP and COM_ML). The coordinates of the COG in the AP direction (COG_AP) were obtained from the force plate signals. Augmented visual feedback was presented on a screen in the form of fluctuation circles in the vertical direction that moved upward as the COP_AP and/or COG_AP moved forward and vice versa. The COP + COG group received the real-time COP_AP and COG_AP feedback simultaneously, whereas the COP group received the real-time COP_AP feedback only. The control group received no visual feedback. In the training session, the COP + COG group was required to maintain an even distance between the COP_AP and COG_AP and reduce the COG_AP fluctuation, whereas the COP group was required to reduce the COP_AP fluctuation while standing on a foam pad. In test sessions, participants were instructed to keep their standing posture as quiet as possible on the foam pad before (pre-session) and after (post-session) the training sessions. In the post-session, the velocity and root mean square of COM_AP in the COP + COG group were lower than those in the control group. In addition, the absolute value of the sum of the COP-COM distances in the COP + COG group was lower than that in the COP group. Furthermore, positive correlations were found between the COM_AP velocity and COP-COM parameters. The results suggest that the novel visual feedback training that incorporates the COP_AP-COG_AP interaction reduces postural sway better than the training using the COP_AP alone during quiet standing. That is, even COP_AP fluctuation around the COG_AP would be effective in reducing the COM_AP velocity.
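A minimal sketch of the two computational pieces described above: obtaining the anteroposterior COP from force-plate signals and mapping its forward excursion to the vertical position of the on-screen feedback circle. The axis/sign convention and the screen gain are assumptions for illustration, not the study's implementation:

```python
# Sketch: anteroposterior COP from force-plate signals and its mapping to the vertical
# position of a feedback circle. Axis conventions and the gain are assumed, not the
# study's actual implementation.
import numpy as np

def cop_ap(my: np.ndarray, fz: np.ndarray) -> np.ndarray:
    """COP along the AP axis, assuming moments about an origin at the plate surface:
    COP_AP = -My / Fz (the sign depends on the plate's axis convention)."""
    return -my / fz

def feedback_y(cop_ap_m: np.ndarray, gain_px_per_m: float = 2000.0) -> np.ndarray:
    """Map forward COP excursion (m) to upward movement of the on-screen circle (pixels)."""
    return gain_px_per_m * cop_ap_m

fz = np.full(5, 600.0)                         # N, vertical load (placeholder)
my = np.array([-6.0, -3.0, 0.0, 3.0, 6.0])     # N*m, moment about the ML axis (placeholder)
print(feedback_y(cop_ap(my, fz)))              # the circle moves up as COP_AP moves forward
```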
Do Young Infants Prefer an Infant-Directed Face or a Happy Face?
ERIC Educational Resources Information Center
Kim, Hojin I.; Johnson, Scott P.
2013-01-01
Infants' visual preference for infant-directed (ID) faces over adult-directed (AD) faces was examined in two experiments that introduced controls for emotion. Infants' eye movements were recorded as they viewed a series of side-by-side dynamic faces. When emotion was held constant, 6-month-old infants showed no preference for ID faces over AD…
Direct Manipulation in Virtual Reality
NASA Technical Reports Server (NTRS)
Bryson, Steve
2003-01-01
Virtual Reality interfaces offer several advantages for scientific visualization such as the ability to perceive three-dimensional data structures in a natural way. The focus of this chapter is direct manipulation, the ability for a user in virtual reality to control objects in the virtual environment in a direct and natural way, much as objects are manipulated in the real world. Direct manipulation provides many advantages for the exploration of complex, multi-dimensional data sets, by allowing the investigator the ability to intuitively explore the data environment. Because direct manipulation is essentially a control interface, it is better suited for the exploration and analysis of a data set than for the publishing or communication of features found in that data set. Thus direct manipulation is most relevant to the analysis of complex data that fills a volume of three-dimensional space, such as a fluid flow data set. Direct manipulation allows the intuitive exploration of that data, which facilitates the discovery of data features that would be difficult to find using more conventional visualization methods. Using a direct manipulation interface in virtual reality, an investigator can, for example, move a data probe about in space, watching the results and getting a sense of how the data varies within its spatial volume.
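The core of such a data probe is resampling a gridded field at whatever position the user's hand currently occupies, every frame. A hypothetical sketch with a synthetic velocity field (the grid, field and probe position are placeholders, not NASA data or the chapter's implementation):

```python
# Sketch of a "data probe": sample a gridded 3-D flow field at an arbitrary probe
# position via trilinear interpolation. Grid, field and probe position are synthetic.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

axis = np.linspace(0.0, 1.0, 16)
gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
u = np.sin(2 * np.pi * gy)            # synthetic velocity components on a 16^3 grid
v = np.cos(2 * np.pi * gz)
w = 0.1 * np.ones_like(gx)

# one trilinear interpolator per velocity component
interps = [RegularGridInterpolator((axis, axis, axis), comp) for comp in (u, v, w)]

def sample(position):
    """Local velocity (u, v, w) at the probe position inside the unit cube."""
    pt = np.atleast_2d(position)
    return np.array([float(f(pt)[0]) for f in interps])

# re-evaluated each frame as the user drags the probe through the volume
print(sample([0.37, 0.52, 0.81]))
```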
Visual attention in violent offenders: Susceptibility to distraction.
Slotboom, Jantine; Hoppenbrouwers, Sylco S; Bouman, Yvonne H A; In 't Hout, Willem; Sergiou, Carmen; van der Stigchel, Stefan; Theeuwes, Jan
2017-05-01
Impairments in executive functioning give rise to reduced control of behavior and impulses, and are therefore a risk factor for violence and criminal behavior. However, the contribution of specific underlying processes remains unclear. A crucial element of executive functioning, and essential for cognitive control and goal-directed behavior, is visual attention. To further elucidate the importance of attentional functioning in the general offender population, we employed an attentional capture task to measure visual attention. We expected offenders to have impaired visual attention, as revealed by increased attentional capture, compared to healthy controls. When comparing the performance of 62 offenders to 69 healthy community controls, we found our hypothesis to be partly confirmed. Offenders were more accurate overall, more accurate in the absence of distracting information, suggesting superior attention. In the presence of distracting information offenders were significantly less accurate compared to when no distracting information was present. Together, these findings indicate that violent offenders may have superior attention, yet worse control over attention. As such, violent offenders may have trouble adjusting to unexpected, irrelevant stimuli, which may relate to failures in self-regulation and inhibitory control. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Action video game training reduces the Simon Effect.
Hutchinson, Claire V; Barrett, Doug J K; Nitka, Aleksander; Raynes, Kerry
2016-04-01
A number of studies have shown that training on action video games improves various aspects of visual cognition including selective attention and inhibitory control. Here, we demonstrate that action video game play can also reduce the Simon Effect, and, hence, may have the potential to improve response selection during the planning and execution of goal-directed action. Non-game-players were randomly assigned to one of four groups; two trained on a first-person-shooter game (Call of Duty) on either Microsoft Xbox or Nintendo DS, one trained on a visual training game for Nintendo DS, and a control group who received no training. Response times were used to contrast performance before and after training on a behavioral assay designed to manipulate stimulus-response compatibility (the Simon Task). The results revealed significantly faster response times and a reduced cost of stimulus-response incompatibility in the groups trained on the first-person-shooter game. No benefit of training was observed in the control group or the group trained on the visual training game. These findings are consistent with previous evidence that action game play elicits plastic changes in the neural circuits that serve attentional control, and suggest training may facilitate goal-directed action by improving players' ability to resolve conflict during response selection and execution.
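The Simon effect itself is quantified as the response-time cost of stimulus-response incompatibility, so the training benefit reported above amounts to a smaller incompatible-minus-compatible difference after training. A minimal sketch (the RT arrays are random placeholders, not the study's data):

```python
# Sketch: the Simon effect as mean RT(incompatible) - mean RT(compatible), pre vs. post
# training. All RT values are random placeholders, not data from the study.
import numpy as np

rng = np.random.default_rng(0)

def simon_effect(rt_compatible: np.ndarray, rt_incompatible: np.ndarray) -> float:
    """Cost of stimulus-response incompatibility, in the same units as the RTs."""
    return float(rt_incompatible.mean() - rt_compatible.mean())

pre_comp, pre_incomp = rng.normal(480, 40, 60), rng.normal(520, 40, 60)      # ms, placeholders
post_comp, post_incomp = rng.normal(450, 40, 60), rng.normal(465, 40, 60)    # ms, placeholders

print("pre-training Simon effect :", simon_effect(pre_comp, pre_incomp))
print("post-training Simon effect:", simon_effect(post_comp, post_incomp))   # smaller after training
```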
Green, Jessica J; Boehler, Carsten N; Roberts, Kenneth C; Chen, Ling-Chia; Krebs, Ruth M; Song, Allen W; Woldorff, Marty G
2017-08-16
Visual spatial attention has been studied in humans with both electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) individually. However, due to the intrinsic limitations of each of these methods used alone, our understanding of the systems-level mechanisms underlying attentional control remains limited. Here, we examined trial-to-trial covariations of concurrently recorded EEG and fMRI in a cued visual spatial attention task in humans, which allowed delineation of both the generators and modulators of the cue-triggered event-related oscillatory brain activity underlying attentional control function. The fMRI activity in visual cortical regions contralateral to the cued direction of attention covaried positively with occipital gamma-band EEG, consistent with activation of cortical regions representing attended locations in space. In contrast, fMRI activity in ipsilateral visual cortical regions covaried inversely with occipital alpha-band oscillations, consistent with attention-related suppression of the irrelevant hemispace. Moreover, the pulvinar nucleus of the thalamus covaried with both of these spatially specific, attention-related, oscillatory EEG modulations. Because the pulvinar's neuroanatomical geometry makes it unlikely to be a direct generator of the scalp-recorded EEG, these covariational patterns appear to reflect the pulvinar's role as a regulatory control structure, sending spatially specific signals to modulate visual cortex excitability proactively. Together, these combined EEG/fMRI results illuminate the dynamically interacting cortical and subcortical processes underlying spatial attention, providing important insight not realizable using either method alone. SIGNIFICANCE STATEMENT Noninvasive recordings of changes in the brain's blood flow using functional magnetic resonance imaging and electrical activity using electroencephalography in humans have individually shown that shifting attention to a location in space produces spatially specific changes in visual cortex activity in anticipation of a stimulus. The mechanisms controlling these attention-related modulations of sensory cortex, however, are poorly understood. Here, we recorded these two complementary measures of brain activity simultaneously and examined their trial-to-trial covariations to gain insight into these attentional control mechanisms. This multi-methodological approach revealed the attention-related coordination of visual cortex modulation by the subcortical pulvinar nucleus of the thalamus while also disentangling the mechanisms underlying the attentional enhancement of relevant stimulus input and those underlying the concurrent suppression of irrelevant input. Copyright © 2017 the authors 0270-6474/17/377803-08$15.00/0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balakumar, B J; Chavez-Alarcon, Ramiro; Shu, Fangjun
The aerodynamics of a flight-worthy, radio controlled ornithopter is investigated using a combination of Particle-Image Velocimetry (PIV), load cell measurements, and high-speed photography of smoke visualizations. The lift and thrust forces of the ornithopter are measured at various flow speeds, flapping frequencies and angles of attack to characterize the flight performance. These direct force measurements are then compared with forces estimated using control volume analysis on PIV data. High-speed photography of smoke streaks is used to visualize the evolution of leading edge vortices, and to qualitatively infer the effect of wing deformation on the net downwash. Vortical structures in the wake are compared to previous studies on root flapping, and direct measurements of flapping efficiency are used to argue that the current ornithopter operates sub-optimally in converting the input energy into propulsive work.
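One common way to estimate net streamwise force from PIV data is a wake-survey control-volume analysis based on momentum flux; a heavily simplified two-dimensional sketch, per unit span, assuming steady incompressible flow and freestream pressure at the survey plane (the velocity profile below is synthetic, not the ornithopter measurements):

```python
# Simplified 2-D wake-survey estimate of net streamwise force per unit span:
# T' = rho * integral of u * (u - U_inf) dy across the wake plane, neglecting pressure
# and unsteady terms. The velocity profile is synthetic, not the measured PIV data.
import numpy as np

rho = 1.2                                           # kg/m^3, air density (assumed)
u_inf = 4.0                                         # m/s, freestream speed (assumed)
y = np.linspace(-0.3, 0.3, 121)                     # m, traverse across the wake plane
u = u_inf + 1.5 * np.exp(-(y / 0.08) ** 2)          # m/s, synthetic jet-like velocity excess

dy = y[1] - y[0]
thrust_per_span = rho * np.sum(u * (u - u_inf)) * dy   # N/m; positive = net thrust
print(thrust_per_span)
```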
Takao, Saki; Yamani, Yusuke; Ariga, Atsunori
2018-01-01
The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed as the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals. PMID:29379457
Visualization of Electrical Field of Electrode Using Voltage-Controlled Fluorescence Release
Jia, Wenyan; Wu, Jiamin; Gao, Di; Wang, Hao; Sun, Mingui
2016-01-01
In this study we propose an approach to directly visualize electrical current distribution at the electrode-electrolyte interface of a biopotential electrode. High-speed fluorescent microscopic images are acquired when an electric potential is applied across the interface to trigger the release of fluorescent material from the surface of the electrode. These images are analyzed computationally to obtain the distribution of the electric field from the fluorescent intensity of each pixel. Our approach allows direct observation of microscopic electrical current distribution around the electrode. Experiments are conducted to validate the feasibility of the fluorescent imaging method. PMID:27253615
Default Mode Network (DMN) Deactivation during Odor-Visual Association
Karunanayaka, Prasanna R.; Wilson, Donald A.; Tobia, Michael J.; Martinez, Brittany; Meadowcroft, Mark; Eslinger, Paul J.; Yang, Qing X.
2017-01-01
Default mode network (DMN) deactivation has been shown to be functionally relevant for goal-directed cognition. In this study, we investigated the DMN’s role during olfactory processing using two complementary functional magnetic resonance imaging (fMRI) paradigms with identical timing, visual-cue stimulation and response monitoring protocols. Twenty-nine healthy, non-smoking, right-handed adults (mean age = 26±4 yrs., 16 females) completed an odor-visual association fMRI paradigm that had two alternating odor+visual and visual-only trial conditions. During odor+visual trials, a visual cue was presented simultaneously with an odor, while during visual-only trial conditions the same visual cue was presented alone. Eighteen of the 29 participants (mean age = 27.0 ± 6.0 yrs., 11 females) also took part in a control no-odor fMRI paradigm that consisted of visual-only trial conditions which were identical to the visual-only trials in the odor-visual association paradigm. We used Independent Component Analysis (ICA), extended unified structural equation modeling (euSEM), and psychophysiological interaction (PPI) to investigate the interplay between the DMN and olfactory network. In the odor-visual association paradigm, DMN deactivation was evoked by both the odor+visual and visual-only trial conditions. In contrast, the visual-only trials in the no-odor paradigm did not evoke consistent DMN deactivation. In the odor-visual association paradigm, the euSEM and PPI analyses identified a directed connectivity between the DMN and olfactory network which was significantly different between odor+visual and visual-only trial conditions. The results support a strong interaction between the DMN and olfactory network and highlight the DMN’s role in task-evoked brain activity and behavioral responses during olfactory processing. PMID:27785847
Visualization of Stereoselective Supramolecular Polymers by Chirality-Controlled Energy Transfer.
Sarkar, Aritra; Dhiman, Shikha; Chalishazar, Aditya; George, Subi J
2017-10-23
Chirality-driven self-sorting is envisaged to efficiently control functional properties in supramolecular materials. However, the challenge arises because of a lack of analytical methods to directly monitor the enantioselectivity of the resulting supramolecular assemblies. Presented herein are two fluorescent core-substituted naphthalene-diimide-based donor and acceptor molecules with minimal structural mismatch and they comprise strong self-recognizing chiral motifs to determine the self-sorting process. As a consequence, stereoselective supramolecular polymerization with an unprecedented chirality control over energy transfer has been achieved. This chirality-controlled energy transfer has been further exploited as an efficient probe to visualize microscopically the chirality driven self-sorting. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling, and camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers interact only with their neighbours. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
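The pinhole model underlying the visual-servoing formulation maps a robot's planar position into image coordinates through a perspective projection. A minimal sketch for a ceiling camera looking straight down (the intrinsic matrix, camera height and robot pose are assumed illustrative values, not the paper's unknown parameters):

```python
# Minimal pinhole-camera projection of a robot's floor position into image pixels.
# Intrinsics, camera height and the robot position are assumed illustrative values.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],    # fx,  0, cx  (pixels) -- assumed intrinsics
              [  0.0, 800.0, 240.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])

def project(point_floor: np.ndarray, camera_height: float = 3.0) -> np.ndarray:
    """Pixel coordinates of a floor point (x, y, 0) seen by a downward-looking ceiling
    camera mounted `camera_height` metres above the floor."""
    p_cam = np.array([point_floor[0], point_floor[1], camera_height])  # point in camera frame
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]             # perspective divide -> (u, v) in pixels

print(project(np.array([0.5, -0.2])))   # robot at (0.5 m, -0.2 m) on the floor
```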
Piponnier, Jean-Claude; Hanssens, Jean-Marie; Faubert, Jocelyn
2009-01-14
To examine the respective roles of central and peripheral vision in the control of posture, body sway amplitude (BSA) and postural perturbations (given by velocity root mean square or vRMS) were calculated in a group of 19 healthy young adults. The stimulus was a 3D tunnel, either static or moving sinusoidally in the anterior-posterior direction. There were nine visual field conditions: four central conditions (4, 7, 15, and 30 degrees); four peripheral conditions (central occlusions of 4, 7, 15, and 30 degrees); and a full visual field condition (FF). The virtual tunnel respected all the aspects of a real physical tunnel (i.e., stereoscopy and size increase with proximity). The results show that, under static conditions, central and peripheral visual fields appear to have equal importance for the control of stance. In the presence of an optic flow, peripheral vision plays a crucial role in the control of stance, since it is responsible for a compensatory sway, whereas central vision has an accessory role that seems to be related to spatial orientation.
Preisig, Basil C; Eggenberger, Noëmi; Zito, Giuseppe; Vanbellingen, Tim; Schumacher, Rahel; Hopfner, Simone; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Müri, René M
2015-03-01
Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, they prompt as nonverbal cues the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit to integrate audio-visual information may cause aphasic patients to explore less the speaker's face. Copyright © 2014 Elsevier Ltd. All rights reserved.
Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise
2017-01-01
Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information. PMID:28634444
Learning feedback and feedforward control in a mirror-reversed visual environment.
Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi; Diedrichsen, Jörn
2015-10-01
When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relate to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers. Copyright © 2015 the American Physiological Society.
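Feedforward learning in this paradigm is indexed by the accuracy of the initial movement direction, which under left-right mirror reversal must aim at the target reflected about the mirror axis. A small sketch of that measure (the trajectory samples, target position and assumed vertical mirror axis are illustrative, not the study's data):

```python
# Sketch: angular error of the initial movement direction under left-right mirror-reversed
# visual feedback. Trajectory samples and the target location are illustrative placeholders.
import numpy as np

def initial_direction(xy: np.ndarray) -> float:
    """Movement direction (rad) from the start position to an early trajectory sample."""
    vec = xy[-1] - xy[0]
    return np.arctan2(vec[1], vec[0])

def required_hand_direction(target_xy: np.ndarray) -> float:
    """With left-right mirrored feedback, the hand must aim at the target reflected about
    the vertical axis for the cursor to travel toward the actual target (assumed axis)."""
    mirrored = np.array([-target_xy[0], target_xy[1]])
    return np.arctan2(mirrored[1], mirrored[0])

target = np.array([0.08, 0.10])                   # m, a rightward lateral target (assumed)
early_samples = np.array([[0.000, 0.000],         # hand positions over the first ~100 ms
                          [-0.004, 0.006],
                          [-0.009, 0.013]])

err = initial_direction(early_samples) - required_hand_direction(target)
err = (err + np.pi) % (2 * np.pi) - np.pi         # wrap to [-pi, pi)
print(np.degrees(err))                            # small error indicates good feedforward control
```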
Control of a visual keyboard using an electrocorticographic brain-computer interface.
Krusienski, Dean J; Shih, Jerry J
2011-05-01
Brain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG. A total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECOG) signals. ECOG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters. The classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECOG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard. This is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. These results demonstrate that ECOG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.
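At its core, the approach described above scores each post-flash epoch with a trained linear classifier and selects the candidate letter whose flashes yield the highest average score. A toy sketch with a least-squares classifier and random placeholder features (not ECoG data, and not necessarily the exact classifier used in the study):

```python
# Toy sketch of P300-style target prediction with a least-squares linear classifier.
# Epoch features are random placeholders, not ECoG recordings from the study.
import numpy as np

rng = np.random.default_rng(1)
n_features = 40                        # e.g. downsampled post-stimulus samples x channels

# --- training: epochs labeled target (1) vs. non-target (0) --------------------
X_train = rng.normal(size=(300, n_features))
y_train = rng.integers(0, 2, size=300).astype(float)
X_train[y_train == 1] += 0.5           # synthetic "P300-like" offset on target epochs
w, *_ = np.linalg.lstsq(np.c_[X_train, np.ones(len(X_train))], y_train, rcond=None)

def score(epochs: np.ndarray) -> np.ndarray:
    """Linear classifier score for each epoch (higher = more target-like)."""
    return np.c_[epochs, np.ones(len(epochs))] @ w

# --- prediction: average scores over repeated flashes of each candidate letter --
letters = list("ABCDEF")
flash_epochs = {c: rng.normal(size=(10, n_features)) for c in letters}
flash_epochs["D"] += 0.5               # pretend "D" was the attended letter

predicted = max(letters, key=lambda c: score(flash_epochs[c]).mean())
print(predicted)                       # the letter whose flashes look most target-like
```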
Swivel arm perimeter for visual field testing in different body positions.
Flammer, J; Hendrickson, P; Lietz, A; Stümpfig, D
1993-01-01
To investigate the influence of body position on visual field results, a 'swivel arm perimeter' was built, based on a modified Octopus 1-2-3. Here, the measuring unit was detached from the control unit and mounted on a swivel arm, allowing its movement in all directions. The first results obtained with this device have indicated that its development was worthwhile.
Effectiveness of basic display augmentation in vehicular control by visual field cues
NASA Technical Reports Server (NTRS)
Grunwald, A. J.; Merhav, S. J.
1978-01-01
The paper investigates the effectiveness of different basic display augmentation concepts - fixed reticle, velocity vector, and predicted future vehicle path - for RPVs controlled by a vehicle-mounted TV camera. The task is lateral manual control of a low flying RPV along a straight reference line in the presence of random side gusts. The man-machine system and the visual interface are modeled as a linear time-invariant system. Minimization of a quadratic performance criterion is assumed to underlie the control strategy of a well-trained human operator. The solution for the optimal feedback matrix enables the explicit computation of the variances of lateral deviation and directional error of the vehicle and of the control force that are used as performance measures.
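Minimizing a quadratic performance criterion for a linear time-invariant plant is the standard linear-quadratic regulator problem, whose optimal feedback matrix comes from the algebraic Riccati equation. A sketch with a toy lateral-deviation model (the state-space matrices and weights are assumptions for illustration, not the paper's vehicle model):

```python
# Sketch: optimal feedback matrix for a quadratic performance criterion (LQR),
# from the continuous-time algebraic Riccati equation. The toy model (lateral
# deviation and its rate) and the weights are assumed for illustration only.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],      # state: [lateral deviation, lateral velocity]
              [0.0, -0.5]])    # assumed damping on the velocity state
B = np.array([[0.0],
              [1.0]])          # control input drives lateral acceleration
Q = np.diag([10.0, 1.0])       # penalties on deviation and its rate
R = np.array([[0.1]])          # penalty on control effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P           # optimal feedback gains: u = -K x
print(K)
```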
Descending pathways controlling visually guided updating of reaching in cats.
Pettersson, L-G; Perfiliev, S
2002-10-01
This study uses a previously described paradigm (Pettersson et al., 1997) to investigate the ability of cats to change the direction of ongoing reaching when the target is shifted sideways; the effect on the switching latency of spinal cord lesions was investigated. Large ventral lesions transecting the ventral funicle and the ventral half of the lateral funicle gave a 20-30 ms latency prolongation of switching in the medial (right) direction, but less prolongation of switching directed laterally (left), and in one cat the latencies of switching directed laterally were unchanged. It may be inferred that the command for switching in the lateral direction can be mediated by the dorsally located cortico- and rubrospinal tracts whereas the command for short-latency switching in the medial direction is mediated by ventral pathways. A restricted ventral lesion transecting the tectospinal pathway did not change the switching latency. Comparison of different ventral lesions revealed prolongation of the latency if the lesion included a region extending dorsally along the ventral horn and from there ventrally as a vertical strip, so it may be postulated that the command for fast switching, directed medially, is mediated by a reticulospinal pathway within this location. A hypothesis is forwarded suggesting that the visual control is exerted via ponto-cerebellar pathways.
Ogourtsova, Tatiana; Archambault, Philippe S; Lamontagne, Anouk
2018-04-03
Unilateral spatial neglect (USN), a highly prevalent and disabling post-stroke deficit, severely affects functional mobility. Visual perceptual abilities (VPAs) are essential in activities involving mobility. However, whether and to what extent post-stroke USN affects VPAs and how they contribute to mobility impairments remains unclear. To estimate the extent to which VPAs in left and right visual hemispaces are (1) affected in post-stroke USN; and (2) contribute to goal-directed locomotion. Individuals with (USN+, n = 15) and without (USN-, n = 15) post-stroke USN and healthy controls (HC, n = 15) completed (1) psychophysical evaluation of contrast sensitivity, optic flow direction and coherence, and shape discrimination; and (2) goal-directed locomotion tasks. Higher discrimination thresholds were found for all VPAs in the USN+ group compared to USN- and HC groups (p < 0.05). Psychophysical tests showed high sensitivity in detecting deficits in individuals with a history of USN or with no USN on traditional assessments, and were found to be significantly correlated with goal-directed locomotor impairments. Deficits in VPAs may account for the functional difficulties experienced by individuals with post-stroke USN. Psychophysical tests used in the present study offer important advantages and can be implemented to enhance USN diagnostics and rehabilitation.
(Con)text-specific effects of visual dysfunction on reading in posterior cortical atrophy.
Yong, Keir X X; Shakespeare, Timothy J; Cash, Dave; Henley, Susie M D; Warren, Jason D; Crutch, Sebastian J
2014-08-01
Reading deficits are a common early feature of the degenerative syndrome posterior cortical atrophy (PCA) but are poorly understood even at the single word level. The current study evaluated the reading accuracy and speed of 26 PCA patients, 17 typical Alzheimer's disease (tAD) patients and 14 healthy controls on a corpus of 192 single words in which the following perceptual properties were manipulated systematically: inter-letter spacing, font size, length, font type, case and confusability. PCA reading was significantly less accurate and slower than tAD patients and controls, with performance significantly adversely affected by increased letter spacing, size, length and font (cursive < non-cursive), and characterised by visual errors (69% of all error responses). By contrast, tAD and control accuracy rates were at or near ceiling, letter spacing was the only perceptual factor to influence reading speed in the same direction as controls, and, in contrast to PCA patients, control reading was faster for larger font sizes. The inverse size effect in PCA (less accurate reading of large than small font size print) was associated with lower grey matter volume in the right superior parietal lobule. Reading accuracy was associated with impairments of early visual (especially crowding), visuoperceptual and visuospatial processes. However, these deficits were not causally related to a universal impairment of reading as some patients showed preserved reading for small, unspaced words despite grave visual deficits. Rather, the impact of specific types of visual dysfunction on reading was found to be (con)text specific, being particularly evident for large, spaced, lengthy words. These findings improve the characterisation of dyslexia in PCA, shed light on the causative and associative factors, and provide clear direction for the development of reading aids and strategies to maximise and sustain reading ability in the early stages of disease. Copyright © 2014. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Richards, J. T.; Mulavara, A. P.; Ruttley, T.; Peters, B. T.; Warren, L. E.; Bloomberg, J. J.
2006-01-01
We have previously shown that viewing simulated rotary self-motion during treadmill locomotion causes adaptive modification of the control of position and trajectory during over-ground locomotion, which functionally reflects adaptive changes in the sensorimotor integration of visual, vestibular, and proprioceptive cues (Mulavara et al., 2005). The objective of this study was to investigate how strategic changes in torso control during exposure to simulated rotary self-motion during treadmill walking influence adaptive modification of locomotor heading direction during over-ground stepping.
Manzone, Joseph; Heath, Matthew
2018-04-01
Reaching to a veridical target permits an egocentric spatial code (i.e., absolute limb and target position) to effect fast and effective online trajectory corrections supported via the visuomotor networks of the dorsal visual pathway. In contrast, a response entailing decoupled spatial relations between stimulus and response is thought to be primarily mediated via an allocentric code (i.e., the position of a target relative to another external cue) laid down by the visuoperceptual networks of the ventral visual pathway. Because the ventral stream renders a temporally durable percept, it is thought that an allocentric code does not support a primarily online mode of control, but instead supports a mode wherein a response is evoked largely in advance of movement onset via central planning mechanisms (i.e., offline control). Here, we examined whether reaches defined via ego- and allocentric visual coordinates are supported via distinct control modes (i.e., online versus offline). Participants performed target-directed and allocentric reaches in limb visible and limb-occluded conditions. Notably, in the allocentric task, participants reached to a location that matched the position of a target stimulus relative to a reference stimulus, and to examine online trajectory amendments, we computed the proportion of variance explained (i.e., R² values) by the spatial position of the limb at 75% of movement time relative to a response's ultimate movement endpoint. Target-directed trials performed with limb vision showed more online corrections and greater endpoint precision than their limb-occluded counterparts, which in turn were associated with performance metrics comparable to allocentric trials performed with and without limb vision. Accordingly, we propose that the absence of ego-motion cues (i.e., limb vision) and/or the specification of a response via an allocentric code renders motor output served via the 'slow' visuoperceptual networks of the ventral visual pathway.
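The proportion-of-variance measure used above can be made concrete with a short sketch (the variable names and toy data are ours, not the authors'): the squared correlation, across trials, between limb position at 75% of movement time and the final endpoint; lower values indicate more late, online correction.

import numpy as np

def online_control_r2(pos_75, endpoints):
    """Proportion of endpoint variance explained by limb position at 75% of
    movement time. Low R² implies late, online corrections; high R² implies
    the endpoint was largely specified in advance (offline control)."""
    pos_75 = np.asarray(pos_75, dtype=float)        # (n_trials,) positions at 75% MT
    endpoints = np.asarray(endpoints, dtype=float)  # (n_trials,) movement endpoints
    r = np.corrcoef(pos_75, endpoints)[0, 1]        # Pearson correlation across trials
    return r ** 2

# Toy example: noisy endpoints loosely predicted by the 75%-MT position
rng = np.random.default_rng(0)
p75 = rng.normal(100.0, 5.0, size=50)
ends = p75 + rng.normal(0.0, 3.0, size=50)
print(online_control_r2(p75, ends))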
Eye Movements Affect Postural Control in Young and Older Females
Thomas, Neil M.; Bampouras, Theodoros M.; Donovan, Tim; Dewhurst, Susan
2016-01-01
Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli shows how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions. PMID:27695412
Eye Movements Affect Postural Control in Young and Older Females.
Thomas, Neil M; Bampouras, Theodoros M; Donovan, Tim; Dewhurst, Susan
2016-01-01
Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli shows how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions.
How do schizophrenia patients use visual information to decode facial emotion?
Lee, Junghee; Gosselin, Frédéric; Wynn, Jonathan K; Green, Michael F
2011-09-01
Impairment in recognizing facial emotions is a prominent feature of schizophrenia patients, but the underlying mechanism of this impairment remains unclear. This study investigated the specific aspects of visual information that are critical for schizophrenia patients to recognize emotional expression. Using the Bubbles technique, we probed the use of visual information during a facial emotion discrimination task (fear vs. happy) in 21 schizophrenia patients and 17 healthy controls. Visual information was sampled through randomly located Gaussian apertures (or "bubbles") at 5 spatial frequency scales. Online calibration of the amount of face exposed through bubbles was used to ensure 75% overall accuracy for each subject. Least-square multiple linear regression analyses between sampled information and accuracy were performed to identify critical visual information that was used to identify emotional expression. To accurately identify emotional expression, schizophrenia patients required more exposure of facial areas (i.e., more bubbles) compared with healthy controls. To identify fearful faces, schizophrenia patients relied less on bilateral eye regions at high-spatial frequency compared with healthy controls. For identification of happy faces, schizophrenia patients relied on the mouth and eye regions; healthy controls did not utilize eyes and used the mouth much less than patients did. Schizophrenia patients needed more facial information to recognize emotional expression of faces. In addition, patients differed from controls in their use of high-spatial frequency information from eye regions to identify fearful faces. This study provides direct evidence that schizophrenia patients employ an atypical strategy of using visual information to recognize emotional faces.
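A minimal sketch of the regression step described above (array shapes and the toy data are illustrative assumptions, not the authors' code): trial-by-trial accuracy is regressed onto the Bubbles sampling masks, and the resulting per-pixel weights form a classification image showing which facial regions supported correct emotion judgments.

import numpy as np

def bubbles_classification_image(masks, accuracy):
    """Least-squares regression of trial accuracy onto Bubbles sampling masks.
    masks: (n_trials, n_pixels) array, 1 where the face was revealed, 0 elsewhere.
    accuracy: (n_trials,) array of 0/1 correctness.
    Returns per-pixel regression weights (the 'classification image')."""
    X = np.column_stack([np.ones(len(accuracy)), masks])  # add intercept column
    beta, *_ = np.linalg.lstsq(X, accuracy, rcond=None)   # ordinary least squares
    return beta[1:]                                        # drop the intercept term

# Toy example: 200 trials, a 16x16 'face'; correct trials tend to reveal one region
rng = np.random.default_rng(1)
masks = rng.integers(0, 2, size=(200, 256)).astype(float)
acc = (masks[:, 120] + rng.normal(0, 0.5, 200) > 0.5).astype(float)
ci = bubbles_classification_image(masks, acc).reshape(16, 16)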
Study of a direct visualization display tool for space applications
NASA Astrophysics Data System (ADS)
Pereira do Carmo, J.; Gordo, P. R.; Martins, M.; Rodrigues, F.; Teodoro, P.
2017-11-01
The study of a Direct Visualization Display Tool (DVDT) for space applications is reported. The review of novel technologies for a compact display tool is described. Several applications for this tool have been identified with the support of ESA astronauts and are presented. A baseline design is proposed. It consists mainly of OLEDs as image source; a specially designed optical prism as relay optics; a Personal Digital Assistant (PDA), with data acquisition card, as control unit; and voice control and simplified keyboard as interfaces. Optical analysis and the final estimated performance are reported. The system is able to display information (text, pictures and/or video) with SVGA resolution directly to the astronaut using a Field of View (FOV) of 20 x 14.5 degrees. The image delivery system is a monocular Head Mounted Display (HMD) that weighs less than 100 g. The HMD optical system has an eye pupil of 7 mm and an eye relief distance of 30 mm.
NASA Astrophysics Data System (ADS)
Yamada, Katsuhiko; Jikuya, Ichiro
2014-09-01
Singularity analysis and the steering logic of pyramid-type single gimbal control moment gyros are studied. First, a new concept of directional passability in a specified direction is introduced to investigate the structure of an elliptic singular surface. The differences between passability and directional passability are discussed in detail and are visualized for 0H, 2H, and 4H singular surfaces. Second, quadratic steering logic (QSL), a new steering logic for passing the singular surface, is investigated. The algorithm is based on the quadratic constrained quadratic optimization problem and is reduced to the Newton method by using Gröbner bases. The proposed steering logic is demonstrated through numerical simulations for both constant torque maneuvering examples and attitude control examples.
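For readers unfamiliar with the singularity problem, the sketch below uses a common textbook parameterization of a four-unit pyramid cluster (unit wheel momentum, skew angle of about 54.73 degrees) to compute the torque Jacobian and the usual singularity measure m = sqrt(det(A Aᵀ)); this is a generic illustration of where QSL-type steering laws must intervene, not the formulation or steering logic proposed in the study above.

import numpy as np

def pyramid_cmg_jacobian(delta, beta=np.deg2rad(54.73)):
    """Torque Jacobian A(delta) for a four-unit pyramid-type single-gimbal CMG
    cluster with unit wheel momentum, using a common textbook parameterization
    (skew angle beta). A generic sketch, not the study's formulation."""
    d1, d2, d3, d4 = delta
    cb, sb = np.cos(beta), np.sin(beta)
    # Columns are the partial derivatives of each unit's momentum direction
    # with respect to its gimbal angle.
    return np.array([
        [-cb * np.cos(d1),  np.sin(d2),       cb * np.cos(d3), -np.sin(d4)],
        [-np.sin(d1),      -cb * np.cos(d2),  np.sin(d3),       cb * np.cos(d4)],
        [ sb * np.cos(d1),  sb * np.cos(d2),  sb * np.cos(d3),  sb * np.cos(d4)],
    ])

def singularity_measure(A):
    """m = sqrt(det(A A^T)); m approaches 0 near a singular surface where no
    torque can be generated along some direction (clamped against round-off)."""
    return np.sqrt(max(np.linalg.det(A @ A.T), 0.0))

print(singularity_measure(pyramid_cmg_jacobian(np.deg2rad([0.0, 0.0, 0.0, 0.0]))))     # well-conditioned
print(singularity_measure(pyramid_cmg_jacobian(np.deg2rad([90.0, 0.0, -90.0, 0.0]))))  # near-singular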
Platje, Evelien; Sterkenburg, Paula; Overbeek, Mathilde; Kef, Sabina; Schuengel, Carlo
2018-01-23
The Video-feedback Intervention to promote Positive Parenting adapted for children with visual or visual-and-intellectual disabilities (VIPP-V) is an attachment-based intervention aimed at enhancing sensitive parenting and promoting positive parent-child relationships. A randomized controlled trial was conducted to assess the efficacy of VIPP-V for parents of children aged 1-5 with visual or visual-and-intellectual disabilities. A total of 37 dyads received only care-as-usual (CAU) and 40 received VIPP-V in addition to CAU. The parents receiving VIPP-V did not show increased parental sensitivity or parent-child interaction quality; however, their parenting self-efficacy increased. Moreover, the increase in parental self-efficacy predicted the increase in parent-child interaction. In conclusion, VIPP-V does not appear to directly improve the quality of contact between parent and child, but it does contribute to the self-efficacy of parents to support and to comfort their child. Moreover, as parents experience their parenting as more positive, this may eventually lead to more sensitive responsiveness and more positive parent-child interactions.
Buschman, Timothy J.; Miller, Earl K.
2009-01-01
Attention regulates the flood of sensory information into a manageable stream, and so understanding how attention is controlled is central to understanding cognition. Competing theories suggest visual search involves serial and/or parallel allocation of attention, but there is little direct, neural, evidence for either mechanism. Two monkeys were trained to covertly search an array for a target stimulus under visual search (endogenous) and pop-out (exogenous) conditions. Here we present neural evidence in the frontal eye fields (FEF) for serial, covert shifts of attention during search but not pop-out. Furthermore, attention shifts reflected in FEF spiking activity were correlated with 18–34 Hz oscillations in the local field potential, suggesting a ‘clocking’ signal. This provides direct neural evidence that primates can spontaneously adopt a serial search strategy and that these serial covert shifts of attention are directed by the FEF. It also suggests that neuron population oscillations may regulate the timing of cognitive processing. PMID:19679077
Neural correlates of tactile perception during pre-, peri-, and post-movement.
Juravle, Georgiana; Heed, Tobias; Spence, Charles; Röder, Brigitte
2016-05-01
Tactile information is differentially processed over the various phases of goal-directed movements. Here, event-related potentials (ERPs) were used to investigate the neural correlates of tactile and visual information processing during movement. Participants performed goal-directed reaches for an object placed centrally on the table in front of them. Tactile and visual stimulation (100 ms) was presented in separate trials during the different phases of the movement (i.e. preparation, execution, and post-movement). These stimuli were independently delivered to either the moving or resting hand. In a control condition, the participants only performed the movement, while omission (i.e. movement-only) ERPs were recorded. Participants were instructed to ignore the presence or absence of any sensory events and to concentrate solely on the execution of the movement. Enhanced ERPs were observed 80-200 ms after tactile stimulation, as well as 100-250 ms after visual stimulation: These modulations were greatest during the execution of the goal-directed movement, and they were effector based (i.e. significantly more negative for stimuli presented to the moving hand). Furthermore, ERPs revealed enhanced sensory processing during goal-directed movements for visual stimuli as well. Such enhanced processing of both tactile and visual information during the execution phase suggests that incoming sensory information is continuously monitored for a potential adjustment of the current motor plan. Furthermore, the results reported here also highlight a tight coupling between spatial attention and the execution of motor actions.
Cognitive programs: software for attention's executive
Tsotsos, John K.; Kruijne, Wouter
2014-01-01
What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations goes beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to verify that the expected outcomes are achieved. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure on the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of Visual Task Executive (vTE), Visual Attention Executive (vAE), and Visual Working Memory (vWM). Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention. PMID:25505430
Eye Velocity Gain Fields in MSTd During Optokinetic Stimulation
Brostek, Lukas; Büttner, Ulrich; Mustari, Michael J.; Glasauer, Stefan
2015-01-01
Lesion studies argue for an involvement of the dorsal medial superior temporal area (MSTd) in the control of optokinetic response (OKR) eye movements to planar visual stimulation. Neural recordings during OKR suggested that MSTd neurons directly encode stimulus velocity. On the other hand, studies using radial visual flow together with voluntary smooth pursuit eye movements showed that visual motion responses were modulated by eye movement-related signals. Here, we investigated neural responses in MSTd during continuous optokinetic stimulation using an information-theoretic approach for characterizing neural tuning with high resolution. We show that the majority of MSTd neurons exhibit gain-field-like tuning functions rather than directly encoding one variable. Neural responses showed a large diversity of tuning to combinations of retinal and extraretinal input. Eye velocity-related activity was observed prior to the actual eye movements, reflecting an efference copy. The observed tuning functions resembled those emerging in a network model trained to perform summation of 2 population-coded signals. Together, our findings support the hypothesis that MSTd implements the visuomotor transformation from retinal to head-centered stimulus velocity signals for the control of OKR. PMID:24557636
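In equation form (a generic formulation, not the authors' model), a gain-field unit multiplies a retinal-velocity tuning curve by an eye-velocity-dependent gain rather than encoding either variable alone, and the head-centered stimulus velocity that such a population could reconstruct is the sum of the two signals:

r(v_{\mathrm{ret}}, v_{\mathrm{eye}}) \;=\; r_0 + g(v_{\mathrm{eye}})\, f(v_{\mathrm{ret}}), \qquad \hat{v}_{\mathrm{head}} \;=\; v_{\mathrm{ret}} + v_{\mathrm{eye}}

where f is the retinal-slip tuning, g the eye-velocity gain (available before movement onset via efference copy) and r_0 a baseline rate.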
Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory
NASA Technical Reports Server (NTRS)
Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.
2005-01-01
Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as being critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the role of optic flow and visual scene polarity on adaptive modification in locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was a highly polarized scene while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed Stepping Tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction between pre- and post-adaptation stepping tests when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. Therefore, it was inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.
NASA Astrophysics Data System (ADS)
Sousa, Teresa; Amaral, Carlos; Andrade, João; Pires, Gabriel; Nunes, Urbano J.; Castelo-Branco, Miguel
2017-08-01
Objective. The achievement of multiple instances of control with the same type of mental strategy represents a way to improve flexibility of brain-computer interface (BCI) systems. Here we test the hypothesis that pure visual motion imagery of an external actuator can be used as a tool to achieve three classes of electroencephalographic (EEG) based control, which might be useful in attention disorders. Approach. We hypothesize that different numbers of imagined motion alternations lead to distinctive signals, as predicted by distinct motion patterns. Accordingly, a distinct number of alternating sensory/perceptual signals would lead to distinct neural responses as previously demonstrated using functional magnetic resonance imaging (fMRI). We anticipate that differential modulations should also be observed in the EEG domain. EEG recordings were obtained from twelve participants using three imagery tasks: imagery of a static dot, imagery of a dot with two opposing motions in the vertical axis (two motion directions) and imagery of a dot with four opposing motions in vertical or horizontal axes (four directions). The data were analysed offline. Main results. An increase of alpha-band power was found in frontal and central channels as a result of visual motion imagery tasks when compared with static dot imagery, in contrast with the expected posterior alpha decreases found during simple visual stimulation. The successful classification and discrimination between the three imagery tasks confirmed that three different classes of control based on visual motion imagery can be achieved. The classification approach was based on a support vector machine (SVM) and on the alpha-band relative spectral power of a small group of six frontal and central channels. Patterns of alpha activity, as captured by single-trial SVM closely reflected imagery properties, in particular the number of imagined motion alternations. Significance. We found a new mental task based on visual motion imagery with potential for the implementation of multiclass (3) BCIs. Our results are consistent with the notion that frontal alpha synchronization is related with high internal processing demands, changing with the number of alternation levels during imagery. Together, these findings suggest the feasibility of pure visual motion imagery tasks as a strategy to achieve multiclass control systems with potential for BCI and in particular, neurofeedback applications in non-motor (attentional) disorders.
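A rough sketch of the classification pipeline described above (the sampling rate, band edges, channel count and random toy data are assumptions, not the study's exact settings): per-trial relative alpha-band power from a small set of fronto-central channels, fed to a linear SVM with cross-validation.

import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def alpha_relative_power(trials, fs=250.0, band=(8.0, 12.0)):
    """Per-trial, per-channel relative spectral power in the alpha band.
    trials: (n_trials, n_channels, n_samples) EEG array."""
    freqs, psd = welch(trials, fs=fs, nperseg=int(fs), axis=-1)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., in_band].sum(axis=-1) / psd.sum(axis=-1)  # (n_trials, n_channels)

# Toy data standing in for the three imagery classes (static, 2-direction, 4-direction)
rng = np.random.default_rng(2)
X_eeg = rng.normal(size=(90, 6, 500))   # 90 trials, 6 fronto-central channels
y = np.repeat([0, 1, 2], 30)            # class labels
X = alpha_relative_power(X_eeg)
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(scores.mean())   # near chance here, since the toy data carry no signal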
NASA Astrophysics Data System (ADS)
Holt, Marla M.; Insley, Stephen J.; Southall, Brandon L.; Schusterman, Ronald J.
2005-09-01
While attempting to gain access to receptive females, male northern elephant seals form dominance hierarchies through multiple dyadic interactions involving visual and acoustic signals. These signals are both highly stereotyped and directional. Previous behavioral observations suggested that males attend to the directional cues of these signals. We used in situ vocal playbacks to test whether males attend to directional cues of the acoustic components of a competitor's calls (i.e., variation in call spectra and source levels). Here, we will focus on playback methodology. Playback calls were multiple exemplars of a marked dominant male from an isolated area, recorded with a directional microphone and DAT recorder and edited into a natural sequence that controlled call amplitude. Control calls were recordings of ambient rookery sounds with the male calls removed. Subjects were 20 marked males (10 adults and 10 subadults) all located at Año Nuevo, CA. Playback presentations, calibrated for sound-pressure level, were broadcast at a distance of 7 m from each subject. Most responses were classified into the following categories: visual orientation, postural change, calling, movement toward or away from the loudspeaker, and re-directed aggression. We also investigated developmental, hierarchical, and ambient noise variables that were thought to influence male behavior.
Jastorff, Jan; Clavagnier, Simon; Gergely, György; Orban, Guy A
2011-02-01
Performing goal-directed actions toward an object in accordance with contextual constraints, such as the presence or absence of an obstacle, has been widely used as a paradigm for assessing the capacity of infants or nonhuman primates to evaluate the rationality of others' actions. Here, we have used this paradigm in a functional magnetic resonance imaging experiment to visualize the cortical regions involved in the assessment of action rationality while controlling for visual differences in the displays and directly correlating magnetic resonance activity with rationality ratings. Bilateral middle temporal gyrus (MTG) regions, anterior to extrastriate body area and the human middle temporal complex, were involved in the visual evaluation of action rationality. These MTG regions are embedded in the superior temporal sulcus regions processing the kinematics of observed actions. Our results suggest that rationality is assessed initially by purely visual computations, combining the kinematics of the action with the physical constraints of the environmental context. The MTG region seems to be sensitive to the contingent relationship between a goal-directed biological action and its relevant environmental constraints, showing increased activity when the expected pattern of rational goal attainment is violated.
Directional control-response compatibility of joystick steered shuttle cars.
Burgess-Limerick, Robin; Zupanc, Christine M; Wallis, Guy
2012-01-01
Shuttle cars are an unusual class of vehicle operated in underground coal mines, sometimes in close proximity to pedestrians and steering errors may have very serious consequences. A directional control-response incompatibility has previously been described in shuttle cars which are controlled using a steering wheel oriented perpendicular to the direction of travel. Some other shuttle car operators are seated perpendicular to the direction of travel and steer the car via a seat mounted joystick. A virtual simulation was utilised to determine whether the steering arrangement in these vehicles maintains directional control-response compatibility. Twenty-four participants were randomly assigned to either a condition corresponding to this design (consistent direction), or a condition in which the directional steering response was reversed while driving in-bye (visual field compatible). Significantly less accurate steering performance was exhibited by the consistent direction group during the in-bye trials only. Shuttle cars which provide the joystick steering mechanism described here require operators to accommodate alternating compatible and incompatible directional control-response relationships with each change of car direction. A virtual simulation of an underground coal shuttle car demonstrates that the design incorporates a directional control-response incompatibility when driving the vehicle in one direction. This design increases the probability of operator error, with potential adverse safety and productivity consequences.
Giuliano, Ryan J; Karns, Christina M; Neville, Helen J; Hillyard, Steven A
2014-12-01
A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual's capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70-90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals.
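Change-detection capacity in tasks of this kind is commonly estimated with Cowan's K (the abstract does not state which estimator was used here):

K = N \,(H - \mathit{FA})

where N is the memory set size, H the hit rate for detecting changes and FA the false-alarm rate on no-change trials.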
Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.
Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina
2015-07-01
It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line with the 'auditory-visual view' of auditory speech perception, which assumes that auditory speech recognition is optimized by using predictions from previously encoded speaker-specific audio-visual internal models. Copyright © 2015 Elsevier Ltd. All rights reserved.
Helicopter pilot estimation of self-altitude in a degraded visual environment
NASA Astrophysics Data System (ADS)
Crowley, John S.; Haworth, Loran A.; Szoboszlay, Zoltan P.; Lee, Alan G.
2000-06-01
The effect of night vision devices and degraded visual imagery on self-altitude perception is unknown. Thirteen Army aviators with normal vision flew five flights under various visual conditions in a modified AH-1 (Cobra) helicopter. Subjects estimated their altitude or flew to specified altitudes while flying a series of maneuvers. The results showed that subjects were better at detecting and controlling changes in altitude than they were at flying to or naming a specific altitude. In cruise flight and descent, the subjects tended to fly above the desired altitude, an error in the safe direction. While hovering, the direction of error was less predictable. In the low-level cruise flight scenario tested in this study, altitude perception was affected more by changes in image resolution than by changes in FOV or ocularity.
Visual grouping under isoluminant condition: impact of mental fatigue
NASA Astrophysics Data System (ADS)
Pladere, Tatjana; Bete, Diana; Skilters, Jurgis; Krumina, Gunta
2016-09-01
Instead of selecting arbitrary elements, our visual perception prefers only certain groupings of information. There is ample evidence that visual attention and perception are substantially impaired in the presence of mental fatigue. The question is how visual grouping, which can be considered a bottom-up controlled neuronal gain mechanism, is influenced. The main purpose of our study was to determine the influence of mental fatigue on the visual grouping of specific information - the color and configuration of stimuli - in a psychophysical experiment. Individuals provided subjective data by filling in a questionnaire about their health and general feeling. Objective evidence was obtained in a specially designed visual search task where achromatic and chromatic isoluminant stimuli were used in order to avoid the so-called pop-out effect due to differences in light intensity. In four tasks, each individual was instructed to identify the symbols whose apertures pointed in the same direction. The color component differed across the visual search tasks according to the goals of the study. The results reveal that visual grouping is completed faster when visual stimuli have the same color and aperture direction. The shortest reaction times occurred in the evening. Moreover, the reaction time results suggest that two grouping processes compete for selective attention in the visual system when similarity in color conflicts with similarity in stimulus configuration. This effect increases significantly in the presence of mental fatigue, but it does not have a strong influence on the accuracy of task completion.
Torsional ARC Effectively Expands the Visual Field in Hemianopia
Satgunam, PremNandhini; Peli, Eli
2012-01-01
Purpose Exotropia in congenital homonymous hemianopia has been reported to provide field expansion that is more useful when accompanied with harmonious anomalous retinal correspondence (HARC). Torsional strabismus with HARC provides a similar functional advantage. In a subject with hemianopia demonstrating a field expansion consistent with torsion, we documented torsional strabismus and torsional HARC. Methods Monocular visual fields under binocular fixation conditions were plotted using a custom dichoptic visual field perimeter (DVF). The DVF was also modified to measure perceived visual directions under dissociated and associated conditions across the central 50° diameter field. The field expansion and retinal correspondence of a subject with torsional strabismus (along with exotropia and right hypertropia) with congenital homonymous hemianopia was compared to that of another exotropic subject with acquired homonymous hemianopia without torsion and to a control subject with minimal phoria. Torsional rotations of the eyes were calculated from fundus photographs and perimetry. Results Torsional ARC documented in the subject with congenital homonymous hemianopia provided a functional binocular field expansion up to 18°. Normal retinal correspondence was mapped for the full 50° visual field in the control subject and for the seeing field of the acquired homonymous hemianopia subject, limiting the functional field expansion benefit. Conclusions Torsional strabismus with ARC, when occurring with homonymous hemianopia, provides useful field expansion in the lower and upper fields. Dichoptic perimetry permits documentation of ocular alignment (lateral, vertical and torsional) and perceived visual direction under binocular and monocular viewing conditions. Evaluating patients with congenital or early strabismus for HARC is useful when considering surgical correction, particularly in the presence of congenital homonymous hemianopia. PMID:22885782
Jenkinson, Paul M; Haggard, Patrick; Ferreira, Nicola C; Fotopoulou, Aikaterini
2013-07-01
The brain receives and synthesises information about the body from different modalities, coordinates and perspectives, and affords us with a coherent and stable sense of body ownership. We studied this sense in a somatoparaphrenic patient and three control patients, all with unilateral right-hemisphere lesions. We experimentally manipulated the visual perspective (direct- versus mirror-view) and spatial attention (drawn to peripersonal space versus extrapersonal space) in an experiment involving recognising one's own hand. The somatoparaphrenic patient denied limb ownership in all direct view trials, but viewing the hand via a mirror significantly increased ownership. The extent of this increase depended on spatial attention; when attention was drawn to the extrapersonal space (near-the-mirror) the patient showed a near perfect recognition of her arm in the mirror, while when attention was drawn to peripersonal space (near-the-body) the patient recognised her arm in only half the mirror trials. In a supplementary experiment, we used the Rubber Hand Illusion to manipulate the same factors in healthy controls. Ownership of the rubber hand occurred in both direct and mirror view, but shifting attention between peripersonal and extrapersonal space had no effect on rubber-hand ownership. We conclude that the isolation of visual perspectives on the body and the division of attention between two different locations is not sufficient to affect body ownership in healthy individuals and right hemisphere controls. However, in somatoparaphrenia, where first-person body ownership and stimulus-driven attention are impaired by lesions to a right-hemisphere ventral attentional-network, the body can nevertheless be recognised as one's own if perceived in a third-person visual perspective and particularly if top-down, spatial attention is directed away from peripersonal space. Copyright © 2013 Elsevier Ltd. All rights reserved.
Directional control-response relationships for mining equipment.
Burgess-Limerick, R; Krupenia, V; Wallis, G; Pratim-Bannerjee, A; Steiner, L
2010-06-01
A variety of directional control-response relationships are currently found in mining equipment. Two experiments were conducted in a virtual environment to determine optimal direction control-response relationships in a wide variety of circumstances. Direction errors were measured as a function of control orientation (horizontal or vertical), location (left, front, right) and directional control-response relationships. The results confirm that the principles of consistent direction and visual field compatibility are applicable to the majority of situations. An exception is that fewer direction errors were observed when an upward movement of a horizontal lever or movement of a vertical lever away from the participants caused extension (lengthening) of the controlled device, regardless of whether the direction of movement of the control is consistent with the direction in which the extension occurs. Further, both the control of slew by horizontally oriented controls and the control of device movements in a frontal plane by the perpendicular movements of vertical levers were associated with relatively high rates of directional errors, regardless of the directional control-response relationship, and these situations should be avoided. STATEMENT OF RELEVANCE: The results are particularly applicable to the design of mining equipment such as drilling and bolting machines, and have been incorporated into MDG35.1 Guideline for bolting & drilling plant in mines (Industry & Investment NSW, 2010). The results are also relevant to the design of any equipment where vertical or horizontal levers are used to control the movement of equipment appendages, e.g. cranes mounted to mobile equipment and the like.
Combined influence of visual scene and body tilt on arm pointing movements: gravity matters!
Scotto Di Cesare, Cécile; Sarlegna, Fabrice R; Bourdin, Christophe; Mestre, Daniel R; Bringoux, Lionel
2014-01-01
Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., 'combined' tilts equal to the sum of 'single' tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues.
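In sketch form (the notation is ours), the two additive models compared above differ only in how scene tilt is coded: pointing error under combined tilts is approximated as the sum of the errors measured under each single tilt,

E(\theta_{\mathrm{body}}, \theta_{\mathrm{scene}}) \;\approx\; E_{\mathrm{body}}(\theta_{\mathrm{body}}) + E_{\mathrm{scene}}(\theta^{\mathrm{ref}}_{\mathrm{scene}})

with \theta^{\mathrm{ref}}_{\mathrm{scene}} = \theta_{\mathrm{scene}} - \theta_{\mathrm{body}} under the body-centered model and \theta^{\mathrm{ref}}_{\mathrm{scene}} = \theta_{\mathrm{scene}} (measured from the gravitational vertical) under the gravity-centered model; only the latter accounted for the full pattern of kinematic parameters and final errors.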
Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!
Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel
2014-01-01
Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371
The role of vision in odor-plume tracking by walking and flying insects.
Willis, Mark A; Avondet, Jennifer L; Zheng, Elizabeth
2011-12-15
The walking paths of male cockroaches, Periplaneta americana, tracking point-source plumes of female pheromone often appear similar in structure to those observed from flying male moths. Flying moths use visual-flow-field feedback of their movements to control steering and speed over the ground and to detect the wind speed and direction while tracking plumes of odors. Walking insects are also known to use flow field cues to steer their trajectories. Can the upwind steering we observe in plume-tracking walking male cockroaches be explained by visual-flow-field feedback, as in flying moths? To answer this question, we experimentally occluded the compound eyes and ocelli of virgin P. americana males, separately and in combination, and challenged them with different wind and odor environments in our laboratory wind tunnel. They were observed responding to: (1) still air and no odor, (2) wind and no odor, (3) a wind-borne point-source pheromone plume and (4) a wide pheromone plume in wind. If walking cockroaches require visual cues to control their steering with respect to their environment, we would expect their tracks to be less directed and more variable if they cannot see. Instead, we found few statistically significant differences among behaviors exhibited by intact control cockroaches or those with their eyes occluded, under any of our environmental conditions. Working towards our goal of a comprehensive understanding of chemo-orientation in insects, we then challenged flying and walking male moths to track pheromone plumes with and without visual feedback. Neither walking nor flying moths performed as well as walking cockroaches when there was no visual information available.
The role of vision in odor-plume tracking by walking and flying insects
Willis, Mark A.; Avondet, Jennifer L.; Zheng, Elizabeth
2011-01-01
The walking paths of male cockroaches, Periplaneta americana, tracking point-source plumes of female pheromone often appear similar in structure to those observed from flying male moths. Flying moths use visual-flow-field feedback of their movements to control steering and speed over the ground and to detect the wind speed and direction while tracking plumes of odors. Walking insects are also known to use flow field cues to steer their trajectories. Can the upwind steering we observe in plume-tracking walking male cockroaches be explained by visual-flow-field feedback, as in flying moths? To answer this question, we experimentally occluded the compound eyes and ocelli of virgin P. americana males, separately and in combination, and challenged them with different wind and odor environments in our laboratory wind tunnel. They were observed responding to: (1) still air and no odor, (2) wind and no odor, (3) a wind-borne point-source pheromone plume and (4) a wide pheromone plume in wind. If walking cockroaches require visual cues to control their steering with respect to their environment, we would expect their tracks to be less directed and more variable if they cannot see. Instead, we found few statistically significant differences among behaviors exhibited by intact control cockroaches or those with their eyes occluded, under any of our environmental conditions. Working towards our goal of a comprehensive understanding of chemo-orientation in insects, we then challenged flying and walking male moths to track pheromone plumes with and without visual feedback. Neither walking nor flying moths performed as well as walking cockroaches when there was no visual information available. PMID:22116754
Hirashima, Masaya
2016-01-01
When a visually guided reaching movement is unexpectedly perturbed, it is implicitly corrected in two ways: immediately after the perturbation by feedback control (online correction) and in the next movement by adjusting feedforward motor commands (offline correction or motor adaptation). Although recent studies have revealed a close relationship between feedback and feedforward controls, the nature of this relationship is not yet fully understood. Here, we show that both implicit online and offline movement corrections utilize the same visuomotor map for feedforward movement control that transforms the spatial location of visual objects into appropriate motor commands. First, we artificially distorted the visuomotor map by applying opposite visual rotations to the cursor representing the hand position while human participants reached for two different targets. This procedure implicitly altered the visuomotor map so that changes in the movement direction to the target location were more insensitive or more sensitive. Then, we examined how such visuomotor map distortion influenced online movement correction by suddenly changing the target location. The magnitude of online movement correction was altered according to the shape of the visuomotor map. We also examined offline movement correction; the aftereffect induced by visual rotation in the previous trial was modulated according to the shape of the visuomotor map. These results highlighted the importance of the visuomotor map as a foundation for implicit motor control mechanisms and the intimate relationship between feedforward control, feedback control, and motor adaptation. PMID:27275006
Hayashi, Takuji; Yokoi, Atsushi; Hirashima, Masaya; Nozaki, Daichi
2016-01-01
When a visually guided reaching movement is unexpectedly perturbed, it is implicitly corrected in two ways: immediately after the perturbation by feedback control (online correction) and in the next movement by adjusting feedforward motor commands (offline correction or motor adaptation). Although recent studies have revealed a close relationship between feedback and feedforward controls, the nature of this relationship is not yet fully understood. Here, we show that both implicit online and offline movement corrections utilize the same visuomotor map for feedforward movement control that transforms the spatial location of visual objects into appropriate motor commands. First, we artificially distorted the visuomotor map by applying opposite visual rotations to the cursor representing the hand position while human participants reached for two different targets. This procedure implicitly altered the visuomotor map so that changes in the movement direction to the target location were more insensitive or more sensitive. Then, we examined how such visuomotor map distortion influenced online movement correction by suddenly changing the target location. The magnitude of online movement correction was altered according to the shape of the visuomotor map. We also examined offline movement correction; the aftereffect induced by visual rotation in the previous trial was modulated according to the shape of the visuomotor map. These results highlighted the importance of the visuomotor map as a foundation for implicit motor control mechanisms and the intimate relationship between feedforward control, feedback control, and motor adaptation.
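A minimal sketch of the cursor manipulation described above (the angle and coordinates are illustrative): the cursor shown to the participant is the hand position rotated about the start location, with opposite rotation signs assigned to the two targets so that the visuomotor map is selectively distorted.

import numpy as np

def rotated_cursor(hand_xy, start_xy, angle_deg):
    """Return the displayed cursor position: the hand position rotated about
    the start location by angle_deg (opposite signs would be used for the two
    targets, as in the manipulation described above)."""
    h = np.asarray(hand_xy, dtype=float)
    s = np.asarray(start_xy, dtype=float)
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return s + R @ (h - s)

print(rotated_cursor([0.0, 10.0], [0.0, 0.0], 15.0))   # cursor deviates 15° from the hand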
Self-reflection Orients Visual Attention Downward
Liu, Yi; Tong, Yu; Li, Hong
2017-01-01
Previous research has demonstrated abstract concepts associated with spatial location (e.g., God in the Heavens) could direct visual attention upward or downward, because thinking about the abstract concepts activates the corresponding vertical perceptual symbols. For self-concept, there are similar metaphors (e.g., “I am above others”). However, whether thinking about the self can induce visual attention orientation is still unknown. Therefore, the current study tested whether self-reflection can direct visual attention. Individuals often display the tendency of self-enhancement in social comparison, which reminds the individual of the higher position one possesses relative to others within the social environment. As the individual is the agent of the attention orientation, and high status tends to make an individual look down upon others to obtain a sense of pride, it was hypothesized that thinking about the self would lead to a downward attention orientation. Using reflection of personality traits and a target discrimination task, Study 1 found that, after self-reflection, visual attention was directed downward. Similar effects were also found after friend-reflection, with the level of downward attention being correlated with the likability rating scores of the friend. Thus, in Study 2, a disliked other was used as a control and the positive self-view was measured with above-average judgment task. We found downward attention orientation after self-reflection, but not after reflection upon the disliked other. Moreover, the attentional bias after self-reflection was correlated with above-average self-view. The current findings provide the first evidence that thinking about the self could direct visual-spatial attention downward, and suggest that this effect is probably derived from a positive self-view within the social context. PMID:28928694
Self-reflection Orients Visual Attention Downward.
Liu, Yi; Tong, Yu; Li, Hong
2017-01-01
Previous research has demonstrated abstract concepts associated with spatial location (e.g., God in the Heavens) could direct visual attention upward or downward, because thinking about the abstract concepts activates the corresponding vertical perceptual symbols. For self-concept, there are similar metaphors (e.g., "I am above others"). However, whether thinking about the self can induce visual attention orientation is still unknown. Therefore, the current study tested whether self-reflection can direct visual attention. Individuals often display the tendency of self-enhancement in social comparison, which reminds the individual of the higher position one possesses relative to others within the social environment. As the individual is the agent of the attention orientation, and high status tends to make an individual look down upon others to obtain a sense of pride, it was hypothesized that thinking about the self would lead to a downward attention orientation. Using reflection of personality traits and a target discrimination task, Study 1 found that, after self-reflection, visual attention was directed downward. Similar effects were also found after friend-reflection, with the level of downward attention being correlated with the likability rating scores of the friend. Thus, in Study 2, a disliked other was used as a control and the positive self-view was measured with above-average judgment task. We found downward attention orientation after self-reflection, but not after reflection upon the disliked other. Moreover, the attentional bias after self-reflection was correlated with above-average self-view. The current findings provide the first evidence that thinking about the self could direct visual-spatial attention downward, and suggest that this effect is probably derived from a positive self-view within the social context.
Yang, Jin; Lee, Joonyeol; Lisberger, Stephen G.
2012-01-01
Sensory-motor behavior results from a complex interaction of noisy sensory data with priors based on recent experience. By varying the stimulus form and contrast for the initiation of smooth pursuit eye movements in monkeys, we show that visual motion inputs compete with two independent priors: one prior biases eye speed toward zero; the other prior attracts eye direction according to the past several days’ history of target directions. The priors bias the speed and direction of the initiation of pursuit for the weak sensory data provided by the motion of a low-contrast sine wave grating. However, the priors have relatively little effect on pursuit speed and direction when the visual stimulus arises from the coherent motion of a high-contrast patch of dots. For any given stimulus form, the mean and variance of eye speed co-vary in the initiation of pursuit, as expected for signal-dependent noise. This relationship suggests that pursuit implements a trade-off between movement accuracy and variation, reducing both when the sensory signals are noisy. The tradeoff is implemented as a competition of sensory data and priors that follows the rules of Bayesian estimation. Computer simulations show that the priors can be understood as direction specific control of the strength of visual-motor transmission, and can be implemented in a neural-network model that makes testable predictions about the population response in the smooth eye movement region of the frontal eye fields. PMID:23223286
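Under Gaussian assumptions (ours, for illustration), the competition between the sensory speed estimate and the slow-speed prior described above reduces to reliability-weighted averaging; a low-contrast grating gives a noisy likelihood (large \sigma_s), pulling the initial pursuit speed toward the prior mean of zero:

\hat{v} \;=\; \frac{\sigma_p^{2}\, v_s + \sigma_s^{2}\, v_p}{\sigma_p^{2} + \sigma_s^{2}}, \qquad v_p = 0

where v_s is the sensed target speed with variance \sigma_s^{2} and the prior has mean v_p and variance \sigma_p^{2}. An analogous expression with a direction prior centered on the recent history of target directions would bias pursuit direction in the same way.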
Defever, Emmy; Reynvoet, Bert; Gebuis, Titia
2013-10-01
Researchers investigating numerosity processing manipulate the visual stimulus properties (e.g., surface). This is done to control for the confound between numerosity and its visual properties and should allow the examination of pure number processes. Nevertheless, several studies have shown that, despite different visual controls, visual cues remained to exert their influence on numerosity judgments. This study, therefore, investigated whether the impact of the visual stimulus manipulations on numerosity judgments is dependent on the task at hand (comparison task vs. same-different task) and whether this impact changes throughout development. In addition, we examined whether the influence of visual stimulus manipulations on numerosity judgments plays a role in the relation between performance on numerosity tasks and mathematics achievement. Our findings confirmed that the visual stimulus manipulations affect numerosity judgments; more important, we found that these influences changed with increasing age and differed between the comparison and the same-different tasks. Consequently, direct comparisons between numerosity studies using different tasks and age groups are difficult. No meaningful relationship between the performance on the comparison and same-different tasks and mathematics achievement was found in typically developing children, nor did we find consistent differences between children with and without mathematical learning disability (MLD). Copyright © 2013 Elsevier Inc. All rights reserved.
Dawidek, Mark T; Roach, Victoria A; Ott, Michael C; Wilson, Timothy D
A major challenge in laparoscopic surgery is the lack of depth perception. With the development and continued improvement of 3D video technology, the potential benefit of restoring 3D vision to laparoscopy has received substantial attention from the surgical community. Despite this, procedures conducted under 2D vision remain the standard of care, and trainees must become proficient in 2D laparoscopy. This study aims to determine whether incorporating 3D vision into a 2D laparoscopic simulation curriculum accelerates skill acquisition in novices. Postgraduate year-1 surgical specialty residents (n = 15) at the Schulich School of Medicine and Dentistry at Western University were randomized into 1 of 2 groups. The control group practiced the Fundamentals of Laparoscopic Surgery peg-transfer task to proficiency exclusively under standard 2D laparoscopy conditions. The experimental group first practiced peg transfer under 3D direct visualization of the working field. Upon reaching proficiency, this group underwent a perceptual switch, changing to standard 2D laparoscopy conditions, and once again trained to proficiency. Incorporating 3D direct visualization before training under standard 2D conditions significantly (p < 0.05) reduced the total training time to proficiency by 10.9 minutes or 32.4%. There was no difference in total number of repetitions to proficiency. Data were also used to generate learning curves for each respective training protocol. An adaptive learning approach, which incorporates 3D direct visualization into a 2D laparoscopic simulation curriculum, accelerates skill acquisition. This is in contrast to previous work, possibly owing to the proficiency-based methodology employed, and has implications for resource savings in surgical training. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Identifying cognitive distraction using steering wheel reversal rates.
Kountouriotis, Georgios K; Spyridakos, Panagiotis; Carsten, Oliver M J; Merat, Natasha
2016-11-01
The influence of driver distraction on driving performance is not yet well understood, but it can have detrimental effects on road safety. In this study, we examined the effects of visual and non-visual distractions during driving, using a high-fidelity driving simulator. The visual task was presented either at an offset angle on an in-vehicle screen, or on the back of a moving lead vehicle. Similar to results from previous studies in this area, non-visual (cognitive) distraction resulted in improved lane keeping performance and increased gaze concentration towards the centre of the road, compared to baseline driving, and further examination of the steering control metrics indicated an increase in steering wheel reversal rates, steering wheel acceleration, and steering entropy. We show, for the first time, that when the visual task is presented centrally, drivers' lane deviation reduces (similar to non-visual distraction), whilst measures of steering control, overall, indicated more steering activity, compared to baseline. When using a visual task that required the diversion of gaze to an in-vehicle display, but without a manual element, lane keeping performance was similar to baseline driving. Steering wheel reversal rates were found to adequately tease apart the effects of non-visual distraction (increase of 0.5° reversals) and visual distraction with offset gaze direction (increase of 2.5° reversals). These findings are discussed in terms of steering control during different types of in-vehicle distraction, and the possible role of manual interference by distracting secondary tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.
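Steering wheel reversal rate, as used above, is typically obtained by counting direction changes of the steering angle whose amplitude exceeds a gap threshold (the 0.5° and 2.5° gap sizes come from the abstract). The sketch below is a hypothetical implementation on synthetic data, not the authors' processing pipeline.

```python
import numpy as np

def reversal_rate(angle_deg, fs, gap_deg=2.5):
    """Steering reversals per minute: swings between successive local extrema
    of the steering angle that exceed gap_deg. Illustrative only; published
    definitions of the metric differ in filtering and gap handling."""
    angle = np.asarray(angle_deg, dtype=float)
    d = np.diff(angle)
    # indices where the slope changes sign, i.e. local maxima and minima
    extrema = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0] + 1
    extrema = np.concatenate(([0], extrema, [angle.size - 1]))
    swings = np.abs(np.diff(angle[extrema]))
    return int(np.sum(swings >= gap_deg)) / (angle.size / fs / 60.0)

# Example: two minutes of simulated steering sampled at 60 Hz
rng = np.random.default_rng(0)
t = np.arange(0, 120, 1 / 60)
angle = 3 * np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.standard_normal(t.size)
print(round(reversal_rate(angle, fs=60, gap_deg=2.5), 1))
```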
Cogné, Mélanie; Auriacombe, Sophie; Vasa, Louise; Tison, François; Klinger, Evelyne; Sauzéon, Hélène; Joseph, Pierre-Alain; N'Kaoua, Bernard
2018-05-01
To evaluate whether visual cues are helpful for virtual spatial navigation and memory in Alzheimer's disease (AD) and patients with mild cognitive impairment (MCI). 20 patients with AD, 18 patients with MCI and 20 age-matched healthy controls (HC) were included. Participants had to actively reproduce a path that included 5 intersections with one landmark at each intersection that they had seen previously during a learning phase. Three cueing conditions for navigation were offered: salient landmarks, directional arrows and a map. A path without additional visual stimuli served as control condition. Navigation time and number of trajectory mistakes were recorded. With the presence of directional arrows, no significant difference was found between groups concerning the number of trajectory mistakes and navigation time. The number of trajectory mistakes did not differ significantly between patients with AD and patients with MCI on the path with arrows, the path with salient landmarks and the path with a map. There were significant correlations between the number of trajectory mistakes under the arrow condition and executive tests, and between the number of trajectory mistakes under the salient landmark condition and memory tests. Visual cueing such as directional arrows and salient landmarks appears helpful for spatial navigation and memory tasks in patients with AD and patients with MCI. This study opens new research avenues for neuro-rehabilitation, such as the use of augmented reality in real-life settings to support the navigational capabilities of patients with MCI and patients with AD. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Woodgate, Joseph L; Buehlmann, Cornelia; Collett, Thomas S
2016-06-01
Bees and ants can control their direction of travel within a familiar landscape using the information available in the surrounding visual scene. To learn more about the visual cues that contribute to this directional control, we have examined how wood ants obtain direction from a single shape that is presented in an otherwise uniform panorama. Earlier experiments revealed that when an ant's goal is aligned with a point within a prominent shape, the ant is guided by a global property of the shape: it learns the relative areas of the shape that lie to its left and right when facing the goal and sets its path by keeping the proportions at the memorised value. This strategy cannot be applied when the direction of the goal lies outside the shape. To see whether a different global feature of the shape might guide ants under these conditions, we trained ants to follow a direction to a point outside a single shape and then analysed their direction of travel when they were presented with different shapes. The tests indicate that ants learn the retinal position of the centre of mass of the training shape when facing the goal and can then guide themselves by placing the centre of mass of training and test shapes in this learnt position. © 2016. Published by The Company of Biologists Ltd.
Crown-of-thorns starfish have true image forming vision.
Petie, Ronald; Garm, Anders; Hall, Michael R
2016-01-01
Photoreceptors have evolved numerous times giving organisms the ability to detect light and respond to specific visual stimuli. Studies into the visual abilities of the Asteroidea (Echinodermata) have recently shown that species within this class have a more developed visual sense than previously thought and it has been demonstrated that starfish use visual information for orientation within their habitat. Whereas image forming eyes have been suggested for starfish, direct experimental proof of true spatial vision has not yet been obtained. The behavioural response of the coral reef inhabiting crown-of-thorns starfish (Acanthaster planci) was tested in controlled aquarium experiments using an array of stimuli to examine their visual performance. We presented starfish with various black-and-white shapes against a mid-intensity grey background, designed such that the animals would need to possess true spatial vision to detect these shapes. Starfish responded to black-and-white rectangles, but no directional response was found to black-and-white circles, despite equal areas of black and white. Additionally, we confirmed that starfish were attracted to black circles on a white background when the visual angle was larger than 14°. When changing the grey tone of the largest circle from black to white, we found responses to contrasts of 0.5 and up. The starfish were attracted to the dark areas of the visual stimuli and were found to be both attracted and repelled by the visual targets. For crown-of-thorns starfish, visual cues are essential for close range orientation towards objects, such as coral boulders, in the wild. These visually guided behaviours can be replicated in aquarium conditions. Our observation that crown-of-thorns starfish respond to black-and-white shapes on a mid-intensity grey background is the first direct proof of true spatial vision in starfish and in the phylum Echinodermata.
Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M
2017-03-01
This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
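The SEEV component referred to above is commonly expressed as a weighted sum of salience, effort, expectancy and value for each instrument (area of interest), which is then normalized into a predicted share of visual dwell time. The sketch below uses made-up instrument names and coefficients purely to illustrate the form of the model; it is not the authors' implementation, which additionally drives attention with state-estimate uncertainty.

```python
# Illustrative SEEV-style attention allocation (hypothetical values).
# score = salience - effort + expectancy + value, normalized to a dwell share.
instruments = {
    #             salience, effort, expectancy, value
    "altitude":   (0.6, 0.2, 0.9, 1.0),
    "attitude":   (0.8, 0.1, 0.7, 0.8),
    "fuel":       (0.3, 0.4, 0.2, 0.5),
}

def seev_shares(aois, weights=(1.0, 1.0, 1.0, 1.0)):
    ws, we, wx, wv = weights
    scores = {name: max(ws * s - we * e + wx * x + wv * v, 0.0)
              for name, (s, e, x, v) in aois.items()}
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

for name, share in seev_shares(instruments).items():
    print(f"{name}: {share:.0%} of predicted dwell time")
```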
Chieffi, Sergio; Messina, Giovanni; Messina, Antonietta; Villano, Ines; Monda, Vincenzo; Ambra, Ferdinando Ivano; Garofalo, Elisabetta; Romano, Felice; Mollica, Maria Pina; Monda, Marcellino; Iavarone, Alessandro
2017-01-01
Previous studies suggested that the occipitoparietal stream orients attention toward the near/lower space and is involved in immediate reaching, whereas the occipitotemporal stream orients attention toward the far/upper space and is involved in delayed reaching. In the present study, we investigated the role of the occipitotemporal stream in attention orienting and delayed reaching in a patient (GP) with bilateral damage to the occipitoparietal areas and optic ataxia. GP and healthy controls took part in three experiments. In experiment 1, the participants bisected lines oriented along radial, vertical, and horizontal axes. GP bisected radial lines farther away, and vertical lines higher, than the controls, consistent with an attentional bias toward the far/upper space and near/lower space neglect. Experiment 2 consisted of two tasks: (1) an immediate reaching task, in which GP reached target locations under visual control and (2) a delayed visual reaching task, in which GP and controls were asked to reach remembered target locations that had been visually presented. We measured constant and variable distance and direction errors. In the immediate reaching task, GP accurately reached target locations. In the delayed reaching task, GP overshot remembered target locations, whereas the controls undershot them. Furthermore, variable errors were greater in GP than in the controls. In experiment 3, GP and controls performed a delayed proprioceptive reaching task. Constant reaching errors did not differ between GP and the controls. However, variable direction errors were greater in GP than in the controls. We suggest that the occipitoparietal damage, and the relatively intact occipitotemporal region, produced in GP an attentional orienting bias toward the far/upper space (experiment 1). In turn, the attentional bias selectively shifted remembered visual (experiment 2), but not proprioceptive (experiment 3), target locations toward the far space. As a whole, these findings further support the hypothesis of an involvement of the occipitotemporal stream in delayed reaching. Furthermore, the observation that in both delayed reaching tasks the variable errors were greater in GP than in the controls suggested that optic ataxia involves not only a visuo-motor but also a proprioceptivo-motor integration deficit. PMID:28620345
Cholinergic and serotonergic modulation of visual information processing in monkey V1.
Shimegi, Satoshi; Kimura, Akihiro; Sato, Akinori; Aoyama, Chisa; Mizuyama, Ryo; Tsunoda, Keisuke; Ueda, Fuyuki; Araki, Sera; Goya, Ryoma; Sato, Hiromichi
2016-09-01
The brain dynamically changes its input-output relationship depending on the behavioral state and context in order to optimize information processing. At the molecular level, cholinergic/monoaminergic transmitters have been extensively studied as key players for the state/context-dependent modulation of brain function. In this paper, we review how cortical visual information processing in the primary visual cortex (V1) of macaque monkey, which has a highly differentiated laminar structure, is optimized by serotonergic and cholinergic systems by examining anatomical and in vivo electrophysiological aspects to highlight their similarities and distinctions. We show that these two systems have a similar layer bias for axonal fiber innervation and receptor distribution. The common target sites are the geniculorecipient layers and geniculocortical fibers, where the appropriate gain control is established through a geniculocortical signal transformation. Both systems exert activity-dependent response gain control across layers, but in a manner consistent with the receptor subtype. The serotonergic receptors 5-HT1B and 5-HT2A modulate the contrast-response curve in a manner consistent with bi-directional response gain control, where the sign (facilitation/suppression) is switched according to the firing rate and is complementary between the two receptor types. On the other hand, cholinergic nicotinic/muscarinic receptors exert mono-directional response gain control without a sign reversal. Nicotinic receptors increase the response magnitude in a multiplicative manner, while muscarinic receptors exert both suppressive and facilitative effects. We discuss the implications of the two neuromodulator systems in hierarchical visual signal processing in V1 on the basis of the developed laminar structure. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
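Response gain versus contrast gain, as contrasted above, is conventionally described with a Naka-Rushton contrast-response function; this is a standard textbook formulation rather than an equation quoted from the paper:

```latex
R(c) \;=\; R_{\max}\,\frac{c^{\,n}}{c^{\,n} + c_{50}^{\,n}} + R_{0}
```

Under this description, multiplicative (response) gain control scales R_max, contrast gain control shifts the semi-saturation contrast c_50, and the bi-directional serotonergic modulation described above corresponds to the sign of the change in response reversing with the cell's firing rate.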
Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement
Hu, Bo; Knill, David C.
2012-01-01
Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer’s retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and lack of response in monocular conditions. PMID:21724567
Direct cortical control of 3D neuroprosthetic devices.
Taylor, Dawn M; Tillery, Stephen I Helms; Schwartz, Andrew B
2002-06-07
Three-dimensional (3D) movement of neuroprosthetic devices can be controlled by the activity of cortical neurons when appropriate algorithms are used to decode intended movement in real time. Previous studies assumed that neurons maintain fixed tuning properties, and the studies used subjects who were unaware of the movements predicted by their recorded units. In this study, subjects had real-time visual feedback of their brain-controlled trajectories. Cell tuning properties changed when used for brain-controlled movements. By using control algorithms that track these changes, subjects made long sequences of 3D movements using far fewer cortical units than expected. Daily practice improved movement accuracy and the directional tuning of these units.
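One classic decoding scheme in this literature is the population vector, in which each unit votes along its preferred direction with a weight given by its normalized firing rate; the adaptive algorithms used in the study refit tuning parameters as they change, but the basic idea can be sketched as below (synthetic data and hypothetical function names, not the study's decoder).

```python
import numpy as np

def population_vector(rates, baselines, moduls, preferred_dirs):
    """Decode a 3D movement direction from unit firing rates: each unit's
    preferred direction is weighted by its normalized activity. Illustrative
    sketch only, not the adaptive decoder described in the study."""
    w = (rates - baselines) / moduls                 # normalized activity
    pv = (w[:, None] * preferred_dirs).sum(axis=0)   # weighted vector sum
    return pv / np.linalg.norm(pv)

rng = np.random.default_rng(0)
n_units = 40
pd = rng.normal(size=(n_units, 3))
pd /= np.linalg.norm(pd, axis=1, keepdims=True)      # unit preferred directions
target = np.array([0.0, 0.0, 1.0])                   # intended movement direction
rates = 10 + 8 * (pd @ target) + rng.normal(0, 1, n_units)  # cosine tuning + noise
print(population_vector(rates, baselines=10.0, moduls=8.0, preferred_dirs=pd))
```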
Neural correlates of semantic associations in patients with schizophrenia.
Sass, Katharina; Heim, Stefan; Sachs, Olga; Straube, Benjamin; Schneider, Frank; Habel, Ute; Kircher, Tilo
2014-03-01
Patients with schizophrenia have semantic processing disturbances leading to expressive language deficits (formal thought disorder). The underlying pathology has been related to alterations in the semantic network and its neural correlates. Moreover, crossmodal processing, an important aspect of communication, is impaired in schizophrenia. Here we investigated specific processing abnormalities in patients with schizophrenia with regard to modality and semantic distance in a semantic priming paradigm. Fourteen patients with schizophrenia and fourteen demographically matched controls made visual lexical decisions on successively presented word-pairs (SOA = 350 ms) with direct or indirect relations, unrelated word-pairs, and pseudoword-target stimuli during fMRI measurement. Stimuli were presented in a unimodal (visual) or crossmodal (auditory-visual) fashion. On the neural level, the effect of semantic relation indicated differences (patients > controls) within the right angular gyrus and precuneus. The effect of modality revealed differences (controls > patients) within the left superior frontal, middle temporal, inferior occipital, right angular gyri, and anterior cingulate cortex. Semantic distance (direct vs. indirect) induced distinct activations within the left middle temporal, fusiform gyrus, right precuneus, and thalamus with patients showing fewer differences between direct and indirect word-pairs. The results highlight aberrant priming-related brain responses in patients with schizophrenia. Enhanced activation for patients possibly reflects deficits in semantic processes that might be caused by a delayed and enhanced spread of activation within the semantic network. Modality-specific decreases of activation in patients might be related to impaired perceptual integration. Those deficits could induce and increase the prominent symptoms of schizophrenia like impaired speech processing.
Control-display mapping in brain-computer interfaces.
Thurlings, Marieke E; van Erp, Jan B F; Brouwer, Anne-Marie; Blankertz, Benjamin; Werkhoven, Peter
2012-01-01
Event-related potential (ERP) based brain-computer interfaces (BCIs) employ differences in brain responses to attended and ignored stimuli. When using a tactile ERP-BCI for navigation, mapping is required between navigation directions on a visual display and unambiguously corresponding tactile stimuli (tactors) from a tactile control device: control-display mapping (CDM). We investigated the effect of congruent (both display and control horizontal or both vertical) and incongruent (vertical display, horizontal control) CDMs on task performance, the ERP and potential BCI performance. Ten participants attended to a target (determined via CDM), in a stream of sequentially vibrating tactors. We show that congruent CDM yields best task performance, enhanced the P300 and results in increased estimated BCI performance. This suggests a reduced availability of attentional resources when operating an ERP-BCI with incongruent CDM. Additionally, we found an enhanced N2 for incongruent CDM, which indicates a conflict between visual display and tactile control orientations. Incongruency in control-display mapping reduces task performance. In this study, brain responses, task and system performance are related to (in)congruent mapping of command options and the corresponding stimuli in a brain-computer interface (BCI). Directional congruency reduces task errors, increases available attentional resources, improves BCI performance and thus facilitates human-computer interaction.
Prefrontal cortex modulates posterior alpha oscillations during top-down guided visual perception
Helfrich, Randolph F.; Huang, Melody; Wilson, Guy; Knight, Robert T.
2017-01-01
Conscious visual perception is proposed to arise from the selective synchronization of functionally specialized but widely distributed cortical areas. It has been suggested that different frequency bands index distinct canonical computations. Here, we probed visual perception on a fine-grained temporal scale to study the oscillatory dynamics supporting prefrontal-dependent sensory processing. We tested whether a predictive context that was embedded in a rapid visual stream modulated the perception of a subsequent near-threshold target. The rapid stream was presented either rhythmically at 10 Hz, to entrain parietooccipital alpha oscillations, or arrhythmically. We identified a 2- to 4-Hz delta signature that modulated posterior alpha activity and behavior during predictive trials. Importantly, delta-mediated top-down control diminished the behavioral effects of bottom-up alpha entrainment. Simultaneous source-reconstructed EEG and cross-frequency directionality analyses revealed that this delta activity originated from prefrontal areas and modulated posterior alpha power. Taken together, this study presents converging behavioral and electrophysiological evidence for frontal delta-mediated top-down control of posterior alpha activity, selectively facilitating visual perception. PMID:28808023
Liu, Han-Hsuan; Cline, Hollis T
2016-07-06
Fragile X mental retardation protein (FMRP) is thought to regulate neuronal plasticity by limiting dendritic protein synthesis, but direct demonstration of a requirement for FMRP control of local protein synthesis during behavioral plasticity is lacking. Here we tested whether FMRP knockdown in Xenopus optic tectum affects local protein synthesis in vivo and whether FMRP knockdown affects protein synthesis-dependent visual avoidance behavioral plasticity. We tagged newly synthesized proteins by incorporation of the noncanonical amino acid azidohomoalanine and visualized them with fluorescent noncanonical amino acid tagging (FUNCAT). Visual conditioning and FMRP knockdown produce similar increases in FUNCAT in tectal neuropil. Induction of visual conditioning-dependent behavioral plasticity occurs normally in FMRP knockdown animals, but plasticity degrades over 24 h. These results indicate that FMRP affects visual conditioning-induced local protein synthesis and is required to maintain the visual conditioning-induced behavioral plasticity. Fragile X syndrome (FXS) is the most common form of inherited intellectual disability. Exaggerated dendritic protein synthesis resulting from loss of fragile X mental retardation protein (FMRP) is thought to underlie cognitive deficits in FXS, but no direct evidence has demonstrated that FMRP-regulated dendritic protein synthesis affects behavioral plasticity in intact animals. Xenopus tadpoles exhibit a visual avoidance behavior that improves with visual conditioning in a protein synthesis-dependent manner. We showed that FMRP knockdown and visual conditioning dramatically increase protein synthesis in neuronal processes. Furthermore, induction of visual conditioning-dependent behavioral plasticity occurs normally after FMRP knockdown, but performance rapidly deteriorated in the absence of FMRP. These studies show that FMRP negatively regulates local protein synthesis and is required to maintain visual conditioning-induced behavioral plasticity in vivo. Copyright © 2016 the authors 0270-6474/16/367325-15$15.00/0.
Yang, Jinfang; Wang, Qian; He, Fenfen; Ding, Yanxia; Sun, Qingyan; Hua, Tianmiao; Xi, Minmin
2016-01-01
Previous studies have reported inconsistent effects of dietary restriction (DR) on cortical inhibition. To clarify this issue, we examined the response properties of neurons in the primary visual cortex (V1) of DR and control groups of cats using in vivo extracellular single-unit recording techniques, and assessed the synthesis of the inhibitory neurotransmitter GABA in the V1 of cats from both groups using immunohistochemical and Western blot techniques. Our results showed that the response of V1 neurons to visual stimuli was significantly modified by DR, as indicated by an enhanced selectivity for stimulus orientations and motion directions, decreased visually-evoked response, lowered spontaneous activity and increased signal-to-noise ratio in DR cats relative to control cats. Further, it was shown that, accompanying these changes in neuronal responsiveness, GABA immunoreactivity and the expression of a key GABA-synthesizing enzyme GAD67 in the V1 were significantly increased by DR. These results demonstrate that DR may retard brain aging by increasing the intracortical inhibition effect and improving the function of visual cortical neurons in visual information processing. This DR-induced elevation of cortical inhibition may favor the brain in modulating energy expenditure based on food availability.
Altered visual perception in long-term ecstasy (MDMA) users.
White, Claire; Brown, John; Edwards, Mark
2013-09-01
The present study investigated the long-term consequences of ecstasy use on visual processes thought to reflect serotonergic functions in the occipital lobe. Evidence indicates that the main psychoactive ingredient in ecstasy (methylendioxymethamphetamine) causes long-term changes to the serotonin system in human users. Previous research has found that amphetamine-abstinent ecstasy users have disrupted visual processing in the occipital lobe which relies on serotonin, with researchers concluding that ecstasy broadens orientation tuning bandwidths. However, other processes may have accounted for these results. The aim of the present research was to determine if amphetamine-abstinent ecstasy users have changes in occipital lobe functioning, as revealed by two studies: a masking study that directly measured the width of orientation tuning bandwidths and a contour integration task that measured the strength of long-range connections in the visual cortex of drug users compared to controls. Participants were compared on the width of orientation tuning bandwidths (26 controls, 12 ecstasy users, 10 ecstasy + amphetamine users) and the strength of long-range connections (38 controls, 15 ecstasy user, 12 ecstasy + amphetamine users) in the occipital lobe. Amphetamine-abstinent ecstasy users had significantly broader orientation tuning bandwidths than controls and significantly lower contour detection thresholds (CDTs), indicating worse performance on the task, than both controls and ecstasy + amphetamine users. These results extend on previous research, which is consistent with the proposal that ecstasy may damage the serotonin system, resulting in behavioral changes on tests of visual perception processes which are thought to reflect serotonergic functions in the occipital lobe.
The priming function of in-car audio instruction.
Keyes, Helen; Whitmore, Antony; Naneva, Stanislava; McDermott, Daragh
2018-05-01
Studies to date have focused on the priming power of visual road signs, but not the priming potential of audio road scene instruction. Here, the relative priming power of visual, audio, and multisensory road scene instructions was assessed. In a lab-based study, participants responded to target road scene turns following visual, audio, or multisensory road turn primes which were congruent or incongruent to the primes in direction, or control primes. All types of instruction (visual, audio, and multisensory) were successful in priming responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not affect performance negatively in the manner of incongruent visual or multisensory primes. Results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road instruction primes can be timed to co-occur.
Visual Learning Alters the Spontaneous Activity of the Resting Human Brain: An fNIRS Study
Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan
2014-01-01
Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to be changed along the developmental course and after learning. Here, we investigated whether and how visual learning modified the resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in the top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that the resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and the involvement of frontal/central cortex during visual perceptual learning. PMID:25243168
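Resting-state functional connectivity of the kind reported above is most simply quantified as the pairwise Pearson correlation between the HbO time series of each channel pair (the directed interactions mentioned in the abstract require additional methods such as Granger causality). The sketch below uses synthetic data and is not the authors' pipeline.

```python
import numpy as np

def rsfc_matrix(hbo):
    """Pairwise Pearson correlations between fNIRS channels.
    hbo: array of shape (n_channels, n_timepoints); in practice the signals
    would first be detrended and band-pass filtered (e.g. ~0.01-0.08 Hz)."""
    return np.corrcoef(hbo)

rng = np.random.default_rng(1)
shared = rng.standard_normal(3000)                         # a common slow signal
hbo = 0.6 * shared + 0.8 * rng.standard_normal((8, 3000))  # 8 synthetic channels
fc = rsfc_matrix(hbo)
print(fc.shape, round(float(fc[0, 1]), 2))                 # (8, 8) plus one off-diagonal r
```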
Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung
2017-01-01
Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible underlying mechanisms of affective evaluation mechanisms in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007
Shape of magnifiers affects controllability in children with visual impairment.
Liebrand-Schurink, Joyce; Boonstra, F Nienke; van Rens, Ger H M B; Cillessen, Antonius H N; Meulenbroek, Ruud G J; Cox, Ralf F A
2016-12-01
This study aimed to examine the controllability of cylinder-shaped and dome-shaped magnifiers in young children with visual impairment. It investigated goal-directed arm movements during low-vision aid use (stand and dome magnifier-like objects) in a group of young children with visual impairment (n = 56) compared to a group of children with normal sight (n = 66). Children with visual impairment and children with normal sight aged 4-8 years executed two types of movements (cyclic and discrete) in two orientations (vertical or horizontal) over two distances (10 cm and 20 cm) with two objects resembling the size and shape of regularly prescribed stand and dome magnifiers. The visually impaired children performed slower movements than the normally sighted children. In both groups, the accuracy and speed of the reciprocal aiming movements improved significantly with age. Surprisingly, in both groups, the performance with the dome-shaped object was significantly faster (in the 10 cm condition and 20 cm condition with discrete movements) and more accurate (in the 20 cm condition) than with the stand-shaped object. From a controllability perspective, this study suggests that it is better to prescribe dome-shaped than cylinder-shaped magnifiers to young children with visual impairment. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Henderson, John M.; Nuthmann, Antje; Luke, Steven G.
2013-01-01
Recent research on eye movements during scene viewing has primarily focused on where the eyes fixate. But eye fixations also differ in their durations. Here we investigated whether fixation durations in scene viewing are under the direct and immediate control of the current visual input. Subjects freely viewed photographs of scenes in preparation…
3D elastic control for mobile devices.
Hachet, Martin; Pouderoux, Joachim; Guitton, Pascal
2008-01-01
To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications.
The Control of Social Attention from 1 to 4 Months
ERIC Educational Resources Information Center
Perra, Oliver; Gattis, Merideth
2010-01-01
The control of social attention during early infancy was investigated in two studies. In both studies, an adult turned towards one of two targets within the infant's immediate visual field. We tested: (a) whether infants were able to follow the direction of the adult's head turn; and (b) whether following a head turn was accompanied by further…
Knape, L; Hambraeus, A; Lytsy, B
2015-10-01
The adenosine triphosphate (ATP) method is widely accepted as a quality control method to complement visual assessment, in the specifications of requirements, when purchasing cleaning contractors in Swedish hospitals. To examine whether the amount of biological load, as measured by ATP on frequently touched near-patient surfaces, had been reduced after an intervention; to evaluate the correlation between visual assessment and ATP levels on the same surfaces; to identify aspects of the performance of the ATP method as a tool in evaluating hospital cleanliness. A prospective intervention study in three phases was carried out in a medical ward and an intensive care unit (ICU) at a regional hospital in mid-Sweden between 2012 and 2013. Existing cleaning procedures were defined and baseline tests were sampled by visual inspection and ATP measurements of ten frequently touched surfaces in patients' rooms before and after intervention. The intervention consisted of educating nursing staff about the importance of hospital cleaning and direct feedback of ATP levels before and after cleaning. The mixed model showed a significant decrease in ATP levels after the intervention (P < 0.001). Relative light unit values were lower in the ICU. Cleanliness as judged by visual assessments improved. In the logistic regression analysis, there was a significant association between visual assessments and ATP levels. Direct feedback of ATP levels, together with education and introduction of written cleaning protocols, were effective tools to improve cleanliness. Visual assessment correlated with the level of ATP but the correlation was not absolute. The ATP method could serve as an educational tool for staff, but is not enough to assess hospital cleanliness in general as only a limited part of a large area is covered. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Dusek, Wolfgang; Pierscionek, Barbara K; McClelland, Julie F
2010-05-25
To describe and compare visual function measures of two groups of school age children (6-14 years of age) attending a specialist eyecare practice in Austria; one group referred to the practice from educational assessment centres diagnosed with reading and writing difficulties and the other, a clinical age-matched control group. Retrospective clinical data from one group of subjects with reading difficulties (n = 825) and a clinical control group of subjects (n = 328) were examined. Statistical analysis was performed to determine whether any differences existed between visual function measures from each group (refractive error, visual acuity, binocular status, accommodative function and reading speed and accuracy). Statistical analysis using one-way ANOVA demonstrated no differences between the two groups in terms of refractive error and the size or direction of heterophoria at distance (p > 0.05). Using predominantly one-way ANOVA and chi-square analyses, those subjects in the referred group were statistically more likely to have poorer distance visual acuity, an exophoric deviation at near, a lower amplitude of accommodation, reduced accommodative facility, reduced vergence facility, a reduced near point of convergence, a lower AC/A ratio and a slower reading speed than those in the clinical control group (p < 0.05). This study highlights the high proportions of visual function anomalies in a group of children with reading difficulties in an Austrian population. It confirms the importance of a full assessment of binocular visual status in order to detect and remedy these deficits and prevent the visual problems from continuing to impact upon educational development.
A computational model of spatial visualization capacity.
Lyon, Don R; Gunzelmann, Glenn; Gluck, Kevin A
2008-09-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to perform it. In this model, developed within the Adaptive Control of Thought-Rational (ACT-R) architecture, visualization capacity is limited by three mechanisms. Two of these (associative interference and decay) are longstanding characteristics of ACT-R's declarative memory. A third (spatial interference) is a new mechanism motivated by spatial proximity effects in our data. We tested the model in two experiments, one with parameter-value fitting, and a replication without further fitting. Correspondence between model and data was close in both experiments, suggesting that the model may be useful for understanding why visualizing new, complex spatial material is so difficult.
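Two of the limiting mechanisms named above, decay and associative interference, follow ACT-R's standard declarative memory equations; as a rough illustration of the decay component alone (not the authors' full model), a chunk's base-level activation from presentations that occurred t_1 ... t_n seconds ago is ln(Σ t_j^-d), with d conventionally near 0.5.

```python
import math

def base_level_activation(ages_s, d=0.5):
    """ACT-R base-level learning: activation contributed by presentations
    that occurred ages_s seconds ago, each decaying as t ** -d.
    Simplified illustration of the decay mechanism only."""
    return math.log(sum(t ** (-d) for t in ages_s))

# A path segment rehearsed 5 s and 60 s ago retains more activation
# than one encountered once, 60 s ago.
print(base_level_activation([5, 60]), base_level_activation([60]))
```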
NASA Technical Reports Server (NTRS)
Bathel, Brett F.; Danehy, Paul M.; Johansen, Craig T.; Ashcraft, Scott W.; Novak, Luke A.
2013-01-01
Numerical predictions of the Mars Science Laboratory reaction control system jets interacting with a Mach 10 hypersonic flow are compared to experimental nitric oxide planar laser-induced fluorescence data. The steady Reynolds Averaged Navier Stokes equations using the Baldwin-Barth one-equation turbulence model were solved using the OVERFLOW code. The experimental fluorescence data used for comparison consists of qualitative two-dimensional visualization images, qualitative reconstructed three-dimensional flow structures, and quantitative two-dimensional distributions of streamwise velocity. Through modeling of the fluorescence signal equation, computational flow images were produced and directly compared to the qualitative fluorescence data.
Visual motherese? Signal-to-noise ratios in toddler-directed television
Wass, Sam V; Smith, Tim J
2015-01-01
Younger brains are noisier information processing systems; this means that information for younger individuals has to allow clearer differentiation between those aspects that are required for the processing task in hand (the ‘signal’) and those that are not (the ‘noise’). We compared toddler-directed and adult-directed TV programmes (TotTV/ATV). We examined how low-level visual features (that previous research has suggested influence gaze allocation) relate to semantic information, namely the location of the character speaking in each frame. We show that this relationship differs between TotTV and ATV. First, we conducted Receiver Operator Characteristics analyses and found that feature congestion predicted speaking character location in TotTV but not ATV. Second, we used multiple analytical strategies to show that luminance differentials (flicker) predict face location more strongly in TotTV than ATV. Our results suggest that TotTV designers have intuited techniques for controlling toddler attention using low-level visual cues. The implications of these findings for structuring childhood learning experiences away from a screen are discussed. PMID:24702791
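The Receiver Operator Characteristics analysis described above amounts to asking how well a low-level feature map separates frame regions containing the speaking character from background regions; a minimal sketch with synthetic scores (a standard scikit-learn call, not the authors' code) is shown below.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# 1 = region contains the speaking character, 0 = background region
labels = np.concatenate([np.ones(200), np.zeros(800)])
# hypothetical low-level feature value (e.g. local flicker energy) per region,
# made slightly higher on character regions in this toy example
feature = np.concatenate([rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 800)])
print("AUC:", round(roc_auc_score(labels, feature), 2))  # ~0.76 here; 0.5 would be chance
```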
Design, Control and in Situ Visualization of Gas Nitriding Processes
Ratajski, Jerzy; Olik, Roman; Suszko, Tomasz; Dobrodziej, Jerzy; Michalski, Jerzy
2010-01-01
The article presents a complex system for the design, in situ visualization and control of a commonly used surface treatment process: gas nitriding. The computer-aided design concept uses analytical mathematical models and artificial intelligence methods. As a result, it becomes possible to perform poly-optimization and poly-parametric simulations of the course of the process, combined with a visualization of how the values of the process parameters change over time, and to predict the properties of nitrided layers. For in situ visualization of the growth of the nitrided layer, computer procedures were developed that correlate the direct and differential voltage-time curves of the process result sensor (a magnetic sensor) with the corresponding layer growth stage. These procedures make it possible to combine, during the process, the registered voltage-time curves with the models of the process. PMID:22315536
Giuliano, Ryan J.; Karns, Christina M.; Neville, Helen J.; Hillyard, Steven A.
2015-01-01
A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual’s capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70–90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals. PMID:25000526
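Visual WM capacity in change detection tasks of this kind is commonly estimated with Cowan's K (a standard estimator in this literature; the abstract does not state which formula was used):

```latex
K \;=\; N \times (\text{hit rate} - \text{false alarm rate})
```

where N is the memory set size; an individual's K can then be correlated with the size of their attention-related ERP modulation, as in the P1 effect reported above.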
Kinesthetic motor imagery modulates body sway.
Rodrigues, E C; Lemos, T; Gouvea, B; Volchan, E; Imbiriba, L A; Vargas, C D
2010-08-25
The aim of this study was to investigate the effect of imagining an action implicating the body axis in the kinesthetic and visual motor imagery modalities upon the balance control system. Body sway analysis (measurement of center of pressure, CoP) together with electromyography (EMG) recording and verbal evaluation of imagery abilities were obtained from subjects during four tasks, performed in the upright position: to execute bilateral plantar flexions; to imagine themselves executing bilateral plantar flexions (kinesthetic modality); to imagine someone else executing the same movement (visual modality), and to imagine themselves singing a song (as a control imagery task). Body sway analysis revealed that kinesthetic imagery leads to a general increase in CoP oscillation, as reflected by an enhanced area of displacement. This effect was also verified for the CoP standard deviation in the medial-lateral direction. An increase in the trembling displacement (equivalent to center of pressure minus center of gravity) restricted to the anterior-posterior direction was also observed to occur during kinesthetic imagery. The visual imagery task did not differ from the control (sing) task for any of the analyzed parameters. No difference in the subjects' ability to perform the imagery tasks was found. No modulation of EMG data was observed across imagery tasks, indicating that there was no actual execution during motor imagination. These results suggest that motor imagery performed in the kinesthetic modality evokes motor representations involved in balance control. Copyright © 2010 IBRO. Published by Elsevier Ltd. All rights reserved.
Effects of cholinergic deafferentation of the rhinal cortex on visual recognition memory in monkeys.
Turchi, Janita; Saunders, Richard C; Mishkin, Mortimer
2005-02-08
Excitotoxic lesion studies have confirmed that the rhinal cortex is essential for visual recognition ability in monkeys. To evaluate the mnemonic role of cholinergic inputs to this cortical region, we compared the visual recognition performance of monkeys given rhinal cortex infusions of a selective cholinergic immunotoxin, ME20.4-SAP, with the performance of monkeys given control infusions into this same tissue. The immunotoxin, which leads to selective cholinergic deafferentation of the infused cortex, yielded recognition deficits of the same magnitude as those produced by excitotoxic lesions of this region, providing the most direct demonstration to date that cholinergic activation of the rhinal cortex is essential for storing the representations of new visual stimuli and thereby enabling their later recognition.
The internal representation of head orientation differs for conscious perception and balance control
Dalton, Brian H.; Rasman, Brandon G.; Inglis, J. Timothy
2017-01-01
Key points: We tested perceived head-on-feet orientation and the direction of vestibular-evoked balance responses in passively and actively held head-turned postures. The direction of vestibular-evoked balance responses was not aligned with perceived head-on-feet orientation while maintaining prolonged passively held head-turned postures. Furthermore, static visual cues of head-on-feet orientation did not update the estimate of head posture for the balance controller. A prolonged actively held head-turned posture did not elicit a rotation in the direction of the vestibular-evoked balance response despite a significant rotation in perceived angular head posture. It is proposed that conscious perception of head posture and the transformation of vestibular signals for standing balance relying on this head posture are not dependent on the same internal representation. Rather, the balance system may operate under its own sensorimotor principles, which are partly independent from perception. Abstract: Vestibular signals used for balance control must be integrated with other sensorimotor cues to allow transformation of descending signals according to an internal representation of body configuration. We explored two alternative models of sensorimotor integration that propose (1) a single internal representation of head-on-feet orientation is responsible for perceived postural orientation and standing balance or (2) conscious perception and balance control are driven by separate internal representations. During three experiments, participants stood quietly while passively or actively maintaining a prolonged head-turned posture (>10 min). Throughout the trials, participants intermittently reported their perceived head angular position, and subsequently electrical vestibular stimuli were delivered to elicit whole-body balance responses. Visual recalibration of head-on-feet posture was used to determine whether static visual cues are used to update the internal representation of body configuration for perceived orientation and standing balance. All three experiments involved situations in which the vestibular-evoked balance response was not orthogonal to perceived head-on-feet orientation, regardless of the visual information provided. For prolonged head-turned postures, balance responses consistent with actual head-on-feet posture occurred only during the active condition. Our results indicate that conscious perception of head-on-feet posture and vestibular control of balance do not rely on the same internal representation, but instead treat sensorimotor cues in parallel and may arrive at different conclusions regarding head-on-feet posture. The balance system appears to bypass static visual cues of postural orientation and mainly use other sensorimotor signals of head-on-feet position to transform vestibular signals of head motion, a mechanism appropriate for most daily activities. PMID:28035656
Aging effect on step adjustments and stability control in visually perturbed gait initiation.
Sun, Ruopeng; Cui, Chuyi; Shea, John B
2017-10-01
Gait adaptability is essential for fall avoidance during locomotion. It requires the ability to rapidly inhibit the original motor plan and to select and execute alternative motor commands, while also maintaining the stability of locomotion. This study investigated the aging effect on gait adaptability and dynamic stability control during a visually perturbed gait initiation task. A novel approach was used in which the anticipatory postural adjustment (APA) during gait initiation triggered the unpredictable relocation of a foot-size stepping target. Participants (10 young adults and 10 older adults) completed visually perturbed gait initiation in three adjustment timing conditions (early, intermediate, late; all extracted from the stereotypical APA pattern) and two adjustment direction conditions (medial, lateral). Stepping accuracy, foot rotation at landing, and Margin of Dynamic Stability (MDS) were analyzed and compared across test conditions and groups using a linear mixed model. Stepping accuracy decreased as a function of adjustment timing as well as stepping direction, with older adults exhibiting a significantly greater undershoot in foot placement for late lateral stepping. Late adjustment also elicited a reaching-like movement (i.e., foot rotation prior to landing in order to step on the target), regardless of stepping direction. MDS measures in the medial-lateral and anterior-posterior directions revealed that both young and older adults exhibited reduced stability in the adjustment step and subsequent steps. However, young adults returned to stable gait faster than older adults. These findings could inform future work on screening deficits in gait adaptability and preventing falls. Copyright © 2017 Elsevier B.V. All rights reserved.
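The accuracy comparison described above (adjustment timing by direction by age group, with repeated measures per participant) is the kind of analysis a linear mixed model handles directly. The sketch below is illustrative only: the file name, column names, and model terms are assumptions, since the abstract does not report them.

    # Minimal sketch of a linear mixed model for stepping accuracy (hypothetical data).
    import pandas as pd
    import statsmodels.formula.api as smf

    # Assumed columns: subject, timing (early/intermediate/late),
    # direction (medial/lateral), group (young/older), error_mm.
    df = pd.read_csv("stepping_accuracy.csv")

    model = smf.mixedlm(
        "error_mm ~ C(timing) * C(direction) * C(group)",  # fixed effects and interactions
        data=df,
        groups=df["subject"],                               # random intercept per participant
    )
    result = model.fit()
    print(result.summary())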
Dual processing of visual rotation for bipedal stance control.
Day, Brian L; Muller, Timothy; Offord, Joanna; Di Giulio, Irene
2016-10-01
When standing, the gain of the body-movement response to a sinusoidally moving visual scene has been shown to get smaller with faster stimuli, possibly through changes in the apportioning of visual flow to self-motion or environment motion. We investigated whether visual-flow speed similarly influences the postural response to a discrete, unidirectional rotation of the visual scene in the frontal plane. Contrary to expectation, the evoked postural response consisted of two sequential components with opposite relationships to visual motion speed. With faster visual rotation the early component became smaller, not through a change in gain but by changes in its temporal structure, while the later component grew larger. We propose that the early component arises from the balance control system minimising apparent self-motion, while the later component stems from the postural system realigning the body with gravity. The source of visual motion is inherently ambiguous such that movement of objects in the environment can evoke self-motion illusions and postural adjustments. Theoretically, the brain can mitigate this problem by combining visual signals with other types of information. A Bayesian model that achieves this was previously proposed and predicts a decreasing gain of postural response with increasing visual motion speed. Here we test this prediction for discrete, unidirectional, full-field visual rotations in the frontal plane of standing subjects. The speed (0.75-48 deg s⁻¹) and direction of visual rotation were pseudo-randomly varied, and mediolateral responses were measured from displacements of the trunk and horizontal ground reaction forces. The behaviour evoked by this visual rotation was more complex than has hitherto been reported, consisting broadly of two consecutive components with respective latencies of ∼190 ms and >0.7 s. Both components were sensitive to visual rotation speed, but with diametrically opposite relationships. Thus, the early component decreased with faster visual rotation, while the later component increased. Furthermore, the decrease in size of the early component was not achieved by a simple attenuation of gain, but by a change in its temporal structure. We conclude that the two components represent expressions of different motor functions, both pertinent to the control of bipedal stance. We propose that the early response stems from the balance control system attempting to minimise unintended body motion, while the later response arises from the postural control system attempting to align the body with gravity. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.
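A toy version of the Bayesian prediction cited above (response gain falling as visual rotation speed rises) can be written as a Gaussian cue combination in which a prior for little or no self-motion is combined with a visual measurement whose noise grows with speed. This is only a schematic of the qualitative prediction, not the published model, and every parameter value is invented.

    import numpy as np

    prior_sd = 2.0                                              # deg/s, assumed prior spread on self-rotation
    speeds = np.array([0.75, 1.5, 3.0, 6.0, 12.0, 24.0, 48.0])  # visual rotation speeds, deg/s
    visual_sd = 0.5 + 0.3 * speeds                              # assumed speed-dependent visual noise

    # Posterior mean of self-motion given the visual measurement (Gaussian product):
    # the weight on the visual cue shrinks as its variance grows, so the implied
    # response gain (estimate / stimulus speed) decreases with speed.
    weight = prior_sd**2 / (prior_sd**2 + visual_sd**2)
    gain = weight                                               # estimate = weight * speed, so gain = weight
    for v, g in zip(speeds, gain):
        print(f"visual speed {v:5.2f} deg/s -> predicted gain {g:.2f}")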
2010-01-01
Background European robins, Erithacus rubecula, show two types of directional responses to the magnetic field: (1) compass orientation that is based on radical pair processes and lateralized in favor of the right eye and (2) so-called 'fixed direction' responses that originate in the magnetite-based receptors in the upper beak. Both responses are light-dependent. Lateralization of the 'fixed direction' responses would suggest an interaction between the two magnetoreception systems. Results Robins were tested with either the right or the left eye covered or with both eyes uncovered for their orientation under different light conditions. With 502 nm turquoise light, the birds showed normal compass orientation, whereas they displayed an easterly 'fixed direction' response under a combination of 502 nm turquoise with 590 nm yellow light. Monocularly right-eyed birds with their left eye covered were oriented just as they were binocularly as controls: under turquoise in their northerly migratory direction, under turquoise-and-yellow towards east. The response of monocularly left-eyed birds differed: under turquoise light, they were disoriented, reflecting a lateralization of the magnetic compass system in favor of the right eye, whereas they continued to head eastward under turquoise-and-yellow light. Conclusion 'Fixed direction' responses are not lateralized. Hence the interactions between the magnetite-receptors in the beak and the visual system do not seem to involve the magnetoreception system based on radical pair processes, but rather other, non-lateralized components of the visual system. PMID:20707905
Wästlund, Erik; Shams, Poja; Otterbring, Tobias
2018-01-01
In visual marketing, the truism that "unseen is unsold" means that products that are not noticed will not be sold. This truism rests on the idea that the consumer choice process is heavily influenced by visual search. However, given that the majority of available products are not seen by consumers, this article examines the role of peripheral vision in guiding attention during the consumer choice process. In two eye-tracking studies, one conducted in a lab facility and the other conducted in a supermarket, the authors investigate the role and limitations of peripheral vision. The results show that peripheral vision is used to direct visual attention when discriminating between target and non-target objects in an eye-tracking laboratory. Target and non-target similarity, as well as visual saliency of non-targets, constitute the boundary conditions for this effect, which generalizes from instruction-based laboratory tasks to preference-based choice tasks in a real supermarket setting. Thus, peripheral vision helps customers to devote a larger share of attention to relevant products during the consumer choice process. Taken together, the results show how the creation of consideration sets (sets of possible choice options) relies on both goal-directed attention and peripheral vision. These results could explain how visually similar packaging positively influences market leaders, while making novel brands almost invisible on supermarket shelves. The findings show that even though unsold products might be unseen, in the sense that they have not been directly observed, they might still have been evaluated and excluded by means of peripheral vision. This article is based on controlled lab experiments as well as a field study conducted in a complex retail environment. Thus, the findings are valid both under controlled and ecologically valid conditions. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Eye movements during object recognition in visual agnosia.
Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe
2012-07-01
This paper reports the first ever detailed study about eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.
Orienting movements in area 9 identified by long-train ICMS.
Lanzilotto, M; Perciavalle, V; Lucchetti, C
2015-03-01
The effect of intracortical microstimulation has been studied in several cortical areas, from motor to sensory areas. The frontal pole has received particular attention, and several microstimulation studies have been conducted in the frontal eye field, supplementary eye field, and the premotor ear-eye field, but no microstimulation studies concerning area 9 are currently available in the literature. In the present study, to fill this gap, electrical microstimulation was applied to area 9 in two macaque monkeys using long train pulses of 500, 700, 800 and 1,000 ms during two different experimental conditions: a spontaneous condition, while the animals were not actively fixating on a visual target, and during a visual fixation task. In these experiments, we identified backward ear movements, goal-directed eye movements, and the development of head forces. Kinematic parameters for ear and eye movements overlapped in the spontaneous condition, but they were different during the visual fixation task. In this condition, ear and eye kinematics show opposite behavior: movement amplitude, duration, and maximal and mean velocities increase during the visual fixation task for the ear, while they decrease for the eye. Therefore, a top-down visual attention engagement could modify the kinematic parameters for these two effectors. Stimulation with the longest train durations, i.e., 800/1,000 ms, evokes not only the largest eye amplitudes, but also a significant development of head forces. In this research article, we propose a new view of the frontal oculomotor fields, suggesting a role for area 9 in the control of goal-directed orienting behaviors and gaze shifts.
Chien, Jung Hung; Mukherjee, Mukul; Siu, Ka-Chun; Stergiou, Nicholas
2016-05-01
When maintaining postural stability under increased sensory conflict, a more rigid response is used in which the available degrees of freedom are essentially frozen. The current study investigated whether such a strategy is also utilized during more dynamic situations of postural control, as is the case with walking. This study attempted to answer this question by using the Locomotor Sensory Organization Test (LSOT). This apparatus incorporates perturbations of the visual and somatosensory systems inspired by the Sensory Organization Test (SOT). Ten healthy young adults performed the six conditions of the traditional SOT and the corresponding six conditions on the LSOT. The temporal structure of sway variability was evaluated from all conditions. The results showed that in the anterior-posterior direction somatosensory input is crucial for postural control for both walking and standing; visual input also had an effect but was not as prominent as the somatosensory input. In the medial-lateral direction and with respect to walking, visual input has a much larger effect than somatosensory input. This is possibly due to the added contributions of peripheral vision during walking; in standing such contributions may not be as significant for postural control. In sum, as sensory conflict increases, more rigid and regular sway patterns are found during standing, confirming previous results in the literature; the opposite was the case during walking, where more exploratory and adaptive movement patterns were present.
Forward/up directional incompatibilities during cursor placement within graphical user interfaces.
Phillips, James G; Triggs, Thomas J; Meehan, James W
2005-05-15
Within graphical user interfaces, an indirect relationship between display and control may lead to directional incompatibilities when a forward mouse movement codes upward cursor motion. However, this should not occur for left/right movements or direct cursor controllers (e.g., touch-sensitive screens). In a four-choice reaction time task, 12 participants performed movements from a central start location to a target situated at one of four cardinal points (top, bottom, left, right). A 2 × 2 × 2 design varied directness of controller (moving cursor on computer screen or pen on graphics tablet), compatibility of orientation of cursor controller with screen (horizontal or vertical) and axis of desired cursor motion (left/right or up/down). Incompatibility between orientation of controller and motion of cursor did not affect response latencies, possibly because both forward and upward movements are away from the midline and go up the visual field. However, directional incompatibilities between display and controller led to slower movement with prolonged accelerative phases. Indirect relationships between display and control led to less efficient movements with prolonged decelerative phases and a tendency to undershoot movements along the bottom/top axis. More direct cursor control devices, such as touch-sensitive screens, should enhance the efficiency of aspects of cursor trajectories.
Keshner, E A; Dhaher, Y
2008-07-01
Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29-31 years) and 3 visually sensitive (27-57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated and a three-dimensional model of joint motion was developed to examine gross head motion in three planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater (p<0.003) head motion in yaw than when on a translating platform. However, when the platform was translated in the dark or with a visual scene rotating in roll, head motion orthogonal to the plane of platform motion significantly increased (p<0.02). Visually sensitive subjects having no history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark (p<0.05) and with a stationary scene (p<0.01). We concluded that motion of the visual field could modify compensatory response kinematics of a freely moving head in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms.
NASA Astrophysics Data System (ADS)
Hellman, Brandon; Bosset, Erica; Ender, Luke; Jafari, Naveed; McCann, Phillip; Nguyen, Chris; Summitt, Chris; Wang, Sunglin; Takashima, Yuzuru
2017-11-01
The ray formalism is critical to understanding light propagation, yet current pedagogy relies on inadequate 2D representations. We present a system in which real light rays are visualized through an optical system by using a collimated laser bundle of light and a fog chamber. Implementation for remote and immersive access is enabled by leveraging a commercially available 3D viewer and gesture-based remote controlling of the tool via bi-directional communication over the Internet.
Visual Illusions and the Control of Ball Placement in Goal-Directed Hitting
ERIC Educational Resources Information Center
Caljouw, Simone R.; Van der Kamp, John; Savelsbergh, Geert J. P.
2010-01-01
When hitting, kicking, or throwing balls at targets, online control in the target area is impossible. We assumed this lack of late corrections in the target area would induce an effect of a single-winged Muller-Lyer illusion on ball placement. After extensive practice in hitting balls to different landing locations, participants (N = 9) had to hit…
Adaptive Control Responses to Behavioral Perturbation Based Upon the Insect
2006-11-01
the legs. Visual sensors, antennal mechanosensors, antennal chemosensors, and descending interneurons controlling yaw ... animals, the antennae were moved back and forth several times with servo motors to identify units that respond to antennal movement in either direction or ... role of antennal postures and movements in plume tracking behavior. To date, results have shown that male moths tracking plumes in different wind
Huang, Chien-Ting; Hwang, Ing-Shiou
2012-01-01
Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Touch to see: neuropsychological evidence of a sensory mirror system for touch.
Bolognini, Nadia; Olgiati, Elena; Xaiz, Annalisa; Posteraro, Lucio; Ferraro, Francesco; Maravita, Angelo
2012-09-01
The observation of touch can be grounded in the activation of brain areas underpinning direct tactile experience, namely the somatosensory cortices. What is the behavioral impact of such mirror sensory activity on visual perception? To address this issue, we investigated the causal interplay between observed and felt touch in right brain-damaged patients, as a function of their underlying damaged visual and/or tactile modalities. Patients and healthy controls underwent a detection task comprising visual stimuli that either depicted touch or lacked a tactile component. Touch and No-touch stimuli were presented in egocentric or allocentric perspectives. Seeing touches, regardless of the viewing perspective, differently affects visual perception depending on which sensory modality is damaged: In patients with a selective visual deficit, but without any tactile defect, the sight of touch improves the visual impairment; this effect is associated with a lesion to the supramarginal gyrus. In patients with a tactile deficit, but intact visual perception, the sight of touch disrupts visual processing, inducing a visual extinction-like phenomenon. This disruptive effect is associated with damage to the postcentral gyrus. Hence, damage to the somatosensory system can lead to dysfunctional visual processing, and intact somatosensory processing can aid visual perception.
Adaptive Acceleration of Visually Evoked Smooth Eye Movements in Mice
2016-01-01
The optokinetic response (OKR) consists of smooth eye movements following global motion of the visual surround, which suppress image slip on the retina for visual acuity. The effective performance of the OKR is limited to rather slow and low-frequency visual stimuli, although it can be adaptably improved by cerebellum-dependent mechanisms. To better understand circuit mechanisms constraining OKR performance, we monitored how distinct kinematic features of the OKR change over the course of OKR adaptation, and found that eye acceleration at stimulus onset primarily limited OKR performance but could be dramatically potentiated by visual experience. Eye acceleration in the temporal-to-nasal direction depended more on the ipsilateral floccular complex of the cerebellum than did that in the nasal-to-temporal direction. Gaze-holding following the OKR was also modified in parallel with eye-acceleration potentiation. Optogenetic manipulation revealed that synchronous excitation and inhibition of floccular complex Purkinje cells could effectively accelerate eye movements in the nasotemporal and temporonasal directions, respectively. These results collectively delineate multiple motor pathways subserving distinct aspects of the OKR in mice and constrain hypotheses regarding cellular mechanisms of the cerebellum-dependent tuning of movement acceleration. SIGNIFICANCE STATEMENT Although visually evoked smooth eye movements, known as the optokinetic response (OKR), have been studied in various species for decades, circuit mechanisms of oculomotor control and adaptation remain elusive. In the present study, we assessed kinematics of the mouse OKR through the course of adaptation training. Our analyses revealed that eye acceleration at visual-stimulus onset primarily limited working velocity and frequency range of the OKR, yet could be dramatically potentiated during OKR adaptation. Potentiation of eye acceleration exhibited different properties between the nasotemporal and temporonasal OKRs, indicating distinct visuomotor circuits underlying the two. Lesions and optogenetic manipulation of the cerebellum provide constraints on neural circuits mediating visually driven eye acceleration and its adaptation. PMID:27335412
Ninu, Andrei; Dosen, Strahinja; Muceli, Silvia; Rattay, Frank; Dietl, Hans; Farina, Dario
2014-09-01
In closed-loop control of grasping by hand prostheses, the feedback information sent to the user is usually the actual controlled variable, i.e., the grasp force. Although this choice is intuitive and logical, the force production is only the last step in the process of grasping. Therefore, this study evaluated the performance in controlling grasp strength using a hand prosthesis operated through a complete grasping sequence while varying the feedback variables (e.g., closing velocity, grasping force), which were provided to the user visually or through vibrotactile stimulation. The experiments were conducted on 13 volunteers who controlled the Otto Bock Sensor Hand Speed prosthesis. Results showed that vibrotactile patterns were able to replace the visual feedback. Interestingly, the experiments demonstrated that direct force feedback was not essential for the control of grasping force. The subjects were indeed able to control the grip strength, predictively, by estimating the grasping force from the prosthesis velocity of closing. Therefore, grasping without explicit force feedback is not completely blind, contrary to what is usually assumed. In our study we analyzed grasping with a specific prosthetic device, but the outcomes are also applicable for other devices, with one or more degrees-of-freedom. The necessary condition is that the electromyography (EMG) signal directly and proportionally controls the velocity/grasp force of the hand, which is a common approach among EMG controlled prosthetic devices. The results provide important indications on the design of closed-loop EMG controlled prosthetic systems.
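The "common approach" noted above, in which the EMG amplitude proportionally drives closing velocity (or grip force), is typically implemented by rectifying and low-pass filtering the EMG to obtain an envelope and then scaling it. The sketch below is illustrative only; the filter order, cut-off, threshold, and gain are assumptions and not the parameters of the device used in the study.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def emg_to_velocity_command(emg, fs=1000.0, cutoff_hz=3.0, gain=1.0, threshold=0.05):
        # Map raw EMG to a proportional closing-velocity command (illustrative values).
        rectified = np.abs(emg - np.mean(emg))               # remove offset, full-wave rectify
        b, a = butter(2, cutoff_hz / (fs / 2), btype="low")  # 2nd-order low-pass envelope filter
        envelope = filtfilt(b, a, rectified)                 # zero-phase smoothing
        return gain * np.clip(envelope - threshold, 0.0, None)

    # Synthetic EMG: background noise with a burst simulating a contraction.
    rng = np.random.default_rng(0)
    emg = rng.normal(0, 0.02, 2000)
    emg[800:1400] += rng.normal(0, 0.3, 600)
    velocity_command = emg_to_velocity_command(emg)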
The analysis of image motion by the rabbit retina
Oyster, C. W.
1968-01-01
1. Micro-electrode recordings were made from rabbit retinal ganglion cells or their axons. Of particular interest were direction-selective units; the common on—off type represented 20·6% of the total sample (762 units), and the on-type comprised 5% of the total. 2. From the large sample of direction-selective units, it was found that on—off units were maximally sensitive to only four directions of movement; these directions, in the visual field, were, roughly, anterior, superior, posterior and inferior. The on-type units were maximally sensitive to only three directions: anterior, superior and inferior. 3. The direction-selective unit's responses vary with stimulus velocity; both unit types are more sensitive to velocity change than to absolute speed. On—off units respond to movement at speeds from 6′/sec to 10°/sec; the on-type units responded as slowly as 30″/sec up to about 2°/sec. On-type units are clearly slow-movement detectors. 4. The distribution of direction-selective units depends on the retinal locality. On—off units are more common outside the `visual streak' (area centralis) than within it, while the reverse is true for the on-type units. 5. A stimulus configuration was found which would elicit responses from on-type units when the stimulus was moved in the null direction. This `paradoxical response' was shown to be associated with the silent receptive field surround. 6. The four preferred directions of the on—off units were shown to correspond to the directions of retinal image motion produced by contractions of the four rectus eye muscles. This fact, combined with data on velocity sensitivity and retinal distribution of on—off units, suggests that the on—off units are involved in control of reflex eye movements. 7. The on—off direction-selective units may provide error signals to a visual servo system which minimizes retinal image motion. This hypothesis agrees with the known characteristics of the rabbit's visual following reflexes, specifically, the slow phase of optokinetic nystagmus. PMID:5710424
What Drives Bird Vision? Bill Control and Predator Detection Overshadow Flight.
Martin, Graham R
2017-01-01
Although flight is regarded as a key behavior of birds this review argues that the perceptual demands for its control are met within constraints set by the perceptual demands of two other key tasks: the control of bill (or feet) position, and the detection of food items/predators. Control of bill position, or of the feet when used in foraging, and timing of their arrival at a target, are based upon information derived from the optic flow-field in the binocular region that encompasses the bill. Flow-fields use information extracted from close to the bird using vision of relatively low spatial resolution. The detection of food items and predators is based upon information detected at a greater distance and depends upon regions in the retina with relatively high spatial resolution. The tasks of detecting predators and of placing the bill (or feet) accurately, make contradictory demands upon vision and these have resulted in trade-offs in the form of visual fields and in the topography of retinal regions in which spatial resolution is enhanced, indicated by foveas, areas, and high ganglion cell densities. The informational function of binocular vision in birds does not lie in binocularity per se (i.e., two eyes receiving slightly different information simultaneously about the same objects) but in the contralateral projection of the visual field of each eye. This ensures that each eye receives information from a symmetrically expanding optic flow-field centered close to the direction of the bill, and from this the crucial information of direction of travel and time-to-contact can be extracted, almost instantaneously. Interspecific comparisons of visual fields between closely related species have shown that small differences in foraging techniques can give rise to different perceptual challenges and these have resulted in differences in visual fields even within the same genus. This suggests that vision is subject to continuing and relatively rapid natural selection based upon individual differences in the structure of the optical system, retinal topography, and eye position in the skull. From a sensory ecology perspective a bird is best characterized as "a bill guided by an eye" and that control of flight is achieved within constraints on visual capacity dictated primarily by the demands of foraging and bill control.
Contextual effects on smooth-pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-02-01
Segregating a moving object from its visual context is particularly relevant for the control of smooth-pursuit eye movements. We examined the interaction between a moving object and a stationary or moving visual context to determine the role of the context motion signal in driving pursuit. Eye movements were recorded from human observers to a medium-contrast Gaussian dot that moved horizontally at constant velocity. A peripheral context consisted of two vertically oriented sinusoidal gratings, one above and one below the stimulus trajectory, that were either stationary or drifted into the same or opposite direction as that of the target at different velocities. We found that a stationary context impaired pursuit acceleration and velocity and prolonged pursuit latency. A drifting context enhanced pursuit performance, irrespective of its motion direction. This effect was modulated by context contrast and orientation. When a context was briefly perturbed to move faster or slower eye velocity changed accordingly, but only when the context was drifting along with the target. Perturbing a context into the direction orthogonal to target motion evoked a deviation of the eye opposite to the perturbation direction. We therefore provide evidence for the use of absolute and relative motion cues, or motion assimilation and motion contrast, for the control of smooth-pursuit eye movements.
Ecological Interface Design for Computer Network Defense.
Bennett, Kevin B; Bryant, Adam; Sushereba, Christen
2018-05-01
A prototype ecological interface for computer network defense (CND) was developed. Concerns about CND run high. Although there is a vast literature on CND, there is some indication that this research is not being translated into operational contexts. Part of the reason may be that CND has historically been treated as a strictly technical problem, rather than as a socio-technical problem. The cognitive systems engineering (CSE)/ecological interface design (EID) framework was used in the analysis and design of the prototype interface. A brief overview of CSE/EID is provided. EID principles of design (i.e., direct perception, direct manipulation and visual momentum) are described and illustrated through concrete examples from the ecological interface. Key features of the ecological interface include (a) a wide variety of alternative visual displays, (b) controls that allow easy, dynamic reconfiguration of these displays, (c) visual highlighting of functionally related information across displays, (d) control mechanisms to selectively filter massive data sets, and (e) the capability for easy expansion. Cyber attacks from a well-known data set are illustrated through screen shots. CND support needs to be developed with a triadic focus (i.e., humans interacting with technology to accomplish work) if it is to be effective. Iterative design and formal evaluation are also required. The discipline of human factors has a long tradition of success on both counts; it is time that HF became fully involved in CND. The interface has direct application in supporting cyber analysts.
Pareidolias: complex visual illusions in dementia with Lewy bodies.
Uchiyama, Makoto; Nishio, Yoshiyuki; Yokoi, Kayoko; Hirayama, Kazumi; Imamura, Toru; Shimomura, Tatsuo; Mori, Etsuro
2012-08-01
Patients rarely experience visual hallucinations while being observed by clinicians. Therefore, instruments to detect visual hallucinations directly from patients are needed. Pareidolias, which are complex visual illusions involving ambiguous forms that are perceived as meaningful objects, are analogous to visual hallucinations and have the potential to be a surrogate indicator of visual hallucinations. In this study, we explored the clinical utility of a newly developed instrument for evoking pareidolic illusions, the Pareidolia test, in patients with dementia with Lewy bodies, one of the most common causes of visual hallucinations in the elderly. Thirty-four patients with dementia with Lewy bodies, 34 patients with Alzheimer's disease and 26 healthy controls were given the Pareidolia test. Patients with dementia with Lewy bodies produced a much greater number of pareidolic illusions compared with those with Alzheimer's disease or controls. A receiver operating characteristic analysis demonstrated that the number of pareidolias differentiated dementia with Lewy bodies from Alzheimer's disease with a sensitivity of 100% and a specificity of 88%. Full-length figures and faces of people and animals accounted for >80% of the contents of pareidolias. Pareidolias were observed in patients with dementia with Lewy bodies who had visual hallucinations as well as those who did not have visual hallucinations, suggesting that pareidolias do not reflect visual hallucinations themselves but may reflect susceptibility to visual hallucinations. A sub-analysis of patients with dementia with Lewy bodies who were or were not treated with donepezil demonstrated that the numbers of pareidolias were correlated with visuoperceptual abilities in the former and with indices of hallucinations and delusional misidentifications in the latter. Arousal and attentional deficits mediated by abnormal cholinergic mechanisms and visuoperceptual dysfunctions are likely to contribute to the development of visual hallucinations and pareidolias in dementia with Lewy bodies.
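The receiver operating characteristic analysis reported above (pareidolia counts separating dementia with Lewy bodies from Alzheimer's disease, with a sensitivity of 100% and specificity of 88%) follows a standard recipe that can be sketched as below; the counts and labels here are placeholders rather than the study's data.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Placeholder pareidolia counts and diagnoses (1 = dementia with Lewy bodies, 0 = Alzheimer's).
    counts = np.array([14, 9, 21, 3, 1, 0, 17, 2, 25, 4, 0, 11])
    labels = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1])

    fpr, tpr, thresholds = roc_curve(labels, counts)
    auc = roc_auc_score(labels, counts)

    # Choose the cut-off maximising Youden's J = sensitivity + specificity - 1.
    best = np.argmax(tpr - fpr)
    print(f"AUC = {auc:.2f}; cut-off >= {thresholds[best]:.0f} pareidolias: "
          f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")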
She, Hoi Lam; Roest, Arno A W; Calkoen, Emmeline E; van den Boogaard, Pieter J; van der Geest, Rob J; Hazekamp, Mark G; de Roos, Albert; Westenberg, Jos J M
2017-01-01
To evaluate the inflow pattern and flow quantification in patients with functional univentricular heart after Fontan's operation using 4D flow magnetic resonance imaging (MRI) with streamline visualization when compared with the conventional 2D flow approach. Seven patients with functional univentricular heart after Fontan's operation and twenty-three healthy controls underwent 4D flow MRI. In two orthogonal two-chamber planes, streamline visualization was applied, and inflow angles with peak inflow velocity (PIV) were measured. Transatrioventricular flow quantification was assessed using conventional 2D multiplanar reformation (MPR) and 4D MPR tracking the annulus and perpendicular to the streamline inflow at PIV, and both were validated against net forward aortic flow. Inflow angles at PIV in the patient group demonstrated a wide variation in angle and direction when compared with the control group (P < .01). The use of 4D flow MRI with streamline visualization for quantification of the transatrioventricular flow had smaller limits of agreement (2.2 ± 4.1 mL; 95% limits of agreement -5.9 to 10.3 mL) than the static-plane assessment from 2D flow MRI (-2.2 ± 18.5 mL; 95% limits of agreement -38.5 to 34.1 mL). A stronger correlation between aortic and transatrioventricular flow was present for 4D flow (R² for 4D flow: 0.893; for 2D flow: 0.786). Streamline visualization in 4D flow MRI confirmed variable atrioventricular inflow directions in patients with functional univentricular heart with a previous Fontan procedure. 4D flow aided generation of measurement planes according to the blood flow dynamics and has proven to be more accurate than fixed-plane 2D flow measurements when calculating flow quantification. © 2016 Wiley Periodicals, Inc.
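The agreement statistics quoted above (bias with 95% limits of agreement against net forward aortic flow, plus R²) follow the usual Bland-Altman calculation, sketched here with made-up paired flow volumes purely to show the arithmetic.

    import numpy as np

    # Made-up paired stroke volumes (mL): method under test vs. aortic reference.
    method = np.array([62.0, 55.3, 71.8, 48.9, 80.1, 66.4, 59.7])
    reference = np.array([60.1, 53.0, 70.2, 46.5, 77.9, 64.8, 57.2])

    diff = method - reference
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)            # half-width of the 95% limits of agreement
    r2 = np.corrcoef(method, reference)[0, 1] ** 2

    print(f"bias = {bias:.1f} mL; limits of agreement = {bias - loa:.1f} to {bias + loa:.1f} mL")
    print(f"R^2 between methods = {r2:.3f}")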
Multisensory control of a straight locomotor trajectory.
Hanna, Maxim; Fung, Joyce; Lamontagne, Anouk
2017-01-01
Locomotor steering is contingent upon orienting oneself spatially in the environment. When the head is turned while walking, the optic flow projected onto the retina is a complex pattern comprising a translational and a rotational component. We have created a unique paradigm to simulate different optic flows in a virtual environment. We hypothesized that non-visual (vestibular and somatosensory) cues are required for proper control of a straight trajectory while walking. This study included 9 healthy young subjects walking in a large physical space (40 × 25 m²) while viewing the virtual environment in a helmet-mounted display. They were instructed to walk straight in the physical world while being exposed to three conditions: (1) self-initiated active head turns (AHT: 40° right, left, or none); (2) visually simulated head turns (SHT); and (3) visually simulated head turns with no target element (SHT_NT). Conditions 1 and 2 involved an eye-level target which subjects were instructed to fixate, whereas condition 3 was similar to condition 2 but with no target. Identical retinal flow patterns were present in the AHT and SHT conditions, whereas non-visual cues differed in that a head rotation was sensed only in AHT but not in SHT. Body motions were captured by a 12-camera Vicon system. Horizontal orientations of the head and body segments, as well as the trajectory of the body's centre of mass, were analyzed. SHT and SHT_NT yielded similar results. Heading and body segment orientations changed in the direction opposite to the head turns in SHT conditions. Heading remained unchanged across head turn directions in AHT. Results suggest that non-visual information is used in the control of heading while being exposed to changing rotational optic flows. The small magnitude of the changes in SHT conditions suggests that the CNS can re-weight relevant sources of information to minimize heading errors in the presence of sensory conflicts.
Learning visuomotor transformations for gaze-control and grasping.
Hoffmann, Heiko; Schenck, Wolfram; Möller, Ralf
2005-08-01
For reaching to and grasping of an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target's position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity in having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at arbitrary position on a table in 94% of trials.
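One simple way to realize the "completion of a partially given sensorimotor pattern" described above is to condition a learned joint density on the observed sensory dimensions. The sketch below assumes, for illustration only, a single multivariate Gaussian over combined sensor and motor dimensions (the paper's actual density model is not specified in this abstract); completion then reduces to the standard Gaussian conditional mean.

    import numpy as np

    def complete_pattern(mean, cov, sensor_idx, motor_idx, sensor_value):
        # E[motor | sensor] = mu_m + C_ms C_ss^{-1} (s - mu_s)
        mu_s, mu_m = mean[sensor_idx], mean[motor_idx]
        c_ss = cov[np.ix_(sensor_idx, sensor_idx)]
        c_ms = cov[np.ix_(motor_idx, sensor_idx)]
        return mu_m + c_ms @ np.linalg.solve(c_ss, sensor_value - mu_s)

    # Toy data: 2 sensory dimensions (e.g., target position) and 2 motor dimensions
    # (e.g., joint angles); a real system would learn the density from experience.
    rng = np.random.default_rng(1)
    data = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))   # correlated fake samples
    mean, cov = data.mean(axis=0), np.cov(data, rowvar=False)

    posture = complete_pattern(mean, cov, sensor_idx=[0, 1], motor_idx=[2, 3],
                               sensor_value=np.array([0.3, -0.5]))
    print("completed motor pattern:", posture)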
Looking away from faces: influence of high-level visual processes on saccade programming.
Morand, Stéphanie M; Grosbras, Marie-Hélène; Caldara, Roberto; Harvey, Monika
2010-03-30
Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.
The role of visual processing in motor learning and control: Insights from electroencephalography.
Krigolson, Olav E; Cheng, Darian; Binsted, Gord
2015-05-01
Traditionally, our understanding of goal-directed action has been derived either from behavioral findings or from neuroanatomically focused imaging (i.e., fMRI). While both of these approaches have proven valuable, behavioral measures cannot determine a direct locus of function, and fMRI lacks the temporal precision needed to understand millisecond-scale neural interactions. In this review we summarize some seminal behavioral findings across three broad areas (target perturbation, feed-forward control, and feedback processing) and for each discuss the application of electroencephalography (EEG) to the understanding of the temporal nature of visual cue utilization during movement planning, control, and learning using four existing scalp potentials. Specifically, we examine the appropriateness of using the N100 potential as an indicator of corrective behaviors in response to target perturbation, the N200 as an index of movement planning, the P300 potential as a metric of feed-forward processes, and the feedback-related negativity as an index of motor learning. Although these existing components offer potential insight into cognitive contributions and the timing of the neural processes that contribute to motor control, further research is needed to expand the set of control-related potentials and to develop methods that permit their accurate characterization across a wide range of behavioral tasks. Copyright © 2015 Elsevier B.V. All rights reserved.
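As a concrete illustration of how scalp potentials such as those discussed above are typically quantified, the sketch below computes a baseline-corrected, trial-averaged ERP and takes the mean amplitude in a fixed post-event window. The epoch shape, sampling rate, and the 230-300 ms window are assumptions for illustration only and are not taken from the studies reviewed.

    import numpy as np

    fs = 500.0                                            # Hz, assumed sampling rate
    rng = np.random.default_rng(3)
    epochs = rng.normal(size=(40, int(0.8 * fs)))         # 40 trials x 400 samples (synthetic EEG)
    times = np.arange(epochs.shape[1]) / fs - 0.2         # epoch runs from -200 ms to +600 ms

    baseline = (times >= -0.2) & (times < 0.0)
    window = (times >= 0.23) & (times <= 0.30)            # e.g., a feedback-related negativity window

    corrected = epochs - epochs[:, baseline].mean(axis=1, keepdims=True)  # baseline correction
    erp = corrected.mean(axis=0)                          # average across trials
    print(f"mean amplitude in window: {erp[window].mean():.2f} (arbitrary units)")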
Harman, Francesca E; Corbett, Melanie C; Stevens, Julian D
2010-08-01
To evaluate differences in visual recovery after phacoemulsification with direct or tilted surgical microscope illumination using a macular photostress test. Western Eye Hospital, Imperial College Health Care National Health Service Trust, London, United Kingdom. This randomized double-masked controlled trial enrolled patients presenting to a daycare unit for single-eye cataract surgery. Inclusion criteria were no ocular pathology other than cataract, corneal keratometric astigmatism less than 1.50 diopters, an intended target of emmetropia in the operated eye, and cataract grade 1 to 3 (Lens Opacification Classification System II). The exclusion criterion was an abnormal preoperative photostress test. Patients were randomized to have phacoemulsification with the operating microscope angled 15 degrees nasal to the fovea (study group) or with the operating microscope directly overhead around the optic disc region (control group). The same surgeon performed all phacoemulsification procedures using a standardized technique and topical anesthesia. Outcome measures were uncorrected (UDVA) and corrected (CDVA) distance visual acuity 10 minutes and 60 minutes postoperatively. In the 30 patients evaluated, the mean UDVA 10 minutes postoperatively was 0.40 ± 0.26 (SD) logMAR in the study group and 0.72 ± 0.36 logMAR in the control group (P < .01). The mean CDVA was 0.18 ± 0.26 logMAR and 0.44 ± 0.30 logMAR, respectively (P = .016). There was no significant between-group difference in acuity at 60 minutes. Tilting the microscope beam away from the fovea resulted in faster visual recovery and less macular photic stress. No author has a financial or proprietary interest in any material or method mentioned. Copyright 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Perceived state of self during motion can differentially modulate numerical magnitude allocation.
Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M
2016-09-01
Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements, which can themselves influence numerical processing, and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness that impact upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self'-motion, during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS biased judgments towards smaller numbers, whereas leftward OKS biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. In contrast, vestibular motion perception was not found to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visually but not vestibularly mediated motion. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Neural basis of forward flight control and landing in honeybees.
Ibbotson, M R; Hung, Y-S; Meffin, H; Boeddeker, N; Srinivasan, M V
2017-11-06
The impressive repertoire of visually guided behaviors in honeybees, and their ability to learn, has made them an important tool for elucidating the visual basis of behavior. Like other insects, bees perform optomotor course correction to optic flow, a response that is dependent on the spatial structure of the visual environment. However, bees can also distinguish the speed of image motion during forward flight and landing, as well as estimate flight distances (odometry), irrespective of the visual scene. The neural pathways underlying these abilities are unknown. Here we report on a cluster of descending neurons (DNIIIs) that are shown to have the directional tuning properties necessary for detecting image motion during forward flight and landing on vertical surfaces. They have stable firing rates during prolonged periods of stimulation and respond to a wide range of image speeds, making them suitable to detect image flow during flight behaviors. While their responses are not strictly speed tuned, the shape and amplitudes of their speed tuning functions are resistant to large changes in spatial frequency. These cells are prime candidates not only for the control of flight speed and landing, but also as the neural 'front end' of the honeybee's visual odometer.
Visual search, visual streams, and visual architectures.
Green, M
1991-10-01
Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.
Brown, Franklin C; Roth, Robert M; Katz, Lynda J
2015-08-30
Attention Deficit Hyperactivity Disorder (ADHD) has often been conceptualized as arising from executive dysfunctions (e.g., inattention, defective inhibition). However, recent studies have suggested that cognitive inefficiency may underlie many ADHD symptoms, as indicated by reaction time and processing speed abnormalities. This study explored whether a non-timed measure of cognitive inefficiency would also be abnormal. A sample of 23 ADHD subjects was compared to 23 controls on a test that included both egocentric and allocentric visual memory subtests. A factor analysis was used to determine which cognitive variables contributed to allocentric visual memory. The ADHD sample performed significantly lower on the allocentric but not the egocentric conditions. Allocentric visual memory was not associated with timed, working memory, visual perception, or mental rotation variables. The paper concludes by discussing how these results support a cognitive inefficiency explanation for some ADHD symptoms and by outlining future research directions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Zetterberg, Camilla; Richter, Hans O.; Forsman, Mikael
2015-01-01
Near work is associated with increased activity in the neck and shoulder muscles, but the underlying mechanism is still unknown. This study was designed to determine whether a dynamic change in focus, alternating between a nearby and a more distant visual target, produces a direct parallel change in trapezius muscle activity. Fourteen healthy controls and 12 patients with a history of visual and neck/shoulder symptoms performed a Near-Far visual task under three different viewing conditions; one neutral condition with no trial lenses, one condition with negative trial lenses to create increased accommodation, and one condition with positive trial lenses to create decreased accommodation. Eye lens accommodation and trapezius muscle activity were continuously recorded. The trapezius muscle activity was significantly higher during Near than during Far focusing periods for both groups within the neutral viewing condition, and there was a significant co-variation in time between accommodation and trapezius muscle activity within the neutral and positive viewing conditions for the control group. In conclusion, these results reveal a connection between Near focusing and increased muscle activity during dynamic changes in focus between a nearby and a far target. A direct link, from the accommodation/vergence system to the trapezius muscles cannot be ruled out, but the connection may also be explained by an increased need for eye-neck (head) stabilization when focusing on a nearby target as compared to a more distant target. PMID:25961299
The subtlety of simple eyes: the tuning of visual fields to perceptual challenges in birds
Martin, Graham R.
2014-01-01
Birds show interspecific variation both in the size of the fields of individual eyes and in the ways that these fields are brought together to produce the total visual field. Variation is found in the dimensions of all main parameters: binocular region, cyclopean field and blind areas. There is a phylogenetic signal with respect to maximum width of the binocular field in that passerine species have significantly broader field widths than non-passerines; broadest fields are found among crows (Corvidae). Among non-passerines, visual fields show considerable variation within families and even within some genera. It is argued that (i) the main drivers of differences in visual fields are associated with perceptual challenges that arise through different modes of foraging, and (ii) the primary function of binocularity in birds lies in the control of bill position rather than in the control of locomotion. The informational function of binocular vision does not lie in binocularity per se (two eyes receiving slightly different information simultaneously about the same objects from which higher-order depth information is extracted), but in the contralateral projection of the visual field of each eye. Contralateral projection ensures that each eye receives information from a symmetrically expanding optic flow-field from which direction of travel and time to contact targets can be extracted, particularly with respect to the control of bill position. PMID:24395967
Vision for perception and vision for action in the primate brain.
Goodale, M A
1998-01-01
Visual systems first evolved not to enable animals to see, but to provide distal sensory control of their movements. Vision as 'sight' is a relative newcomer to the evolutionary landscape, but its emergence has enabled animals to carry out complex cognitive operations on perceptual representations of the world. The two streams of visual processing that have been identified in the primate cerebral cortex are a reflection of these two functions of vision. The dorsal 'action' stream projecting from primary visual cortex to the posterior parietal cortex provides flexible control of more ancient subcortical visuomotor modules for the production of motor acts. The ventral 'perceptual' stream projecting from the primary visual cortex to the temporal lobe provides the rich and detailed representation of the world required for cognitive operations. Both streams process information about the structure of objects and about their spatial locations--and both are subject to the modulatory influences of attention. Each stream, however, uses visual information in different ways. Transformations carried out in the ventral stream permit the formation of perceptual representations that embody the enduring characteristics of objects and their relations; those carried out in the dorsal stream, which utilize moment-to-moment information about objects within egocentric frames of reference, mediate the control of skilled actions. Both streams work together in the production of goal-directed behaviour.
Burnat, K; Zernicki, B
1997-01-01
We used 5 binocularly deprived cats (BD cats), 4 control cats also reared in the laboratory (C cats) and 4 cats reared in a normal environment (N cats). The cats were trained to discriminate an upward or downward-moving light spot versus a stationary spot (detection task) and then an upward versus a downward spot (direction task). The N and C cats learned slowly. The learning was slower than in previously studied discriminations of stationary stimuli. However, all N and C cats mastered the detection task and, with the exception of one C cat, the direction task. In contrast, 4 BD cats failed the detection task and all failed the direction task. This result is consistent with single-cell recording data showing impairment of direction analysis in the visual system in BD cats. After completing the training the upper part of the middle suprasylvian sulcus was removed unilaterally in 7 cats and bilaterally in 6 cats. Surprisingly, the unilateral lesions were more effective: clear-cut retention deficits were found in 5 of the unilaterally lesioned cats but in only one of the bilaterally lesioned cats.
Elevator Illusion and Gaze Direction in Hypergravity
NASA Technical Reports Server (NTRS)
Cohen, Malcolm M.; Hargens, Alan (Technical Monitor)
1995-01-01
A luminous visual target in a dark hypergravity (Gz greater than 1) environment appears to be elevated above its true physical position. This "elevator illusion" has been attributed to changes in oculomotor control caused by increased stimulation of the otolith organs. Data relating the magnitude of the illusion to the magnitude of the changes in oculomotor control have been lacking. The present study provides such data.
Hatzitaki, V; Voudouris, D; Nikodelis, T; Amiridis, I G
2009-02-01
The study examined the impact of visually guided weight shifting (WS) practice on the postural adjustments evoked by elderly women when avoiding collision with a moving obstacle while standing. Fifty-six healthy elderly women (70.9+/-5.7 years, 87.5+/-9.6 kg) were randomly assigned into one of three groups: a group that completed 12 sessions (25 min, 3 sessions/week) of WS practice in the anterior/posterior direction (A/P group, n=20), a group that performed the same practice in the medio/lateral direction (M/L group, n=20) and a control group (n=16). Pre- and post-training, participants were tested in a moving obstacle avoidance task. As a result of practice, postural response onset shifted closer to the time of collision with the obstacle. Side-to-side WS resulted in a reduction of the M/L sway amplitude and an increase of the trunk's velocity during avoidance. It is concluded that visually guided WS practice enhances elderly women's ability for on-line visuo-motor processing when avoiding collision, eliminating reliance on anticipatory scaling. Specifying the direction of WS seems to be critical for optimizing the transfer of training adaptations.
Is goal-directed attentional guidance just intertrial priming? A review.
Lamy, Dominique F; Kristjánsson, Arni
2013-07-01
According to most models of selective visual attention, our goals at any given moment and saliency in the visual field determine attentional priority. But selection is not carried out in isolation--we typically track objects through space and time. This is not well captured within the distinction between goal-directed and saliency-based attentional guidance. Recent studies have shown that selection is strongly facilitated when the characteristics of the objects to be attended and of those to be ignored remain constant between consecutive selections. These studies have generated the proposal that goal-directed or top-down effects are best understood as intertrial priming effects. Here, we provide a detailed overview and critical appraisal of the arguments, experimental strategies, and findings that have been used to promote this idea, along with a review of studies providing potential counterarguments. We divide this review according to different types of attentional control settings that observers are thought to adopt during visual search: feature-based settings, dimension-based settings, and singleton detection mode. We conclude that priming accounts for considerable portions of effects attributed to top-down guidance, but that top-down guidance can be independent of intertrial priming.
Keshner, E.A.; Dhaher, Y.
2008-01-01
Multiplanar environmental motion could generate head instability, particularly if the visual surround moves in planes orthogonal to a physical disturbance. We combined sagittal plane surface translations with visual field disturbances in 12 healthy (29–31 years) and 3 visually sensitive (27–57 years) adults. Center of pressure (COP), peak head angles, and RMS values of head motion were calculated and a 3-dimensional model of joint motion was developed to examine gross head motion in 3 planes. We found that subjects standing quietly in front of a visual scene translating in the sagittal plane produced significantly greater (p<0.003) head motion in yaw than when on a translating platform. However, when the platform was translated in the dark or with a visual scene rotating in roll, head motion orthogonal to the plane of platform motion significantly increased (p<0.02). Visually sensitive subjects having no history of vestibular disorder produced large, delayed compensatory head motion. Orthogonal head motions were significantly greater in visually sensitive than in healthy subjects in the dark (p<0.05) and with a stationary scene (p<0.01). We concluded that motion of the visual field can modify compensatory response kinematics of a freely moving head in planes orthogonal to the direction of a physical perturbation. These results suggest that the mechanisms controlling head orientation in space are distinct from those that control trunk orientation in space. These behaviors would have been missed if only COP data were considered. Data suggest that rehabilitation training can be enhanced by combining visual and mechanical perturbation paradigms. PMID:18162402
Horlin, Chiara; Black, Melissa; Falkmer, Marita; Falkmer, Torbjorn
2016-01-01
This systematic review examines the proficiency and visual search strategies of individuals with autism spectrum disorders (ASD) while disembedding figures and whether they differ from typical controls and other comparative samples. Five databases, including Proquest, Psychinfo, Medline, CINAHL and Science Direct were used to identify published studies meeting the inclusion and exclusion criteria. Twenty articles were included in the review, the majority of which matched participants by mental age. Outcomes discussed were time taken to identify targets, the number correctly identified, and fixation frequency and duration. Individuals with ASD perform at the same speed or faster than controls and other clinical samples. However, there appear to be no differences between individuals with ASD and controls for number of correctly identified targets. Only one study examined visual search strategies and suggests that individuals with ASD exhibit shorter first and final fixations to targets compared with controls.
Contextual cueing impairment in patients with age-related macular degeneration.
Geringswald, Franziska; Herbik, Anne; Hoffmann, Michael B; Pollmann, Stefan
2013-09-12
Visual attention can be guided by past experience of regularities in our visual environment. In the contextual cueing paradigm, incidental learning of repeated distractor configurations speeds up search times compared to random search arrays. Concomitantly, fewer fixations and more direct scan paths indicate more efficient visual exploration in repeated search arrays. In previous work, we found that simulating a central scotoma in healthy observers eliminated this search facilitation. Here, we investigated contextual cueing in patients with age-related macular degeneration (AMD) who suffer from impaired foveal vision. AMD patients performed visual search using only their more severely impaired eye (n = 13) as well as under binocular viewing (n = 16). Normal-sighted controls developed a significant contextual cueing effect. In comparison, patients showed only a small nonsignificant advantage for repeated displays when searching with their worse eye. When searching binocularly, they profited from contextual cues, but still less than controls. Number of fixations and scan pattern ratios showed a comparable pattern as search times. Moreover, contextual cueing was significantly correlated with acuity in monocular search. Thus, foveal vision loss may lead to impaired guidance of attention by contextual memory cues.
NASA Technical Reports Server (NTRS)
Johnson, Walter W.; Kaiser, Mary K.
2003-01-01
Perspective synthetic displays that supplement, or supplant, the optical windows traditionally used for guidance and control of aircraft are accompanied by potentially significant human factors problems related to the optical geometric conformality of the display. Such geometric conformality is broken when optical features are not in the location they would be if directly viewed through a window. This often occurs when the scene is relayed or generated from a location different from the pilot's eyepoint. However, assuming no large visual/vestibular effects, a pilot can often learn to use such a display very effectively. Important problems may arise, however, when display accuracy or consistency is compromised, and this can usually be related to geometrical discrepancies between how the synthetic visual scene behaves and how the visual scene through a window behaves. In addition to these issues, this paper examines the potentially critical problem of the disorientation that can arise when both a synthetic display and a real window are present in a flight deck, and no consistent visual interpretation is available.
Bending it like Beckham: how to visually fool the goalkeeper.
Dessing, Joost C; Craig, Cathy M
2010-10-06
As bending free-kicks become the norm in modern-day soccer, implications for goalkeepers have largely been ignored. Although it has been reported that poor sensitivity to visual acceleration makes it harder for expert goalkeepers to perceptually judge where the curved free-kicks will cross the goal line, it is unknown how this affects the goalkeeper's actual movements. Here, an in-depth analysis of goalkeepers' hand movements in immersive, interactive virtual reality shows that they do not fully account for spin-induced lateral ball acceleration. Hand movements were found to be biased in the direction of initial ball heading, and for curved free-kicks this resulted in biases in a direction opposite to those necessary to save the free-kick. These movement errors result in less time to cover a now greater distance to stop the ball entering the goal. These and other details of the interceptive behaviour are explained using a simple mathematical model which shows how the goalkeeper controls his movements online with respect to the ball's current heading direction. Furthermore our results and model suggest how visual landmarks, such as the goalposts in this instance, may constrain the extent of the movement biases. While it has previously been shown that humans can internalize the effects of gravitational acceleration, these results show that it is much more difficult for goalkeepers to account for spin-induced visual acceleration, which varies from situation to situation. The limited sensitivity of the human visual system for detecting acceleration suggests that curved free-kicks are an important goal-scoring opportunity in the game of soccer.
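The abstract refers to a simple online-control model but does not give its form. The sketch below is an illustrative, assumed implementation of the idea: the goalkeeper's hand is driven toward the point where the ball's current heading crosses the goal line, ignoring the spin-induced lateral acceleration. The variable names, gains, and the toy free-kick parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def heading_intersection(ball_pos, ball_vel, goal_y=0.0):
    """Point where the ball's *current* heading crosses the goal line (y = goal_y).

    Extrapolates the instantaneous velocity only, i.e. ignores lateral acceleration.
    """
    t_to_goal = (goal_y - ball_pos[1]) / ball_vel[1]      # time along current heading
    return ball_pos[0] + ball_vel[0] * t_to_goal          # lateral (x) intercept

def goalkeeper_hand_update(hand_x, ball_pos, ball_vel, gain=5.0, dt=0.01):
    """First-order pursuit of the heading-based intercept (illustrative only)."""
    target_x = heading_intersection(ball_pos, ball_vel)
    return hand_x + gain * (target_x - hand_x) * dt

# Toy curved free-kick: constant spin-induced lateral acceleration that the
# heading-based controller above does not take into account.
pos = np.array([0.0, 20.0])     # m, ball starts 20 m from the goal line
vel = np.array([2.0, -20.0])    # m/s, initial heading
acc = np.array([-3.0, 0.0])     # m/s^2, lateral curve
hand_x, dt = 0.0, 0.01
for _ in range(100):            # ~1 s of flight
    hand_x = goalkeeper_hand_update(hand_x, pos, vel, dt=dt)
    vel += acc * dt
    pos += vel * dt
# The hand ends displaced toward the side the initial heading pointed to,
# mirroring the reported bias for curved free-kicks.
print(f"ball crossing x: {pos[0]:.2f} m, hand x: {hand_x:.2f} m")
```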
Expansion of visual space during optokinetic afternystagmus (OKAN).
Kaminiarz, André; Krekelberg, Bart; Bremmer, Frank
2008-05-01
The mechanisms underlying visual perceptual stability are usually investigated using voluntary eye movements. In such studies, errors in perceptual stability during saccades and pursuit are commonly interpreted as mismatches between actual eye position and eye-position signals in the brain. The generality of this interpretation could in principle be tested by investigating spatial localization during reflexive eye movements whose kinematics are very similar to those of voluntary eye movements. Accordingly, in this study, we determined mislocalization of flashed visual targets during optokinetic afternystagmus (OKAN). These eye movements are quite unique in that they occur in complete darkness and are generated by subcortical control mechanisms. We found that during horizontal OKAN slow phases, subjects mislocalize targets away from the fovea in the horizontal direction. This corresponds to a perceived expansion of visual space and is unlike mislocalization found for any other voluntary or reflexive eye movement. Around the OKAN fast phases, we found a bias in the direction of the fast phase prior to its onset and opposite to the fast-phase direction thereafter. Such a biphasic modulation has also been reported in the temporal vicinity of saccades and during optokinetic nystagmus (OKN). A direct comparison, however, showed that the modulation during OKAN was much larger and occurred earlier relative to fast-phase onset than during OKN. A simple mismatch between the current eye position and the eye-position signal in the brain is unlikely to explain such disparate results across similar eye movements. Instead, these data support the view that mislocalization arises from errors in eye-centered position information.
Virtual-reality-Based 3D navigation training for emergency egress from spacecraft.
Aoki, Hirofumi; Oman, Charles M; Natapoff, Alan
2007-08-01
Astronauts have reported spatial disorientation and navigation problems inside spacecraft whose interior visual vertical direction varies from module to module. If they had relevant preflight practice they might orient better. This experiment examined the influence of relative body orientation and individual spatial skills during VR training on a simulated emergency egress task. During training, 36 subjects were each led on 12 tours through a space station by a virtual tour guide. Subjects wore a head-mounted display and controlled their motion with a game-pad. Each tour traversed multiple modules and involved up to three changes in visual vertical direction. Each subject was assigned to one of three groups that maintained different postures: visually upright relative to the "local" module; constant orientation relative to the "station" irrespective of local visual vertical; and "mixed" (local, followed by station orientation). Groups were balanced on the basis of mental rotation and perspective-taking test scores. Subjects then performed 24 emergency egress testing trials without the tour guide. Smoke reduced visibility during the last 12 trials. Egress time, sense of direction (by pointing to origin and destination) and configuration knowledge were measured. Both individual 3D spatial abilities and orientation during training influence emergency egress performance, pointing, and configuration knowledge. Local training facilitates landmark and route learning, but station training enhances sense of direction relative to station, and, therefore, performance in low visibility. We recommend a sequence of local, followed by station, and then randomized orientation training, preferably customized to a trainee's 3D spatial ability.
Visual discrimination training improves Humphrey perimetry in chronic cortically induced blindness.
Cavanaugh, Matthew R; Huxlin, Krystel R
2017-05-09
To assess if visual discrimination training improves performance on visual perimetry tests in chronic stroke patients with visual cortex involvement. 24-2 and 10-2 Humphrey visual fields were analyzed for 17 chronic cortically blind stroke patients prior to and following visual discrimination training, as well as in 5 untrained, cortically blind controls. Trained patients practiced direction discrimination, orientation discrimination, or both, at nonoverlapping, blind field locations. All pretraining and posttraining discrimination performance and Humphrey fields were collected with online eye tracking, ensuring gaze-contingent stimulus presentation. Trained patients recovered ∼108 deg² of vision on average, while untrained patients spontaneously improved over an area of ∼16 deg². Improvement was not affected by patient age, time since lesion, size of initial deficit, or training type, but was proportional to the amount of training performed. Untrained patients counterbalanced their improvements with worsening of sensitivity over ∼9 deg² of their visual field. Worsening was minimal in trained patients. Finally, although discrimination performance improved at all trained locations, changes in Humphrey sensitivity occurred both within trained regions and beyond, extending over a larger area along the blind field border. In adults with chronic cortical visual impairment, the blind field border appears to have enhanced plastic potential, which can be recruited by gaze-controlled visual discrimination training to expand the visible field. Our findings underscore a critical need for future studies to measure the effects of vision restoration approaches on perimetry in larger cohorts of patients. Copyright © 2017 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the American Academy of Neurology.
Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.
Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne
2016-05-01
We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to foreground over background features of the stimuli. Overall, these results suggest that salamanders preferentially learn to execute responses rather than learning to use visual cues, but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamanders' life history (e.g. hibernation and metamorphosis) impact cognition.
Hollingworth, Andrew; Richard, Ashleigh M; Luck, Steven J
2008-02-01
Visual short-term memory (VSTM) has received intensive study over the past decade, with research focused on VSTM capacity and representational format. Yet, the function of VSTM in human cognition is not well understood. Here, the authors demonstrate that VSTM plays an important role in the control of saccadic eye movements. Intelligent human behavior depends on directing the eyes to goal-relevant objects in the world, yet saccades are very often inaccurate and require correction. The authors hypothesized that VSTM is used to remember the features of the current saccade target so that it can be rapidly reacquired after an errant saccade, a task faced by the visual system thousands of times each day. In 4 experiments, memory-based gaze correction was accurate, fast, automatic, and largely unconscious. In addition, a concurrent VSTM load interfered with memory-based gaze correction, but a verbal short-term memory load did not. These findings demonstrate that VSTM plays a direct role in a fundamentally important aspect of visually guided behavior, and they suggest the existence of previously unknown links between VSTM representations and the oculomotor system. PsycINFO Database Record (c) 2008 APA, all rights reserved.
Dalton, Brian H; Rasman, Brandon G; Inglis, J Timothy; Blouin, Jean-Sébastien
2017-04-15
We tested perceived head-on-feet orientation and the direction of vestibular-evoked balance responses in passively and actively held head-turned postures. The direction of vestibular-evoked balance responses was not aligned with perceived head-on-feet orientation while maintaining prolonged passively held head-turned postures. Furthermore, static visual cues of head-on-feet orientation did not update the estimate of head posture for the balance controller. A prolonged actively held head-turned posture did not elicit a rotation in the direction of the vestibular-evoked balance response despite a significant rotation in perceived angular head posture. It is proposed that conscious perception of head posture and the transformation of vestibular signals for standing balance relying on this head posture are not dependent on the same internal representation. Rather, the balance system may operate under its own sensorimotor principles, which are partly independent from perception. Vestibular signals used for balance control must be integrated with other sensorimotor cues to allow transformation of descending signals according to an internal representation of body configuration. We explored two alternative models of sensorimotor integration that propose (1) a single internal representation of head-on-feet orientation is responsible for perceived postural orientation and standing balance or (2) conscious perception and balance control are driven by separate internal representations. During three experiments, participants stood quietly while passively or actively maintaining a prolonged head-turned posture (>10 min). Throughout the trials, participants intermittently reported their perceived head angular position, and subsequently electrical vestibular stimuli were delivered to elicit whole-body balance responses. Visual recalibration of head-on-feet posture was used to determine whether static visual cues are used to update the internal representation of body configuration for perceived orientation and standing balance. All three experiments involved situations in which the vestibular-evoked balance response was not orthogonal to perceived head-on-feet orientation, regardless of the visual information provided. For prolonged head-turned postures, balance responses consistent with actual head-on-feet posture occurred only during the active condition. Our results indicate that conscious perception of head-on-feet posture and vestibular control of balance do not rely on the same internal representation, but instead treat sensorimotor cues in parallel and may arrive at different conclusions regarding head-on-feet posture. The balance system appears to bypass static visual cues of postural orientation and mainly use other sensorimotor signals of head-on-feet position to transform vestibular signals of head motion, a mechanism appropriate for most daily activities. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Alber, Raimund; Moser, Hermann; Gall, Carolin; Sabel, Bernhard A
2017-08-01
Visual field defects after posterior cerebral artery stroke can be improved by vision restoration training (VRT), but when combined with transcranial direct current stimulation (tDCS), which alters brain excitability, vision recovery can be potentiated in the chronic stage. To date, the combination of VRT and tDCS has not been evaluated in postacute stroke rehabilitation. To determine whether combined tDCS and VRT can be effectively implemented in the early recovery phase following stroke, and to explore the feasibility, safety and efficacy of an early intervention. Open-label pilot study including a case series of 7 tDCS/VRT versus a convenience sample of 7 control patients (ClinicalTrials.gov ID: NCT02935413). Rehabilitation center. Patients with homonymous visual field defects following a posterior cerebral artery stroke. Seven homonymous hemianopia patients were prospectively treated with 10 sessions of combined tDCS (2 mA, 10 daily sessions of 20 minutes) and VRT at 66 (±50) days on average poststroke. Visual field recovery was compared with the retrospective data of 7 controls, whose defect sizes and age of lesions were matched to those of the experimental subjects and who had received standard rehabilitation with compensatory eye movement and exploration training. All 7 patients in the treatment group completed the treatment protocol. The safety and acceptance were excellent, and patients reported occasional skin itching beneath the electrodes as the only minor side effect. Irrespective of their treatment, both groups (treatment and control) showed improved visual fields as documented by an increased mean sensitivity threshold in decibels in standard static perimetry. Recovery was significantly greater (P < .05) in the tDCS/VRT patients (36.73% ± 37.0%) than in the controls (10.74% ± 8.86%). In this open-label pilot study, tDCS/VRT in subacute stroke was demonstrated to be safe, with excellent applicability and acceptance of the treatment. Preliminary effectiveness calculations show that tDCS/VRT may be superior to standard vision training procedures. A confirmatory, larger-sample, controlled, randomized, and double-blind trial is now underway to compare real-tDCS- versus sham-tDCS-supported visual field training in the early vision rehabilitation phase. IV. Copyright © 2017 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
[Evaluation of visual functions in elderly patients with femoral neck fracture].
Oner, Mithat; Oner, Ayşe; Güney, Ahmet; Halici, Mehmet; Arda, Hatice; Bilal, Okkeş
2009-01-01
We aimed to assess the visual functions of elderly patients with femoral neck fracture and to compare the results with those of age-matched controls in this three-year prospective study. Seventy-one patients with a history of fall-related hip fracture (39 females, 32 males; mean age 76.3+/-9.7 years; range 64 to 90 years) who were diagnosed with femoral neck fracture after direct radiography were treated by means of bipolar partial prosthesis, and they were contacted postoperatively or prior to discharge to participate in the study. Visual acuity, depth perception, and the presence of cataract in the red reflex were evaluated. A dilated fundus and slit-lamp examination were performed if possible. On completion of the examination, the ophthalmologist documented the causes of any visual impairment found. The control group comprised 40 age-matched subjects (22 females, 18 males; mean age 73.2+/-7.6 years; range 62 to 90 years) who presented to the ophthalmology clinic for routine examination. Visual acuity was significantly decreased in the patient group, as was stereopsis (p<0.05). We found no difference between the study group and the controls when we evaluated the distribution of self-reported eye disease and eye disease found on ocular examination. The rate of patients who reported not usually wearing glasses was 35%, compared with 5% in the control group. Regarding time since last examination, 38% of patients had not had an eye examination for over four years, compared with 22.5% of controls. This study shows that elderly people should have their eyes tested at least once every two years, refractive errors should be corrected and eye diseases should be treated to decrease the risk of fall-related femoral neck fractures.
Temporal processing dysfunction in schizophrenia.
Carroll, Christine A; Boggs, Jennifer; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P
2008-07-01
Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the pathophysiology of schizophrenia, there remains a paucity of research directly examining overt timing performance in the disorder. Accordingly, the present study investigated timing in schizophrenia using a well-established task of time perception. Twenty-three individuals with schizophrenia and 22 non-psychiatric control participants completed a temporal bisection task, which required participants to make temporal judgments about auditory and visually presented durations ranging from 300 to 600 ms. Both schizophrenia and control groups displayed greater visual compared to auditory timing variability, with no difference between groups in the visual modality. However, individuals with schizophrenia exhibited less temporal precision than controls in the perception of auditory durations. These findings correlated with parameter estimates obtained from a quantitative model of time estimation, and provide evidence of a fundamental deficit in temporal auditory precision in schizophrenia.
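For readers unfamiliar with temporal bisection, the standard summary measures (assumed here; the abstract does not specify which quantitative model the authors fit) come from a psychometric function relating probe duration to the proportion of "long" responses, with timing variability expressed as a Weber ratio:

```latex
% Common temporal-bisection statistics (assumed standard analysis, not quoted from the paper):
% T_{25}, T_{50}, T_{75} are the durations yielding 25%, 50% and 75% "long" responses
% on the fitted psychometric function.
\begin{align*}
  \text{Bisection point (BP)} &= T_{50},\\
  \text{Weber ratio (WR)}     &= \frac{T_{75} - T_{25}}{2\,T_{50}} .
\end{align*}
```

A larger Weber ratio corresponds to less temporal precision, which is the direction of the auditory deficit reported for the schizophrenia group.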
Exploring Gigabyte Datasets in Real Time: Architectures, Interfaces and Time-Critical Design
NASA Technical Reports Server (NTRS)
Bryson, Steve; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Architectures and Interfaces: The implications of real-time interaction for software architecture design: decoupling of interaction/graphics and computation into asynchronous processes. The performance requirements of graphics and computation for interaction. Time management in such an architecture. Examples of how visualization algorithms must be modified for high performance. Brief survey of interaction techniques and design, including direct manipulation and manipulation via widgets. The talk discusses how human factors considerations drove the design and implementation of the virtual wind tunnel. Time-Critical Design: A survey of time-critical techniques for both computation and rendering. Emphasis on the assignment of a time budget to both the overall visualization environment and to each individual visualization technique in the environment. The estimation of the benefit and cost of an individual technique. Examples of the modification of visualization algorithms to allow time-critical control.
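The abstract outlines, but does not spell out, how a frame's time budget is split across visualization techniques. The following sketch shows one way such a time-critical scheduler could look; the benefit/cost estimates, the 33 ms per-frame budget, and the greedy allocation rule are illustrative assumptions rather than the system described in the talk.

```python
from dataclasses import dataclass

@dataclass
class Technique:
    name: str
    benefit: float   # estimated usefulness at full quality (assumed 0..1 scale)
    cost_ms: float   # estimated render/compute cost at full quality

def allocate_budget(techniques, frame_budget_ms=33.0):
    """Greedy time-critical allocation: spend the frame budget on the techniques
    with the best benefit-per-millisecond, degrading or dropping the rest."""
    remaining = frame_budget_ms
    plan = {}
    for t in sorted(techniques, key=lambda t: t.benefit / t.cost_ms, reverse=True):
        if t.cost_ms <= remaining:
            plan[t.name] = 1.0                     # run at full quality
            remaining -= t.cost_ms
        elif remaining > 0:
            plan[t.name] = remaining / t.cost_ms   # run at reduced quality/resolution
            remaining = 0.0
        else:
            plan[t.name] = 0.0                     # skip this frame
    return plan

print(allocate_budget([Technique("streamlines", 0.9, 20.0),
                       Technique("isosurface", 0.7, 25.0),
                       Technique("cutting plane", 0.4, 5.0)]))
```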
How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization
Kress, Daniel; van Bokhorst, Evelien; Lentink, David
2015-01-01
Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones. PMID:26107413
Aberrant Pattern of Scanning in Prosopagnosia Reflects Impaired Face Processing
ERIC Educational Resources Information Center
Stephan, Blossom Christa Maree; Caine, Diana
2009-01-01
Visual scanpath recording was used to investigate the information processing strategies used by a prosopagnosic patient, SC, when viewing faces. Compared to controls, SC showed an aberrant pattern of scanning, directing attention away from the internal configuration of facial features (eyes, nose) towards peripheral regions (hair, forehead) of the…
ERIC Educational Resources Information Center
Langstaff, Nancy
This book, intended for use by inservice teachers, preservice teachers, and parents interested in open classrooms, contains three chapters. "Beginning Reading in an Open Classroom" discusses language development, sight vocabulary, visual discrimination, auditory discrimination, directional concepts, small muscle control, and measurement of…
Memory Consolidation and Gene Expression in "Periplaneta Americana"
ERIC Educational Resources Information Center
Strausfeld, Nicholas J.; Pinter, Marianna; Lent, David D.
2005-01-01
A unique behavioral paradigm has been developed for "Periplaneta americana" that assesses the timing and success of memory consolidation leading to long-term memory of visual-olfactory associations. The brains of trained and control animals, removed at the critical consolidation period, were screened by two-directional suppression subtractive…
1983-12-01
PROPOSED SOLUTIONS. Many papers have been published outlining alternative methods of thermally controlling microelectronic devices. Hannemann [3] describes... Workshop, NSF Grant ENG-7701297, Directions of Heat Transfer in Electronic Equipment, by R. C. Chu, 1977. 3. Hannemann, R., "Electronic System Thermal
Interactions between dorsal and ventral streams for controlling skilled grasp
van Polanen, Vonne; Davare, Marco
2015-01-01
The two visual systems hypothesis suggests processing of visual information into two distinct routes in the brain: a dorsal stream for the control of actions and a ventral stream for the identification of objects. Recently, increasing evidence has shown that the dorsal and ventral streams are not strictly independent, but do interact with each other. In this paper, we argue that the interactions between dorsal and ventral streams are important for controlling complex object-oriented hand movements, especially skilled grasp. Anatomical studies have reported the existence of direct connections between dorsal and ventral stream areas. These physiological interconnections appear to be gradually more active as the precision demands of the grasp become higher. It is hypothesised that the dorsal stream needs to retrieve detailed information about object identity, stored in ventral stream areas, when the object properties require complex fine-tuning of the grasp. In turn, the ventral stream might receive up to date grasp-related information from dorsal stream areas to refine the object internal representation. Future research will provide direct evidence for which specific areas of the two streams interact, the timing of their interactions and in which behavioural context they occur. PMID:26169317
Evidence for auditory-visual processing specific to biological motion.
Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F
2012-01-01
Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensorial processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modality is specific for biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modality yield longer reaction times in comparison to consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1 this suggests that conflicting auditory-visual motion information of an intact human walker leads to interference, thereby delaying the response.
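"Statistical facilitation" in Experiment 1 is conventionally evaluated against the race-model bound; the abstract does not name the specific test, so the inequality below is the commonly assumed benchmark rather than a quotation from the paper. Redundant-signal reaction times that stay within this bound can be explained by two independent channels racing, without true audio-visual integration:

```latex
% Race-model (Miller) inequality for redundant audio-visual (AV) targets,
% given unimodal auditory (A) and visual (V) reaction-time distributions:
P\bigl(\mathrm{RT}_{AV} \le t\bigr) \;\le\; P\bigl(\mathrm{RT}_{A} \le t\bigr) + P\bigl(\mathrm{RT}_{V} \le t\bigr)
\qquad \text{for all } t .
```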
de Rengervé, Antoine; Andry, Pierre; Gaussier, Philippe
2015-04-01
Imitation and learning from humans require an adequate sensorimotor controller to learn and encode behaviors. We present the Dynamic Muscle Perception-Action (DM-PerAc) model to control a multiple degrees-of-freedom (DOF) robot arm. In the original PerAc model, path-following or place-reaching behaviors correspond to the sensorimotor attractors resulting from the dynamics of learned sensorimotor associations. The DM-PerAc model, inspired by human muscles, permits one to combine impedance-like control with the capability of learning sensorimotor attraction basins. We detail a solution for learning the DM-PerAc visuomotor controller incrementally online. Postural attractors are learned by adapting the muscle activations in the model depending on movement errors. Visuomotor categories merging visual and proprioceptive signals are associated with these muscle activations. Thus, the visual and proprioceptive signals activate the motor action, generating an attractor that satisfies both visual and proprioceptive constraints. This visuomotor controller can serve as a basis for imitative behaviors. In addition, the muscle activation patterns can define directions of movement instead of postural attractors. Such patterns can be used in state-action couples to generate trajectories as in the PerAc model. We discuss a possible extension of the DM-PerAc controller by adapting Fukuyori's controller based on the Langevin equation. This controller can serve not only to reach attractors that were not explicitly learned, but also to learn the state/action couples that define trajectories.
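The abstract describes the controller only at a high level. Purely as a rough illustration of the idea that learned visuomotor categories recruit activation patterns which pull the arm toward a postural attractor, here is a minimal, assumed sketch; the real DM-PerAc model is considerably richer, and the radial-basis category encoding, learned quantities, and gains below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned quantities: K visuomotor categories defined over the concatenated
# (visual, proprioceptive) state, each paired with an attractor posture that stands in
# for a learned muscle-activation pattern.
K, state_dim, n_joints = 8, 4, 2
category_centers = rng.uniform(-1, 1, (K, state_dim))
attractor_postures = rng.uniform(-1, 1, (K, n_joints))

def category_activation(state, sigma=0.5):
    """Radial-basis activation of each visuomotor category (illustrative choice)."""
    d2 = np.sum((category_centers - state) ** 2, axis=1)
    a = np.exp(-d2 / (2 * sigma ** 2))
    return a / a.sum()

def joint_update(joint_angles, state, gain=0.2):
    """Impedance-like pull of the joints toward the category-weighted attractor posture."""
    target = category_activation(state) @ attractor_postures
    return joint_angles + gain * (target - joint_angles)

joints = np.zeros(n_joints)
for _ in range(50):
    # state = visual cue (fixed here) concatenated with proprioception (current joints)
    state = np.concatenate([np.array([0.3, -0.2]), joints])
    joints = joint_update(joints, state)
print("converged posture:", np.round(joints, 3))
```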
Li, Siyao; Cai, Ying; Liu, Jing; Li, Dawei; Feng, Zifang; Chen, Chuansheng; Xue, Gui
2017-04-01
Mounting evidence suggests that multiple mechanisms underlie working memory capacity. Using transcranial direct current stimulation (tDCS), the current study aimed to provide causal evidence for the neural dissociation of two mechanisms underlying visual working memory (WM) capacity, namely, the scope and control of attention. A change detection task with distractors was used, where a number of colored bars (i.e., two red bars, four red bars, or two red plus two blue bars) were presented on both sides (Experiment 1) or the center (Experiment 2) of the screen for 100 ms, and participants were instructed to remember the red bars and to ignore the blue bars (in both experiments), as well as to ignore the stimuli on the un-cued side (Experiment 1 only). In both experiments, participants finished three sessions of the task after 15 min of 1.5 mA anodal tDCS administered on the right prefrontal cortex (PFC), the right posterior parietal cortex (PPC), and the primary visual cortex (VC), respectively. The VC stimulation served as an active control condition. We found that compared to stimulation on the VC, stimulation on the right PPC specifically increased the visual WM capacity under the no-distractor condition (i.e., 4 red bars), whereas stimulation on the right PFC specifically increased the visual WM capacity under the distractor condition (i.e., 2 red bars plus 2 blue bars). These results suggest that the PPC and PFC are involved in the scope and control of attention, respectively. We further showed that compared to central presentation of the stimuli (Experiment 2), bilateral presentation of the stimuli (on both sides of the fixation in Experiment 1) led to an additional demand for attention control. Our results emphasize the dissociated roles of the frontal and parietal lobes in visual WM capacity, and provide a deeper understanding of the neural mechanisms of WM. Copyright © 2017 Elsevier Inc. All rights reserved.
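The abstract reports capacity differences but does not state the estimator used. In change-detection designs of this kind, capacity is commonly summarized with Cowan's K; the formula below is that standard estimator, assumed here rather than quoted from the paper:

```latex
% Cowan's K for a single-probe change-detection task with memory set size N,
% hit rate H and false-alarm rate F (assumed standard capacity estimator):
K = N \times (H - F)
```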
On the nature of unintentional action: a study of force/moment drifts during multifinger tasks.
Parsa, Behnoosh; O'Shea, Daniel J; Zatsiorsky, Vladimir M; Latash, Mark L
2016-08-01
We explored the origins of unintentional changes in performance during accurate force production in isometric conditions seen after turning visual feedback off. The idea of control with referent spatial coordinates suggests that these phenomena could result from drifts of the referent coordinate for the effector. Subjects performed accurate force/moment production tasks by pressing with the fingers of a hand on force sensors. Turning the visual feedback off resulted in slow drifts of both total force and total moment to lower magnitudes of these variables; these drifts were more pronounced in the right hand of the right-handed subjects. Drifts in individual finger forces could be in different directions; in particular, fingers that produced moments of force against the required total moment showed an increase in their forces. The force/moment drift was associated with a drop in the index of synergy stabilizing performance under visual feedback. The drifts in directions that changed performance (non-motor equivalent) and in directions that did not (motor equivalent) were of about the same magnitude. The results suggest that control with referent coordinates is associated with drifts of those referent coordinates toward the corresponding actual coordinates of the hand, a reflection of the natural tendency of physical systems to move toward a minimum of potential energy. The interaction between drifts of the hand referent coordinate and referent orientation leads to counterdirectional drifts in individual finger forces. The results also demonstrate that the sensory information used to create multifinger synergies is necessary for their presence over the task duration. Copyright © 2016 the American Physiological Society.
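The "index of synergy" is not defined in the abstract. In the uncontrolled manifold (UCM) framework typically used for such multifinger analyses, it is usually computed from the inter-trial variance lying within versus orthogonal to the UCM, each normalized by its dimensionality; the expression below is that commonly assumed definition, not one quoted from the paper:

```latex
% UCM synergy index (assumed standard form): V_UCM and V_ORT are inter-trial variances
% within and orthogonal to the uncontrolled manifold, d_UCM, d_ORT, d_TOT the
% corresponding numbers of degrees of freedom, and V_TOT the total variance.
\Delta V \;=\; \frac{V_{\mathrm{UCM}}/d_{\mathrm{UCM}} \;-\; V_{\mathrm{ORT}}/d_{\mathrm{ORT}}}
                     {V_{\mathrm{TOT}}/d_{\mathrm{TOT}}}
```

Positive values indicate that most variance is channeled into combinations of finger forces that leave total force/moment unchanged, i.e., a performance-stabilizing synergy.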
Acetylcholine contributes to the integration of self-movement cues in head direction cells.
Yoder, Ryan M; Chan, Jeremy H M; Taube, Jeffrey S
2017-08-01
Acetylcholine contributes to accurate performance on some navigational tasks, but details of its contribution to the underlying brain signals are not fully understood. The medial septal area provides widespread cholinergic input to various brain regions, but selective damage to medial septal cholinergic neurons generally has little effect on landmark-based navigation, or the underlying neural representations of location and directional heading in visual environments. In contrast, the loss of medial septal cholinergic neurons disrupts navigation based on path integration, but no studies have tested whether these path integration deficits are associated with disrupted head direction (HD) cell activity. Therefore, we evaluated HD cell responses to visual cue rotations in a familiar arena, and during navigation between familiar and novel arenas, after muscarinic receptor blockade with systemic atropine. Atropine treatment reduced the peak firing rate of HD cells, but failed to significantly affect other HD cell firing properties. Atropine also failed to significantly disrupt the dominant landmark control of the HD signal, even though we used a procedure that challenged this landmark control. In contrast, atropine disrupted HD cell stability during navigation between familiar and novel arenas, where path integration normally maintains a consistent HD cell signal across arenas. These results suggest that acetylcholine contributes to path integration, in part, by facilitating the use of idiothetic cues to maintain a consistent representation of directional heading. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Infantile nystagmus syndrome is associated with inefficiency of goal-directed hand movements.
Liebrand-Schurink, Joyce; Cox, Ralf F A; van Rens, Ger H M B; Cillessen, Antonius H N; Meulenbroek, Ruud G J; Boonstra, F Nienke
2014-12-23
The effect of infantile nystagmus syndrome (INS) on the efficiency of goal-directed hand movements was examined. We recruited 37 children with INS and 65 control subjects with normal vision, aged 4 to 8 years. Participants performed horizontally-oriented, goal-directed cylinder displacements as if they displaced a low-vision aid. The first 10 movements of 20 back-and-forth displacements in a trial were performed between two visually presented target areas, and the second 10 between remembered target locations (not visible). Motor performance was examined in terms of movement time, endpoint accuracy, and a harmonicity index reflecting energetic efficiency. Compared to the control group, the children with INS performed the cylinder displacements more slowly (using more time), less accurately (specifically in small-amplitude movements), and with less harmonic acceleration profiles. Their poor visual acuity proved to correlate with slower and less accurate movements, but did not correlate with harmonicity. When moving between remembered target locations, the performance of children with INS was less accurate than that of the children with normal vision. In both groups, movement speed and harmonicity increased with age to a similar extent. Collectively, the findings suggest that, in addition to the visuospatial homing-in problems associated with the syndrome, INS is associated with inefficiency of goal-directed hand movements. ( http://www.trialregister.nl number, NTR2380.). Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
Long-Lasting Crossmodal Cortical Reorganization Triggered by Brief Postnatal Visual Deprivation.
Collignon, Olivier; Dormal, Giulia; de Heering, Adelaide; Lepore, Franco; Lewis, Terri L; Maurer, Daphne
2015-09-21
Animal and human studies have demonstrated that transient visual deprivation early in life, even for a very short period, permanently alters the response properties of neurons in the visual cortex and leads to corresponding behavioral visual deficits. While it is acknowledged that early-onset and longstanding blindness leads the occipital cortex to respond to non-visual stimulation, it remains unknown whether a short and transient period of postnatal visual deprivation is sufficient to trigger crossmodal reorganization that persists after years of visual experience. In the present study, we characterized brain responses to auditory stimuli in 11 adults who had been deprived of all patterned vision at birth by congenital cataracts in both eyes until they were treated at 9 to 238 days of age. When compared to controls with typical visual experience, the cataract-reversal group showed enhanced auditory-driven activity in focal visual regions. A combination of dynamic causal modeling with Bayesian model selection indicated that this auditory-driven activity in the occipital cortex was better explained by direct cortico-cortical connections with the primary auditory cortex than by subcortical connections. Thus, a short and transient period of visual deprivation early in life leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision. Copyright © 2015 Elsevier Ltd. All rights reserved.
Noise Source Visualization Using a Digital Voice Recorder and Low-Cost Sensors
Cho, Yong Thung
2018-01-01
Accurate sound visualization of noise sources is required for optimal noise control. Typically, noise measurement systems require microphones, an analog-digital converter, cables, a data acquisition system, etc., which may not be affordable for potential users. Also, many such systems are not highly portable and may not be convenient for travel. Handheld personal electronic devices such as smartphones and digital voice recorders with relatively lower costs and higher performance have become widely available recently. Even though such devices are highly portable, directly implementing them for noise measurement may lead to erroneous results since such equipment was originally designed for voice recording. In this study, external microphones were connected to a digital voice recorder to conduct measurements and the input received was processed for noise visualization. In this way, a low cost, compact sound visualization system was designed and introduced to visualize two actual noise sources for verification with different characteristics: an enclosed loud speaker and a small air compressor. Reasonable accuracy of noise visualization for these two sources was shown over a relatively wide frequency range. This very affordable and compact sound visualization system can be used for many actual noise visualization applications in addition to educational purposes. PMID:29614038
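One common way to turn synchronized multi-microphone recordings into a noise-source map is delay-and-sum beamforming over a grid of candidate source positions. The following sketch illustrates that idea on simulated data; the array geometry, tone source, and the choice of delay-and-sum itself are illustrative assumptions and do not reproduce the authors' specific processing chain.

```python
import numpy as np

c = 343.0                                  # speed of sound (m/s)
fs = 48000.0                               # sampling rate (Hz), illustrative

# Four microphones on a small line array (m); scan a line of candidate sources 1 m away.
mic_xy = np.array([[-0.15, 0.0], [-0.05, 0.0], [0.05, 0.0], [0.15, 0.0]])
scan_x = np.linspace(-0.5, 0.5, 21)

def delay_and_sum_map(signals, mic_xy, scan_x, plane_dist=1.0):
    """Delay-and-sum power along a line of candidate source positions.
    `signals` has shape (n_mics, n_samples)."""
    n_mics, _ = signals.shape
    powers = []
    for sx in scan_x:
        src = np.array([sx, plane_dist])
        dists = np.linalg.norm(mic_xy - src, axis=1)
        delays = (dists - dists.min()) / c              # relative arrival delays (s)
        aligned = [np.roll(signals[m], -int(round(delays[m] * fs)))
                   for m in range(n_mics)]
        powers.append(np.mean(np.sum(aligned, axis=0) ** 2))
    return np.array(powers)

# Simulated source at x = +0.2 m emitting a 1 kHz tone plus sensor noise.
rng = np.random.default_rng(5)
t = np.arange(0, 0.1, 1.0 / fs)
src_pos = np.array([0.2, 1.0])
signals = []
for m in range(mic_xy.shape[0]):
    d = np.linalg.norm(mic_xy[m] - src_pos)
    signals.append(np.sin(2 * np.pi * 1000.0 * (t - d / c))
                   + 0.1 * rng.standard_normal(t.size))
signals = np.array(signals)

p = delay_and_sum_map(signals, mic_xy, scan_x)
print("estimated source position x ≈", round(scan_x[np.argmax(p)], 2), "m")
```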
Neural mechanisms of limb position estimation in the primate brain.
Shi, Ying; Buneo, Christopher A
2011-01-01
Understanding the neural mechanisms of limb position estimation is important both for comprehending the neural control of goal directed arm movements and for developing neuroprosthetic systems designed to replace lost limb function. Here we examined the role of area 5 of the posterior parietal cortex in estimating limb position based on visual and somatic (proprioceptive, efference copy) signals. Single unit recordings were obtained as monkeys reached to visual targets presented in a semi-immersive virtual reality environment. On half of the trials animals were required to maintain their limb position at these targets while receiving both visual and non-visual feedback of their arm position, while on the other trials visual feedback was withheld. When examined individually, many area 5 neurons were tuned to the position of the limb in the workspace but very few neurons modulated their firing rates based on the presence/absence of visual feedback. At the population level however decoding of limb position was somewhat more accurate when visual feedback was provided. These findings support a role for area 5 in limb position estimation but also suggest that visual signals regarding limb position are only weakly represented in this area, and only at the population level.
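The population-level decoding reported above can be illustrated with a simple linear readout from simulated position-tuned firing rates. In this sketch the tuning model, noise, and ridge decoder are illustrative choices, not the decoding method used in the study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulate a population of neurons with planar tuning to 2-D hand position (cm).
n_neurons, n_trials = 60, 400
positions = rng.uniform(-10, 10, size=(n_trials, 2))
pref_dirs = rng.uniform(0, 2 * np.pi, n_neurons)
gains = rng.uniform(0.5, 2.0, n_neurons)
baseline = rng.uniform(5, 20, n_neurons)

# Firing rate = baseline + gain * projection of position onto the preferred direction.
proj = positions @ np.stack([np.cos(pref_dirs), np.sin(pref_dirs)])
rates = baseline + gains * proj
rates = rng.poisson(np.clip(rates, 0, None)).astype(float)     # spiking noise

# Linear (ridge) population decoder: firing rates -> hand position.
X_train, X_test, y_train, y_test = train_test_split(rates, positions, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_train, y_train)
err = np.linalg.norm(decoder.predict(X_test) - y_test, axis=1).mean()
print(f"mean decoding error: {err:.2f} cm")
```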
Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.
White, Claire; Brown, John; Edwards, Mark
2014-07-01
Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: A direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may be facilitating spatial pooling of motion signals in users. Alternatively, it may be that a GABA-mediated disruption to V5/MT processing is reducing spatial suppression and therefore improving global motion perception in ecstasy users.
Experimental characterization of wingtip vortices in the near field using smoke flow visualizations
NASA Astrophysics Data System (ADS)
Serrano-Aguilera, J. J.; García-Ortiz, J. Hermenegildo; Gallardo-Claros, A.; Parras, L.; del Pino, C.
2016-08-01
In order to predict the axial development of the wingtip vortices strength, an accurate theoretical model is required. Several experimental techniques have been used to that end, e.g. PIV or hot-wire anemometry, but they imply a significant cost and effort. For this reason, we have performed experiments using the smoke-wire technique to visualize smoke streaks in six planes perpendicular to the main stream flow direction. Using this visualization technique, we obtained quantitative information regarding the vortex velocity field by means of Batchelor's model for two chord-based Reynolds numbers, Re_c=3.33× 10^4 and 10^5. Therefore, this theoretical vortex model has been introduced in the integration of ordinary differential equations which describe the temporal evolution of streak lines as a function of two parameters: the swirl number, S, and the virtual axial origin, \overline{z_0}. We have applied two different procedures to minimize the distance between experimental and theoretical flow patterns: individual curve fitting at six different control planes in the streamwise direction and the global curve fitting which corresponds to all the control planes simultaneously. Both sets of results have been compared with those provided by del Pino et al. (Phys Fluids 23(013):602, 2011b. doi: 10.1063/1.3537791), finding good agreement. Finally, we have observed a weak influence of the Reynolds number on the values of S and \overline{z_0} at low-to-moderate Re_c. This experimental technique is proposed as a low cost alternative to characterize wingtip vortices based on flow visualizations.
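As a rough illustration of the fitted vortex model, the sketch below evaluates a Batchelor-type azimuthal velocity profile parameterized by a swirl number S and a virtual axial origin z0, with the viscous core growing diffusively downstream. The exact similarity scaling and nondimensionalization used by the authors may differ, so treat the formula and the numerical values as assumptions.

```python
import numpy as np

def batchelor_swirl(r, z, S, z0, Re_c):
    """Azimuthal velocity of a Batchelor-like trailing vortex (nondimensional).

    Sketch only: the core scale grows diffusively with downstream distance via a
    virtual origin z0; S is the swirl number. The authors' similarity variables
    and normalization may differ from this form.
    """
    delta2 = 4.0 * (z + z0) / Re_c            # squared viscous core scale
    r = np.asarray(r, dtype=float)
    return np.where(r > 0, (S / r) * (1.0 - np.exp(-r**2 / delta2)), 0.0)

# Example: swirl profiles at two control planes downstream of the wing.
r = np.linspace(1e-3, 2.0, 2000)
for z in (1.0, 5.0):
    v_theta = batchelor_swirl(r, z, S=0.4, z0=2.0, Re_c=3.33e4)
    print(f"z = {z}: peak v_theta ≈ {v_theta.max():.3f} at r ≈ {r[v_theta.argmax()]:.3f}")
```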
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
Decoding facial blends of emotion: visual field, attentional and hemispheric biases.
Ross, Elliott D; Shayya, Luay; Champlain, Amanda; Monnot, Marilee; Prodan, Calin I
2013-12-01
Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person's true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer's left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer's left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person's left ear, which also avoids the social stigma of eye-to-eye contact, one's ability to decode facial expressions should be enhanced. Published by Elsevier Inc.
Patterns of interhemispheric correlation during human communication.
Grinberg-Zylberbaum, J; Ramos, J
1987-09-01
Correlation patterns between the electroencephalographic activity of both hemispheres in adult subjects were obtained. The morphology of these patterns for one subject was compared with another subject's patterns during control situations without communication, and during sessions in which direct communication was stimulated. Neither verbalization nor visual or physical contact is necessary for direct communication to occur. The interhemispheric correlation patterns for each subject were observed to become similar during the communication sessions as compared to the control situations. These effects are not due to nonspecific factors such as habituation or fatigue. The results support the syntergic theory proposed by one of the authors (Grinberg-Zylberbaum).
Goyret, Joaquín; Kelber, Almut
2012-01-01
Most visual systems are more sensitive to luminance than to colour signals. Animals resolve finer spatial detail and temporal changes through achromatic signals than through chromatic ones. Probably, this explains that detection of small, distant, or moving objects is typically mediated through achromatic signals. Macroglossum stellatarum are fast flying nectarivorous hawkmoths that inspect flowers with their long proboscis while hovering. They can visually control this behaviour using floral markings known as nectar guides. Here, we investigate whether this is mediated by chromatic or achromatic cues. We evaluated proboscis placement, foraging efficiency, and inspection learning of naïve moths foraging on flower models with coloured markings that offered either chromatic, achromatic or both contrasts. Hummingbird hawkmoths could use either achromatic or chromatic signals to inspect models while hovering. We identified three, apparently independent, components controlling proboscis placement: After initial contact, 1) moths directed their probing towards the yellow colour irrespectively of luminance signals, suggesting a dominant role of chromatic signals; and 2) moths tended to probe mainly on the brighter areas of models that offered only achromatic signals. 3) During the establishment of the first contact, naïve moths showed a tendency to direct their proboscis towards the small floral marks independent of their colour or luminance. Moths learned to find nectar faster, but their foraging efficiency depended on the flower model they foraged on. Our results imply that M. stellatarum can perceive small patterns through colour vision. We discuss how the different informational contents of chromatic and luminance signals can be significant for the control of flower inspection, and visually guided behaviours in general.
Evaluation of Postural Control in Patients with Glaucoma Using a Virtual Reality Environment.
Diniz-Filho, Alberto; Boer, Erwin R; Gracitelli, Carolina P B; Abe, Ricardo Y; van Driel, Nienke; Yang, Zhiyong; Medeiros, Felipe A
2015-06-01
To evaluate postural control using a dynamic virtual reality environment and the relationship between postural metrics and history of falls in patients with glaucoma. Cross-sectional study. The study involved 42 patients with glaucoma with repeatable visual field defects on standard automated perimetry (SAP) and 38 control healthy subjects. Patients underwent evaluation of postural stability by a force platform during presentation of static and dynamic visual stimuli on stereoscopic head-mounted goggles. The dynamic visual stimuli presented rotational and translational ecologically valid peripheral background perturbations. Postural stability was also tested in a completely dark field to assess somatosensory and vestibular contributions to postural control. History of falls was evaluated by a standard questionnaire. Torque moments around the center of foot pressure on the force platform were measured, and the standard deviations of the torque moments (STD) were calculated as a measurement of postural stability and reported in Newton meters (Nm). The association with history of falls was investigated using Poisson regression models. Age, gender, body mass index, severity of visual field defect, best-corrected visual acuity, and STD on dark field condition were included as confounding factors. Patients with glaucoma had larger overall STD than controls during both translational (5.12 ± 2.39 Nm vs. 3.85 ± 1.82 Nm, respectively; P = 0.005) and rotational stimuli (5.60 ± 3.82 Nm vs. 3.93 ± 2.07 Nm, respectively; P = 0.022). Postural metrics obtained during dynamic visual stimuli performed better in explaining history of falls compared with those obtained in static and dark field condition. In the multivariable model, STD values in the mediolateral direction during translational stimulus were significantly associated with a history of falls in patients with glaucoma (incidence rate ratio, 1.85; 95% confidence interval, 1.30-2.63; P = 0.001). The study presented and validated a novel paradigm for evaluation of balance control in patients with glaucoma on the basis of the assessment of postural reactivity to dynamic visual stimuli using a virtual reality environment. The newly developed metrics were associated with a history of falls and may help to provide a better understanding of balance control in patients with glaucoma. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Evaluation of Postural Control in Glaucoma Patients Using a Virtual Reality Environment
Diniz-Filho, Alberto; Boer, Erwin R.; Gracitelli, Carolina P. B.; Abe, Ricardo Y.; van Driel, Nienke; Yang, Zhiyong; Medeiros, Felipe A.
2015-01-01
Purpose To evaluate postural control using a dynamic virtual reality environment and the relationship between postural metrics and history of falls in glaucoma patients. Design Cross-sectional study. Participants The study involved 42 glaucoma patients with repeatable visual field defects on standard automated perimetry (SAP) and 38 control healthy subjects. Methods Patients underwent evaluation of postural stability by a force platform during presentation of static and dynamic visual stimuli on stereoscopic head-mounted goggles. The dynamic visual stimuli presented rotational and translational ecologically valid peripheral background perturbations. Postural stability was also tested in a completely dark field to assess somatosensory and vestibular contributions to postural control. History of falls was evaluated by a standard questionnaire. Main Outcome Measures Torque moments around the center of foot pressure on the force platform were measured and the standard deviations (STD) of these torque moments were calculated as a measurement of postural stability and reported in Newton meter (Nm). The association with history of falls was investigated using Poisson regression models. Age, gender, body mass index, severity of visual field defect, best-corrected visual acuity, and STD on dark field condition were included as confounding factors. Results Glaucoma patients had larger overall STD than controls during both translational (5.12 ± 2.39 Nm vs. 3.85 ± 1.82 Nm, respectively; P = 0.005) as well as rotational stimuli (5.60 ± 3.82 Nm vs. 3.93 ± 2.07 Nm, respectively; P = 0.022). Postural metrics obtained during dynamic visual stimuli performed better in explaining history of falls compared to those obtained in static and dark field condition. In the multivariable model, STD values in the mediolateral direction during translational stimulus were significantly associated with history of falls in glaucoma patients (incidence-rate ratio = 1.85; 95% CI: 1.30 – 2.63; P = 0.001). Conclusions The study presented and validated a novel paradigm for evaluation of balance control in glaucoma patients based on the assessment of postural reactivity to dynamic visual stimuli using a virtual reality environment. The newly developed metrics were associated with history of falls and may help to provide a better understanding of balance control in glaucoma patients. PMID:25892017
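The incidence rate ratios quoted above come from Poisson regression of fall counts on postural sway measures with confounders as covariates. A minimal sketch with simulated, purely illustrative data (variable names such as std_ml are hypothetical) shows how such a ratio is obtained:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 80

# Illustrative simulated data: fall counts vs. mediolateral sway (STD, Nm),
# with age as a confounder. The published analysis included more covariates.
df = pd.DataFrame({
    "std_ml": rng.gamma(shape=4.0, scale=1.2, size=n),
    "age": rng.normal(70, 8, size=n),
})
rate = np.exp(-3.0 + 0.5 * df["std_ml"] + 0.02 * (df["age"] - 70))
df["falls"] = rng.poisson(rate)

model = smf.poisson("falls ~ std_ml + age", data=df).fit(disp=False)
irr = np.exp(model.params["std_ml"])
ci = np.exp(model.conf_int().loc["std_ml"])
print(f"IRR per 1 Nm increase in mediolateral STD: {irr:.2f} "
      f"(95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```

The incidence rate ratio is simply the exponentiated regression coefficient, i.e. the multiplicative change in expected fall count per unit increase in the sway metric.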
Fan, Zhao; Harris, John
2010-10-12
In a recent study (Fan, Z., & Harris, J. (2008). Perceived spatial displacement of motion-defined contours in peripheral vision. Vision Research, 48(28), 2793-2804), we demonstrated that virtual contours defined by two regions of dots moving in opposite directions were displaced perceptually in the direction of motion of the dots in the more eccentric region when the contours were viewed in the right visual field. Here, we show that the magnitude and/or direction of these displacements varies in different quadrants of the visual field. When contours were presented in the lower visual field, the direction of perceived contour displacement was consistent with that when both contours were presented in the right visual field. However, this illusory motion-induced spatial displacement disappeared when both contours were presented in the upper visual field. Also, perceived contour displacement in the direction of the more eccentric dots was larger in the right than in the left visual field, perhaps because of a hemispheric asymmetry in attentional allocation. Quadrant-based analyses suggest that the pattern of results arises from opposite directions of perceived contour displacement in the upper-left and lower-right visual quadrants, which depend on the relative strengths of two effects: a greater sensitivity to centripetal motion, and an asymmetry in the allocation of spatial attention. Copyright © 2010 Elsevier Ltd. All rights reserved.
Global motion perception deficits in autism are reflected as early as primary visual cortex
Thomas, Cibu; Kravitz, Dwight J.; Wallace, Gregory L.; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I.
2014-01-01
Individuals with autism are often characterized as ‘seeing the trees, but not the forest’—attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15–27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry. PMID:25060095
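Coherent motion stimuli of the kind used here are typically random-dot kinematograms in which only a controlled fraction of dots carries the signal direction on each frame. A minimal sketch of one common dot-update rule follows; the step size, dot count, and noise scheme are illustrative and may differ from the study's stimulus code.

```python
import numpy as np

def update_dots(xy, coherence, direction_deg, step=0.02, rng=None):
    """One frame update of a random-dot kinematogram (unit-square coordinates).

    A `coherence` fraction of dots moves `step` in the signal direction; the
    remaining dots are redrawn at random positions (one common noise scheme;
    the original study's exact dot algorithm may differ).
    """
    rng = rng or np.random.default_rng()
    n = xy.shape[0]
    signal = rng.random(n) < coherence
    theta = np.deg2rad(direction_deg)
    xy[signal] += step * np.array([np.cos(theta), np.sin(theta)])
    xy[~signal] = rng.random(((~signal).sum(), 2))
    xy %= 1.0                      # wrap dots that leave the aperture
    return xy

rng = np.random.default_rng(0)
dots = rng.random((200, 2))
for _ in range(36):                # roughly 0.6 s of frames at 60 Hz
    dots = update_dots(dots, coherence=0.15, direction_deg=0.0, rng=rng)
```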
Steering microtubule shuttle transport with dynamically controlled magnetic fields
Mahajan, K. D.; Ruan, G.; Dorcéna, C. J.; ...
2016-03-23
Nanoscale control of matter is critical to the design of integrated nanosystems. Here, we describe a method to dynamically control directionality of microtubule (MT) motion using programmable magnetic fields. MTs are combined with magnetic quantum dots (i.e., MagDots) that are manipulated by external magnetic fields provided by magnetic nanowires. MT shuttles thus undergo both ATP-driven and externally-directed motion with a fluorescence component that permits simultaneous visualization of shuttle motion. This technology is used to alter the trajectory of MTs in motion and to pin MT motion. Ultimately, such an approach could be used to evaluate the MT-kinesin transport system and could serve as the basis for improved lab-on-a-chip technologies based on MT transport.
Luedtke, Kerstin; Rushton, Alison; Wright, Christine; Jürgens, Tim; Polzer, Astrid; Mueller, Gerd; May, Arne
2015-04-16
To evaluate the effectiveness of transcranial direct current stimulation alone and in combination with cognitive behavioural management in patients with non-specific chronic low back pain. Double blind parallel group randomised controlled trial with six months' follow-up conducted May 2011-March 2013. Participants, physiotherapists, assessors, and analyses were blinded to group allocation. Interdisciplinary chronic pain centre. 135 participants with non-specific chronic low back pain >12 weeks were recruited from 225 patients assessed for eligibility. Participants were randomised to receive anodal (20 minutes to motor cortex at 2 mA) or sham transcranial direct current stimulation (identical electrode position, stimulator switched off after 30 seconds) for five consecutive days immediately before cognitive behavioural management (four week multidisciplinary programme of 80 hours). Two primary outcome measures of pain intensity (0-100 visual analogue scale) and disability (Oswestry disability index) were evaluated at two primary endpoints after stimulation and after cognitive behavioural management. Analyses of covariance with baseline values (pain or disability) as covariates showed that transcranial direct current stimulation was ineffective for the reduction of pain (difference between groups on visual analogue scale 1 mm (99% confidence interval -8.69 mm to 6.3 mm; P=0.68)) and disability (difference between groups 1 point (-1.73 to 1.98; P=0.86)) and did not influence the outcome of cognitive behavioural management (difference between groups 3 mm (-10.32 mm to 6.73 mm); P=0.58; difference between groups on Oswestry disability index 0 point (-2.45 to 2.62); P=0.92). The stimulation was well tolerated with minimal transitory side effects. The results of this trial on the effectiveness of transcranial direct current stimulation for the reduction of pain and disability do not support its clinical use for managing non-specific chronic low back pain. Trial registration: Current Controlled Trials ISRCTN89874874. © Luedtke et al 2015.
Visual direction finding by fishes
NASA Technical Reports Server (NTRS)
Waterman, T. H.
1972-01-01
The use of visual orientation, in the absence of landmarks, for underwater direction finding by fishes is reviewed. Celestial directional clues observed directly near the water surface or indirectly at an asymptotic depth are suggested as possible orientation aids.
Evaluating Middle School Students' Spatial-scientific Performance in Earth-space Science
NASA Astrophysics Data System (ADS)
Wilhelm, Jennifer; Jackson, C.; Toland, M. D.; Cole, M.; Wilhelm, R. J.
2013-06-01
Many astronomical concepts cannot be understood without a developed understanding of four spatial-mathematics domains defined as follows: a) Geometric Spatial Visualization (GSV) - Visualizing the geometric features of a system as it appears above, below, and within the system’s plane; b) Spatial Projection (SP) - Projecting to a different location and visualizing from that global perspective; c) Cardinal Directions (CD) - Distinguishing directions (N, S, E, W) in order to document an object’s vector position in space; and d) Periodic Patterns (PP) - Recognizing occurrences at regular intervals of time and/or space. For this study, differences were examined between groups of sixth grade students’ spatial-scientific development pre/post implementation of an Earth/Space unit. Treatment teachers employed a NASA-based curriculum (Realistic Explorations in Astronomical Learning), while control teachers implemented their regular Earth/Space units. A 2-level hierarchical linear model was used to evaluate student performance on the Lunar Phases Concept Inventory (LPCI) and four spatial-mathematics domains, while controlling for two variables (gender and ethnicity) at the student level and one variable (teaching experience) at the teacher level. Overall LPCI results show pre-test scores predicted post-test scores, boys performed better than girls, and Whites performed better than non-Whites. We also compared experimental and control groups’ outcomes by spatial-mathematics domain. For GSV, it was found that boys, in general, tended to have higher GSV post-scores. For domains CD and SP, no statistically significant differences were observed. PP results show Whites performed better than non-Whites. Also for PP, a significant cross-level interaction term (gender-treatment) was observed, which means differences in control and experimental groups are dependent on students’ gender. These findings can be interpreted as: (a) the experimental girls scored higher than the control girls and/or (b) the control group displayed a gender gap in favor of boys while no gender gap was displayed within the experimental group.
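A two-level hierarchical (mixed-effects) model of this kind can be sketched with a random intercept for teacher and student-level predictors. The simulated data and variable names below are purely illustrative, not the study's dataset or exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_teachers, n_students = 20, 25

# Illustrative simulated data: LPCI post-test predicted by pre-test, gender, and
# treatment, with students (level 1) nested in teachers (level 2).
teacher = np.repeat(np.arange(n_teachers), n_students)
teacher_effect = rng.normal(0, 2, n_teachers)[teacher]
df = pd.DataFrame({
    "teacher": teacher,
    "pre": rng.normal(10, 3, teacher.size),
    "girl": rng.integers(0, 2, teacher.size),
    "treatment": teacher % 2,            # half the teachers use the NASA-based unit
})
df["post"] = (5 + 0.6 * df["pre"] - 0.8 * df["girl"]
              + 1.5 * df["treatment"] + teacher_effect
              + rng.normal(0, 2, teacher.size))

# Random-intercept (2-level) model with a gender-by-treatment cross-level interaction.
m = smf.mixedlm("post ~ pre + girl * treatment", data=df, groups=df["teacher"]).fit()
print(m.summary())
```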
Weyand, T G; Gafka, A C
2001-01-01
We studied the visuomotor activity of corticotectal (CT) cells in two visual cortical areas [area 17 and the posteromedial lateral suprasylvian cortex (PMLS)] of the cat. The cats were trained in simple oculomotor tasks, and head position was fixed. Most CT cells in both cortical areas gave a vigorous discharge to a small stimulus used to control gaze when it fell within the retinotopically defined visual field. However, the vigor of the visual response did not predict latency to initiate a saccade, saccade velocity, amplitude, or even if a saccade would be made, minimizing any potential role these cells might have in premotor or attentional processes. Most CT cells in both areas were selective for direction of stimulus motion, and cells in PMLS showed a direction preference favoring motion away from points of central gaze. CT cells did not discharge with eye movements in the dark. During eye movements in the light, many CT cells in area 17 increased their activity. In contrast, cells in PMLS, including CT cells, were generally unresponsive during saccades. Paradoxically, cells in PMLS responded vigorously to stimuli moving at saccadic velocities, indicating that the oculomotor system suppresses visual activity elicited by moving the retina across an illuminated scene. Nearly all CT cells showed oscillatory activity in the frequency range of 20-90 Hz, especially in response to visual stimuli. However, this activity was capricious; strong oscillations in one trial could disappear in the next despite identical stimulus conditions. Although the CT cells in both of these regions share many characteristics, the direction anisotropy and the suppression of activity during eye movements which characterize the neurons in PMLS suggests that these two areas have different roles in facilitating perceptual/motor processes at the level of the superior colliculus.
Brumberg, Jonathan S; Nguyen, Anh; Pitt, Kevin M; Lorenz, Sean D
2018-01-31
We investigated how overt visual attention and oculomotor control influence successful use of a visual feedback brain-computer interface (BCI) for accessing augmentative and alternative communication (AAC) devices in a heterogeneous population of individuals with profound neuromotor impairments. BCIs are often tested within a single patient population limiting generalization of results. This study focuses on examining individual sensory abilities with an eye toward possible interface adaptations to improve device performance. Five individuals with a range of neuromotor disorders participated in a four-choice BCI control task involving the steady state visually evoked potential. The BCI graphical interface was designed to simulate a commercial AAC device to examine whether an integrated device could be used successfully by individuals with neuromotor impairment. All participants were able to interact with the BCI and highest performance was found for participants able to employ an overt visual attention strategy. For participants with visual deficits due to impaired oculomotor control, effective performance increased after accounting for mismatches between the graphical layout and participant visual capabilities. As BCIs are translated from research environments to clinical applications, the assessment of BCI-related skills will help facilitate proper device selection and provide individuals who use BCI the greatest likelihood of immediate and long term communicative success. Overall, our results indicate that adaptations can be an effective strategy to reduce barriers and increase access to BCI technology. These efforts should be directed by comprehensive assessments for matching individuals to the most appropriate device to support their complex communication needs. Implications for Rehabilitation: Brain computer interfaces using the steady state visually evoked potential can be integrated with an augmentative and alternative communication device to provide access to language and literacy for individuals with neuromotor impairment. Comprehensive assessments are needed to fully understand the sensory, motor, and cognitive abilities of individuals who may use brain-computer interfaces for proper feature matching and selection of the most appropriate device, including optimization of device layouts and control paradigms. Oculomotor impairments negatively impact brain-computer interfaces that use the steady state visually evoked potential, but modifications to place interface stimuli and communication items in the intact visual field can improve successful outcomes.
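Steady state visually evoked potential control of the sort described typically reduces to detecting which flicker frequency dominates the occipital EEG spectrum. A minimal frequency-domain sketch follows; the flicker frequencies, sampling rate, and simple peak-power rule are illustrative (practical systems often add harmonics or canonical correlation analysis).

```python
import numpy as np

fs = 256.0                          # sampling rate (Hz), illustrative
flicker = [6.0, 7.5, 10.0, 12.0]    # one flicker frequency per on-screen choice

def ssvep_choice(eeg, fs, freqs):
    """Pick the choice whose flicker frequency carries the most EEG power.

    `eeg` is a 1-D occipital channel segment. This is a minimal frequency-domain
    detector, not the device's actual classifier.
    """
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(eeg.size))) ** 2
    f = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    power = [spectrum[np.argmin(np.abs(f - target))] for target in freqs]
    return int(np.argmax(power))

# Simulated 3-second trial in which the user attends the 10 Hz stimulus.
rng = np.random.default_rng(3)
t = np.arange(0, 3.0, 1.0 / fs)
eeg = 2.0 * np.sin(2 * np.pi * 10.0 * t) + rng.normal(0, 1.5, t.size)
print("decoded choice index:", ssvep_choice(eeg, fs, flicker))   # expect 2
```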
Arshad, Q; Siddiqui, S; Ramachandran, S; Goga, U; Bonsu, A; Patel, M; Roberts, R E; Nigmatullina, Y; Malhotra, P; Bronstein, A M
2015-12-17
Right hemisphere dominance for visuo-spatial attention is characteristically observed in most right-handed individuals. This dominance has been attributed to both an anatomically larger right fronto-parietal network and the existence of asymmetric parietal interhemispheric connections. Previously it has been demonstrated that interhemispheric conflict, which induces left hemisphere inhibition, results in the modulation of both (i) the excitability of the early visual cortex (V1) and (ii) the brainstem-mediated vestibular-ocular reflex (VOR) via top-down control mechanisms. However to date, it remains unknown whether the degree of an individual's right hemisphere dominance for visuospatial function can influence, (i) the baseline excitability of the visual cortex and (ii) the extent to which the right hemisphere can exert top-down modulation. We directly tested this by correlating line bisection error (or pseudoneglect), taken as a measure of right hemisphere dominance, with both (i) visual cortical excitability measured using phosphene perception elicited via single-pulse occipital trans-cranial magnetic stimulation (TMS) and (ii) the degree of trans-cranial direct current stimulation (tDCS)-mediated VOR suppression, following left hemisphere inhibition. We found that those individuals with greater right hemisphere dominance had a less excitable early visual cortex at baseline and demonstrated a greater degree of vestibular nystagmus suppression following left hemisphere cathodal tDCS. To conclude, our results provide the first demonstration that individual differences in right hemisphere dominance can directly predict both the baseline excitability of low-level brain structures and the degree of top-down modulation exerted over them. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
Plow, Ela B; Obretenova, Souzana N; Halko, Mark A; Kenkel, Sigrid; Jackson, Mary Lou; Pascual-Leone, Alvaro; Merabet, Lotfi B
2011-09-01
To standardize a protocol for promoting visual rehabilitative outcomes in post-stroke hemianopia by combining occipital cortical transcranial direct current stimulation (tDCS) with Vision Restoration Therapy (VRT). A comparative case study assessing feasibility and safety. A controlled laboratory setting. Two patients, both with right hemianopia after occipital stroke damage. METHODS AND OUTCOME MEASUREMENTS: Both patients underwent an identical VRT protocol that lasted 3 months (30 minutes, twice a day, 3 days per week). In patient 1, anodal tDCS was delivered to the occipital cortex during VRT training, whereas in patient 2 sham tDCS with VRT was performed. The primary outcome, visual field border, was defined objectively by using high-resolution perimetry. Secondary outcomes included subjective characterization of visual deficit and functional surveys that assessed performance on activities of daily living. For patient 1, the neural correlates of visual recovery were also investigated, by using functional magnetic resonance imaging. Delivery of combined tDCS with VRT was feasible and safe. High-resolution perimetry revealed a greater shift in visual field border for patient 1 versus patient 2. Patient 1 also showed greater recovery of function in activities of daily living. Contrary to the expectation, patient 2 perceived greater subjective improvement in visual field despite objective high-resolution perimetry results that indicated otherwise. In patient 1, visual function recovery was associated with functional magnetic resonance imaging activity in surviving peri-lesional and bilateral higher-order visual areas. Results of preliminary case comparisons suggest that occipital cortical tDCS may enhance recovery of visual function associated with concurrent VRT through visual cortical reorganization. Future studies may benefit from incorporating protocol refinements such as those described here, which include global capture of function, control for potential confounds, and investigation of underlying neural substrates of recovery. Copyright © 2011 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex
Vaden, Ryan J.; Visscher, Kristina M.
2015-01-01
Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas. PMID:26163806
Parafoveal magnification: visual acuity does not modulate the perceptual span in reading.
Miellet, Sébastien; O'Donnell, Patrick J; Sereno, Sara C
2009-06-01
Models of eye guidance in reading rely on the concept of the perceptual span-the amount of information perceived during a single eye fixation, which is considered to be a consequence of visual and attentional constraints. To directly investigate attentional mechanisms underlying the perceptual span, we implemented a new reading paradigm-parafoveal magnification (PM)-that compensates for how visual acuity drops off as a function of retinal eccentricity. On each fixation and in real time, parafoveal text is magnified to equalize its perceptual impact with that of concurrent foveal text. Experiment 1 demonstrated that PM does not increase the amount of text that is processed, supporting an attentional-based account of eye movements in reading. Experiment 2 explored a contentious issue that differentiates competing models of eye movement control and showed that, even when parafoveal information is enlarged, visual attention in reading is allocated in a serial fashion from word to word.
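Parafoveal magnification compensates for the decline of acuity with retinal eccentricity, which is commonly approximated by an inverse cortical-magnification rule of the form M(E) proportional to 1/(1 + E/E2). The sketch below scales character size with eccentricity under that rule; the constant E2 and the mapping to point size are illustrative assumptions rather than the paradigm's exact implementation.

```python
def magnified_size(base_size, eccentricity_deg, e2=2.0):
    """Scale character size so perceptual impact is roughly equalized across
    eccentricity, following an inverse cortical-magnification rule.

    size(E) = base_size * (1 + E / E2); E2 (deg) is an illustrative constant,
    not necessarily the value used in the parafoveal magnification paradigm.
    """
    return base_size * (1.0 + eccentricity_deg / e2)

# Example: 12-pt text at fixation grows as it sits further into the parafovea.
for ecc in (0.0, 2.0, 4.0, 6.0):
    print(f"{ecc:>4.1f} deg -> {magnified_size(12.0, ecc):.1f} pt")
```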
Visual display aid for orbital maneuvering - Design considerations
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Ellis, Stephen R.
1993-01-01
This paper describes the development of an interactive proximity operations planning system that allows on-site planning of fuel-efficient multiburn maneuvers in a potential multispacecraft environment. Although this display system most directly assists planning by providing visual feedback to aid visualization of the trajectories and constraints, its most significant features include: (1) the use of an 'inverse dynamics' algorithm that removes control nonlinearities facing the operator, and (2) a trajectory planning technique that separates, through a 'geometric spreadsheet', the normally coupled complex problems of planning orbital maneuvers and allows solution by an iterative sequence of simple independent actions. The visual feedback of trajectory shapes and operational constraints, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool provides an example of operator-assisted optimization of nonlinear cost functions.
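Proximity-operations trajectories of the kind visualized by such a planning tool are commonly modeled with the linearized Clohessy-Wiltshire (Hill) equations for motion relative to a target in circular orbit. The sketch below simply propagates those equations for an illustrative initial offset; it does not reproduce the display's inverse-dynamics algorithm or geometric spreadsheet.

```python
import numpy as np
from scipy.integrate import solve_ivp

def cw_rhs(t, s, n):
    """Clohessy-Wiltshire (Hill) equations for a chaser relative to a target in
    circular orbit. State s = [x, y, z, vx, vy, vz]; x radial, y along-track."""
    x, y, z, vx, vy, vz = s
    return [vx, vy, vz,
            3 * n**2 * x + 2 * n * vy,
            -2 * n * vx,
            -(n**2) * z]

n = 2 * np.pi / 5400.0                    # mean motion for a ~90-minute orbit (rad/s)
s0 = [100.0, 0.0, 0.0, 0.0, 0.0, 0.0]     # 100 m radially above the target, initially at rest
sol = solve_ivp(cw_rhs, (0.0, 5400.0), s0, args=(n,), max_step=10.0)
print("relative position after one orbit (m):", np.round(sol.y[:3, -1], 1))
```

Even this linear model shows the counterintuitive coupling (a purely radial offset drifts along-track over an orbit) that makes visual feedback of trajectory shape so valuable to the operator.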
Formation of visual memories controlled by gamma power phase-locked to alpha oscillations.
Park, Hyojin; Lee, Dong Soo; Kang, Eunjoo; Kang, Hyejin; Hahm, Jarang; Kim, June Sic; Chung, Chun Kee; Jiang, Haiteng; Gross, Joachim; Jensen, Ole
2016-06-16
Neuronal oscillations provide a window for understanding the brain dynamics that organize the flow of information from sensory to memory areas. While it has been suggested that gamma power reflects feedforward processing and alpha oscillations feedback control, it remains unknown how these oscillations dynamically interact. Magnetoencephalography (MEG) data was acquired from healthy subjects who were cued to either remember or not remember presented pictures. Our analysis revealed that in anticipation of a picture to be remembered, alpha power decreased while the cross-frequency coupling between gamma power and alpha phase increased. A measure of directionality between alpha phase and gamma power predicted individual ability to encode memory: stronger control of alpha phase over gamma power was associated with better memory. These findings demonstrate that encoding of visual information is reflected by a state determined by the interaction between alpha and gamma activity.
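The coupling between alpha phase and gamma power reported here is a form of phase-amplitude coupling, which can be quantified with a modulation index computed from the distribution of gamma power across alpha phase bins. The sketch below applies a Tort-style index to a simulated coupled signal; filter bands and parameters are illustrative, not the study's MEG pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 20.0, 1.0 / fs)
rng = np.random.default_rng(4)

# Simulated signal: gamma (60 Hz) amplitude waxes and wanes with alpha (10 Hz) phase.
alpha = np.sin(2 * np.pi * 10 * t)
gamma = (1.0 + 0.8 * alpha) * np.sin(2 * np.pi * 60 * t)
sig = alpha + 0.3 * gamma + 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

alpha_phase = np.angle(hilbert(bandpass(sig, 8, 12, fs)))
gamma_power = np.abs(hilbert(bandpass(sig, 50, 70, fs))) ** 2

# Tort-style modulation index: deviation of the phase-binned gamma power
# distribution from uniformity (KL divergence normalized by log of bin count).
n_bins = 18
bins = np.digitize(alpha_phase, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
mean_pow = np.array([gamma_power[bins == k].mean() for k in range(n_bins)])
p = mean_pow / mean_pow.sum()
mi = (np.log(n_bins) + np.sum(p * np.log(p))) / np.log(n_bins)
print(f"modulation index: {mi:.3f}")
```

The directionality measure used in the study (whether alpha phase drives gamma power or vice versa) requires additional machinery such as phase-slope or transfer-entropy estimates, which this sketch does not attempt.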
A Robotics-Based Approach to Modeling of Choice Reaching Experiments on Visual Attention
Strauss, Soeren; Heinke, Dietmar
2012-01-01
The paper presents a robotics-based model for choice reaching experiments on visual attention. In these experiments participants were asked to make rapid reach movements toward a target in an odd-color search task, i.e., reaching for a green square among red squares and vice versa (e.g., Song and Nakayama, 2008). Interestingly these studies found that in a high number of trials movements were initially directed toward a distractor and only later were adjusted toward the target. These “curved” trajectories occurred particularly frequently when the target in the directly preceding trial had a different color (priming effect). Our model is embedded in a closed-loop control of a LEGO robot arm aiming to mimic these reach movements. The model is based on our earlier work which suggests that target selection in visual search is implemented through parallel interactions between competitive and cooperative processes in the brain (Heinke and Humphreys, 2003; Heinke and Backhaus, 2011). To link this model with the control of the robot arm we implemented a topological representation of movement parameters following the dynamic field theory (Erlhagen and Schoener, 2002). The robot arm is able to mimic the results of the odd-color search task including the priming effect and also generates human-like trajectories with a bell-shaped velocity profile. Theoretical implications and predictions are discussed in the paper. PMID:22529827
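The dynamic field theory representation of movement parameters mentioned above can be illustrated with a one-dimensional Amari-type neural field over reach direction, in which local excitation and broader inhibition let a target input win over a weaker distractor. All parameter values in this sketch are illustrative and are not taken from the robot model.

```python
import numpy as np

# One-dimensional Amari-type dynamic neural field over reach direction.
# All parameters (kernel widths/strengths, resting level, time constant) are
# illustrative; this is not the robot model's parameterization.
n = 181
x_deg = np.linspace(-90, 90, n)              # field sites (deg)
dx = x_deg[1] - x_deg[0]
u = np.full(n, -5.0)                         # field activation, resting level h = -5
tau, dt = 20.0, 1.0                          # time constant and step (ms)

def gauss(center, width):
    return np.exp(-0.5 * ((x_deg - center) / width) ** 2)

# Interaction kernel: local excitation plus broader inhibition.
lags = np.arange(-n + 1, n) * dx
kernel = (10.0 * np.exp(-0.5 * (lags / 8.0) ** 2)
          - 4.0 * np.exp(-0.5 * (lags / 30.0) ** 2))

target_input = 8.0 * gauss(+30.0, 6.0)       # stronger input at the target direction
distractor_input = 6.0 * gauss(-30.0, 6.0)   # weaker input at a distractor direction

for _ in range(300):
    f = 1.0 / (1.0 + np.exp(-u))                              # sigmoidal output
    interaction = np.convolve(f, kernel, mode="valid") * dx / 100.0
    u += dt * (-u - 5.0 + target_input + distractor_input + interaction) / tau

print(f"field peak (selected movement direction): {x_deg[np.argmax(u)]:.0f} deg")
```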
77 FR 47563 - Airworthiness Directives; The Boeing Company Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-09
... inspections for dirt, loose particles, or blockage of the flanged tube and drain hole for the pressure seals... for the E1A and E1B elevator control cable aft pressure seals; doing repetitive inspections for dirt..., depending on airplane configuration, repetitive general visual inspections for dirt, loose particles, and...
NASA Astrophysics Data System (ADS)
Vitali, Ettore; Shi, Hao; Qin, Mingpu; Zhang, Shiwei
2017-12-01
Experiments with ultracold atoms provide a highly controllable laboratory setting with many unique opportunities for precision exploration of quantum many-body phenomena. The nature of such systems, with strong interaction and quantum entanglement, makes reliable theoretical calculations challenging. Especially difficult are excitation and dynamical properties, which are often the most directly relevant to experiment. We carry out exact numerical calculations, by Monte Carlo sampling of imaginary-time propagation of Slater determinants, to compute the pairing gap in the two-dimensional Fermi gas from first principles. Applying state-of-the-art analytic continuation techniques, we obtain the spectral function and the density and spin structure factors providing unique tools to visualize the BEC-BCS crossover. These quantities will allow for a direct comparison with experiments.
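The analytic continuation step referred to above inverts the standard relation between the imaginary-time correlation functions measured in the Monte Carlo propagation and the real-frequency spectral function. A generic statement of that relation is given below; conventions and normalizations vary between works, and in the zero-temperature limit the kernel reduces to e^{-τω} on ω ≥ 0.

```latex
% Imaginary-time data G(k,tau) from the Monte Carlo propagation are related to the
% real-frequency spectral function A(k,omega) by a Laplace-type kernel
% (fermionic, finite-temperature form shown):
\begin{equation}
  G(\mathbf{k},\tau) \;=\; \int_{-\infty}^{\infty} \mathrm{d}\omega\,
    \frac{e^{-\tau\omega}}{1 + e^{-\beta\omega}}\, A(\mathbf{k},\omega),
  \qquad 0 \le \tau < \beta .
\end{equation}
```

Maximum-entropy and related continuation methods estimate A(k, ω) from noisy G(k, τ); the pairing gap then appears as the lowest quasiparticle excitation energy in the resulting spectral function.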
Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.
Durand, Karine; Baudouin, Jean-Yves; Lewkowicz, David J; Goubet, Nathalie; Schaal, Benoist
2013-01-01
This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars but this spontaneous preference for faces was significantly enhanced in presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial regions, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.
Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon
2014-11-01
Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing "target feature" but not in "distracter feature"-processing regions. TMS-induced BOLD signals increase in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in face-responsive fusiform area (FFA) when faces were attended to. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.
Conson, Massimiliano; Mazzarella, Elisabetta; Esposito, Dalila; Grossi, Dario; Marino, Nicoletta; Massagli, Angelo; Frolli, Alessandro
2015-08-01
Embodied cognition theories hold that cognitive processes are grounded in bodily states. Embodied processes in autism spectrum disorders (ASD) have classically been investigated in studies on imitation. Several observations suggested that unlike typical individuals who are able of copying the model's actions from the model's position, individuals with ASD tend to reenact the model's actions from their own egocentric perspective. Here, we performed two behavioral experiments to directly test the ability of ASD individuals to adopt another person's point of view. In Experiment 1, participants had to explicitly judge the left/right location of a target object in a scene from their own or the actor's point of view (visual perspective taking task). In Experiment 2, participants had to perform left/right judgments on front-facing or back-facing human body images (own body transformation task). Both tasks can be solved by mentally simulating one's own body motion to imagine oneself transforming into the position of another person (embodied simulation strategy), or by resorting to visual/spatial processes, such as mental object rotation (nonembodied strategy). Results of both experiments showed that individual with ASD solved the tasks mainly relying on a nonembodied strategy, whereas typical controls adopted an embodied strategy. Moreover, in the visual perspective taking task ASD participants had more difficulties than controls in inhibiting other-perspective when directed to keep one's own point of view. These findings suggested that, in social cognitive tasks, individuals with ASD do not resort to embodied simulation and have difficulties in cognitive control over self- and other-perspective. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Numerical simulation of human orientation perception during lunar landing
NASA Astrophysics Data System (ADS)
Clark, Torin K.; Young, Laurence R.; Stimpson, Alexander J.; Duda, Kevin R.; Oman, Charles M.
2011-09-01
In lunar landing it is necessary to select a suitable landing point and then control a stable descent to the surface. In manned landings, astronauts will play a critical role in monitoring systems and adjusting the descent trajectory through either supervisory control and landing point designations, or by direct manual control. For the astronauts to ensure vehicle performance and safety, they will have to accurately perceive vehicle orientation. A numerical model for human spatial orientation perception was simulated using input motions from lunar landing trajectories to predict the potential for misperceptions. Three representative trajectories were studied: an automated trajectory, a landing point designation trajectory, and a challenging manual control trajectory. These trajectories were studied under three cases with different cues activated in the model to study the importance of vestibular cues, visual cues, and the effect of the descent engine thruster creating dust blowback. The model predicts that spatial misperceptions are likely to occur as a result of the lunar landing motions, particularly with limited or incomplete visual cues. The powered descent acceleration profile creates a somatogravic illusion causing the astronauts to falsely perceive themselves and the vehicle as upright, even when the vehicle has a large pitch or roll angle. When visual pathways were activated within the model these illusions were mostly suppressed. Dust blowback, obscuring the visual scene out the window, was also found to create disorientation. These orientation illusions are likely to interfere with the astronauts' ability to effectively control the vehicle, potentially degrading performance and safety. Therefore suitable countermeasures, including disorientation training and advanced displays, are recommended.
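The somatogravic illusion described above arises because, without visual cues, the otolith organs cannot distinguish a sustained linear acceleration from a tilt relative to gravity, so perceived "down" follows the gravito-inertial vector. The sketch below gives the simplified static geometry of that misperception under lunar gravity; it is not the dynamic observer-style model used for the simulations.

```python
import numpy as np

def somatogravic_tilt_deg(forward_accel, gravity=1.62):
    """Steady-state tilt angle implied by the gravito-inertial force vector when a
    sustained forward acceleration is misattributed to pitch. Lunar gravity by
    default (m/s^2); a simplified static account, not the full orientation model.
    """
    return np.degrees(np.arctan2(forward_accel, gravity))

# Example: sustained horizontal accelerations during powered descent.
for a in (0.5, 1.0, 2.0):
    print(f"{a:.1f} m/s^2 -> perceived tilt ≈ {somatogravic_tilt_deg(a):.1f} deg")
```

Because lunar gravity is weak, even modest sustained accelerations swing the gravito-inertial vector through large angles, which is consistent with the strong misperceptions the model predicts when visual cues are absent.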
O'Shea, Jacinta; Jensen, Ole; Bergmann, Til O.
2015-01-01
Covertly directing visuospatial attention produces a frequency-specific modulation of neuronal oscillations in occipital and parietal cortices: anticipatory alpha (8–12 Hz) power decreases contralateral and increases ipsilateral to attention, whereas stimulus-induced gamma (>40 Hz) power is boosted contralaterally and attenuated ipsilaterally. These modulations must be under top-down control; however, the control mechanisms are not yet fully understood. Here we investigated the causal contribution of the human frontal eye field (FEF) by combining repetitive transcranial magnetic stimulation (TMS) with subsequent magnetoencephalography. Following inhibitory theta burst stimulation to the left FEF, right FEF, or vertex, participants performed a visual discrimination task requiring covert attention to either visual hemifield. Both left and right FEF TMS caused marked attenuation of alpha modulation in the occipitoparietal cortex. Notably, alpha modulation was consistently reduced in the hemisphere contralateral to stimulation, leaving the ipsilateral hemisphere relatively unaffected. Additionally, right FEF TMS enhanced gamma modulation in left visual cortex. Behaviorally, TMS caused a relative slowing of response times to targets contralateral to stimulation during the early task period. Our results suggest that left and right FEF are causally involved in the attentional top-down control of anticipatory alpha power in the contralateral visual system, whereas a right-hemispheric dominance seems to exist for control of stimulus-induced gamma power. These findings contrast the assumption of primarily intrahemispheric connectivity between FEF and parietal cortex, emphasizing the relevance of interhemispheric interactions. The contralaterality of effects may result from a transient functional reorganization of the dorsal attention network after inhibition of either FEF. PMID:25632139
Visual feedback system to reduce errors while operating roof bolting machines
Steiner, Lisa J.; Burgess-Limerick, Robin; Eiter, Brianna; Porter, William; Matty, Tim
2015-01-01
Problem: Operators of roof bolting machines in underground coal mines do so in confined spaces and in very close proximity to the moving equipment. Errors in the operation of these machines can have serious consequences, and the design of the equipment interface has a critical role in reducing the probability of such errors. Methods: An experiment was conducted to explore coding and directional compatibility on actual roof bolting equipment and to determine the feasibility of a visual feedback system to alert operators of critical movements and to also alert other workers in close proximity to the equipment to the pending movement of the machine. The quantitative results of the study confirmed the potential for both selection errors and direction errors to be made, particularly during training. Results: Subjective data confirmed a potential benefit of providing visual feedback of the intended operations and movements of the equipment. Impact: This research may influence the design of these and other similar control systems to provide evidence for the use of warning systems to improve operator situational awareness. PMID:23398703
Aberrant patterns of visual facial information usage in schizophrenia.
Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M
2013-05-01
Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association
Visual and motion cueing in helicopter simulation
NASA Technical Reports Server (NTRS)
Bray, R. S.
1985-01-01
Early experience in fixed-cockpit simulators, with limited field of view, demonstrated the basic difficulties of simulating helicopter flight at the level of subjective fidelity required for confident evaluation of vehicle characteristics. More recent programs, utilizing large-amplitude cockpit motion and a multiwindow visual-simulation system, have received a much higher degree of pilot acceptance. However, none of these simulations has presented critical visual-flight tasks that have been accepted by the pilots as the full equivalent of flight. In this paper, the visual cues presented in the simulator are compared with those of flight in an attempt to identify deficiencies that contribute significantly to these assessments. For the low-amplitude maneuvering tasks normally associated with the hover mode, the unique motion capabilities of the Vertical Motion Simulator (VMS) at Ames Research Center permit nearly a full representation of vehicle motion. Especially appreciated in these tasks are the vertical-acceleration responses to collective control. For larger-amplitude maneuvering, motion fidelity must suffer diminution through direct attenuation, through high-pass filtering (washout) of the computed cockpit accelerations, or both. Experiments were conducted in an attempt to determine the effects of these distortions on pilot performance of height-control tasks.
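The attenuation and high-pass "washout" filtering mentioned above can be sketched as a first-order discrete filter. This is an illustrative simplification, not the VMS washout algorithm; the gain and time-constant values are assumptions:

    def washout(accels, dt, gain=0.5, tau=2.0):
        # First-order high-pass filter: transient accelerations pass (scaled by
        # 'gain'), while sustained accelerations decay with time constant 'tau'
        # so the motion platform drifts back toward its neutral position.
        alpha = tau / (tau + dt)
        out = [0.0] * len(accels)
        for i in range(1, len(accels)):
            out[i] = alpha * (out[i - 1] + gain * (accels[i] - accels[i - 1]))
        return out

    # a sustained 1 m/s^2 step command is attenuated and then washed out
    print(washout([0.0] + [1.0] * 8, dt=0.5))

The distortion discussed in the abstract is visible in the example output: the onset of acceleration is reproduced (at reduced amplitude), but the sustained portion decays toward zero.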
Wuehr, M; Schniepp, R; Pradhan, C; Ilmberger, J; Strupp, M; Brandt, T; Jahn, K
2013-01-01
Healthy persons exhibit relatively small temporal and spatial gait variability when walking unimpeded. In contrast, patients with a sensory deficit (e.g., polyneuropathy) show an increased gait variability that depends on speed and is associated with an increased fall risk. The purpose of this study was to investigate the role of vision in gait stabilization by determining the effects of withdrawing visual information (eyes closed) on gait variability at different locomotion speeds. Ten healthy subjects (32.2 ± 7.9 years, 5 women) walked on a treadmill for 5-min periods at their preferred walking speed and at 20%, 40%, 70%, and 80% of maximal walking speed during the conditions of walking with eyes open (EO) and with eyes closed (EC). The coefficient of variation (CV) and fractal dimension (α) of the fluctuations in stride time, stride length, and base width were computed and analyzed. Withdrawing visual information increased the base width CV for all walking velocities (p < 0.001). The effects of absent visual information on CV and α of stride time and stride length were most pronounced during slow locomotion (p < 0.001) and declined during fast walking speeds. The results indicate that visual feedback control is used to stabilize the medio-lateral (i.e., base width) gait parameters at all walking speeds. In contrast, sensory feedback control in the fore-aft direction (i.e., stride time and stride length) depends on speed. Sensory feedback contributes most to fore-aft gait stabilization during slow locomotion, whereas passive biomechanical mechanisms and an automated central pattern generation appear to control fast locomotion.
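As a concrete illustration of the variability measure used here, the coefficient of variation of a stride-time series can be computed as below. This is a minimal sketch with invented example values; the fractal measure α requires much longer series and methods not reproduced here.

    import numpy as np

    def coefficient_of_variation(stride_times):
        # CV (%) = sample standard deviation / mean, for stride times in seconds.
        x = np.asarray(stride_times, dtype=float)
        return 100.0 * x.std(ddof=1) / x.mean()

    print(coefficient_of_variation([1.02, 0.98, 1.01, 0.99, 1.03]))  # ~2% stride-time CV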
Experience Report: Visual Programming in the Real World
NASA Technical Reports Server (NTRS)
Baroth, E.; Hartsough, C.
1994-01-01
This paper reports direct experience with two commercial, widely used visual programming environments. While neither of these systems is object-oriented, the tools have transformed the development process and indicate a direction for visual object-oriented tools to proceed.
S-shaped flow curves of shear thickening suspensions: direct observation of frictional rheology.
Pan, Zhongcheng; de Cagny, Henri; Weber, Bart; Bonn, Daniel
2015-09-01
We study the rheological behavior of concentrated granular suspensions of simple spherical particles. Under controlled stress, the system exhibits an S-shaped flow curve (stress vs shear rate) with a negative slope in between the low-viscosity Newtonian regime and the shear thickened regime. Under controlled shear rate, a discontinuous transition between the two states is observed. Stress visualization experiments with a fluorescent probe suggest that friction is at the origin of shear thickening. Stress visualization shows that the stress in the system remains homogeneous (no shear banding) if a stress is imposed that is intermediate between the high- and low-stress branches. The S-shaped shear thickening is then due to the discontinuous formation of a frictional force network between particles upon increasing the stress.
Ravi, Sridhar; Garcia, Jair E; Wang, Chun; Dyer, Adrian G
2016-11-01
Bees navigate in complex environments using visual, olfactory and mechano-sensorial cues. In the lowest region of the atmosphere, the wind environment can be highly unsteady and bees employ fine motor-skills to enhance flight control. Recent work reveals sophisticated multi-modal processing of visual and olfactory channels by the bee brain to enhance foraging efficiency, but it currently remains unclear whether wind-induced mechano-sensory inputs are also integrated with visual information to facilitate decision making. Individual honeybees were trained in a linear flight arena with appetitive-aversive differential conditioning to use a context-setting cue of 3 m s⁻¹ cross-wind direction to enable decisions about either a 'blue' or 'yellow' star stimulus being the correct alternative. Colour stimuli properties were mapped in bee-specific opponent-colour spaces to validate saliency, and to thus enable rapid reverse learning. Bees were able to integrate mechano-sensory and visual information to facilitate decisions that were significantly different to chance expectation after 35 learning trials. An independent group of bees were trained to find a single rewarding colour that was unrelated to the wind direction. In these trials, wind was not used as a context-setting cue and served only as a potential distracter in identifying the relevant rewarding visual stimuli. Comparison between respective groups shows that bees can learn to integrate visual and mechano-sensory information in a non-elemental fashion, revealing an unsuspected level of sensory processing in honeybees, and adding to the growing body of knowledge on the capacity of insect brains to use multi-modal sensory inputs in mediating foraging behaviour. © 2016. Published by The Company of Biologists Ltd.
Sakata, H; Taira, M; Kusunoki, M; Murata, A; Tanaka, Y
1997-08-01
Recent neurophysiological studies in alert monkeys have revealed that the parietal association cortex plays a crucial role in depth perception and visually guided hand movement. The following five classes of parietal neurons covering various aspects of these functions have been identified: (1) depth-selective visual-fixation (VF) neurons of the inferior parietal lobule (IPL), representing egocentric distance; (2) depth-movement sensitive (DMS) neurons of V5A and the ventral intraparietal (VIP) area representing direction of linear movement in 3-D space; (3) depth-rotation-sensitive (RS) neurons of V5A and the posterior parietal (PP) area representing direction of rotary movement in space; (4) visually responsive manipulation-related neurons (visual-dominant or visual-and-motor type) of the anterior intraparietal (AIP) area, representing 3-D shape or orientation (or both) of objects for manipulation; and (5) axis-orientation-selective (AOS) and surface-orientation-selective (SOS) neurons in the caudal intraparietal sulcus (cIPS) sensitive to binocular disparity and representing the 3-D orientation of the longitudinal axes and flat surfaces, respectively. Some AOS and SOS neurons are selective in both orientation and shape. Thus the dorsal visual pathway is divided into at least two subsystems, V5A, PP and VIP areas for motion vision and V6, LIP and cIPS areas for coding position and 3-D features. The cIPS sends the signals of 3-D features of objects to the AIP area, which is reciprocally connected to the ventral premotor (F5) area and plays an essential role in matching hand orientation and shaping with 3-D objects for manipulation.
NASA Astrophysics Data System (ADS)
Wilson, John J.; Palaniappan, Ramaswamy
2011-04-01
The steady-state visual evoked potential (SSVEP) protocol has recently become a popular paradigm in brain-computer interface (BCI) applications. Typically (regardless of function) these applications offer the user a binary selection of targets that perform correspondingly discrete actions. Such discrete control systems are appropriate for applications that are inherently isolated in nature, such as selecting numbers from a keypad to be dialled or letters from an alphabet to be spelled. However, motivation exists for users to employ proportional control methods in intrinsically analogue tasks such as the movement of a mouse pointer. This paper introduces an online BCI in which control of a mouse pointer is directly proportional to a user's intent. Performance is measured over a series of pointer movement tasks and compared to the traditional discrete output approach. Analogue control allowed subjects to move the pointer faster to the cued target location compared to discrete output but suffers more undesired movements overall. Best performance is achieved when combining the movement threshold of traditional discrete techniques with the range of movement offered by proportional control.
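One minimal way to express the difference between discrete and proportional output is sketched below; the feature name, threshold, and scaling are assumptions for illustration and are not taken from the paper.

    def pointer_speed(ssvep_strength, threshold=0.2, max_speed=300.0):
        # Discrete designs move the pointer a fixed step once the SSVEP feature
        # crosses a threshold; a proportional design instead scales pointer
        # speed (px/s) continuously with the strength of the user's response.
        if ssvep_strength < threshold:
            return 0.0
        return max_speed * (ssvep_strength - threshold) / (1.0 - threshold)

Keeping the threshold while scaling above it mirrors the combination the abstract reports as performing best: no movement without a clear response, but graded speed once the response is present.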
Williams, Camille K; Grierson, Lawrence E M; Carnahan, Heather
2011-08-01
A link between affect and action has been supported by the discovery that threat information is prioritized through an action-centred pathway--the dorsal visual stream. Magnocellular afferents, which originate from the retina and project to dorsal stream structures, are suppressed by exposure to diffuse red light, which diminishes humans' perception of threat-based images. In order to explore the role of colour in the relationship between affect and action, participants donned different pairs of coloured glasses (red, yellow, green, blue and clear) and completed Positive and Negative Affect Scale questionnaires as well as a series of target-directed aiming movements. Analyses of affect scores revealed a significant main effect for affect valence and a significant interaction between colour and valence: perceived positive affect was significantly smaller for the red condition. Kinematic analyses of variable error in the primary movement direction and Pearson correlation analyses between the displacements travelled prior to and following peak velocity indicated reduced accuracy and application of online control processes while wearing red glasses. Variable error of aiming was also positively and significantly correlated with negative affect scores under the red condition. These results suggest that only red light modulates the affect-action link by suppressing magnocellular activity, which disrupts visual processing for movement control. Furthermore, previous research examining the effect of the colour red on psychomotor tasks and perceptual acceleration of threat-based imagery suggest that stimulus-driven motor performance tasks requiring online control may be particularly susceptible to this effect.
Saunders, Jeffrey A.
2014-01-01
Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
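The "optimal use of visual information" claim follows the standard inverse-variance (maximum-reliability) cue-combination rule, sketched below. The 3.2° visual-cue standard deviation is an assumed value chosen so the predicted numbers land near the reported ranges; it is not a figure from the paper.

    def optimal_combination(sigma_visual, sigma_nonvisual):
        # Inverse-variance weighting: each cue is weighted by its reliability
        # (1/variance); the combined estimate has lower variance than either cue.
        w_visual = sigma_nonvisual**2 / (sigma_visual**2 + sigma_nonvisual**2)
        sigma_combined = (1.0 / (1.0 / sigma_visual**2 + 1.0 / sigma_nonvisual**2)) ** 0.5
        return w_visual, sigma_combined

    # nonvisual-only RMS error of 2.4 deg (from the abstract); an assumed visual
    # sigma of 3.2 deg predicts a visual weight of ~0.36 and ~1.9 deg combined,
    # in the same neighborhood as the reported weights and variabilities.
    print(optimal_combination(3.2, 2.4))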
Intercepting a moving target: On-line or model-based control?
Zhao, Huaiyong; Warren, William H
2017-05-01
When walking to intercept a moving target, people take an interception path that appears to anticipate the target's trajectory. According to the constant bearing strategy, the observer holds the bearing direction of the target constant based on current visual information, consistent with on-line control. Alternatively, the interception path might be based on an internal model of the target's motion, known as model-based control. To investigate these two accounts, participants walked to intercept a moving target in a virtual environment. We degraded the target's visibility by blurring the target to varying degrees in the midst of a trial, in order to influence its perceived speed and position. Reduced levels of visibility progressively impaired interception accuracy and precision; total occlusion impaired performance most and yielded nonadaptive heading adjustments. Thus, performance strongly depended on current visual information and deteriorated qualitatively when it was withdrawn. The results imply that locomotor interception is normally guided by current information rather than an internal model of target motion, consistent with on-line control.
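The constant bearing strategy can be written as a simple on-line control law that nulls change in the target's bearing direction. The proportional form, gain, and sign convention below are an illustrative simplification of models in the locomotor-interception literature, not the specific model tested here.

    def turn_rate_command(bearing_now, bearing_prev, dt, gain=2.0):
        # bearing = allocentric direction from walker to target (rad, CCW positive).
        # If the bearing is drifting, steer in the direction of the drift until it
        # is nulled; holding the bearing constant yields an interception course.
        bearing_rate = (bearing_now - bearing_prev) / dt
        return gain * bearing_rate

Because the command depends only on the currently measured bearing, degrading or occluding the target removes the very signal the law needs, which is the signature of on-line control the study exploits.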
Horki, Petar; Neuper, Christa; Pfurtscheller, Gert; Müller-Putz, Gernot
2010-12-01
A brain-computer interface (BCI) provides a direct connection between the human brain and a computer. One type of BCI can be realized using steady-state visual evoked potentials (SSVEPs), resulting from repetitive stimulation. The aim of this study was the realization of an asynchronous SSVEP-BCI, based on canonical correlation analysis, suitable for the control of a 2-degrees of freedom (DoF) hand and elbow neuroprosthesis. To determine whether this BCI is suitable for the control of 2-DoF neuroprosthetic devices, online experiments with a virtual and a robotic limb feedback were conducted with eight healthy subjects and one tetraplegic patient. All participants were able to control the artificial limbs with the BCI. In the online experiments, the positive predictive value (PPV) varied between 69% and 83% and the false negative rate (FNR) varied between 1% and 17%. The spinal cord injured patient achieved PPV and FNR values within one standard deviation of the mean for all healthy subjects.
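A canonical-correlation SSVEP detector of the general kind described can be sketched as follows. This is a minimal illustration using scikit-learn rather than the authors' implementation; the harmonic count and the 0.4 rejection threshold for the asynchronous "no control" state are assumed values.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    def detect_ssvep(eeg, fs, stim_freqs, n_harmonics=2, threshold=0.4):
        # eeg: array (n_samples, n_channels). For each candidate stimulation
        # frequency, build sine/cosine reference signals (fundamental plus
        # harmonics) and take the canonical correlation between the EEG segment
        # and the references; if no frequency exceeds the threshold, emit no command.
        t = np.arange(eeg.shape[0]) / fs
        best_freq, best_r = None, 0.0
        for f in stim_freqs:
            refs = []
            for h in range(1, n_harmonics + 1):
                refs.append(np.sin(2 * np.pi * h * f * t))
                refs.append(np.cos(2 * np.pi * h * f * t))
            Y = np.column_stack(refs)
            u, v = CCA(n_components=1).fit_transform(eeg, Y)
            r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
            if r > best_r:
                best_freq, best_r = f, r
        return best_freq if best_r >= threshold else None

The rejection threshold is what makes the interface asynchronous: PPV and FNR then follow from how often commands are emitted correctly, incorrectly, or withheld.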
NASA Technical Reports Server (NTRS)
Wehner, R.
1972-01-01
Experimental data are given on the visual orientation of desert ants toward astromenotactic courses and horizon landmarks, involving the cooperation of different direction-finding systems. Attempts were made to: (1) determine whether the ants choose a compromise direction between astromenotactic angles and the direction toward horizon landmarks when both angles compete with each other, or whether they choose one or the other; (2) analyze adaptations of the visual system to the special demands of direction finding by astromenotactic orientation or pattern recognition; and (3) determine parameters of visual learning behavior. Results show that separate orientation mechanisms are responsible for the orientation of the ant toward astromenotactic angles and horizon landmarks. If both systems compete with each other, the ants switch over from one system to the other and do not adopt a compromise direction.
Viewing the dynamics and control of visual attention through the lens of electrophysiology
Woodman, Geoffrey F.
2013-01-01
How we find what we are looking for in complex visual scenes is a seemingly simple ability that has taken half a century to unravel. The first study to use the term visual search showed that as the number of objects in a complex scene increases, observers’ reaction times increase proportionally (Green and Anderson, 1956). This observation suggests that our ability to process the objects in the scenes is limited in capacity. However, if it is known that the target will have a certain feature attribute, for example, that it will be red, then only an increase in the number of red items increases reaction time. This observation suggests that we can control which visual inputs receive the benefit of our limited capacity to recognize the objects, such as those defined by the color red, as the items we seek. The nature of the mechanisms that underlie these basic phenomena in the literature on visual search has been more difficult to determine definitively. In this paper, I discuss how electrophysiological methods have provided us with the necessary tools to understand the nature of the mechanisms that give rise to the effects observed in the first visual search paper. I begin by describing how recordings of event-related potentials from humans and nonhuman primates have shown us how attention is deployed to possible target items in complex visual scenes. Then, I will discuss how event-related potential experiments have allowed us to directly measure the memory representations that are used to guide these deployments of attention to items with target-defining features. PMID:23357579
Primary visual response (M100) delays in adolescents with FASD as measured with MEG.
Coffman, Brian A; Kodituwakku, Piyadasa; Kodituwakku, Elizabeth L; Romero, Lucinda; Sharadamma, Nirupama Muniswamy; Stone, David; Stephen, Julia M
2013-11-01
Fetal alcohol spectrum disorders (FASD) are debilitating, with effects of prenatal alcohol exposure persisting into adolescence and adulthood. Complete characterization of FASD is crucial for the development of diagnostic tools and intervention techniques to decrease the high cost to individual families and society of this disorder. In this experiment, we investigated visual system deficits in adolescents (12-21 years) diagnosed with an FASD by measuring the latency of patients' primary visual M100 responses using MEG. We hypothesized that patients with FASD would demonstrate delayed primary visual responses compared to controls. M100 latencies were assessed both for FASD patients and age-matched healthy controls for stimuli presented at the fovea (central stimulus) and at the periphery (peripheral stimuli; left or right of the central stimulus) in a saccade task requiring participants to direct their attention and gaze to these stimuli. Source modeling was performed on visual responses to the central and peripheral stimuli and the latency of the first prominent peak (M100) in the occipital source timecourse was identified. The peak latency of the M100 responses were delayed in FASD patients for both stimulus types (central and peripheral), but the difference in latency of primary visual responses to central vs. peripheral stimuli was significant only in FASD patients, indicating that, while FASD patients' visual systems are impaired in general, this impairment is more pronounced in the periphery. These results suggest that basic sensory deficits in this population may contribute to sensorimotor integration deficits described previously in this disorder. Copyright © 2012 Wiley Periodicals, Inc.
Musical Interfaces: Visualization and Reconstruction of Music with a Microfluidic Two-Phase Flow
Mak, Sze Yi; Li, Zida; Frere, Arnaud; Chan, Tat Chuen; Shum, Ho Cheung
2014-01-01
Detection of sound waves in fluids has scarcely been realized because of the lack of approaches to visualize the very minute sound-induced fluid motion. In this paper, we demonstrate the first direct visualization of music in the form of ripples at a microfluidic aqueous-aqueous interface with an ultra-low interfacial tension. The interfaces respond to sound of different frequency and amplitude robustly with sufficiently precise time resolution for the recording of musical notes and even subsequent reconstruction with high fidelity. Our work shows the possibility of sensing and transmitting vibrations as tiny as those induced by sound. This robust control of the interfacial dynamics enables a platform for investigating the mechanical properties of microstructures and for studying frequency-dependent phenomena, for example, in biological systems. PMID:25327509
Acute exercise and aerobic fitness influence selective attention during visual search
Bullock, Tom; Giesbrecht, Barry
2014-01-01
Successful goal directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention and whether these effects are mediated by aerobic capacity, remains unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals that were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention. PMID:25426094
Robotic Attention Processing And Its Application To Visual Guidance
NASA Astrophysics Data System (ADS)
Barth, Matthew; Inoue, Hirochika
1988-03-01
This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing for robotic attention processing.
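The local-window idea can be sketched as follows: only pixels inside an attention window are examined, the window is re-centred on whatever salient feature it finds, and the tracked position drives a simple response (here, a paddle command). The saliency threshold and function names are illustrative and do not describe the multi-window hardware's actual operations.

    import numpy as np

    def track_in_window(frame, window):
        # Process only the pixels inside the attention window; locate the
        # centroid of bright (salient) pixels and re-centre the window on it.
        x0, y0, w, h = window
        roi = frame[y0:y0 + h, x0:x0 + w]
        ys, xs = np.nonzero(roi > roi.mean() + 2 * roi.std())
        if xs.size == 0:
            return window, None                      # nothing salient: keep window
        cx, cy = x0 + xs.mean(), y0 + ys.mean()
        return (int(cx - w / 2), int(cy - h / 2), w, h), (cx, cy)

    def paddle_command(ball_x, paddle_x, gain=1.0):
        # Move the paddle toward the tracked ball position.
        return gain * (ball_x - paddle_x)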
O'Connor, David A; Rossiter, Sarah; Yücel, Murat; Lubman, Dan I; Hester, Robert
2012-09-01
We examined the neural basis of the capacity to resist an immediately rewarding stimulus in order to obtain a larger delayed reward. This was investigated with a Go/No-go task employing No-go targets that provided two types of reward outcomes. These were contingent on inhibitory control performance: failure to inhibit Reward No-go targets provided a small monetary reward with immediate feedback, while successful inhibitory control resulted in larger rewards with delayed feedback based on the highest number of consecutive inhibitions. We observed faster Go trial responses with maintained levels of inhibition accuracy during the Reward No-go condition compared to a neutral No-go condition. Comparisons between conditions of BOLD activity showed that successful inhibitory control over rewarding No-go targets was associated with hypoactivity in regions previously associated with regulating emotion and inhibitory control, including the insula and right inferior frontal gyrus. In addition, regions previously associated with visual processing that are modulated as a function of visual attention, namely the left fusiform and right superior temporal gyri, were also hypoactive. These findings suggest a role for attentional disengagement as an aid to withholding a response to a rewarding stimulus and are consistent with the notion that gratification can be delayed by directing attention away from immediate rewards. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
White, Brian J; Marino, Robert A; Boehnke, Susan E; Itti, Laurent; Theeuwes, Jan; Munoz, Douglas P
2013-10-01
The mechanisms that underlie the integration of visual and goal-related signals for the production of saccades remain poorly understood. Here, we examined how spatial proximity of competing stimuli shapes goal-directed responses in the superior colliculus (SC), a midbrain structure closely associated with the control of visual attention and eye movements. Monkeys were trained to perform an oculomotor-capture task [Theeuwes, J., Kramer, A. F., Hahn, S., Irwin, D. E., & Zelinsky, G. J. Influence of attentional capture on oculomotor control. Journal of Experimental Psychology. Human Perception and Performance, 25, 1595-1608, 1999], in which a target singleton was revealed via an isoluminant color change in all but one item. On a portion of the trials, an additional salient item abruptly appeared near or far from the target. We quantified how spatial proximity between the abrupt-onset and the target shaped the goal-directed response. We found that the appearance of an abrupt-onset near the target induced a transient decrease in goal-directed discharge of SC visuomotor neurons. Although this was indicative of spatial competition, it was immediately followed by a rebound in presaccadic activation, which facilitated the saccadic response (i.e., it induced shorter saccadic RT). A similar suppression also occurred at most nontarget locations even in the absence of the abrupt-onset. This is indicative of a mechanism that enabled monkeys to quickly discount stimuli that shared the common nontarget feature. These results reveal a pattern of excitation/inhibition across the SC visuomotor map that acted to facilitate optimal behavior-the short duration suppression minimized the probability of capture by salient distractors, whereas a subsequent boost in accumulation rate ensured a fast goal-directed response. Such nonlinear dynamics should be incorporated into future biologically plausible models of saccade behavior.
Spatial Visualization ability improves with and without studying Technical Drawing.
Contreras, María José; Escrig, Rebeca; Prieto, Gerardo; Elosúa, M Rosa
2018-03-27
The results of several studies suggest that spatial ability can be improved through direct training with tasks similar to those integrated in the tests used to measure the ability. However, there is a greater interest in analyzing the effectiveness of indirect training such as games or of learning subjects that involve spatial processes to a certain extent. Thus, the objective of the present study was to analyze whether the indirect training in Technical Drawing improved the Spatial Visualization ability of Architecture students. For this purpose, a group of students enrolled in Fundamentals of Architecture were administered two tests, a Spatial Visualization task and an Abstract Reasoning task, at the beginning and the end of a semester, after having received training through the subjects "Technical Drawing I: Geometry and Perception" and "Projects I." The results of this group were compared with those of a control group of students enrolled in a Mathematics degree, who were also pre-post evaluated but had not received the training in Technical Drawing. The study showed a significant pre-post improvement in both Visualization and reasoning. However, this improvement occurred in both groups, leading to the conclusion that the improvement was not due to the indirect training. Furthermore, no significant differences were found between men and women in any of the groups or conditions. These results clarify those of an earlier study in which improvement in Visualization after training in Technical Drawing was found but no comparison with a control condition was included. The control condition has proved to be important for assessing the limits of the effect of Technical Drawing on this improvement.
Unsteady steady-states: Central causes of unintentional force drift
Ambike, Satyajit; Mattos, Daniela; Zatsiorsky, Vladimir M.; Latash, Mark L.
2016-01-01
We applied the theory of synergies to analyze the processes that lead to unintentional decline in isometric fingertip force when visual feedback of the produced force is removed. We tracked the changes in hypothetical control variables involved in single fingertip force production based on the equilibrium-point hypothesis, namely, the fingertip referent coordinate (RFT) and its apparent stiffness (CFT). The system's state is defined by a point in the {RFT; CFT} space. We tested the hypothesis that, after visual feedback removal, this point (1) moves along directions leading to drop in the output fingertip force, and (2) has even greater motion along directions that leaves the force unchanged. Subjects produced a prescribed fingertip force using visual feedback, and attempted to maintain this force for 15 s after the feedback was removed. We used the “inverse piano” apparatus to apply small and smooth positional perturbations to fingers at various times after visual feedback removal. The time courses of RFT and CFT showed that force drop was mostly due to a drift in RFT towards the actual fingertip position. Three analysis techniques, namely, hyperbolic regression, surrogate data analysis, and computation of motor-equivalent and non-motor-equivalent motions, suggested strong co-variation in RFT and CFT stabilizing the force magnitude. Finally, the changes in the two hypothetical control variables {RFT; CFT} relative to their average trends also displayed covariation. On the whole the findings suggest that unintentional force drop is associated with (a) a slow drift of the referent coordinate that pulls the system towards a low-energy state, and (b) a faster synergic motion of RFT and CFT that tends to stabilize the output fingertip force about the slowly-drifting equilibrium point. PMID:27540726
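In the equilibrium-point framing used here, the fingertip force depends only on the gap between the referent coordinate and the actual fingertip coordinate, scaled by the apparent stiffness. The one-line sketch below (sign convention and variable names are illustrative, not the authors' notation) makes explicit why a drift of RFT toward the actual position lowers the force, and why many {RFT; CFT} pairs yield the same force.

    def fingertip_force(x_actual, r_referent, c_apparent):
        # Force is proportional to how far the actual fingertip coordinate sits
        # from its referent coordinate; as r_referent drifts toward x_actual,
        # force declines even though the finger does not move. Many different
        # (r_referent, c_apparent) pairs produce the identical force, which is
        # the redundancy the synergy (co-variation) analysis exploits.
        return c_apparent * (x_actual - r_referent)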
Whitwell, Robert L; Goodale, Melvyn A; Merritt, Kate E; Enns, James T
2018-01-01
The two visual systems hypothesis proposes that human vision is supported by an occipito-temporal network for the conscious visual perception of the world and a fronto-parietal network for visually-guided, object-directed actions. Two specific claims about the fronto-parietal network's role in sensorimotor control have generated much data and controversy: (1) the network relies primarily on the absolute metrics of target objects, which it rapidly transforms into effector-specific frames of reference to guide the fingers, hands, and limbs, and (2) the network is largely unaffected by scene-based information extracted by the occipito-temporal network for those same targets. These two claims lead to the counter-intuitive prediction that in-flight anticipatory configuration of the fingers during object-directed grasping will resist the influence of pictorial illusions. The research confirming this prediction has been criticized for confounding the difference between grasping and explicit estimates of object size with differences in attention, sensory feedback, obstacle avoidance, metric sensitivity, and priming. Here, we address and eliminate each of these confounds. We asked participants to reach out and pick up 3D target bars resting on a picture of the Sander Parallelogram illusion and to make explicit estimates of the length of those bars. Participants performed their grasps without visual feedback, and were permitted to grasp the targets after making their size-estimates to afford them an opportunity to reduce illusory error with haptic feedback. The results show unequivocally that the effect of the illusion is stronger on perceptual judgments than on grasping. Our findings from the normally-sighted population provide strong support for the proposal that human vision is comprised of functionally and anatomically dissociable systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
Takiyama, Tomo; Hamasaki, Sawako; Yoshida, Masayuki
2016-01-01
The mudskipper Periophthalmus modestus and the yellowfin goby Acanthogobius flavimanus are gobiid teleosts that both inhabit the intertidal mudflats in estuaries. While P. modestus has an amphibious lifestyle and forages on the exposed mudflat during low tide, the aquatic A. flavimanus can be found at the same mudflat at high tide. This study primarily aimed to elucidate the differential adaptations of these organisms to their respective habitats by comparing visual capacities and motor control in orienting behavior during prey capture. Analyses of retinal ganglion cell topography demonstrated that both species possess an area in the dorsotemporal region of the retina, indicating high acuity in the lower frontal visual field. Additionally, P. modestus has a minor area in the nasal portion of the retina near the optic disc. The horizontally extended specialized area in P. modestus possibly reflects the need for optimized horizontal sight on the exposed mudflat. Behavioral experiments to determine postural and eye direction control when orienting toward the object of interest revealed that these species direct their visual axes to the target situated below eye level just before a rapid approach toward it. A characteristic feature of the orienting behavior of P. modestus was that they aimed at the target by using the specialized retinal area by rotating the eye and lifting the head before jumping to attack the target located above eye level. This behavior could be an adaptation to a terrestrial feeding habitat in which buoyancy is irrelevant. This study provides insights into the adaptive mechanisms of gobiid species and the evolutionary changes enabling them to forage on land. © 2016 S. Karger AG, Basel.
Hallum, Luke E; Shooner, Christopher; Kumbhani, Romesh D; Kelly, Jenna G; García-Marín, Virginia; Majaj, Najib J; Movshon, J Anthony; Kiorpes, Lynne
2017-08-23
In amblyopia, a visual disorder caused by abnormal visual experience during development, the amblyopic eye (AE) loses visual sensitivity whereas the fellow eye (FE) is largely unaffected. Binocular vision in amblyopes is often disrupted by interocular suppression. We used 96-electrode arrays to record neurons and neuronal groups in areas V1 and V2 of six female macaque monkeys ( Macaca nemestrina ) made amblyopic by artificial strabismus or anisometropia in early life, as well as two visually normal female controls. To measure suppressive binocular interactions directly, we recorded neuronal responses to dichoptic stimulation. We stimulated both eyes simultaneously with large sinusoidal gratings, controlling their contrast independently with raised-cosine modulators of different orientations and spatial frequencies. We modeled each eye's receptive field at each cortical site using a difference of Gaussian envelopes and derived estimates of the strength of central excitation and surround suppression. We used these estimates to calculate ocular dominance separately for excitation and suppression. Excitatory drive from the FE dominated amblyopic visual cortex, especially in more severe amblyopes, but suppression from both the FE and AEs was prevalent in all animals. This imbalance created strong interocular suppression in deep amblyopes: increasing contrast in the AE decreased responses at binocular cortical sites. These response patterns reveal mechanisms that likely contribute to the interocular suppression that disrupts vision in amblyopes. SIGNIFICANCE STATEMENT Amblyopia is a developmental visual disorder that alters both monocular vision and binocular interaction. Using microelectrode arrays, we examined binocular interaction in primary visual cortex and V2 of six amblyopic macaque monkeys ( Macaca nemestrina ) and two visually normal controls. By stimulating the eyes dichoptically, we showed that, in amblyopic cortex, the binocular combination of signals is altered. The excitatory influence of the two eyes is imbalanced to a degree that can be predicted from the severity of amblyopia, whereas suppression from both eyes is prevalent in all animals. This altered balance of excitation and suppression reflects mechanisms that may contribute to the interocular perceptual suppression that disrupts vision in amblyopes. Copyright © 2017 the authors 0270-6474/17/378216-11$15.00/0.
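The receptive-field envelope described above (central excitation with broader surround suppression) is a difference of Gaussians. A schematic version follows, with all parameter values chosen purely for illustration rather than fitted per eye and per cortical site as in the study.

    import numpy as np

    def dog_envelope(x, y, sigma_center=0.5, sigma_surround=1.5, k_center=1.0, k_surround=0.6):
        # Difference-of-Gaussians spatial weighting over visual-field coordinates
        # (deg): a narrow excitatory centre minus a broader suppressive surround.
        r2 = x**2 + y**2
        center = k_center * np.exp(-r2 / (2 * sigma_center**2))
        surround = k_surround * np.exp(-r2 / (2 * sigma_surround**2))
        return center - surround

Fitting such an envelope separately for each eye is what allows excitation and suppression to be assigned their own ocular dominance, the key comparison in the abstract.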
Many-body coherent destruction of tunneling in photonic lattices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longhi, Stefano
2011-03-15
An optical realization of the phenomenon of many-body coherent destruction of tunneling, recently predicted for interacting many-boson systems by Gong, Molina, and Haenggi [Phys. Rev. Lett. 103, 133002 (2009)], is proposed for light transport in engineered waveguide arrays. The optical system enables a direct visualization in Fock space of the many-body tunneling control process.
Mobile Cubesat Command and Control (Mc3) 3-Meter Dish Calibration and Capabilities
2014-06-01
accuracy of this simple calibration is tested by tracking the sun, an easily accessible celestial body. To track the sun, a Systems Tool Kit (STK) ... visually verified. The shadow created by the dish system when it is pointed directly at the sun is symmetrical. If the dish system is not pointed
Lateralization in Alpha-Band Oscillations Predicts the Locus and Spatial Distribution of Attention.
Ikkai, Akiko; Dandekar, Sangita; Curtis, Clayton E
2016-01-01
Attending to a task-relevant location changes how neural activity oscillates in the alpha band (8-13 Hz) in posterior visual cortical areas. However, the relationships between top-down attention, changes in alpha oscillations in visual cortex, and attentional performance remain poorly understood. Here, we tested the degree to which the posterior alpha power tracked the locus of attention, the distribution of attention, and how well the topography of alpha could predict the locus of attention. We recorded magnetoencephalographic (MEG) data while subjects performed an attention demanding visual discrimination task that dissociated the direction of attention from the direction of a saccade to indicate choice. On some trials, an endogenous cue predicted the target's location, while on others it contained no spatial information. When the target's location was cued, alpha power decreased in sensors over occipital cortex contralateral to the attended visual field. When the cue did not predict the target's location, alpha power again decreased in sensors over occipital cortex, but bilaterally, and increased in sensors over frontal cortex. Thus, the distribution and the topography of alpha reliably indicated the locus of covert attention. Together, these results suggest that alpha synchronization reflects changes in the excitability of populations of neurons whose receptive fields match the locus of attention. This is consistent with the hypothesis that alpha oscillations reflect the neural mechanisms by which top-down control of attention biases information processing and modulate the activity of neurons in visual cortex.
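A common way to quantify the kind of lateralized alpha change reported here is a normalized modulation index over contralateral and ipsilateral sensor power. The formula below is a standard illustration and not necessarily the exact statistic computed in this study.

    def alpha_modulation_index(power_contra, power_ipsi):
        # Negative values indicate lower alpha power over the hemisphere
        # contralateral to the attended hemifield, the expected signature of
        # spatially directed covert attention; values near 0 indicate no
        # lateralization (as when the cue carries no spatial information).
        return (power_contra - power_ipsi) / (power_contra + power_ipsi)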
Sensory-based expert monitoring and control
NASA Astrophysics Data System (ADS)
Yen, Gary G.
1999-03-01
Field operators use their eyes, ears, and nose to detect process behavior and to trigger corrective control actions. For instance, in daily practice the experienced operator in sulfuric acid treatment of phosphate rock may observe froth color or bubble character to control process material in-flow. Or, similarly, (s)he may use the acoustic sound of cavitation or boiling/flashing to increase or decrease material flow rates and tank levels. By contrast, process control computers continue to be limited to taking action on P, T, F, and A signals. Yet, there is sufficient evidence from the field that visual and acoustic information can be used for control and identification. Smart in-situ sensors have provided a potential mechanism for factory automation with promising industrial applicability. In response to these critical needs, a generic, structured health monitoring approach is proposed. The system assumes that a given sensor suite will act as an on-line health usage monitor and at best provide real-time control autonomy. The sensor suite can incorporate various types of sensory devices, from vibration accelerometers, directional microphones, machine vision CCDs, and pressure gauges to temperature indicators. The decision can be shown in a visual on-board display or fed to the control block to invoke controller reconfiguration.
47 CFR 80.293 - Check bearings by authorized ship personnel.
Code of Federal Regulations, 2010 CFR
2010-10-01
Section 80.293 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL... comparison of simultaneous visual and radio direction finder bearings. At least one comparison bearing must... visual bearing relative to the ship's heading and the difference between the visual and radio direction...
The Psychological Cost of Making Control Responses in the Nonstereotype Direction.
Chan, Alan H S; Hoffmann, Errol R
2016-12-01
The aim of this study was to develop a scale for the "psychological cost" of making control responses in the nonstereotype direction. Wickens, Keller, and Small suggested values for the psychological cost arising from having control/display relationships that were not in the common stereotype directions. We provide values of such costs specifically for these situations. Working from data of Chan and Hoffmann for 168 combinations of display location, control type, and display movement direction, we define values for the cost and compare these with the suggested values of Wickens et al.'s Frame of Reference Transformation Tool (FORT) model. We found marked differences between the values of the FORT model and the data of our experiments. The differences arise largely from the effects of the Worringham and Beringer visual field principle not being adequately considered in the previous research. A better indication of the psychological cost for use of incorrect control/display stereotypes is given. It is noted that these costs are applicable only to the factor of stereotype strength and not other factors considered in the FORT model. Effects of having controls and displays that are not arranged to operate with population expectancies can be readily determined from the data in this paper. © 2016, Human Factors and Ergonomics Society.
What and where information in the caudate tail guides saccades to visual objects
Yamamoto, Shinya; Monosov, Ilya E.; Yasuda, Masaharu; Hikosaka, Okihide
2012-01-01
We understand the world by making saccadic eye movements to various objects. However, it is unclear how a saccade can be aimed at a particular object, because two kinds of visual information, what the object is and where it is, are processed separately in the dorsal and ventral visual cortical pathways. Here we provide evidence suggesting that a basal ganglia circuit through the tail of the monkey caudate nucleus (CDt) guides such object-directed saccades. First, many CDt neurons responded to visual objects depending on where and what the objects were. Second, electrical stimulation in the CDt induced saccades whose directions matched the preferred directions of neurons at the stimulation site. Third, many CDt neurons increased their activity before saccades directed to the neurons’ preferred objects and directions in a free-viewing condition. Our results suggest that CDt neurons receive both ‘what’ and ‘where’ information and guide saccades to visual objects. PMID:22875934
Visualization of migration of human cortical neurons generated from induced pluripotent stem cells.
Bamba, Yohei; Kanemura, Yonehiro; Okano, Hideyuki; Yamasaki, Mami
2017-09-01
Neuronal migration is considered a key process in human brain development. However, direct observation of migrating human cortical neurons in the fetal brain raises ethical concerns, which is a major obstacle to investigating human cortical neuronal migration. We established a novel system that enables direct visualization of migrating cortical neurons generated from human induced pluripotent stem cells (hiPSCs). We observed the migration of cortical neurons generated from hiPSCs derived from a control and from a patient with lissencephaly. Our system needs no viable brain tissue, which is usually used in slice culture. The migratory behavior of human cortical neurons can be observed more easily and more vividly, thanks to their fluorescence and the glial scaffold, than with earlier methods. Our in vitro experimental system provides a new platform for investigating development of the human central nervous system and brain malformation. Copyright © 2017 Elsevier B.V. All rights reserved.
Conscious Action/Zombie Action
Shepherd, Joshua
2015-01-01
I argue that the neural realizers of experiences of trying (that is, experiences of directing effort towards the satisfaction of an intention) are not distinct from the neural realizers of actual trying (that is, actual effort directed towards the satisfaction of an intention). I then ask how experiences of trying might relate to the perceptual experiences one has while acting. First, I assess recent zombie action arguments regarding conscious visual experience, and I argue that contrary to what some have claimed, conscious visual experience plays a causal role for action control in some circumstances. Second, I propose a multimodal account of the experience of acting. According to this account, the experience of acting is (at the very least) a temporally extended, co‐conscious collection of agentive and perceptual experiences, functionally integrated and structured both by multimodal perceptual processing as well as by what an agent is, at the time, trying to do. PMID:27667859
Between-object and within-object saccade programming in a visual search task.
Vergilino-Perez, Dorine; Findlay, John M
2006-07-01
The role of the perceptual organization of the visual display on eye movement control was examined in two experiments using a task where a two-saccade sequence was directed toward either a single elongated object or three separate shorter objects. In the first experiment, we examined the consequences for the second saccade of a small displacement of the whole display during the first saccade. We found that between-object saccades compensated for the displacement to aim for a target position on the new object whereas within-object saccades did not show compensation but were coded as a fixed motor vector applied irrespective of wherever the preceding saccade landed. In the second experiment, we extended the paradigm to examine saccades performed in different directions. The results suggest that the within-object and between-object saccade distinction is an essential feature of saccadic planning.
37 CFR 202.3 - Registration of copyright.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Class VA: Works of the visual arts. This class includes all published and unpublished pictorial, graphic... permission and under the direction of the Visual Arts Division, the application may be submitted... published photographs after consultation and with the permission and under the direction of the Visual Arts...
Cacciamani, Laura; Likova, Lora T
2017-05-01
The perirhinal cortex (PRC) is a medial temporal lobe structure that has been implicated not only in visual memory in the sighted, but also in tactile memory in the blind (Cacciamani & Likova, 2016). It has been proposed that, in the blind, the PRC may contribute to modulation of tactile memory responses that emerge in low-level "visual" area V1 as a result of training-induced cortical reorganization (Likova, 2012, 2015). While some studies in the sighted have indicated that the PRC is indeed structurally and functionally connected to the visual cortex (Clavagnier, Falchier, & Kennedy, 2004; Peterson, Cacciamani, Barense, & Scalf, 2012), the PRC's direct modulation of V1 is unknown, particularly in those who lack the visual input that typically stimulates this region. In the present study, we tested Likova's PRC modulation hypothesis; specifically, we used fMRI to assess the PRC's Granger causal influence on V1 activation in the blind during a tactile memory task. To do so, we trained congenital and acquired blind participants on a unique memory-guided drawing technique previously shown to result in V1 reorganization towards tactile memory representations (Likova, 2012). The tasks (20s each) included: tactile exploration of raised line drawings of faces and objects, tactile memory retrieval via drawing, and a scribble motor/memory control. FMRI before and after a week of the Cognitive-Kinesthetic training on these tasks revealed a significant increase in PRC-to-V1 Granger causality from pre- to post-training during the memory drawing task, but not during the motor/memory control. This increase in causal connectivity indicates that the training strengthened the top-down modulation of visual cortex from the PRC. This is the first study to demonstrate enhanced directed functional connectivity from the PRC to the visual cortex in the blind, implicating the PRC as a potential source of the reorganization towards tactile representations that occurs in V1 in the blind brain (Likova, 2012). Copyright © 2017 Elsevier Inc. All rights reserved.
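The abstract above reports its central finding as an increase in PRC-to-V1 Granger causality. For readers unfamiliar with the measure, the sketch below shows a minimal bivariate Granger test between two ROI time series using statsmodels; the simulated data, ROI names, and lag order are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch: bivariate Granger causality between two ROI time series.
# The ROI names, lag order, and simulated data are illustrative only; the
# study's actual analysis pipeline is not described in the abstract.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200                               # number of fMRI volumes (hypothetical)
prc = rng.standard_normal(n)          # "PRC" ROI time series
v1 = np.zeros(n)                      # "V1" ROI time series
for t in range(2, n):                 # let PRC weakly drive V1 with a lag
    v1[t] = 0.4 * prc[t - 1] + 0.2 * v1[t - 1] + rng.standard_normal()

# Column order: [effect, cause]; tests whether column 2 Granger-causes column 1.
data = np.column_stack([v1, prc])
results = grangercausalitytests(data, maxlag=2, verbose=False)
for lag, res in results.items():
    f_stat, p_val = res[0]["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_val:.4f}")
```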
van Ommen, M M; van Beilen, M; Cornelissen, F W; Smid, H G O M; Knegtering, H; Aleman, A; van Laar, T
2016-06-01
Little is known about visual hallucinations (VH) in psychosis. We investigated the prevalence and the role of bottom-up and top-down processing in VH. The prevailing view is that VH are probably related to altered top-down processing, rather than to distorted bottom-up processing. Conversely, VH in Parkinson's disease are associated with impaired visual perception and attention, as proposed by the Perception and Attention Deficit (PAD) model. Auditory hallucinations (AH) in psychosis, however, are thought to be related to increased attention. Our retrospective database study included 1119 patients with non-affective psychosis and 586 controls. The Community Assessment of Psychic Experiences established the VH rate. Scores on visual perception tests [Degraded Facial Affect Recognition (DFAR), Benton Facial Recognition Task] and attention tests [Response Set-shifting Task, Continuous Performance Test-HQ (CPT-HQ)] were compared between 75 VH patients, 706 non-VH patients and 485 non-VH controls. The lifetime VH rate was 37%. The patient groups performed similarly on cognitive tasks; both groups showed worse perception (DFAR) than controls. Non-VH patients showed worse attention (CPT-HQ) than controls, whereas VH patients did not perform differently. We did not find significant VH-related impairments in bottom-up processing or direct top-down alterations. However, the results suggest a relatively spared attentional performance in VH patients, whereas face perception and processing speed were equally impaired in both patient groups relative to controls. This would match better with the increased attention hypothesis than with the PAD model. Our finding that VH frequently co-occur with AH may support an increased attention-induced 'hallucination proneness'.
Marshall, Tom R; O'Shea, Jacinta; Jensen, Ole; Bergmann, Til O
2015-01-28
Covertly directing visuospatial attention produces a frequency-specific modulation of neuronal oscillations in occipital and parietal cortices: anticipatory alpha (8-12 Hz) power decreases contralateral and increases ipsilateral to attention, whereas stimulus-induced gamma (>40 Hz) power is boosted contralaterally and attenuated ipsilaterally. These modulations must be under top-down control; however, the control mechanisms are not yet fully understood. Here we investigated the causal contribution of the human frontal eye field (FEF) by combining repetitive transcranial magnetic stimulation (TMS) with subsequent magnetoencephalography. Following inhibitory theta burst stimulation to the left FEF, right FEF, or vertex, participants performed a visual discrimination task requiring covert attention to either visual hemifield. Both left and right FEF TMS caused marked attenuation of alpha modulation in the occipitoparietal cortex. Notably, alpha modulation was consistently reduced in the hemisphere contralateral to stimulation, leaving the ipsilateral hemisphere relatively unaffected. Additionally, right FEF TMS enhanced gamma modulation in left visual cortex. Behaviorally, TMS caused a relative slowing of response times to targets contralateral to stimulation during the early task period. Our results suggest that left and right FEF are causally involved in the attentional top-down control of anticipatory alpha power in the contralateral visual system, whereas a right-hemispheric dominance seems to exist for control of stimulus-induced gamma power. These findings contrast the assumption of primarily intrahemispheric connectivity between FEF and parietal cortex, emphasizing the relevance of interhemispheric interactions. The contralaterality of effects may result from a transient functional reorganization of the dorsal attention network after inhibition of either FEF. Copyright © 2015 the authors 0270-6474/15/351638-10$15.00/0.
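The anticipatory alpha effect described above is usually summarized as a lateralized modulation index over posterior sensors. The following sketch computes such an index from simulated occipital signals; the sampling rate, channel grouping, and index definition are assumptions for illustration, not the authors' MEG analysis.

```python
# Sketch: hemispheric alpha-band (8-12 Hz) modulation index for covert attention.
# Data, sampling rate, and channel grouping are hypothetical; this is not the
# authors' analysis code, only an illustration of the quantity being modulated.
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs=1000.0):
    """Mean power in the 8-12 Hz band, estimated with Welch's method."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    band = (f >= 8) & (f <= 12)
    return pxx[band].mean()

rng = np.random.default_rng(1)
fs, dur = 1000.0, 2.0
t = np.arange(int(fs * dur)) / fs
# Simulated occipital signals: stronger alpha ipsilateral to the attended side.
ipsi = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
contra = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

p_ipsi, p_contra = alpha_power(ipsi, fs), alpha_power(contra, fs)
modulation_index = (p_contra - p_ipsi) / (p_contra + p_ipsi)
print(f"alpha modulation index (contra - ipsi): {modulation_index:+.2f}")
```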
An intracellular analysis of the visual responses of neurones in cat visual cortex.
Douglas, R J; Martin, K A; Whitteridge, D
1991-01-01
1. Extracellular and intracellular recordings were made from neurones in the visual cortex of the cat in order to compare the subthreshold membrane potentials, reflecting the input to the neurone, with the output from the neurone seen as action potentials. 2. Moving bars and edges, generated under computer control, were used to stimulate the neurones. The membrane potential was digitized and averaged for a number of trials after stripping the action potentials. Comparison of extracellular and intracellular discharge patterns indicated that the intracellular impalement did not alter the neurones' properties. Input resistance of the neurone altered little during stable intracellular recordings (30 min-2 h 50 min). 3. Intracellular recordings showed two distinct patterns of membrane potential changes during optimal visual stimulation. The patterns corresponded closely to the division of S-type (simple) and C-type (complex) receptive fields. Simple cells had a complex pattern of membrane potential fluctuations, involving depolarizations alternating with hyperpolarizations. Complex cells had a simple single sustained plateau of depolarization that was often followed but not preceded by a hyperpolarization. In both simple and complex cells the depolarizations led to action potential discharges. The hyperpolarizations were associated with inhibition of action potential discharge. 4. Stimulating simple cells with non-optimal directions of motion produced little or no hyperpolarization of the membrane in most cases, despite a lack of action potential output. Directional complex cells always produced a single plateau of depolarization leading to action potential discharge in both the optimal and non-optimal directions of motion. The directionality could not be predicted on the basis of the position of the hyperpolarizing inhibitory potentials found in the optimal direction. 5. Stimulation of simple cells with non-optimal orientations occasionally produced slight hyperpolarizations and inhibition of action potential discharge. Complex cells, which had broader orientation tuning than simple cells, could show marked hyperpolarization for non-optimal orientations, but this was not generally the case. 6. The data do not support models of directionality and orientation that rely solely on strong inhibitory mechanisms to produce stimulus selectivity. PMID:1804981
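The methods above mention averaging the membrane potential across trials "after stripping the action potentials." The sketch below shows one plausible way to do this (median filtering of simulated traces followed by trial averaging); the exact stripping procedure used in the study is not specified in the abstract, so the filter choice and all signal parameters are assumptions.

```python
# Sketch: strip action potentials from an intracellular trace and average across
# trials. Median filtering is just one plausible way to remove brief spikes; the
# original study's exact stripping procedure is not specified in the abstract.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(2)
fs = 10000                      # samples per second (hypothetical)
n_trials, n_samp = 20, fs       # 1-s trials
t = np.arange(n_samp) / fs

trials = np.empty((n_trials, n_samp))
for i in range(n_trials):
    vm = -65 + 8 * np.exp(-((t - 0.5) ** 2) / 0.02)        # slow depolarization
    spike_times = rng.choice(n_samp, size=15, replace=False)
    vm[spike_times] += 60                                    # brief "spikes"
    trials[i] = vm + rng.standard_normal(n_samp)

stripped = np.array([medfilt(tr, kernel_size=11) for tr in trials])
avg_vm = stripped.mean(axis=0)
print(f"peak of averaged, spike-stripped Vm: {avg_vm.max():.1f} mV")
```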
Smelling directions: Olfaction modulates ambiguous visual motion perception
Kuang, Shenbing; Zhang, Tao
2014-01-01
The sense of smell is often accompanied by simultaneous visual sensations. Previous studies have documented enhanced olfactory performance with concurrent presence of congruent color- or shape-related visual cues, and facilitated visual object perception when congruent smells are simultaneously present. These visual object-olfaction interactions suggest the existence of couplings between the olfactory pathway and the visual ventral processing stream. However, it is not known if olfaction can modulate visual motion perception, a function that is related to the visual dorsal stream. We tested this possibility by examining the influence of olfactory cues on the perception of ambiguous visual motion signals. We showed that, after introducing an association between motion directions and olfactory cues, olfaction could indeed bias ambiguous visual motion perception. Our result that olfaction modulates visual motion processing adds to the current knowledge of cross-modal interactions and implies a possible functional linkage between the olfactory system and the visual dorsal pathway. PMID:25052162
Subscale Flight Testing for Aircraft Loss of Control: Accomplishments and Future Directions
NASA Technical Reports Server (NTRS)
Cox, David E.; Cunningham, Kevin; Jordan, Thomas L.
2012-01-01
Subscale flight-testing provides a means to validate both dynamic models and mitigation technologies in the high-risk flight conditions associated with aircraft loss of control. The Airborne Subscale Transport Aircraft Research (AirSTAR) facility was designed to be a flexible and efficient research facility to address this type of flight-testing. Over the last several years (2009-2011) it has been used to perform 58 research flights with an unmanned, remotely-piloted, dynamically-scaled airplane. This paper will present an overview of the facility and its architecture and summarize the experimental data collected. All flights to date have been conducted within visual range of a safety observer. Current plans for the facility include expanding the test volume to altitudes and distances well beyond visual range. The architecture and instrumentation changes associated with this upgrade will also be presented.
God: Do I have your attention?
Colzato, Lorenza S; van Beest, Ilja; van den Wildenberg, Wery P M; Scorolli, Claudia; Dorchin, Shirley; Meiran, Nachshon; Borghi, Anna M; Hommel, Bernhard
2010-10-01
Religion is commonly defined as a set of rules, developed as part of a culture. Here we provide evidence that practice in following these rules systematically changes the way people attend to visual stimuli, as indicated by the individual sizes of the global precedence effect (better performance to global than to local features). We show that this effect is significantly reduced in Calvinism, a religion emphasizing individual responsibility, and increased in Catholicism and Judaism, religions emphasizing social solidarity. We also show that this effect is long-lasting (still affecting baptized atheists) and that its size systematically varies as a function of the amount and strictness of religious practices. These findings suggest that religious practice induces particular cognitive-control styles that induce chronic, directional biases in the control of visual attention. Copyright 2010 Elsevier B.V. All rights reserved.
Invertebrate neurobiology: visual direction of arm movements in an octopus.
Niven, Jeremy E
2011-03-22
An operant task in which octopuses learn to locate food by a visual cue in a three-choice maze shows that they are capable of integrating visual and mechanosensory information to direct their arm movements to a goal. Copyright © 2011 Elsevier Ltd. All rights reserved.
Latychevskaia, Tatiana; Wicki, Flavio; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner
2016-09-14
Visualizing individual charges confined to molecules and observing their dynamics with high spatial resolution is a challenge for advancing various fields in science, ranging from mesoscopic physics to electron transfer events in biological molecules. We show here that the high sensitivity of low-energy electrons to local electric fields can be employed to directly visualize individual charged adsorbates and to study their behavior in a quantitative way. This makes electron holography a unique probing tool for directly visualizing charge distributions with a sensitivity of a fraction of an elementary charge. Moreover, spatial resolution in the nanometer range and fast data acquisition inherent to lens-less low-energy electron holography allows for direct visual inspection of charge transfer processes.
Maidenbaum, Shachar; Abboud, Sami; Amedi, Amir
2014-04-01
Sensory substitution devices (SSDs) have come a long way since first developed for visual rehabilitation. They have produced exciting experimental results, and have furthered our understanding of the human brain. Unfortunately, they are still not used for practical visual rehabilitation, and are currently considered as reserved primarily for experiments in controlled settings. Over the past decade, our understanding of the neural mechanisms behind visual restoration has changed as a result of converging evidence, much of which was gathered with SSDs. This evidence suggests that the brain is more than a pure sensory-machine but rather is a highly flexible task-machine, i.e., brain regions can maintain or regain their function in vision even with input from other senses. This complements a recent set of more promising behavioral achievements using SSDs and new promising technologies and tools. All these changes strongly suggest that the time has come to revive the focus on practical visual rehabilitation with SSDs and we chart several key steps in this direction such as training protocols and self-train tools. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
Real-time dose calculation and visualization for the proton therapy of ocular tumours
NASA Astrophysics Data System (ADS)
Pfeiffer, Karsten; Bendl, Rolf
2001-03-01
A new real-time dose calculation and visualization was developed as part of the new 3D treatment planning tool OCTOPUS for proton therapy of ocular tumours within a national research project together with the Hahn-Meitner Institut Berlin. The implementation resolves the common separation between parameter definition, dose calculation and evaluation and allows a direct examination of the expected dose distribution while adjusting the treatment parameters. The new tool allows the therapist to move the desired dose distribution under visual control in 3D to the appropriate place. The visualization of the resulting dose distribution as a 3D surface model, on any 2D slice or on the surface of specified ocular structures is done automatically when adapting parameters during the planning process. In addition, approximate dose volume histograms may be calculated with little extra time. The dose distribution is calculated and visualized in 200 ms with an accuracy of 6% for the 3D isodose surfaces and 8% for other objects. This paper discusses the advantages and limitations of this new approach.
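The abstract notes that approximate dose-volume histograms can be computed alongside the real-time dose display. As an illustration of that quantity, the sketch below computes a cumulative DVH from a dose grid and a binary structure mask; the grid, mask, and bin width are placeholders, not OCTOPUS data structures.

```python
# Sketch: cumulative dose-volume histogram (DVH) for one ocular structure.
# The dose grid and structure mask are random placeholders; OCTOPUS's actual
# dose engine and data structures are not described in the abstract.
import numpy as np

rng = np.random.default_rng(3)
dose = rng.uniform(0, 60, size=(64, 64, 64))      # dose grid in Gy (hypothetical)
mask = np.zeros_like(dose, dtype=bool)
mask[20:40, 20:40, 20:40] = True                   # voxels of the structure

structure_dose = dose[mask]
bins = np.linspace(0, 60, 121)                     # 0.5 Gy bins
hist, edges = np.histogram(structure_dose, bins=bins)
# Cumulative DVH: fraction of structure volume receiving at least dose d.
dvh = 1.0 - np.cumsum(hist) / structure_dose.size
for d in (10, 30, 50):
    frac = dvh[np.searchsorted(edges, d) - 1]
    print(f"V{d}Gy = {100 * frac:.1f}% of structure volume")
```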
Differences in gaze anticipation for locomotion with and without vision
Authié, Colas N.; Hilt, Pauline M.; N'Guyen, Steve; Berthoz, Alain; Bennequin, Daniel
2015-01-01
Previous experimental studies have shown a spontaneous anticipation of locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: an optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation remains in darkness but with different characteristics. We asked 10 participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue on the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. The whole body kinematics were recorded by motion capture, along with the participant's right eye movements. We showed that in darkness and in light, horizontal gaze anticipates the orientation of the head which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by a half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference comes from the fact that in light, there is a shift of the orientations of the eye nystagmus and the head in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self motion, stabilize the perception of space during locomotion, and to simulate the future trajectory, regardless of the vision condition. PMID:26106313
Computer programming for generating visual stimuli.
Bukhari, Farhan; Kurylo, Daniel D
2008-02-01
Critical to vision research is the generation of visual displays with precise control over stimulus metrics. Generating stimuli often requires adapting commercial software or developing specialized software for specific research applications. In order to facilitate this process, we give here an overview that allows nonexpert users to generate and customize stimuli for vision research. We first give a review of relevant hardware and software considerations, to allow the selection of display hardware, operating system, programming language, and graphics packages most appropriate for specific research applications. We then describe the framework of a generic computer program that can be adapted for use with a broad range of experimental applications. Stimuli are generated in the context of trial events, allowing the display of text messages, the monitoring of subject responses and reaction times, and the inclusion of contingency algorithms. This approach allows direct control and management of computer-generated visual stimuli while utilizing the full capabilities of modern hardware and software systems. The flowchart and source code for the stimulus-generating program may be downloaded from www.psychonomic.org/archive.
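The framework described above organizes stimulus generation around trial events: drawing a stimulus, monitoring responses and reaction times, and applying contingency algorithms. The sketch below shows that skeleton with console placeholders for the rendering and response calls; it is a generic illustration, not the downloadable program from the archive.

```python
# Sketch of a generic trial-event loop of the kind the article describes:
# present a stimulus, collect a response, record the reaction time, and adapt
# the next trial. The draw/response calls are console placeholders, not the
# downloadable program or any particular graphics package.
import time

def draw_stimulus(contrast):
    """Placeholder for the actual rendering call (graphics API of your choice)."""
    print(f"[stimulus drawn at contrast {contrast:.3f}]")

def get_response():
    """Placeholder for keyboard/button polling; here we simply read from stdin."""
    return input("seen? (y/n): ").strip().lower() == "y"

contrast = 0.5
for trial in range(1, 6):
    print(f"--- trial {trial} ---")
    draw_stimulus(contrast)
    t0 = time.perf_counter()
    seen = get_response()
    rt = time.perf_counter() - t0
    print(f"response: {'yes' if seen else 'no'}, RT = {rt * 1000:.0f} ms")
    # Simple contingency algorithm (illustrative 1-down/1-up staircase).
    contrast *= 0.8 if seen else 1.25
    contrast = min(max(contrast, 0.01), 1.0)
```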
Higashiyama, A
1992-03-01
Three experiments investigated anisotropic perception of visual angle outdoors. In Experiment 1, scales for vertical and horizontal visual angles ranging from 20 degrees to 80 degrees were constructed with the method of angle production (in which the subject reproduced a visual angle with a protractor) and the method of distance production (in which the subject produced a visual angle by adjusting viewing distance). In Experiment 2, scales for vertical and horizontal visual angles of 5 degrees-30 degrees were constructed with the method of angle production and were compared with scales for orientation in the frontal plane. In Experiment 3, vertical and horizontal visual angles of 3 degrees-80 degrees were judged with the method of verbal estimation. The main results of the experiments were as follows: (1) The obtained angles for visual angle are described by a quadratic equation, θ′ = a + bθ + cθ² (where θ is the visual angle; θ′, the obtained angle; a, b, and c, constants). (2) The linear coefficient b is larger than unity and is steeper for vertical direction than for horizontal direction. (3) The quadratic coefficient c is generally smaller than zero and is negatively larger for vertical direction than for horizontal direction. And (4) the obtained angle for visual angle is larger than that for orientation. From these results, it was possible to predict the horizontal-vertical illusion, over-constancy of size, and the moon illusion.
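As a worked illustration of the quadratic scaling reported above, the sketch below fits θ′ = a + bθ + cθ² to angle-production data; the data points are invented, and only the functional form comes from the abstract.

```python
# Sketch: fit the quadratic scaling theta' = a + b*theta + c*theta**2 reported
# in the abstract to angle-production data. The data points here are invented
# for illustration; only the functional form comes from the paper.
import numpy as np

theta = np.array([3, 5, 10, 20, 30, 40, 60, 80], dtype=float)       # physical visual angle (deg)
theta_obt = np.array([5, 8, 15, 29, 42, 54, 74, 90], dtype=float)   # produced angle (deg, hypothetical)

c, b, a = np.polyfit(theta, theta_obt, deg=2)   # highest power first
print(f"theta' = {a:.2f} + {b:.2f}*theta + {c:.4f}*theta^2")
```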
fMRI of parents of children with Asperger Syndrome: a pilot study.
Baron-Cohen, Simon; Ring, Howard; Chitnis, Xavier; Wheelwright, Sally; Gregory, Lloyd; Williams, Steve; Brammer, Mick; Bullmore, Ed
2006-06-01
People with autism or Asperger Syndrome (AS) show altered patterns of brain activity during visual search and emotion recognition tasks. Autism and AS are genetic conditions and parents may show the 'broader autism phenotype.' (1) To test if parents of children with AS show atypical brain activity during a visual search and an empathy task; (2) to test for sex differences during these tasks at the neural level; (3) to test if parents of children with autism are hyper-masculinized, as might be predicted by the 'extreme male brain' theory. We used fMRI during a visual search task (the Embedded Figures Test (EFT)) and an emotion recognition test (the 'Reading the Mind in the Eyes' (or Eyes) test). Twelve parents of children with AS, vs. 12 sex-matched controls. Factorial analysis was used to map main effects of sex, group (parents vs. controls), and sex × group interaction on brain function. An ordinal ANOVA also tested for regions of brain activity where females>males>fathers=mothers, to test for parental hyper-masculinization. RESULTS ON EFT TASK: Female controls showed more activity in extrastriate cortex than male controls, and both mothers and fathers showed even less activity in this area than sex-matched controls. There were no differences in group activation between mothers and fathers of children with AS. The ordinal ANOVA identified two specific regions in visual cortex (right and left, respectively) that showed the pattern Females>Males>Fathers=Mothers, both in BA 19. RESULTS ON EYES TASK: Male controls showed more activity in the left inferior frontal gyrus than female controls, and both mothers and fathers showed even more activity in this area compared to sex-matched controls. Female controls showed greater bilateral inferior frontal activation than males. This was not seen when comparing mothers to males, or mothers to fathers. The ordinal ANOVA identified two specific regions that showed the pattern Females>Males>Mothers=Fathers: left medial temporal gyrus (BA 21) and left dorsolateral prefrontal cortex (BA 44). Parents of children with AS show atypical brain function during both visual search and emotion recognition, in the direction of hyper-masculinization of the brain. Because of the small sample size, and lack of age-matching between parents and controls, such results constitute a pilot study that needs replicating with larger samples.
Behavioral and Neural Representations of Spatial Directions across Words, Schemas, and Images.
Weisberg, Steven M; Marchette, Steven A; Chatterjee, Anjan
2018-05-23
Modern spatial navigation requires fluency with multiple representational formats, including visual scenes, signs, and words. These formats convey different information. Visual scenes are rich and specific but contain extraneous details. Arrows, as an example of signs, are schematic representations in which the extraneous details are eliminated, but analog spatial properties are preserved. Words eliminate all spatial information and convey spatial directions in a purely abstract form. How does the human brain compute spatial directions within and across these formats? To investigate this question, we conducted two experiments on men and women: a behavioral study that was preregistered and a neuroimaging study using multivoxel pattern analysis of fMRI data to uncover similarities and differences among representational formats. Participants in the behavioral study viewed spatial directions presented as images, schemas, or words (e.g., "left"), and responded to each trial, indicating whether the spatial direction was the same as or different from the one viewed previously. They responded more quickly to schemas and words than to images, despite the visual complexity of stimuli being matched. Participants in the fMRI study performed the same task but responded only to occasional catch trials. Spatial directions in images were decodable bilaterally in the intraparietal sulcus, but spatial directions in schemas and words were not. Spatial directions were also decodable between all three formats. These results suggest that intraparietal sulcus plays a role in calculating spatial directions in visual scenes, but this neural circuitry may be bypassed when the spatial directions are presented as schemas or words. SIGNIFICANCE STATEMENT Human navigators encounter spatial directions in various formats: words ("turn left"), schematic signs (an arrow showing a left turn), and visual scenes (a road turning left). The brain must transform these spatial directions into a plan for action. Here, we investigate similarities and differences between neural representations of these formats. We found that bilateral intraparietal sulci represent spatial directions in visual scenes and across the three formats. We also found that participants responded most quickly to schemas, then words, then images, suggesting that spatial directions in abstract formats are easier to interpret than those in concrete formats. These results support a model of spatial direction interpretation in which spatial directions are either computed for real world action or computed for efficient visual comparison. Copyright © 2018 the authors 0270-6474/18/384996-12$15.00/0.
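The neuroimaging experiment above relies on multivoxel pattern analysis to decode spatial direction from ROI activity patterns. The sketch below shows the generic form of such a decoding analysis (cross-validated linear classifier on voxel patterns); the simulated patterns, labels, and voxel counts are assumptions, not the study's data or code.

```python
# Sketch: multivoxel pattern analysis of the kind described in the abstract,
# decoding spatial direction (e.g., left vs. right) from ROI voxel patterns
# with a cross-validated linear classifier. All data here are simulated; the
# ROI, labels, and feature counts are placeholders, not the study's data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)                 # 0 = "left", 1 = "right"
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5                         # weak direction signal

clf = LinearSVC(dual=False)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```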
Alekseichuk, Ivan; Diers, Kersten; Paulus, Walter; Antal, Andrea
2016-10-15
The aim of this study was to investigate if the blood oxygenation level-dependent (BOLD) changes in the visual cortex can be used as biomarkers reflecting the online and offline effects of transcranial electrical stimulation (tES). Anodal transcranial direct current stimulation (tDCS) and 10Hz transcranial alternating current stimulation (tACS) were applied for 10min duration over the occipital cortex of healthy adults during the presentation of different visual stimuli, using a crossover, double-blinded design. Control experiments were also performed, in which sham stimulation as well as another electrode montage were used. Anodal tDCS over the visual cortex induced a small but significant further increase in BOLD response evoked by a visual stimulus; however, no aftereffect was observed. Ten hertz of tACS did not result in an online effect, but in a widespread offline BOLD decrease over the occipital, temporal, and frontal areas. These findings demonstrate that tES during visual perception affects the neuronal metabolism, which can be detected with functional magnetic resonance imaging (fMRI). Copyright © 2016 Elsevier Inc. All rights reserved.
Eye-Catching Odors: Olfaction Elicits Sustained Gazing to Faces and Eyes in 4-Month-Old Infants
Lewkowicz, David J.; Goubet, Nathalie; Schaal, Benoist
2013-01-01
This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars, but this spontaneous preference for faces was significantly enhanced in the presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial region, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues. PMID:24015175
NASA Astrophysics Data System (ADS)
Mann, Christopher; Narasimhamurthi, Natarajan
1998-08-01
This paper discusses a specific implementation of a web- and component-based simulation system. The overall simulation container is implemented within a web page viewed with Microsoft's Internet Explorer 4.0 web browser. Microsoft's ActiveX/Distributed Component Object Model object interfaces are used in conjunction with the Microsoft DirectX graphics APIs to provide visualization functionality for the simulation. The MathWorks' Matlab computer aided control system design program is used as an ActiveX automation server to provide the compute engine for the simulations.
Global motion perception deficits in autism are reflected as early as primary visual cortex.
Robertson, Caroline E; Thomas, Cibu; Kravitz, Dwight J; Wallace, Gregory L; Baron-Cohen, Simon; Martin, Alex; Baker, Chris I
2014-09-01
Individuals with autism are often characterized as 'seeing the trees, but not the forest'-attuned to individual details in the visual world at the expense of the global percept they compose. Here, we tested the extent to which global processing deficits in autism reflect impairments in (i) primary visual processing; or (ii) decision-formation, using an archetypal example of global perception, coherent motion perception. In an event-related functional MRI experiment, 43 intelligence quotient and age-matched male participants (21 with autism, age range 15-27 years) performed a series of coherent motion perception judgements in which the amount of local motion signals available to be integrated into a global percept was varied by controlling stimulus viewing duration (0.2 or 0.6 s) and the proportion of dots moving in the correct direction (coherence: 4%, 15%, 30%, 50%, or 75%). Both typical participants and those with autism evidenced the same basic pattern of accuracy in judging the direction of motion, with performance decreasing with reduced coherence and shorter viewing durations. Critically, these effects were exaggerated in autism: despite equal performance at the long duration, performance was more strongly reduced by shortening viewing duration in autism (P < 0.015) and decreasing stimulus coherence (P < 0.008). To assess the neural correlates of these effects we focused on the responses of primary visual cortex and the middle temporal area, critical in the early visual processing of motion signals, as well as a region in the intraparietal sulcus thought to be involved in perceptual decision-making. The behavioural results were mirrored in both primary visual cortex and the middle temporal area, with a greater reduction in response at short, compared with long, viewing durations in autism compared with controls (both P < 0.018). In contrast, there was no difference between the groups in the intraparietal sulcus (P > 0.574). These findings suggest that reduced global motion perception in autism is driven by an atypical response early in visual processing and may reflect a fundamental perturbation in neural circuitry. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
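The key stimulus manipulation above is motion coherence: the proportion of dots moving in the signal direction. The sketch below shows how dot directions might be assigned at the coherence levels used in the study; dot counts and the assignment scheme are illustrative assumptions, not the authors' stimulus code.

```python
# Sketch: assigning dot directions for a random-dot motion stimulus at a given
# coherence level, the stimulus manipulation described in the abstract. Dot
# counts and speeds are placeholders; this is not the authors' stimulus code.
import numpy as np

def dot_directions(n_dots, coherence, signal_direction_deg, rng):
    """Return one direction (radians) per dot: a `coherence` fraction move in
    the signal direction, the rest move in random directions."""
    n_signal = int(round(coherence * n_dots))
    directions = rng.uniform(0, 2 * np.pi, size=n_dots)        # noise dots
    directions[:n_signal] = np.deg2rad(signal_direction_deg)   # signal dots
    return directions

rng = np.random.default_rng(5)
for coh in (0.04, 0.15, 0.30, 0.50, 0.75):                     # levels from the study
    dirs = dot_directions(n_dots=100, coherence=coh, signal_direction_deg=0, rng=rng)
    n_signal = np.sum(dirs == 0.0)
    print(f"coherence {coh:.2f}: {n_signal} of 100 dots move in the signal direction")
```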
Effects of visual attention on chromatic and achromatic detection sensitivities.
Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko
2014-05-01
Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while the observer concurrently carried out an attention task in the central visual field. In experiment 1, we confirmed that, with the central attention task, peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. In experiment 2 we found that detection thresholds increased to a greater degree in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction in the dual task condition. In experiment 3 we showed that the peripheral threshold elevations depended on the combination of color-directions of the central and peripheral stimuli. Since the chromatic and achromatic responses were separately processed in the early visual pathways, the present results provided additional evidence that visual attention affects responses in the early visual pathways.
Qian, Jun; Zi, Bin; Ma, Yangang; Zhang, Dan
2017-01-01
In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964
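The navigation approach above fuses Kinect-derived and wheel-encoder information with an extended Kalman filter. The sketch below shows one predict/update cycle of a generic planar-localization EKF of that kind; the motion model, measurement model, and noise values are illustrative assumptions rather than the paper's specific processing.

```python
# Sketch: one predict/update cycle of an extended Kalman filter for planar
# robot localization, fusing wheel-odometry motion with an absolute (x, y)
# position fix such as a visual sensor might provide. All noise values and
# the measurement model are illustrative, not the paper's implementation.
import numpy as np

def ekf_step(x, P, v, w, z, dt, Q, R):
    """State x = [x, y, theta]; control (v, w) = linear/angular velocity;
    measurement z = absolute (x, y) position."""
    # --- predict (unicycle motion model) ---
    th = x[2]
    x_pred = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    F = np.array([[1, 0, -v * np.sin(th) * dt],
                  [0, 1,  v * np.cos(th) * dt],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q
    # --- update with position measurement ---
    H = np.array([[1, 0, 0],
                  [0, 1, 0]])
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

x = np.array([0.0, 0.0, 0.0])
P = np.eye(3) * 0.1
Q = np.diag([0.01, 0.01, 0.005])             # process noise (odometry drift)
R = np.diag([0.05, 0.05])                    # measurement noise (visual fix)
x, P = ekf_step(x, P, v=0.5, w=0.1, z=np.array([0.06, 0.01]), dt=0.1, Q=Q, R=R)
print("estimated pose:", np.round(x, 3))
```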
Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan
2017-09-10
In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
Nakashima, Ryoichi; Shioiri, Satoshi
2014-01-01
Why do we frequently fixate an object of interest presented peripherally by moving our head as well as our eyes, even when we are capable of fixating the object with an eye movement alone (lateral viewing)? Studies of eye-head coordination for gaze shifts have suggested that the degree of eye-head coupling could be determined by an unconscious weighing of the motor costs and benefits of executing a head movement. The present study investigated visual perceptual effects of head direction as an additional factor impacting on a cost-benefit organization of eye-head control. Three experiments using visual search tasks were conducted, manipulating eye direction relative to head orientation (front or lateral viewing). Results show that lateral viewing increased the time required to detect a target in a search for the letter T among letter L distractors, a serial attentive search task, but not in a search for T among letter O distractors, a parallel preattentive search task (Experiment 1). The interference could not be attributed either to a deleterious effect of lateral gaze on the accuracy of saccadic eye movements or to potentially problematic optical effects of binocular lateral viewing, because the effect of head direction was obtained under conditions in which the task was accomplished without saccades (Experiment 2), and during monocular viewing (Experiment 3). These results suggest that a difference between the head and eye directions interferes with visual processing, and that the interference can be explained by the modulation of attention by the relative positions of the eyes and head (or head direction). PMID:24647634
Sheremata, Summer L; Somers, David C; Shomstein, Sarah
2018-02-07
Visual short-term memory (VSTM) and attention are distinct yet interrelated processes. While both require selection of information across the visual field, memory additionally requires the maintenance of information across time and distraction. VSTM recruits areas within human (male and female) dorsal and ventral parietal cortex that are also implicated in spatial selection; therefore, it is important to determine whether overlapping activation might reflect shared attentional demands. Here, identical stimuli and controlled sustained attention across both tasks were used to ask whether fMRI signal amplitude, functional connectivity, and contralateral visual field bias reflect memory-specific task demands. While attention and VSTM activated similar cortical areas, BOLD amplitude and functional connectivity in parietal cortex differentiated the two tasks. Relative to attention, VSTM increased BOLD amplitude in dorsal parietal cortex and decreased BOLD amplitude in the angular gyrus. Additionally, the tasks differentially modulated parietal functional connectivity. Contrasting VSTM and attention, intraparietal sulcus (IPS) 1-2 were more strongly connected with anterior frontoparietal areas and more weakly connected with posterior regions. This divergence between tasks demonstrates that parietal activation reflects memory-specific functions and consequently modulates functional connectivity across the cortex. In contrast, both tasks demonstrated hemispheric asymmetries for spatial processing, exhibiting a stronger contralateral visual field bias in the left versus the right hemisphere across tasks, suggesting that asymmetries are characteristic of a shared selection process in IPS. These results demonstrate that parietal activity and patterns of functional connectivity distinguish VSTM from more general attention processes, establishing a central role of the parietal cortex in maintaining visual information. SIGNIFICANCE STATEMENT Visual short-term memory (VSTM) and attention are distinct yet interrelated processes. Cognitive mechanisms and neural activity underlying these tasks show a large degree of overlap. To examine whether activity within the posterior parietal cortex (PPC) reflects object maintenance across distraction or sustained attention per se, it is necessary to control for attentional demands inherent in VSTM tasks. We demonstrate that activity in PPC reflects VSTM demands even after controlling for attention; remembering items across distraction modulates relationships between parietal and other areas differently than during periods of sustained attention. Our study fills a gap in the literature by directly comparing and controlling for overlap between visual attention and VSTM tasks. Copyright © 2018 the authors 0270-6474/18/381511-09$15.00/0.
Seeing the hand while reaching speeds up on-line responses to a sudden change in target position
Reichenbach, Alexandra; Thielscher, Axel; Peer, Angelika; Bülthoff, Heinrich H; Bresciani, Jean-Pierre
2009-01-01
Goal-directed movements are executed under the permanent supervision of the central nervous system, which continuously processes sensory afferents and triggers on-line corrections if movement accuracy seems to be compromised. For arm reaching movements, visual information about the hand plays an important role in this supervision, notably improving reaching accuracy. Here, we tested whether visual feedback of the hand affects the latency of on-line responses to an external perturbation when reaching for a visual target. Two types of perturbation were used: visual perturbation consisted in changing the spatial location of the target and kinesthetic perturbation in applying a force step to the reaching arm. For both types of perturbation, the hand trajectory and the electromyographic (EMG) activity of shoulder muscles were analysed to assess whether visual feedback of the hand speeds up on-line corrections. Without visual feedback of the hand, on-line responses to visual perturbation exhibited the longest latency. This latency was reduced by about 10% when visual feedback of the hand was provided. On the other hand, the latency of on-line responses to kinesthetic perturbation was independent of the availability of visual feedback of the hand. In a control experiment, we tested the effect of visual feedback of the hand on visual and kinesthetic two-choice reaction times – for which coordinate transformation is not critical. Two-choice reaction times were never facilitated by visual feedback of the hand. Taken together, our results suggest that visual feedback of the hand speeds up on-line corrections when the position of the visual target with respect to the body must be re-computed during movement execution. This facilitation probably results from the possibility to map hand- and target-related information in a common visual reference frame. PMID:19675067
Three Types of Cortical L5 Neurons that Differ in Brain-Wide Connectivity and Function
Kim, Euiseok J.; Juavinett, Ashley L.; Kyubwa, Espoir M.; Jacobs, Matthew W.; Callaway, Edward M.
2015-01-01
SUMMARY Cortical layer 5 (L5) pyramidal neurons integrate inputs from many sources and distribute outputs to cortical and subcortical structures. Previous studies demonstrate two L5 pyramid types: cortico-cortical (CC) and cortico-subcortical (CS). We characterize connectivity and function of these cell types in mouse primary visual cortex and reveal a new subtype. Unlike previously described L5 CC and CS neurons, this new subtype does not project to striatum [cortico-cortical, non-striatal (CC-NS)] and has distinct morphology, physiology and visual responses. Monosynaptic rabies tracing reveals that CC neurons preferentially receive input from higher visual areas, while CS neurons receive more input from structures implicated in top-down modulation of brain states. CS neurons are also more direction-selective and prefer faster stimuli than CC neurons. These differences suggest distinct roles as specialized output channels, with CS neurons integrating information and generating responses more relevant to movement control and CC neurons being more important in visual perception. PMID:26671462
Three Types of Cortical Layer 5 Neurons That Differ in Brain-wide Connectivity and Function.
Kim, Euiseok J; Juavinett, Ashley L; Kyubwa, Espoir M; Jacobs, Matthew W; Callaway, Edward M
2015-12-16
Cortical layer 5 (L5) pyramidal neurons integrate inputs from many sources and distribute outputs to cortical and subcortical structures. Previous studies demonstrate two L5 pyramid types: cortico-cortical (CC) and cortico-subcortical (CS). We characterize connectivity and function of these cell types in mouse primary visual cortex and reveal a new subtype. Unlike previously described L5 CC and CS neurons, this new subtype does not project to striatum [cortico-cortical, non-striatal (CC-NS)] and has distinct morphology, physiology, and visual responses. Monosynaptic rabies tracing reveals that CC neurons preferentially receive input from higher visual areas, while CS neurons receive more input from structures implicated in top-down modulation of brain states. CS neurons are also more direction-selective and prefer faster stimuli than CC neurons. These differences suggest distinct roles as specialized output channels, with CS neurons integrating information and generating responses more relevant to movement control and CC neurons being more important in visual perception. Copyright © 2015 Elsevier Inc. All rights reserved.
cellVIEW: a Tool for Illustrative and Multi-Scale Rendering of Large Biomolecular Datasets
Le Muzic, Mathieu; Autin, Ludovic; Parulek, Julius; Viola, Ivan
2017-01-01
In this article we introduce cellVIEW, a new system to interactively visualize large biomolecular datasets on the atomic level. Our tool is unique and has been specifically designed to match the ambitions of our domain experts to model and interactively visualize structures comprising several billion atoms. The cellVIEW system integrates acceleration techniques to allow for real-time graphics performance of 60 Hz display rate on datasets representing large viruses and bacterial organisms. Inspired by the work of scientific illustrators, we propose a level-of-detail scheme whose purpose is two-fold: accelerating the rendering and reducing visual clutter. The main part of our datasets is made out of macromolecules, but it also comprises nucleic acid strands, which are stored as sets of control points. For that specific case, we extend our rendering method to support the dynamic generation of DNA strands directly on the GPU. It is noteworthy that our tool has been directly implemented inside a game engine. We chose to rely on a third party engine to reduce software development workload and to make bleeding-edge graphics techniques more accessible to the end-users. To our knowledge cellVIEW is the only suitable solution for interactive visualization of large biomolecular landscapes on the atomic level and is freely available to use and extend. PMID:29291131
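The abstract mentions that nucleic acid strands are stored as sets of control points and expanded into geometry on the GPU. As a CPU-side illustration of turning control points into a smooth strand, the sketch below uses Catmull-Rom interpolation; cellVIEW's actual GPU implementation and data layout are not described here, and the points and sampling density are assumptions.

```python
# Sketch: interpolate a smooth strand through stored control points with a
# Catmull-Rom spline, one plausible CPU-side analogue of generating nucleic
# acid strand geometry from control points. cellVIEW's actual GPU code and
# data layout are not described here; points and spacing are illustrative.
import numpy as np

def catmull_rom(p0, p1, p2, p3, n=10):
    """Points on the Catmull-Rom segment between p1 and p2."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

control_points = np.array([[0, 0, 0], [1, 2, 0], [3, 3, 1], [5, 1, 2], [6, 0, 3]], dtype=float)
segments = [catmull_rom(*control_points[i:i + 4]) for i in range(len(control_points) - 3)]
strand = np.vstack(segments)
print(f"{len(control_points)} control points -> {len(strand)} strand vertices")
```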
Micro and regular saccades across the lifespan during a visual search of "Where's Waldo" puzzles.
Port, Nicholas L; Trimberger, Jane; Hitzeman, Steve; Redick, Bryan; Beckerman, Stephen
2016-01-01
Despite the fact that different aspects of visual-motor control mature at different rates and aging is associated with declines in both sensory and motor function, little is known about the relationship between microsaccades and either development or aging. Using a sample of 343 individuals ranging in age from 4 to 66 and a task that has been shown to elicit a high frequency of microsaccades (solving Where's Waldo puzzles), we explored microsaccade frequency and kinematics (main sequence curves) as a function of age. Taking advantage of the large size of our dataset (183,893 saccades), we also address (a) the saccade amplitude limit at which video eye trackers are able to accurately measure microsaccades and (b) the degree and consistency of saccade kinematics at varying amplitudes and directions. Using a modification of the Engbert-Mergenthaler saccade detector, we found that even the smallest amplitude movements (0.25-0.5°) demonstrate basic saccade kinematics. With regard to development and aging, both microsaccade and regular saccade frequency exhibited a very small increase across the lifespan. Visual search ability, like many other aspects of visual performance, exhibited a U-shaped function over the lifespan. Finally, both large horizontal and moderate vertical directional biases were detected for all saccade sizes. Copyright © 2015 Elsevier Ltd. All rights reserved.
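The detector referenced above is a modification of the Engbert-Mergenthaler algorithm, which flags samples whose eye velocity exceeds a median-based noise threshold. The sketch below implements a simplified detector in that spirit; the sampling rate, threshold multiplier, and simulated trace are assumptions, not the paper's modified detector.

```python
# Sketch: a simplified Engbert-style velocity-threshold (micro)saccade detector,
# in the spirit of the modified Engbert-Mergenthaler detector the paper used.
# Sampling rate, lambda, and the simulated trace are illustrative only.
import numpy as np

def detect_saccades(x, y, fs, lam=6.0, min_samples=3):
    """Flag samples whose 2D eye velocity exceeds a median-based threshold."""
    vx = np.gradient(x) * fs                     # deg/s
    vy = np.gradient(y) * fs
    # Median-based (robust) estimate of velocity noise, per axis.
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    # Keep only runs of super-threshold samples long enough to be saccades.
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_samples:
                events.append((start, i))
            start = None
    return events

fs = 500.0
rng = np.random.default_rng(6)
n = int(fs)                                                 # one second of data
x = np.cumsum(rng.standard_normal(n)) * 0.001               # drift/noise (deg)
y = np.cumsum(rng.standard_normal(n)) * 0.001
x[200:210] += np.linspace(0, 0.4, 10)                       # a 0.4 deg microsaccade ramp
x[210:] += 0.4                                              # displacement persists afterwards
print("detected saccades (sample ranges):", detect_saccades(x, y, fs))
```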
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
Anodal tDCS to V1 blocks visual perceptual learning consolidation.
Peters, Megan A K; Thompson, Benjamin; Merabet, Lotfi B; Wu, Allan D; Shams, Ladan
2013-06-01
This study examined the effects of visual cortex transcranial direct current stimulation (tDCS) on visual processing and learning. Participants performed a contrast detection task on two consecutive days. Each session consisted of a baseline measurement followed by measurements made during active or sham stimulation. On the first day, one group received anodal stimulation to primary visual cortex (V1), while another received cathodal stimulation. Stimulation polarity was reversed for these groups on the second day. The third (control) group of subjects received sham stimulation on both days. No improvements or decrements in contrast sensitivity relative to the same-day baseline were observed during real tDCS, nor was any within-session learning trend observed. However, task performance improved significantly from Day 1 to Day 2 for the participants who received cathodal tDCS on Day 1 and for the sham group. No such improvement was found for the participants who received anodal stimulation on Day 1, indicating that anodal tDCS blocked overnight consolidation of visual learning, perhaps through engagement of inhibitory homeostatic plasticity mechanisms or alteration of the signal-to-noise ratio within stimulated cortex. These results show that applying tDCS to the visual cortex can modify consolidation of visual learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
Capture of visual direction in dynamic vergence is reduced with flashed monocular lines.
Jaschinski, Wolfgang; Jainta, Stephanie; Schürer, Michael
2006-08-01
The visual direction of a continuously presented monocular object is captured by the visual direction of a closely adjacent binocular object, which questions the reliability of nonius lines for measuring vergence. This was shown by Erkelens, C. J., and van Ee, R. (1997a,b) [Capture of the visual direction: An unexpected phenomenon in binocular vision. Vision Research, 37, 1193-1196; Capture of the visual direction of monocular objects by adjacent binocular objects. Vision Research, 37, 1735-1745] stimulating dynamic vergence by a counter phase oscillation of two square random-dot patterns (one to each eye) that contained a smaller central dot-free gap (of variable width) with a vertical monocular line oscillating in phase with the random-dot pattern of the respective eye; subjects adjusted the motion-amplitude of the line until it was perceived as (nearly) stationary. With a continuously presented monocular line, we replicated capture of visual direction provided the dot-free gap was narrow: the adjusted motion-amplitude of the line was similar as the motion-amplitude of the random-dot pattern, although large vergence errors occurred. However, when we flashed the line for 67 ms at the moments of maximal and minimal disparity of the vergence stimulus, we found that the adjusted motion-amplitude of the line was smaller; thus, the capture effect appeared to be reduced with flashed nonius lines. Accordingly, we found that the objectively measured vergence gain was significantly correlated (r=0.8) with the motion-amplitude of the flashed monocular line when the separation between the line and the fusion contour was at least 32 min arc. In conclusion, if one wishes to estimate the dynamic vergence response with psychophysical methods, effects of capture of visual direction can be reduced by using flashed nonius lines.
Redundancy reduction explains the expansion of visual direction space around the cardinal axes.
Perrone, John A; Liston, Dorion B
2015-06-01
Motion direction discrimination in humans is worse for oblique directions than for the cardinal directions (the oblique effect). For some unknown reason, the human visual system makes systematic errors in the estimation of particular motion directions; a direction displacement near a cardinal axis appears larger than it really is whereas the same displacement near an oblique axis appears to be smaller. Although the perceptual effects are robust and are clearly measurable in smooth pursuit eye movements, all attempts to identify the neural underpinnings for the oblique effect have failed. Here we show that a model of image velocity estimation based on the known properties of neurons in primary visual cortex (V1) and the middle temporal (MT) visual area of the primate brain produces the oblique effect. We also provide an explanation for the unusual asymmetric patterns of inhibition that have been found surrounding MT neurons. These patterns are consistent with a mechanism within the visual system that prevents redundant velocity signals from being passed onto the next motion-integration stage, (dorsal Medial superior temporal, MSTd). We show that model redundancy-reduction mechanisms within the MT-MSTd pathway produce the oblique effect. Copyright © 2015 Elsevier Ltd. All rights reserved.
Evidence for discrete landmark use by pigeons during homing.
Mora, Cordula V; Ross, Jeremy D; Gorsevski, Peter V; Chowdhury, Budhaditya; Bingman, Verner P
2012-10-01
Considerable efforts have been made to investigate how homing pigeons (Columba livia f. domestica) are able to return to their loft from distant, unfamiliar sites while the mechanisms underlying navigation in familiar territory have received less attention. With the recent advent of global positioning system (GPS) data loggers small enough to be carried by pigeons, the role of visual environmental features in guiding navigation over familiar areas is beginning to be understood, yet, surprisingly, we still know very little about whether homing pigeons can rely on discrete, visual landmarks to guide navigation. To assess a possible role of discrete, visual landmarks in navigation, homing pigeons were first trained to home from a site with four wind turbines as salient landmarks as well as from a control site without any distinctive, discrete landmark features. The GPS-recorded flight paths of the pigeons on the last training release were straighter and more similar among birds from the turbine site compared with those from the control site. The pigeons were then released from both sites following a clock-shift manipulation. Vanishing bearings from the turbine site continued to be homeward oriented as 13 of 14 pigeons returned home. By contrast, at the control site the vanishing bearings were deflected in the expected clock-shift direction and only 5 of 13 pigeons returned home. Taken together, our results offer the first strong evidence that discrete, visual landmarks are one source of spatial information homing pigeons can utilize to navigate when flying over a familiar area.
Integration of Visual and Joint Information to Enable Linear Reaching Motions
NASA Astrophysics Data System (ADS)
Eberle, Henry; Nasuto, Slawomir J.; Hayashi, Yoshikatsu
2017-01-01
A new dynamics-driven control law was developed for a robot arm, based on a feedback control law that uses a linear transformation directly from work space to joint space. This was validated using a simulation of a two-joint planar robot arm, and an optimisation algorithm was used to find the optimum matrix for generating straight trajectories of the end-effector in the work space. We found that this linear matrix can be decomposed into the rotation matrix representing the orientation of the goal direction and the joint relation matrix (MJRM) representing the joint response to errors in the Cartesian work space. The decomposition of the linear matrix indicates the separation of path planning in terms of the direction of the reaching motion and the synergies of joint coordination. Once the MJRM is numerically obtained, the feedforward planning of reaching direction allows us to provide asymptotically stable, linear trajectories in the entire work space through rotational transformation, completely avoiding the use of inverse kinematics. Our dynamics-driven control law suggests an interesting framework for interpreting human reaching motion control, an alternative to the dominant inverse-model-based explanations, avoiding expensive computation of the inverse kinematics and the point-to-point control along the desired trajectories.
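The abstract does not spell the control law out in code; the sketch below is a minimal kinematic illustration, assuming link lengths, a start/goal pair and a candidate matrix M that are not from the paper. It shows only the structure of the law (joint velocities given by a fixed linear map of the Cartesian error, with no inverse kinematics) and the kind of evaluation an outer optimisation over M, as described in the abstract, could use.

    import numpy as np

    # Sketch of the structure of such a control law (an illustration, not the paper's code):
    # joint velocities are a fixed linear map M of the Cartesian end-effector error, so no
    # inverse kinematics is computed. Whether the trajectory is stable and straight depends
    # entirely on the choice of M, which in the paper is found by an optimisation algorithm;
    # this routine could serve as the inner evaluation of such a search.
    L1, L2 = 1.0, 1.0   # assumed link lengths

    def forward_kinematics(q):
        """End-effector position of a planar two-joint arm for joint angles q."""
        return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                         L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

    def evaluate_control_matrix(M, q0, goal, dt=0.005, steps=4000):
        """Simulate dq/dt = M (goal - x(q)); report final error and path straightness."""
        q = np.array(q0, dtype=float)
        path = [forward_kinematics(q)]
        for _ in range(steps):
            error = goal - forward_kinematics(q)   # error expressed in the work space
            q = q + dt * (M @ error)               # linear map drives the joint velocities
            path.append(forward_kinematics(q))
        path = np.array(path)
        final_error = np.linalg.norm(goal - path[-1])
        # straightness: largest perpendicular deviation from the start-goal chord
        chord = (goal - path[0]) / np.linalg.norm(goal - path[0])
        rel = path - path[0]
        offsets = rel - np.outer(rel @ chord, chord)
        return final_error, np.linalg.norm(offsets, axis=1).max()

    M_candidate = np.array([[1.0, 0.5], [0.0, 1.0]])   # arbitrary candidate matrix
    print(evaluate_control_matrix(M_candidate, q0=[0.3, 0.5], goal=np.array([0.5, 1.2])))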
An independent brain-computer interface using covert non-spatial visual selective attention
NASA Astrophysics Data System (ADS)
Zhang, Dan; Maye, Alexander; Gao, Xiaorong; Hong, Bo; Engel, Andreas K.; Gao, Shangkai
2010-02-01
In this paper, a novel independent brain-computer interface (BCI) system based on covert non-spatial visual selective attention of two superimposed illusory surfaces is described. Perception of two superimposed surfaces was induced by two sets of dots with different colors rotating in opposite directions. The surfaces flickered at different frequencies and elicited distinguishable steady-state visual evoked potentials (SSVEPs) over parietal and occipital areas of the brain. By selectively attending to one of the two surfaces, the SSVEP amplitude at the corresponding frequency was enhanced. An online BCI system utilizing the attentional modulation of SSVEP was implemented and a 3-day online training program with healthy subjects was carried out. The study was conducted with Chinese subjects at Tsinghua University, and German subjects at University Medical Center Hamburg-Eppendorf (UKE) using identical stimulation software and equivalent technical setup. A general improvement of control accuracy with training was observed in 8 out of 18 subjects. An averaged online classification accuracy of 72.6 ± 16.1% was achieved on the last training day. The system renders SSVEP-based BCI paradigms possible for paralyzed patients with substantial head or ocular motor impairments by employing covert attention shifts instead of changing gaze direction.
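The online classifier is not specified in the abstract; a common decision rule for SSVEP-based BCIs is to compare spectral power at the two flicker frequencies, as in the hedged sketch below. The channel choice, the frequencies (12 and 15 Hz) and the plain-FFT estimator are illustrative assumptions, not the study's actual pipeline.

    import numpy as np

    def ssvep_classify(eeg, fs, f1, f2, harmonics=2):
        """Pick the attended surface by comparing spectral power at the two flicker
        frequencies (plus harmonics), a common SSVEP decision rule.
        eeg: 1-D array, e.g. one occipital channel or a channel average."""
        spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

        def band_power(f0):
            total = 0.0
            for h in range(1, harmonics + 1):
                idx = np.argmin(np.abs(freqs - h * f0))   # nearest FFT bin
                total += spectrum[idx]
            return total

        return 1 if band_power(f1) > band_power(f2) else 2

    # toy usage with simulated data: a 12 Hz "attended" SSVEP plus noise
    fs, t = 250, np.arange(0, 4, 1 / 250)
    eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)
    print("decoded surface:", ssvep_classify(eeg, fs, f1=12.0, f2=15.0))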
Interfacial instability of wormlike micellar solutions sheared in a Taylor-Couette cell
NASA Astrophysics Data System (ADS)
Mohammadigoushki, Hadi; Muller, Susan J.
2014-11-01
We report experiments on wormlike micellar solutions sheared in a custom-made Taylor-Couette (TC) cell. The computer-controlled TC cell allows us to rotate both cylinders independently. Wormlike micellar solutions containing water, CTAB, and NaNO3 with different compositions are highly elastic and exhibit shear banding. We visualized the flow field in the θ-z as well as r-z planes, using multiple cameras. When subject to low shear rates, the flow is stable and azimuthal, but it becomes unstable above a certain threshold shear rate. This shear rate coincides with the onset of shear banding. Visualizing the θ-z plane shows that this instability is characterized by stationary bands equally spaced in the z direction. Increasing the shear rate results in larger wavelengths. Above a critical shear rate, experiments reveal a chaotic behavior reminiscent of elastic turbulence. We also studied the effect of ramp speed on the onset of instability and report an acceleration below which the critical Weissenberg number for the onset of instability is unaffected. Moreover, visualizations in the r-z direction reveal that the interface between the two bands undulates, with the shear bands evolving towards the outer cylinder regardless of which cylinder is rotating.
Pitch perception deficits in nonverbal learning disability.
Fernández-Prieto, I; Caprile, C; Tinoco-González, D; Ristol-Orriols, B; López-Sala, A; Póo-Argüelles, P; Pons, F; Navarra, J
2016-12-01
The nonverbal learning disability (NLD) is a neurological dysfunction that affects cognitive functions predominantly related to the right hemisphere such as spatial and abstract reasoning. Previous evidence in healthy adults suggests that acoustic pitch (i.e., the relative difference in frequency between sounds) is, under certain conditions, encoded in specific areas of the right hemisphere that also encode the spatial elevation of external objects (e.g., high vs. low position). Taking this evidence into account, we explored the perception of pitch in preadolescents and adolescents with NLD and in a group of healthy participants matched by age, gender, musical knowledge and handedness. Participants performed four speeded tests: a stimulus detection test and three perceptual categorization tests based on colour, spatial position and pitch. Results revealed that both groups were equally fast at detecting visual targets and categorizing visual stimuli according to their colour. In contrast, the NLD group showed slower responses than the control group when categorizing space (direction of a visual object) and pitch (direction of a change in sound frequency). This pattern of results suggests the presence of a subtle deficit at judging pitch in NLD along with the traditionally-described difficulties in spatial processing. Copyright © 2016. Published by Elsevier Ltd.
Adaptation and visual salience
McDermott, Kyle C.; Malkoc, Gokhan; Mulligan, Jeffrey B.; Webster, Michael A.
2011-01-01
We examined how the salience of color is affected by adaptation to different color distributions. Observers searched for a color target on a dense background of distractors varying along different directions in color space. Prior adaptation to the backgrounds enhanced search on the same background while adaptation to orthogonal background directions slowed detection. Advantages of adaptation were seen for both contrast adaptation (to different color axes) and chromatic adaptation (to different mean chromaticities). Control experiments, including analyses of eye movements during the search, suggest that these aftereffects are unlikely to reflect simple learning or changes in search strategies on familiar backgrounds, and instead result from how adaptation alters the relative salience of the target and background colors. Comparable effects were observed along different axes in the chromatic plane or for axes defined by different combinations of luminance and chromatic contrast, consistent with visual search and adaptation mediated by multiple color mechanisms. Similar effects also occurred for color distributions characteristic of natural environments with strongly selective color gamuts. Our results are consistent with the hypothesis that adaptation may play an important functional role in highlighting the salience of novel stimuli by discounting ambient properties of the visual environment. PMID:21106682
Audiovisual associations alter the perception of low-level visual motion
Kafaligonul, Hulusi; Oluk, Can
2015-01-01
Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random-dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing has some potential role. PMID:25873869
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they get transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
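hybridMANTIS itself is not reproduced here; as a hedged illustration of the kind of output these interfaces display, the toy sketch below accumulates a pulse-height spectrum from the number of optical photons detected per absorbed x-ray event. The energy range, light yield and collection efficiency are made-up assumptions.

    import numpy as np
    import matplotlib
    matplotlib.use("Agg")
    import matplotlib.pyplot as plt

    # Toy sketch (not hybridMANTIS): build a pulse-height spectrum from the number of
    # optical photons detected per absorbed x-ray event; all numbers are assumptions.
    rng = np.random.default_rng(0)
    n_events = 10_000
    deposited_keV = rng.uniform(20, 60, n_events)   # assumed energy deposited per event
    photons_per_keV = 25                            # assumed scintillator light yield
    collection_eff = 0.6                            # assumed optical collection efficiency
    detected = rng.poisson(deposited_keV * photons_per_keV * collection_eff)

    plt.hist(detected, bins=100)
    plt.xlabel("detected optical photons per event (pulse height)")
    plt.ylabel("counts")
    plt.savefig("pulse_height_spectrum.png")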
Still holding after all these years: An action-perception dissociation in patient DF.
Ganel, Tzvi; Goodale, Melvyn A
2017-09-23
Patient DF, who has bilateral damage in the ventral visual stream, is perhaps the best known individual with visual form agnosia in the world, and has been the focus of scores of research papers over the past twenty-five years. The remarkable dissociation she exhibits between a profound deficit in perceptual report and a preserved ability to generate relatively normal visuomotor behaviour was early on a cornerstone in Goodale and Milner's (1992) two visual systems hypothesis. In recent years, however, there has been a greater emphasis on the damage that is evident in the posterior regions of her parietal cortex in both hemispheres. Deficits in several aspects of visuomotor control in the visual periphery have been demonstrated, leading some researchers to conclude that the double dissociation between vision-for-perception and vision-for-action in DF and patients with classic optic ataxia can no longer be assumed to be strong evidence for the division of labour between the dorsal and ventral streams of visual processing. In this short review, we argue that this is not the case. Indeed, after evaluating DF's performance and the location of her brain lesions, a clear picture of a double dissociation between DF and patients with optic ataxia is revealed. More than quarter of a century after the initial presentation of DF's unique case, she continues to provide compelling evidence for the idea that the ventral stream is critical for the perception of the shape and orientation of objects but not the visual control of skilled actions directed at those objects. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ikeda, Akitsu; Miyamoto, Jun J; Usui, Nobuo; Taira, Masato; Moriyama, Keiji
2018-01-01
Based on the theory of incentive sensitization, the exposure to food stimuli sensitizes the brain's reward circuits and enhances attentional bias toward food. Therefore, reducing attentional bias to food could possibly be beneficial in preventing impulsive eating. The importance of chewing has been increasingly implicated as one of the methods for reducing appetite; however, no studies have investigated the effect of chewing on attentional bias to food. In this study, we investigated whether chewing stimulation (i.e., chewing tasteless gum) reduces attentional bias to food as well as an actual feeding (i.e., ingesting a standardized meal) does. We measured reaction time, gaze direction and gaze duration to assess attentional bias toward food images in pairs of food and non-food images that were presented in a visual probe task (Experiment 1, n = 21) and/or eye-tracking task (Experiment 2, n = 20). We also measured appetite ratings using a visual analog scale. In addition, we conducted a control study in which the same number of participants performed tasks identical to Experiments 1 and 2, but the participants did not perform sham feeding with gum-chewing/actual feeding between tasks and instead took a rest. Two-way ANOVA revealed that after actual feeding, subjective ratings of hunger, preoccupation with food, and desire to eat significantly decreased, whereas fullness significantly increased. Sham feeding showed the same trends, but to a lesser degree. Results of the visual probe task in Experiment 1 showed that both sham feeding and actual feeding reduced reaction time bias significantly. Eye-tracking data showed that both sham and actual feeding resulted in significant reduction in gaze direction bias, indexing initial attentional orientation. Gaze duration bias was unaffected. In both control experiments, one-way ANOVAs showed no significant differences between immediately before and after the resting state for any of the appetite ratings, reaction time bias, gaze direction bias, or gaze duration bias. In conclusion, chewing stimulation reduced subjective appetite and attentional bias to food, particularly initial attentional orientation to food. These findings suggest that chewing stimulation, even without taste, odor, or ingestion, may affect reward circuits and help prevent impulsive eating.
Voluntarily controlled but not merely observed visual feedback affects postural sway
Asai, Tomohisa; Hiromitsu, Kentaro; Imamizu, Hiroshi
2018-01-01
Online stabilization of human standing posture utilizes multisensory afferences (e.g., vision). Whereas visual feedback of spontaneous postural sway can stabilize postural control especially when observers concentrate on their body and intend to minimize postural sway, the effect of intentional control of visual feedback on postural sway itself remains unclear. This study assessed quiet standing posture in healthy adults voluntarily controlling or merely observing visual feedback. The visual feedback (moving square) had either low or high gain and was either horizontally flipped or not. Participants in the voluntary-control group were instructed to minimize their postural sway while voluntarily controlling visual feedback, whereas those in the observation group were instructed to minimize their postural sway while merely observing visual feedback. As a result, magnified and flipped visual feedback increased postural sway only in the voluntary-control group. Furthermore, regardless of the instructions and feedback manipulations, the experienced sense of control over visual feedback positively correlated with the magnitude of postural sway. We suggest that voluntarily controlled, but not merely observed, visual feedback is incorporated into the feedback control system for posture and begins to affect postural sway. PMID:29682421
Matsui, Teppei; Ohki, Kenichi
2013-01-01
Higher order visual areas that receive input from the primary visual cortex (V1) are specialized for the processing of distinct features of visual information. However, it is still incompletely understood how this functional specialization is acquired. Here we used in vivo two-photon calcium imaging in the mouse visual cortex to investigate whether this functional distinction exists as early as the level of projections from V1 to two higher order visual areas, AL and LM. Specifically, we examined whether the sharpness of orientation and direction selectivity and the optimal spatial and temporal frequency of projection neurons from V1 to higher order visual areas match those of the target areas. We found that the V1 inputs to higher order visual areas were indeed functionally distinct: AL preferentially received inputs from V1 that were more orientation and direction selective and tuned for lower spatial frequency compared to projections of V1 to LM, consistent with functional differences between AL and LM. The present findings suggest that selective projections from V1 to higher order visual areas initiate parallel processing of sensory information in the visual cortical network. PMID:24068987
Molloy, Carly S; Wilson-Ching, Michelle; Doyle, Lex W; Anderson, Vicki A; Anderson, Peter J
2014-04-01
Contemporary data on visual memory and learning in survivors born extremely preterm (EP; <28 weeks gestation) or with extremely low birth weight (ELBW; <1,000 g) are lacking. Geographically determined cohort study of 298 consecutive EP/ELBW survivors born in 1991 and 1992, and 262 randomly selected normal-birth-weight controls. Visual learning and memory data were available for 221 (74.2%) EP/ELBW subjects and 159 (60.7%) controls. EP/ELBW adolescents exhibited significantly poorer performance across visual memory and learning variables compared with controls. Visual learning and delayed visual memory were particularly problematic and remained so after controlling for visual-motor integration and visual perception and excluding adolescents with neurosensory disability, and/or IQ <70. Male EP/ELBW adolescents or those treated with corticosteroids had poorer outcomes. EP/ELBW adolescents have poorer visual memory and learning outcomes compared with controls, which cannot be entirely explained by poor visual perceptual or visual constructional skills or intellectual impairment.
Bexander, Catharina S M; Hodges, Paul W
2012-03-01
People with whiplash-associated disorders (WAD) not only suffer from neck/head pain, but commonly report deficits in eye movement control. Recent work has highlighted a strong relationship between eye and neck muscle activation in pain-free subjects. It is possible that WAD may disrupt the intricate coordination between eye and neck movement. Electromyographic activity (EMG) of muscles that rotate the cervical spine to the right (left sternocleidomastoid, right obliquus capitis inferior (OI), right splenius capitis (SC) and right multifidus (MF)) was recorded in nine people with chronic WAD. Cervical rotation was performed with five gaze conditions involving different gaze directions relative to cervical rotation. The relationship between eye position/movement and neck muscle activity was contrasted with previous observations from pain-free controls. Three main differences were observed in WAD. First, the superficial muscle SC was active with both directions of cervical rotation in contrast to activity only with right rotation in pain-free controls. Second, activity of OI and MF varied between directions of cervical rotation, unlike the non-direction-specific activity in controls. Third, the effect of horizontal gaze direction on neck muscle EMG was augmented compared to controls. These observations provide evidence of redistribution of activity between neck muscles during cervical rotation and increased interaction between eye and neck muscle activity in people with WAD. These changes in cervico-ocular coordination may underlie clinical symptoms reported by people with WAD that involve visual deficits and changes in function during cervical rotation such as postural control.
Funk, Agnes P; Rosa, Marcello G P
1998-01-01
The first (V1) and second (V2) cortical visual areas exist in all mammals. However, the functional relationship between these areas varies between species. While in monkeys the responses of V2 cells depend on inputs from V1, in all non-primates studied so far V2 cells largely retain responsiveness to photic stimuli after destruction of V1. We studied the visual responsiveness of neurones in V2 of flying foxes after total or partial lesions of the primary visual cortex (V1). The main finding was that visual responses can be evoked in the region of V2 corresponding, in visuotopic co-ordinates, to the lesioned portion of V1 (‘lesion projection zone’; LPZ). The visuotopic organization of V2 was not altered by V1 lesions. The proportion of neurones with strong visual responses was significantly lower within the LPZs (31.5 %) than outside these zones, or in non-lesioned control hemispheres (> 70 %). LPZ cells showed weak direction and orientation bias, and responded consistently only at low spatial and temporal frequencies. The data demonstrate that the functional relationship between V1 and V2 of flying foxes resembles that observed in non-primate mammals. This observation contrasts with the ‘primate-like’ characteristics of the flying fox visual system reported by previous studies. PMID:9806999
Wang, Xi-fen; Zhou, Huai-chun
2005-01-01
The control of the 3-D temperature distribution in a utility boiler furnace is essential for the safe, economic and clean operation of a pc-fired furnace with a multi-burner system. The development of the visualization of 3-D temperature distributions in pc-fired furnaces makes it possible to adopt a new combustion control strategy that takes the furnace temperature directly as its goal, improving the control quality of the combustion processes. Studied in this paper is such a strategy: the whole furnace is divided into several parts in the vertical direction, and the average temperature and its bias from the center in every cross section can be extracted from the visualization results of the 3-D temperature distributions. In the simulation stage, a computational fluid dynamics (CFD) code served to calculate the 3-D temperature distributions in a furnace, and then a linear model was set up to relate the features of the temperature distributions to the inputs of the combustion processes, such as the flow rates of fuel and air fed into the furnace through all the burners. An adaptive genetic algorithm was adopted to find the optimal combination of input parameters that forms the 3-D temperature field in the furnace desired for boiler operation. Simulation results showed that the strategy could quickly identify the factors driving the temperature distribution away from the optimal state and give correct adjustment suggestions.
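Neither the linear model nor the adaptive genetic algorithm is given in the abstract; the sketch below is a toy version under stated assumptions: a made-up linear map from burner fuel/air flows to a few temperature-distribution features, and a plain (non-adaptive) genetic algorithm searching for flows that minimize the deviation from target features.

    import numpy as np

    rng = np.random.default_rng(1)
    n_inputs = 8                                   # assumed: fuel/air flows for 8 burners
    A = rng.normal(size=(4, n_inputs)) * 0.1       # assumed linear model: flows -> 4 features
    target = np.array([1200.0, 1150.0, 0.0, 0.0])  # assumed target temperatures / zero biases

    def fitness(u):
        return -np.sum((A @ u - target) ** 2)      # closer to target = higher fitness

    def genetic_search(pop_size=60, generations=200, bounds=(0.0, 5000.0), mut_sigma=50.0):
        lo, hi = bounds
        pop = rng.uniform(lo, hi, size=(pop_size, n_inputs))
        for _ in range(generations):
            scores = np.array([fitness(u) for u in pop])
            parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # keep the better half
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                mask = rng.random(n_inputs) < 0.5                      # uniform crossover
                child = np.where(mask, a, b) + rng.normal(0, mut_sigma, n_inputs)
                children.append(np.clip(child, lo, hi))
            pop = np.vstack([parents, children])
        return pop[np.argmax([fitness(u) for u in pop])]

    best_flows = genetic_search()
    print("best flows:", np.round(best_flows, 1))
    print("predicted features:", np.round(A @ best_flows, 1))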
Illusory motion reversal is caused by rivalry, not by perceptual snapshots of the visual field.
Kline, Keith; Holcombe, Alex O; Eagleman, David M
2004-10-01
In stroboscopic conditions--such as motion pictures--rotating objects may appear to rotate in the reverse direction due to under-sampling (aliasing). A seemingly similar phenomenon occurs in constant sunlight, which has been taken as evidence that the visual system processes discrete "snapshots" of the outside world. But if snapshots are indeed taken of the visual field, then when a rotating drum appears to transiently reverse direction, its mirror image should always appear to reverse direction simultaneously. Contrary to this hypothesis, we found that when observers watched a rotating drum and its mirror image, almost all illusory motion reversals occurred for only one image at a time. This result indicates that the motion reversal illusion cannot be explained by snapshots of the visual field. The same result is found when the two images are presented within one visual hemifield, further ruling out the possibility that discrete sampling of the visual field occurs separately in each hemisphere. The frequency distribution of illusory reversal durations approximates a gamma distribution, suggesting perceptual rivalry as a better explanation for illusory motion reversal. After adaptation of motion detectors coding for the correct direction, the activity of motion-sensitive neurons coding for motion in the reverse direction may intermittently become dominant and drive the perception of motion.
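A hedged sketch of the kind of distributional check mentioned above: fit a gamma distribution to reversal durations and test the fit. The durations here are simulated, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    durations = rng.gamma(shape=2.5, scale=1.2, size=300)  # simulated reversal durations (s)

    # Fit a gamma distribution with the location fixed at zero, as is typical for durations,
    # then check the fit with a Kolmogorov-Smirnov test.
    shape, loc, scale = stats.gamma.fit(durations, floc=0)
    ks = stats.kstest(durations, "gamma", args=(shape, loc, scale))
    print(f"fitted shape={shape:.2f}, scale={scale:.2f}, KS p={ks.pvalue:.3f}")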
Selective visual attention for ugly and beautiful body parts in eating disorders.
Jansen, Anita; Nederkoorn, Chantal; Mulkens, Sandra
2005-02-01
Body image disturbance is characteristic of eating disorders, and current treatments use body exposure to reduce bad body feelings. There is, however, little known about the cognitive effects of body exposure. In the present study, eye movement registration (electrooculography) as a direct index of selective visual attention was used while eating-symptomatic and normal control participants were exposed to digitalized pictures of their own body and control bodies. The data showed a decreased focus on their own 'beautiful' body parts in the highly symptomatic participants, whereas inspection of their own 'ugly' body parts was given priority. In the normal control group a self-serving cognitive bias was found: they focused more on their own 'beautiful' body parts and less on their own 'ugly' body parts. When viewing other bodies the pattern was reversed: high-symptom participants allocated their attention to the beautiful parts of other bodies, whereas normal controls concentrated on the ugly parts of the other bodies. The present findings suggest that a change in information processing might be needed for body exposure to be successful.
Lateralization in Alpha-Band Oscillations Predicts the Locus and Spatial Distribution of Attention
Ikkai, Akiko; Dandekar, Sangita; Curtis, Clayton E.
2016-01-01
Attending to a task-relevant location changes how neural activity oscillates in the alpha band (8–13 Hz) in posterior visual cortical areas. However, the relationships between top-down attention, changes in alpha oscillations in visual cortex, and attention performance are still poorly understood. Here, we tested the degree to which posterior alpha power tracked the locus of attention and the distribution of attention, and how well the topography of alpha could predict the locus of attention. We recorded magnetoencephalographic (MEG) data while subjects performed an attention-demanding visual discrimination task that dissociated the direction of attention from the direction of a saccade used to indicate choice. On some trials, an endogenous cue predicted the target’s location, while on others it contained no spatial information. When the target’s location was cued, alpha power decreased in sensors over occipital cortex contralateral to the attended visual field. When the cue did not predict the target’s location, alpha power again decreased in sensors over occipital cortex, but bilaterally, and increased in sensors over frontal cortex. Thus, the distribution and the topography of alpha reliably indicated the locus of covert attention. Together, these results suggest that alpha synchronization reflects changes in the excitability of populations of neurons whose receptive fields match the locus of attention. This is consistent with the hypothesis that alpha oscillations reflect the neural mechanisms by which top-down control of attention biases information processing and modulates the activity of neurons in visual cortex. PMID:27144717
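The MEG analysis pipeline is not described in detail in the abstract; the sketch below shows one conventional way to quantify posterior alpha lateralization, assuming hypothetical sensor groupings and random data standing in for recordings.

    import numpy as np
    from scipy.signal import welch

    def alpha_power(x, fs, band=(8.0, 13.0)):
        """Mean power spectral density in the alpha band for one sensor."""
        f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
        sel = (f >= band[0]) & (f <= band[1])
        return pxx[sel].mean()

    def lateralization_index(data, fs, left_idx, right_idx):
        """(right - left) / (right + left) alpha power over posterior sensors.
        data: array (n_sensors, n_samples); left_idx / right_idx are assumed channel groups."""
        left = np.mean([alpha_power(data[i], fs) for i in left_idx])
        right = np.mean([alpha_power(data[i], fs) for i in right_idx])
        return (right - left) / (right + left)

    # toy usage with random data standing in for MEG sensors
    fs = 600
    data = np.random.randn(10, 10 * fs)
    print(lateralization_index(data, fs, left_idx=[0, 1, 2], right_idx=[3, 4, 5]))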
Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction
Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta
2018-01-01
The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739
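PredNet training is not reproduced here; as a hedged sketch of how the direction of rotation could be read out from two successive (predicted) frames, the code below estimates dense optical flow with OpenCV and takes the sign of the mean curl about the image centre. The synthetic frames and the mapping of the sign to a particular rotation direction are illustrative assumptions.

    import cv2
    import numpy as np

    def rotation_sign(frame_prev, frame_next):
        """Return a sign that distinguishes the two rotation directions between two
        grayscale frames (sketch only; the sign convention depends on the image
        y-axis pointing downwards)."""
        flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = frame_prev.shape
        ys, xs = np.mgrid[0:h, 0:w]
        rx, ry = xs - w / 2.0, ys - h / 2.0        # vectors from the image centre
        curl = rx * flow[..., 1] - ry * flow[..., 0]   # z-component of r x v
        return np.sign(curl.mean())

    # toy usage: a textured pattern rotated by 5 degrees about the centre
    rng = np.random.default_rng(0)
    img = cv2.GaussianBlur((rng.random((128, 128)) * 255).astype(np.uint8), (9, 9), 0)
    M = cv2.getRotationMatrix2D((64, 64), 5, 1.0)
    rot = cv2.warpAffine(img, M, (128, 128))
    print(rotation_sign(img, rot))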
Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E
2010-05-01
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
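The normative prediction these studies test is the standard reliability-weighted (maximum-likelihood) combination rule; below is a minimal sketch with assumed single-cue values, not data from any particular experiment.

    import numpy as np

    def optimal_combination(mu_vis, sigma_vis, mu_vest, sigma_vest):
        """Standard maximum-likelihood cue integration: inverse-variance weighting."""
        w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
        w_vest = 1.0 - w_vis
        mu_comb = w_vis * mu_vis + w_vest * mu_vest
        sigma_comb = np.sqrt((sigma_vis**2 * sigma_vest**2) /
                             (sigma_vis**2 + sigma_vest**2))
        return mu_comb, sigma_comb

    # example: assumed heading estimates (deg) and thresholds for each cue alone
    mu, sigma = optimal_combination(mu_vis=2.0, sigma_vis=3.0, mu_vest=0.0, sigma_vest=4.0)
    print(f"combined estimate {mu:.2f} deg, predicted threshold {sigma:.2f} deg "
          "(lower than either single-cue threshold)")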
How visual cues for when to listen aid selective auditory attention.
Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G
2012-06-01
Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.
NASA Astrophysics Data System (ADS)
Vinci, Matteo; Lipizer, Marina; Giorgetti, Alessandra
2016-04-01
The European Marine Observation and Data Network (EMODnet) initiative has the following purposes: to assemble marine metadata, data and products, to make these fragmented resources more easily available to public and private users, and to provide quality-assured, standardised and harmonised marine data. EMODnet Chemistry was launched by DG MARE in 2009 to support the Marine Strategy Framework Directive (MSFD) requirements for the assessment of eutrophication and contaminants, following INSPIRE Directive rules. The aim is twofold: the first task is to make available and reusable the large amount of fragmented and inaccessible data hosted by European research institutes and environmental agencies. The second objective is to develop visualization services useful for the tasks of the MSFD. The technical set-up is based on the principle of adopting and adapting the SeaDataNet infrastructure for ocean and marine data, which are managed by National Oceanographic Data Centers, and relies on a distributed network of data centers. Data centers contribute to data harvesting and enrichment with the relevant metadata. Data are processed into interoperable formats (using agreed standards ISO XML, ODV) with the use of common vocabularies and standardized quality control procedures. Data quality control is a key issue when merging heterogeneous data coming from different sources, and a data validation loop has been agreed within the EMODnet Chemistry community and is routinely performed. After data quality control done by the regional coordinators of the EU marine basins (Atlantic, Baltic, North, Mediterranean and Black Sea), validated regional datasets are used to develop data products useful for the requirements of the MSFD. EMODnet Chemistry provides interpolated seasonal maps of nutrients and services for the visualization of time series and profiles of several chemical parameters. All visualization services are developed following OGC standards such as WMS and WPS. In order to test new strategies for data storage and reanalysis and to upgrade the infrastructure performance, EMODnet Chemistry has chosen the Cloud environment offered by Cineca (the Consortium of Italian Universities and research institutes), where both regional aggregated datasets and analysis and visualization services are hosted. Finally, besides the delivery of data and the visualization products, the results of the data harvesting provide a useful tool to identify data gaps where future monitoring efforts should be focused.
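The actual EMODnet Chemistry endpoints and layer names are not listed in the abstract; the sketch below only illustrates what a standard OGC WMS 1.3.0 GetMap request for an interpolated map layer looks like, with a placeholder URL and layer name.

    import requests

    # Sketch of a standard OGC WMS 1.3.0 GetMap request. The endpoint and layer name
    # below are placeholders, not the actual EMODnet Chemistry service, so this call
    # will fail until a real WMS endpoint is substituted.
    WMS_BASE = "https://example.org/emodnet-chemistry/wms"   # hypothetical URL
    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": "nitrate_winter_surface",                  # hypothetical layer name
        "crs": "EPSG:4326",
        "bbox": "30,-10,46,36",   # minLat,minLon,maxLat,maxLon (EPSG:4326 axis order in WMS 1.3.0)
        "width": "800",
        "height": "400",
        "format": "image/png",
    }
    response = requests.get(WMS_BASE, params=params, timeout=30)
    with open("nitrate_map.png", "wb") as f:
        f.write(response.content)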
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Impairment in Emotional Modulation of Attention and Memory in Schizophrenia
Walsh-Messinger, Julie; Ramirez, Paul Michael; Wong, Philip; Antonius, Daniel; Aujero, Nicole; McMahon, Kevin; Opler, Lewis A.; Malaspina, Dolores
2014-01-01
Emotion plays a critical role in cognition and goal-directed behavior via complex interconnections between the emotional and motivational systems. It has been hypothesized that the impairment in goal-directed behavior widely noted in schizophrenia may result from defects in the interaction between the neural (ventral) emotional system and (rostral) cortical processes. The present study examined the impact of emotion on attention and memory in schizophrenia. Twenty-five individuals with schizophrenia-related psychosis and 25 healthy control subjects were administered a computerized task in which they were asked to search for target images during a rapid serial visual presentation of pictures. Target stimuli were either positive, negative, or neutral images presented at either 200 ms or 700 ms lag. Additionally, a visual hedonics task was used to assess differences between the schizophrenia group and controls on ratings of valence and arousal from the picture stimuli. Compared to controls, individuals with schizophrenia detected fewer emotional images under both the 200 ms and 700 ms lag conditions. Multivariate analyses showed that the schizophrenia group also detected fewer positive images under the 700 ms lag condition and fewer negative images under the 200 ms lag condition. Individuals with schizophrenia reported higher pleasantness and unpleasantness ratings than controls in response to neutral stimuli, while controls reported higher arousal ratings for neutral and positive stimuli compared to the schizophrenia group. These results highlight dysfunction in the neural modulation of emotion, attention, and cortical processing in schizophrenia, adding to the growing but mixed body of literature on emotion processing in the disorder. PMID:24910446
Norton, Christine; Emmanuel, Anton; Stevens, Natasha; Scott, S Mark; Grossi, Ugo; Bannister, Sybil; Eldridge, Sandra; Mason, James M; Knowles, Charles H
2017-03-24
Constipation affects up to 20% of adults. Chronic constipation (CC) affects 1-2% of adults. Patient dissatisfaction is high; nearly 80% feel that laxative therapy is unsatisfactory, and symptoms have a significant impact on quality of life. There is uncertainty about the value of specialist investigations and whether equipment-intensive therapies using biofeedback confer additional benefit when compared with specialist conservative advice. This is a three-arm, parallel-group, multicentre randomised controlled trial. Its objectives are to determine whether standardised specialist-led habit training plus pelvic floor retraining using computerised biofeedback is more clinically effective than standardised specialist-led habit training alone, and to determine whether outcomes are improved by stratification based on prior investigation of anorectal and colonic pathophysiology. The primary outcome measure is response to treatment, defined as a 0.4-point (10% of scale) or greater reduction in Patient Assessment of Constipation-Quality of Life (PAC-QOL) score 6 months after the end of treatment. Other outcomes up to 12 months include symptoms, quality of life, health economics, psychological health and qualitative experience. The hypotheses are that: (1) habit training (HT) with computer-assisted direct visual biofeedback (HTBF) results in an average reduction in PAC-QOL score of 0.4 points at 6 months compared to HT alone in unselected adults with CC; (2) stratification to either HT or HTBF informed by pathophysiological investigation (INVEST) results in an average 0.4-point reduction in PAC-QOL score at 6 months compared with treatment not directed by investigations (No-INVEST). Inclusion criteria: chronic constipation in adults (aged 18-70 years) defined by self-reported symptom duration of more than 6 months, and failure of previous laxatives or prokinetics and diet and lifestyle modifications. Consenting participants (n = 394) will be randomised to one of three arms in an allocation ratio of 3:3:2: [1] habit training, [2] habit training and biofeedback, or [3] investigation-led allocation to one of these arms. Analysis will be on an intention-to-treat basis. This trial has the potential to answer some of the major outstanding questions in the management of chronic constipation, including whether costly invasive tests are warranted and whether computer-assisted direct visual biofeedback confers additional benefit to well-managed specialist advice alone. International Standard Randomised Controlled Trial Number: ISRCTN11791740. Registered on 16 July 2015.
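The trial's actual randomisation procedure is not described here; the sketch below is a generic permuted-block allocation in the stated 3:3:2 ratio, with an assumed block size of 8 and an arbitrary seed.

    import random

    def allocation_sequence(n_participants, seed=42):
        """Permuted-block randomisation in a 3:3:2 ratio (block size 8 is an assumption).
        Arm labels follow the trial's three arms."""
        rng = random.Random(seed)
        block = ["HT"] * 3 + ["HTBF"] * 3 + ["INVEST"] * 2
        sequence = []
        while len(sequence) < n_participants:
            shuffled = block[:]
            rng.shuffle(shuffled)
            sequence.extend(shuffled)
        return sequence[:n_participants]

    seq = allocation_sequence(394)
    print({arm: seq.count(arm) for arm in ("HT", "HTBF", "INVEST")})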
Control of a 2 DoF robot using a brain-machine interface.
Hortal, Enrique; Ubeda, Andrés; Iáñez, Eduardo; Azorín, José M
2014-09-01
In this paper, a non-invasive spontaneous Brain-Machine Interface (BMI) is used to control the movement of a planar robot. To that end, two mental tasks are used to manage the visual interface that controls the robot. The robot used is a PupArm, a force-controlled planar robot designed by the nBio research group at the Miguel Hernández University of Elche (Spain). Two control strategies are compared: hierarchical and directional control. The experimental test (performed by four users) consists of reaching four targets. The errors and time taken during the performance of the tests are compared for both control strategies (hierarchical and directional control). The advantages and disadvantages of each method are shown after the analysis of the results. The hierarchical control allows an accurate approach to the goals but is slower than the directional control which, on the contrary, is less precise. The results show that both strategies are useful to control this planar robot. In the future, by adding an extra device like a gripper, this BMI could be used in assistive applications such as grasping daily objects in a realistic environment. In order to compare the behavior of the system taking into account the opinion of the users, a NASA Task Load Index (TLX) questionnaire is filled out after the two sessions are completed. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Flight directions of passerine migrants in daylight and darkness: A radar and direct visual study
NASA Technical Reports Server (NTRS)
Gauthreaux, S. A., Jr.
1972-01-01
The application of radar and visual techniques to determine the migratory habits of passerine birds during daylight and darkness is discussed. The effects of wind on the direction of migration are examined. Scatter diagrams of daytime and nocturnal migration track directions correlated with wind direction are presented. It is concluded that migratory birds will fly at altitudes where wind direction and migratory direction are nearly the same. The effects of cloud cover and solar obscuration are considered negligible.
Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye.
Madan, Christopher R; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B; Sommer, Tobias
2017-01-01
Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term 'visual complexity.' Visual complexity can be described as, "a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components." Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an 'arousal-complexity bias' to be a robust phenomenon. Moreover, we found this bias could be attenuated when explicitly indicated but did not correlate with inter-individual difference measures of affective processing, and was largely unrelated to cognitive and eyetracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing as it has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also of relevance from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli.
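The computational complexity measures used in the study are not listed in the abstract; the sketch below computes two common proxies, edge density and losslessly compressed size, purely as an illustration of what such measures can look like.

    import zlib
    import numpy as np

    def edge_density(gray, threshold=0.1):
        """Fraction of pixels with strong intensity gradients (a common complexity proxy).
        gray: 2-D array scaled to [0, 1]."""
        gy, gx = np.gradient(gray)
        magnitude = np.hypot(gx, gy)
        return float((magnitude > threshold).mean())

    def compressed_size(gray):
        """Bytes after lossless compression: images with more detail compress less."""
        return len(zlib.compress((gray * 255).astype(np.uint8).tobytes()))

    # toy comparison: a flat image vs. a noisy (visually busier) one
    flat = np.full((256, 256), 0.5)
    busy = np.random.default_rng(0).random((256, 256))
    print(edge_density(flat), edge_density(busy))
    print(compressed_size(flat), compressed_size(busy))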
Representational Account of Memory: Insights from Aging and Synesthesia.
Pfeifer, Gaby; Ward, Jamie; Chan, Dennis; Sigala, Natasha
2016-12-01
The representational account of memory envisages perception and memory to be on a continuum rather than in discretely divided brain systems [Bussey, T. J., & Saksida, L. M. Memory, perception, and the ventral visual-perirhinal-hippocampal stream: Thinking outside of the boxes. Hippocampus, 17, 898-908, 2007]. We tested this account using a novel between-group design with young grapheme-color synesthetes, older adults, and young controls. We investigated how the disparate sensory-perceptual abilities between these groups translated into associative memory performance for visual stimuli that do not induce synesthesia. ROI analyses of the entire ventral visual stream showed that associative retrieval (a pair-associate retrieved in the absence of a visual stimulus) yielded enhanced activity in young and older adults' visual regions relative to synesthetes, whereas associative recognition (deciding whether a visual stimulus was the correct pair-associate) was characterized by enhanced activity in synesthetes' visual regions relative to older adults. Whole-brain analyses at associative retrieval revealed an effect of age in early visual cortex, with older adults showing enhanced activity relative to synesthetes and young adults. At associative recognition, the group effect was reversed: Synesthetes showed significantly enhanced activity relative to young and older adults in early visual regions. The inverted group effects observed between retrieval and recognition indicate that reduced sensitivity in visual cortex (as in aging) comes with increased activity during top-down retrieval and decreased activity during bottom-up recognition, whereas enhanced sensitivity (as in synesthesia) shows the opposite pattern. Our results provide novel evidence for the direct contribution of perceptual mechanisms to visual associative memory based on the examples of synesthesia and aging.
Visual Complexity and Affect: Ratings Reflect More Than Meets the Eye
Madan, Christopher R.; Bayer, Janine; Gamer, Matthias; Lonsdorf, Tina B.; Sommer, Tobias
2018-01-01
Pictorial stimuli can vary on many dimensions, several aspects of which are captured by the term ‘visual complexity.’ Visual complexity can be characterized by the rule of thumb that “a picture of a few objects, colors, or structures would be less complex than a very colorful picture of many objects that is composed of several components.” Prior studies have reported a relationship between affect and visual complexity, where complex pictures are rated as more pleasant and arousing. However, a relationship in the opposite direction, an effect of affect on visual complexity, is also possible; emotional arousal and valence are known to influence selective attention and visual processing. In a series of experiments, we found that ratings of visual complexity correlated with affective ratings, and independently also with computational measures of visual complexity. These computational measures did not correlate with affect, suggesting that complexity ratings are separately related to distinct factors. We investigated the relationship between affect and ratings of visual complexity, finding an ‘arousal-complexity bias’ to be a robust phenomenon. Moreover, we found that this bias could be attenuated when explicitly indicated, but it did not correlate with inter-individual difference measures of affective processing and was largely unrelated to cognitive and eye-tracking measures. Taken together, the arousal-complexity bias seems to be caused by a relationship between arousal and visual processing, as has been described for the greater vividness of arousing pictures. The described arousal-complexity bias is also relevant from an experimental perspective because visual complexity is often considered a variable to control for when using pictorial stimuli. PMID:29403412
Synchronization trigger control system for flow visualization
NASA Technical Reports Server (NTRS)
Chun, K. S.
1987-01-01
The use of cinematography or holographic interferometry for dynamic flow visualization in an internal combustion engine requires a control device that globally synchronizes camera and light source timing at a predefined shaft encoder angle. The device is capable of 0.35 deg resolution for rotational speeds of up to 73,240 rpm. This was achieved by implementing a look-up table (LUT) addressed by the shaft encoder signal together with appropriate latches. The digital signal processing technique achieves high-speed triggering-angle detection within 25 nsec by using direct parallel bit comparison of the shaft encoder digital code with a simulated angle reference code, instead of angle-value comparison, which involves more complicated computation steps. To establish synchronization to an AC reference signal whose magnitude varies with rotating speed, a dynamic peak follow-up synchronization technique was devised. This method scrutinizes the reference signal and provides the correct timing within 40 nsec. Two application examples are described.
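The bit-comparison idea behind the trigger can be illustrated in software. This is only a rough sketch of the principle, not the report's hardware (which uses a LUT and latches): the 10-bit encoder resolution, the function names, and the 17.5 deg example angle are assumptions chosen so that one count corresponds to roughly 0.35 deg.

```python
# Sketch of direct bit-pattern triggering (illustrative only; the original
# system implements this in hardware with a look-up table and latches).

ENCODER_BITS = 10                        # assumed encoder resolution (1024 counts/rev)
COUNTS_PER_REV = 1 << ENCODER_BITS
DEG_PER_COUNT = 360.0 / COUNTS_PER_REV   # ~0.35 deg per count, matching the reported resolution

def reference_code(trigger_angle_deg: float) -> int:
    """Precompute the encoder word corresponding to the desired trigger angle."""
    return int(round(trigger_angle_deg / DEG_PER_COUNT)) % COUNTS_PER_REV

def should_trigger(encoder_word: int, ref_code: int) -> bool:
    """Fire when the raw encoder word equals the precomputed reference code.

    Comparing the bit patterns directly avoids converting counts to an angle
    value on every sample, which is the point of the LUT/latch design."""
    return encoder_word == ref_code

# Example: trigger the camera and light source at 17.5 deg (hypothetical angle).
ref = reference_code(17.5)
for word in range(COUNTS_PER_REV):       # stand-in for the streaming encoder signal
    if should_trigger(word, ref):
        print(f"trigger at count {word} (~{word * DEG_PER_COUNT:.2f} deg)")
```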
Deficits of organizational strategy and visual memory in obsessive-compulsive disorder.
Shin, M S; Park, S J; Kim, M S; Lee, Y H; Ha, T H; Kwon, J S
2004-10-01
This study was conducted to investigate the deficits of organizational strategy and visual memory in obsessive-compulsive disorder (OCD). Thirty OCD patients and 30 healthy controls aged 20-35 years participated. The Maudsley Obsessive-Compulsive Inventory, Beck Anxiety Inventory, Wechsler Adult Intelligence Scale, and Rey-Osterrieth Complex Figure (ROCF) test were administered to participants. The authors scored ROCF performances using the Boston Qualitative Scoring System. The OCD patients showed poorer planning ability and higher fragmentation than did healthy controls when copying the ROCF, and they showed even poorer performances in the immediate and delayed recall conditions. The authors found that the Organization score in the copy condition mediated the difference between the OCD group and the healthy group in immediate recall. The direct effect of diagnosis (OCD or healthy) on the immediate recall condition of the ROCF was also significant. This study indicates that people with OCD have poor memory function and organizational deficits.
Wrist Camera Orientation for Effective Telerobotic Orbital Replaceable Unit (ORU) Changeout
NASA Technical Reports Server (NTRS)
Jones, Sharon Monica; Aldridge, Hal A.; Vazquez, Sixto L.
1997-01-01
The Hydraulic Manipulator Testbed (HMTB) is the kinematic replica of the Flight Telerobotic Servicer (FTS). One use of the HMTB is to evaluate advanced control techniques for accomplishing robotic maintenance tasks on board the Space Station. Most maintenance tasks involve the direct manipulation of the robot by a human operator when high-quality visual feedback is important for precise control. An experiment was conducted in the Systems Integration Branch at the Langley Research Center to compare several configurations of the manipulator wrist camera for providing visual feedback during an Orbital Replaceable Unit changeout task. Several variables were considered, such as wrist camera angle, camera focal length, target location, and lighting. Each study participant performed the maintenance task by using eight combinations of the variables based on a Latin square design. The results of this experiment and conclusions based on data collected are presented.
Wylie, Scott A.; Bashore, Theodore R.; Van Wouwe, Nelleke C.; Mason, Emily J.; John, Kevin D.; Neimat, Joseph S.; Ally, Brandon A.
2018-01-01
American football is played in a chaotic visual environment filled with relevant and distracting information. We investigated the hypothesis that collegiate football players show exceptional skill at shielding their response execution from the interfering effects of distraction (interference control). The performances of 280 football players from National Collegiate Athletic Association Division I football programs were compared to age-matched controls in a variant of the Eriksen flanker task (Eriksen and Eriksen, 1974). This task quantifies the magnitude of interference produced by visual distraction on split-second response execution. Overall, football athletes and age controls showed similar mean reaction times (RTs) and accuracy rates. However, football athletes were more proficient at shielding their response execution speed from the interfering effects of distraction (i.e., smaller flanker effect costs on RT). Offensive and defensive players showed smaller interference costs compared to controls, but defensive players showed the smallest costs. All defensive positions and one offensive position showed statistically smaller interference effects when compared directly to age controls. These data reveal a clear cognitive advantage among football athletes at executing motor responses in the face of distraction, the existence and magnitude of which vary by position. Individual differences in cognitive control may have important implications for both player selection and development to improve interference control capabilities during play. PMID:29479325
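For readers unfamiliar with the flanker metric, the interference cost quoted above is simply the mean reaction-time slowing on incongruent (distracting) trials relative to congruent trials. The sketch below uses made-up reaction times, not the study's data.

```python
import numpy as np

def flanker_interference(rt_congruent, rt_incongruent):
    """Flanker interference cost in ms: mean RT on incongruent trials minus
    mean RT on congruent trials. Smaller values indicate more proficient
    shielding of response execution from distraction."""
    return float(np.mean(rt_incongruent) - np.mean(rt_congruent))

# Illustrative RTs (ms); values are invented, not taken from the study.
controls_cost = flanker_interference([410, 425, 432], [478, 470, 465])
athletes_cost = flanker_interference([412, 430, 428], [455, 448, 452])
print(controls_cost, athletes_cost)   # the athletes show the smaller cost
```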
Exploring eye movements in patients with glaucoma when viewing a driving scene.
Crabb, David P; Smith, Nicholas D; Rauscher, Franziska G; Chisholm, Catharine M; Barbur, John L; Edgar, David F; Garway-Heath, David F
2010-03-16
Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patient's actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of 'point-of-regard' of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive.
Exploring Eye Movements in Patients with Glaucoma When Viewing a Driving Scene
Crabb, David P.; Smith, Nicholas D.; Rauscher, Franziska G.; Chisholm, Catharine M.; Barbur, John L.; Edgar, David F.; Garway-Heath, David F.
2010-01-01
Background Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patient's actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). Methodology/Principal Findings The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of ‘point-of-regard’ of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Conclusions/Significance Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive. PMID:20300522
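The bivariate contour ellipse analysis used above to quantify point-of-regard can be approximated, for a static set of gaze samples, by the standard bivariate contour ellipse area (BCEA) formula. The probability level, units, and synthetic data below are assumptions made for illustration; the authors' dynamic variant is not reproduced here.

```python
import numpy as np

def bcea(x, y, p=0.682):
    """Bivariate contour ellipse area (deg^2) enclosing proportion `p` of gaze
    samples, assuming a bivariate normal spread of fixation positions.

    x, y : arrays of horizontal/vertical gaze positions in degrees.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    k = -np.log(1.0 - p)                 # = chi2.ppf(p, df=2) / 2 for a bivariate normal
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    rho = np.corrcoef(x, y)[0, 1]
    return 2.0 * np.pi * k * sx * sy * np.sqrt(1.0 - rho**2)

# Example with synthetic gaze data (degrees)
rng = np.random.default_rng(0)
gx = rng.normal(0, 1.5, 500)
gy = rng.normal(0, 1.0, 500)
print(f"BCEA ~ {bcea(gx, gy):.2f} deg^2")
```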
Saccadic Corollary Discharge Underlies Stable Visual Perception
Berman, Rebecca A.; Joiner, Wilsaan M.; Wurtz, Robert H.
2016-01-01
Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the old world monkey, such a CD circuit for saccades has been identified extending from superior colliculus through MD thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed; during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene. SIGNIFICANCE STATEMENT Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable. A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs visual regions of the brain when and where the eyes are about to move. Such a corollary discharge circuit for eye movements has been identified in macaque monkey. We now show that selectively inactivating this brain circuit alters the monkey's visual perception. We conclude that this corollary discharge provides a critical signal that can be used to unite jumping retinal images into a consistent visual scene. PMID:26740647
Control and prediction components of movement planning in stuttering vs. nonstuttering adults
Daliri, Ayoub; Prokopenko, Roman A.; Flanagan, J. Randall; Max, Ludo
2014-01-01
Purpose Stuttering individuals show speech and nonspeech sensorimotor deficiencies. To perform accurate movements, the sensorimotor system needs to generate appropriate control signals and correctly predict their sensory consequences. Using a reaching task, we examined the integrity of these control and prediction components, separately, for movements unrelated to the speech motor system. Method Nine stuttering and nine nonstuttering adults made fast reaching movements to visual targets while sliding an object under the index finger. To quantify control, we determined initial direction error and end-point error. To quantify prediction, we calculated the correlation between vertical and horizontal forces applied to the object—an index of how well vertical force (preventing slip) anticipated direction-dependent variations in horizontal force (moving the object). Results Directional and end-point error were significantly larger for the stuttering group. Both groups performed similarly in scaling vertical force with horizontal force. Conclusions The stuttering group's reduced reaching accuracy suggests limitations in generating control signals for voluntary movements, even for non-orofacial effectors. Typical scaling of vertical force with horizontal force suggests an intact ability to predict the consequences of planned control signals. Stuttering may be associated with generalized deficiencies in planning control signals rather than predicting the consequences of those signals. PMID:25203459
Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune
2015-01-01
When observers perceive several objects in a space, they should, at the same time, effectively perceive their own position as a viewpoint. However, little is known about observers’ percepts of their own spatial location based on the visual scene viewed from that position. Previous studies indicate that two distinct visual spatial processes exist during locomotion: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments containing only static lane-edge information (i.e., limited information). We investigated the visual factors associated with static lane-edge information that may affect these perceptions. In particular, we examined the effects of two factors on egocentric direction and position perceptions. One is the “uprightness factor”: “far” visual information is seen at a higher location than “near” visual information. The other is the “central vision factor”: observers usually look at “far” visual information using central vision (i.e., foveal vision), whereas they view “near” visual information using peripheral vision. Experiment 1 examined the effect of the “uprightness factor” using normal and inverted road images. Experiment 2 examined the effect of the “central vision factor” using normal and transposed road images in which the upper half of the normal image was presented under the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is disrupted by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both the “uprightness” and “central vision” factors are important for egocentric direction perception, but not for egocentric position perception. Therefore, the two visual spatial perceptions of observers’ own viewpoints are fundamentally dissociable. PMID:26648895
Urusov, Alexandr E; Gubaidullina, Miliausha K; Petrakova, Alina V; Zherdev, Anatoly V; Dzantiev, Boris B
2017-12-06
A new kind of competitive immunochromatographic assay is presented. It is based on the use of a test strip loaded with (a) labeled specific antibodies, (b) a hapten-protein conjugate at the control zone, and (c) antibodies interacting with the specific antibodies in the analytical zone. In the case where a sample does not contain the target antigen (hapten), all labeled antibodies remain in the control zone because of the selected ratio of reactants. The analytical zone remains colorless because the labeled antibodies do not reach it. If an antigen is present in the sample, it interferes with the binding of the specific antibodies in the control zone and knocks them out. Some of these antibodies pass the control zone to form a colored line in the analytical zone. The intensity of the color is directly proportional to the amount of the target antigen in the sample. An attractive feature of the assay is that an appearance of coloration is more easily detected visually than a decoloration. Moreover, the onset of coloration is detectable at a lower concentration than a decoloration. The new detection scheme was applied to the determination of the mycotoxin deoxynivalenol. The visual limit of detection is 2 ng·mL⁻¹ in corn extracts (35 ng per gram of sample). With the same reagents, this is lower by a factor of 60 than that of the established test strip. The assay takes only 15 min. This new kind of assay has wide potential applications for numerous low-molecular-weight analytes. Graphical abstract: Competitive immunochromatography with direct analyte-signal dependence is proposed. It provides a 60-fold decrease in the detection limit for the mycotoxin deoxynivalenol. The analyte-antibody-label complexes move along the immobilized antigen (control zone) and bind with anti-species antibodies (test zone).
Motion perception: behavior and neural substrate.
Mather, George
2011-05-01
Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion-sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. Additional supporting information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.
Verspui, Remko; Gray, John R
2009-10-01
Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.
Retinal Origin of Direction Selectivity in the Superior Colliculus
Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua
2017-01-01
Detecting visual features in the environment such as motion direction is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly-tuned retinal ganglion cells. The direction selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using 2-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC, and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394
Alexiades-Armenakas, Macrene R; Bernstein, Leonard J; Friedman, Paul M; Geronemus, Roy G
2004-08-01
To assess the safety and efficacy of the 308-nm excimer laser in pigment correction of hypopigmented scars and striae alba. Institutional review board-approved randomized controlled trial. Private research center. Volunteer sample of 31 adult subjects with hypopigmented scars or striae alba distributed on the face, torso, or extremities. Lesions were randomized to receive treatment or not, with site-matched normal control areas. Treatments were initiated with a minimal erythema dose minus 50 mJ/cm² to affected areas. Subsequent treatments were performed biweekly until 50% to 75% pigment correction, then every 2 weeks thereafter until a maximum of 10 treatments, 75% increase in colorimetric measurements, or 100% visual pigment correction. Pigment correction by visual and colorimetric assessments compared with untreated control lesions and site-matched normal skin before each treatment and at 1-, 2-, 4-, and 6-month follow-up intervals. Occurrence of erythema, blistering, dyspigmentation, or other adverse effects was monitored. The percentage pigment correction by both assessments increased in direct proportion to the number of treatments. The mean percentage pigment correction by visual assessment relative to control of 61% (95% confidence interval [CI], 55%-67%) for scars and 68% (95% CI, 62%-74%) for striae was achieved after 9 treatments. The mean percentage pigmentation by colorimetric measurements relative to control of 101% (95% CI, 99%-103%) for scars and 102% (95% CI, 99%-104%) for striae was achieved after 9 treatments. Both sets of values gradually declined toward baseline levels during the 6-month follow-up. No blistering or dyspigmentation occurred. Therapy with the 308-nm excimer laser is safe and effective in pigment correction of hypopigmented scars and striae alba. Mean final pigment correction rates relative to control sites of approximately 60% to 70% by visual assessment and 100% by colorimetric analysis were observed after 9 treatments administered biweekly. Maintenance treatment every 1 to 4 months is required to sustain the cosmetic benefit.
Harvey, Ben M; Dumoulin, Serge O
2016-02-15
Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
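pRF measurement rests on a simple linear forward model: a 2-D Gaussian receptive field whose overlap with the stimulus aperture at each time point predicts the response. The sketch below shows only that forward step; HRF convolution and the search over candidate positions and sizes, which the full method requires, are omitted, and the variable names and toy stimulus are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np

def prf_prediction(stim, xs, ys, x0, y0, sigma):
    """Predicted time course for a Gaussian population receptive field.

    stim   : (T, H, W) binary stimulus aperture movie (1 where stimulus present)
    xs, ys : (H, W) visual-field coordinates of each pixel, in degrees
    x0, y0 : pRF preferred position (deg); sigma : pRF size (deg)
    """
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
    # Overlap of the Gaussian with the aperture at each time point
    return stim.reshape(stim.shape[0], -1) @ g.ravel()

# Toy usage: three bar positions sweeping across a 20 x 20 deg field
T, H, W = 3, 64, 64
ys, xs = np.mgrid[-10:10:64j, -10:10:64j]   # visual-field coordinates (deg)
stim = np.zeros((T, H, W))
stim[0, :, 10:16] = 1.0                     # bar on the left
stim[1, :, 30:36] = 1.0                     # bar near the center
stim[2, :, 50:56] = 1.0                     # bar on the right
print(prf_prediction(stim, xs, ys, x0=0.0, y0=0.0, sigma=1.5))  # peaks for the central bar
```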
Communication of direction by the honey bee.
Gould, J L; Henerey, M; MacLeod, M C
1970-08-07
In the presence of controls for site- and path-specific odors, observer and food-source scents, Nasanov gland and alarm odors, visual cues, wind, and general site taxis, recruited bees were able to locate the food source indicated by the dances of returning foragers in preference to a food source located at an equal distance in the opposite direction. This was true even when foragers were simultaneously dancing to indicate two different stations. Recruitment in the absence of dancing was very low, while in the absence of foraging it was virtually zero. Thus, under the experimental conditions used, the directional information contained in the dance appears to have been communicated from forager to recruit and subsequently used by the recruit.
How visualization layout relates to locus of control and other personality factors.
Ziemkiewicz, Caroline; Ottley, Alvitta; Crouser, R Jordan; Yauilla, Ashley Rye; Su, Sara L; Ribarsky, William; Chang, Remco
2013-07-01
Existing research suggests that individual personality differences are correlated with a user's speed and accuracy in solving problems with different types of complex visualization systems. We extend this research by isolating factors in personality traits as well as in the visualizations that could have contributed to the observed correlation. We focus on a personality trait known as "locus of control" (LOC), which represents a person's tendency to see themselves as controlled by or in control of external events. To isolate variables of the visualization design, we control extraneous factors such as color, interaction, and labeling. We conduct a user study with four visualizations that gradually shift from a list metaphor to a containment metaphor and compare the participants' speed, accuracy, and preference with their locus of control and other personality factors. Our findings demonstrate that there is indeed a correlation between the two: participants with an internal locus of control perform more poorly with visualizations that employ a containment metaphor, while those with an external locus of control perform well with such visualizations. These results provide evidence for the externalization theory of visualization. Finally, we propose applications of these findings to adaptive visual analytics and visualization evaluation.
Spatial updating in area LIP is independent of saccade direction.
Heiser, Laura M; Colby, Carol L
2006-05-01
We explore the world around us by making rapid eye movements to objects of interest. Remarkably, these eye movements go unnoticed, and we perceive the world as stable. Spatial updating is one of the neural mechanisms that contributes to this perception of spatial constancy. Previous studies in macaque lateral intraparietal cortex (area LIP) have shown that individual neurons update, or "remap," the locations of salient visual stimuli at the time of an eye movement. The existence of remapping implies that neurons have access to visual information from regions far beyond the classically defined receptive field. We hypothesized that neurons have access to information located anywhere in the visual field. We tested this by recording the activity of LIP neurons while systematically varying the direction in which a stimulus location must be updated. Our primary finding is that individual neurons remap stimulus traces in multiple directions, indicating that LIP neurons have access to information throughout the visual field. At the population level, stimulus traces are updated in conjunction with all saccade directions, even when we consider direction as a function of receptive field location. These results show that spatial updating in LIP is effectively independent of saccade direction. Our findings support the hypothesis that the activity of LIP neurons contributes to the maintenance of spatial constancy throughout the visual field.
Implicit and Explicit Representations of Hand Position in Tool Use
Rand, Miya K.; Heuer, Herbert
2013-01-01
Understanding the interactions of visual and proprioceptive information in tool use is important as it is the basis for learning of the tool's kinematic transformation and thus skilled performance. This study investigated how the CNS combines seen cursor positions and felt hand positions under a visuo-motor rotation paradigm. Young and older adult participants performed aiming movements on a digitizer while looking at rotated visual feedback on a monitor. After each movement, they judged either the proprioceptively sensed hand direction or the visually sensed cursor direction. We identified asymmetric mutual biases with a strong visual dominance. Furthermore, we found a number of differences between explicit and implicit judgments of hand directions. The explicit judgments had considerably larger variability than the implicit judgments. The bias toward the cursor direction for the explicit judgments was about twice as strong as for the implicit judgments. The individual biases of explicit and implicit judgments were uncorrelated. Biases of these judgments exhibited opposite sequential effects. Moreover, age-related changes were also different between these judgments. The judgment variability was decreased and the bias toward the cursor direction was increased with increasing age only for the explicit judgments. These results indicate distinct explicit and implicit neural representations of hand direction, similar to the notion of distinct visual systems. PMID:23894307
Acquisition and expression of memories of distance and direction in navigating wood ants.
Fernandes, A Sofia D; Philippides, Andrew; Collett, Tom S; Niven, Jeremy E
2015-11-01
Wood ants, like other central place foragers, rely on route memories to guide them to and from a reliable food source. They use visual memories of the surrounding scene and probably compass information to control their direction. Do they also remember the length of their route and do they link memories of direction and distance? To answer these questions, we trained wood ant (Formica rufa) foragers in a channel to perform either a single short foraging route or two foraging routes in opposite directions. By shifting the starting position of the route within the channel, but keeping the direction and distance fixed, we tried to ensure that the ants would rely upon vector memories rather than visual memories to decide when to stop. The homeward memories that the ants formed were revealed by placing fed or unfed ants directly into a channel and assessing the direction and distance that they walked without prior performance of the food-ward leg of the journey. This procedure prevented the distance and direction walked being affected by a home vector derived from path integration. Ants that were unfed walked in the feeder direction. Fed ants walked in the opposite direction for a distance related to the separation between start and feeder. Vector memories of a return route can thus be primed by the ants' feeding state and expressed even when the ants have not performed the food-ward route. Tests on ants that have acquired two routes indicate that memories of the direction and distance of the return routes are linked, suggesting that they may be encoded by a common neural population within the ant brain. © 2015. Published by The Company of Biologists Ltd.
Interaction between gaze and visual and proprioceptive position judgements.
Fiehler, Katja; Rösler, Frank; Henriques, Denise Y P
2010-06-01
There is considerable evidence that targets for action are represented in a dynamic gaze-centered frame of reference, such that each gaze shift requires an internal updating of the target. Here, we investigated the effect of eye movements on the spatial representation of targets used for position judgements. Participants had their hand passively placed to a location, and then judged whether this location was left or right of a remembered visual or remembered proprioceptive target, while gaze direction was varied. Estimates of position of the remembered targets relative to the unseen position of the hand were assessed with an adaptive psychophysical procedure. These positional judgements significantly varied relative to gaze for both remembered visual and remembered proprioceptive targets. Our results suggest that relative target positions may also be represented in eye-centered coordinates. This implies similar spatial reference frames for action control and space perception when positions are coded relative to the hand.
Comprehension of Navigation Directions
NASA Technical Reports Server (NTRS)
Schneider, Vivian I.; Healy, Alice F.
2000-01-01
In an experiment simulating communication between air traffic controllers and pilots, subjects were given navigation instructions varying in length telling them to move in a space represented by grids on a computer screen. The subjects followed the instructions by clicking on the grids in the locations specified. Half of the subjects read the instructions, and half heard them. Half of the subjects in each modality condition repeated back the instructions before following them, and half did not. Performance was worse for the visual than for the auditory modality on the longer messages. Repetition of the instructions generally depressed performance, especially with the longer messages, which required more output than did the shorter messages, and especially with the visual modality, in which phonological recoding from the visual input to the spoken output was necessary. These results are explained in terms of the degrading effects of output interference on memory for instructions.
Tracking the allocation of attention using human pupillary oscillations
Naber, Marnix; Alvarez, George A.; Nakayama, Ken
2013-01-01
The muscles that control the pupil are richly innervated by the autonomic nervous system. While there are central pathways that drive pupil dilations in relation to arousal, there is no anatomical evidence that cortical centers involved with visual selective attention innervate the pupil. In this study, we show that such connections must exist. Specifically, we demonstrate a novel Pupil Frequency Tagging (PFT) method, where oscillatory changes in stimulus brightness over time are mirrored by pupil constrictions and dilations. We find that the luminance-induced pupil oscillations are enhanced when covert attention is directed to the flicker stimulus and when targets are correctly detected in an attentional tracking task. These results suggest that the amplitudes of pupil responses closely follow the allocation of focal visual attention and the encoding of stimuli. PFT provides a new opportunity to study top-down visual attention itself as well as identifying the pathways and mechanisms that support this unexpected phenomenon. PMID:24368904
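Pupil frequency tagging amounts to reading out the amplitude of the pupil trace at the known flicker frequency. A plausible minimal analysis is sketched below; the sampling rate, detrending, and simulated trace are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def pft_amplitude(pupil_trace, fs, tag_freq):
    """Amplitude of the pupil oscillation at the luminance 'tag' frequency.

    pupil_trace : 1-D array of pupil diameter samples
    fs          : sampling rate in Hz
    tag_freq    : flicker frequency of the tagged stimulus (Hz)

    A larger amplitude at the tagged frequency is taken as evidence that covert
    attention is allocated to that flickering stimulus.
    """
    x = np.asarray(pupil_trace, float)
    x = x - x.mean()                                     # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) / (len(x) / 2.0)   # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - tag_freq))            # nearest frequency bin
    return spectrum[idx]

# Example: simulated 60 s recording at 60 Hz with a 1.2 Hz tagged oscillation
fs, tag = 60.0, 1.2
t = np.arange(0, 60, 1 / fs)
trace = 3.5 + 0.08 * np.sin(2 * np.pi * tag * t) + 0.05 * np.random.randn(t.size)
print(f"amplitude at {tag} Hz: {pft_amplitude(trace, fs, tag):.3f}")   # ~0.08
```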
Visualizing and Steering Dissociative Frustrated Double Ionization of Hydrogen Molecules
NASA Astrophysics Data System (ADS)
Zhang, Wenbin; Yu, Zuqing; Gong, Xiaochun; Wang, Junping; Lu, Peifen; Li, Hui; Song, Qiying; Ji, Qinying; Lin, Kang; Ma, Junyang; Li, Hanxiao; Sun, Fenghao; Qiang, Junjie; Zeng, Heping; He, Feng; Wu, Jian
2017-12-01
We experimentally visualize the dissociative frustrated double ionization of hydrogen molecules using few-cycle laser pulses in a pump-probe scheme, a process in which the tunneling-ionized electron is recaptured by one of the outgoing nuclei of the breaking molecule. Three internuclear distances are recognized to enhance the dissociative frustrated double ionization of molecules at different instants after the first ionization step. The recapture of the electron can further be steered to one of the outgoing nuclei as desired by using phase-controlled two-color laser pulses. Both the experimental measurements and numerical simulations suggest that the Rydberg atom is preferentially emitted in the direction of the maximum of the asymmetric optical field. Our results both provide an intuitive visualization of the dissociative frustrated double ionization of molecules and open the possibility of selectively exciting the heavy fragment ejected from a molecule.
Auditory and visual cortex of primates: a comparison of two sensory systems
Rauschecker, Josef P.
2014-01-01
A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177
Analysis of Actin-Based Intracellular Trafficking in Pollen Tubes.
Jiang, Yuxiang; Zhang, Meng; Huang, Shanjin
2017-01-01
Underlying rapid and directional pollen tube growth is the active intracellular trafficking system that carries materials necessary for cell wall synthesis and membrane expansion to the expanding point of the pollen tube. The actin cytoskeleton has been shown to control various intracellular trafficking events in the pollen tube, but the underlying cellular and molecular mechanisms remain poorly understood. To better understand how the actin cytoskeleton is involved in the regulation of intracellular trafficking events, we need to establish assays to visualize and quantify the distribution and dynamics of organelles, vesicles, or secreted proteins. In this chapter, we introduce methods regarding the visualization and quantification of the distribution and dynamics of organelles or vesicles in pollen tubes.
Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K
2015-01-22
The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1, through a direct pathway. We aimed at elucidating whether this direct route between LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or whether it is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using the Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
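Conditional Granger causality of the kind named here asks how much the past of a candidate source (LGN) improves prediction of a target (hMT+) beyond what the target's own past and a conditioning signal (V1) already explain, expressed as a log ratio of residual variances. The plain least-squares sketch below illustrates that definition; it stands in for, and is not, the authors' CGC implementation, and the model order and toy data are assumptions.

```python
import numpy as np

def _resid_var(design_series, target, order):
    """Residual variance of regressing target[t] on `order` lags of each series."""
    T = len(target)
    rows = []
    for t in range(order, T):
        row = [1.0]                              # intercept
        for s in design_series:
            row.extend(s[t - order:t][::-1])     # lags 1..order
        rows.append(row)
    X = np.asarray(rows)
    y = np.asarray(target[order:], float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid.var(ddof=X.shape[1])

def conditional_gc(x, y, z, order=2):
    """Granger causality from x to y, conditional on z: log ratio of the
    restricted-model (lags of y, z) to full-model (lags of y, z, x) residual
    variance. Positive values mean past x helps predict y beyond y and z."""
    var_restricted = _resid_var([y, z], y, order)
    var_full = _resid_var([y, z, x], y, order)
    return float(np.log(var_restricted / var_full))

# Toy example: y is driven by lagged x only, so GC(x->y|z) >> GC(z->y|x) ~ 0.
rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
z = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.standard_normal()
print(f"GC(x->y | z) = {conditional_gc(x, y, z):.3f}")   # clearly positive
print(f"GC(z->y | x) = {conditional_gc(z, y, x):.3f}")   # near zero
```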
Smets, Karolien; Moors, Pieter; Reynvoet, Bert
2016-01-01
Performance in a non-symbolic comparison task in which participants are asked to indicate the larger numerosity of two dot arrays, is assumed to be supported by the Approximate Number System (ANS). This system allows participants to judge numerosity independently from other visual cues. Supporting this idea, previous studies indicated that numerosity can be processed when visual cues are controlled for. Consequently, distinct types of visual cue control are assumed to be interchangeable. However, a previous study showed that the type of visual cue control affected performance using a simultaneous presentation of the stimuli in numerosity comparison. In the current study, we explored whether the influence of the type of visual cue control on performance disappeared when sequentially presenting each stimulus in numerosity comparison. While the influence of the applied type of visual cue control was significantly more evident in the simultaneous condition, sequentially presenting the stimuli did not completely exclude the influence of distinct types of visual cue control. Altogether, these results indicate that the implicit assumption that it is possible to compare performances across studies with a differential visual cue control is unwarranted and that the influence of the type of visual cue control partly depends on the presentation format of the stimuli. PMID:26869967
Psychoanatomical substrates of Bálint's syndrome
Rizzo, M; Vecera, S
2002-01-01
Objectives: From a series of glimpses, we perceive a seamless and richly detailed visual world. Cerebral damage, however, can destroy this illusion. In the case of Bálint's syndrome, the visual world is perceived erratically, as a series of single objects. The goal of this review is to explore a range of psychological and anatomical explanations for this striking visual disorder and to propose new directions for interpreting the findings in Bálint's syndrome and related cerebral disorders of visual processing. Methods: Bálint's syndrome is reviewed in the light of current concepts and methodologies of vision research. Results: The syndrome affects visual perception (causing simultanagnosia/visual disorientation) and visual control of eye and hand movement (causing ocular apraxia and optic ataxia). Although it has been generally construed as a biparietal syndrome causing an inability to see more than one object at a time, other lesions and mechanisms are also possible. Key syndrome components are dissociable and comprise a range of disturbances that overlap the hemineglect syndrome. Inouye's observations in similar cases, beginning in 1900, antedated Bálint's initial report. Because Bálint's syndrome is not common and is difficult to assess with standard clinical tools, the literature is dominated by case reports and confounded by case selection bias, non-uniform application of operational definitions, inadequate study of basic vision, poor lesion localisation, and failure to distinguish between deficits in the acute and chronic phases of recovery. Conclusions: Studies of Bálint's syndrome have provided unique evidence on neural substrates for attention, perception, and visuomotor control. Future studies should address possible underlying psychoanatomical mechanisms at "bottom up" and "top down" levels, and should specifically consider visual working memory and attention (including object based attention) as well as systems for identification of object structure and depth from binocular stereopsis, kinetic depth, motion parallax, eye movement signals, and other cues. PMID:11796765
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Bo; Xia, Jing
Physiological and behavioral studies have demonstrated that a number of visual functions such as visual acuity, contrast sensitivity, and motion perception can be impaired by acute alcohol exposure. The orientation- and direction-selective responses of cells in primary visual cortex are thought to participate in the perception of form and motion. To investigate how orientation selectivity and direction selectivity of neurons are influenced by acute alcohol exposure in vivo, we used the extracellular single-unit recording technique to examine the response properties of neurons in primary visual cortex (A17) of adult cats. We found that alcohol reduces spontaneous activity, visual evoked unit responses, the signal-to-noise ratio, and orientation selectivity of A17 cells. In addition, small but detectable changes in both the preferred orientation/direction and the bandwidth of the orientation tuning curve of strongly orientation-biased A17 cells were observed after acute alcohol administration. Our findings may provide physiological evidence for some alcohol-related deficits in visual function observed in behavioral studies.
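Orientation selectivity of the kind reported here is usually summarized by a single tuning index. One common choice, shown below for illustration, is the magnitude of the mean resultant vector of the tuning curve (1 minus the circular variance); the abstract does not state which metric the authors used, and the example responses are invented.

```python
import numpy as np

def orientation_selectivity_index(responses, orientations_deg):
    """Global orientation selectivity index of a tuning curve.

    Computed as the magnitude of the mean resultant vector in orientation space
    (equivalently 1 - circular variance): 0 = unselective, 1 = responds to a
    single orientation. One common definition among several in the literature."""
    r = np.asarray(responses, float)
    theta = np.deg2rad(np.asarray(orientations_deg, float))
    vec = np.sum(r * np.exp(2j * theta)) / np.sum(r)   # factor 2: orientation is periodic over 180 deg
    return float(np.abs(vec))

# Example tuning curves sampled every 30 deg (illustrative firing rates, spikes/s)
oris = np.arange(0, 180, 30)
sharp = [2, 5, 30, 24, 6, 3]        # sharply tuned cell
flat = [10, 12, 16, 15, 12, 11]     # weakly tuned cell
print(round(orientation_selectivity_index(sharp, oris), 2))   # ~0.61, more selective
print(round(orientation_selectivity_index(flat, oris), 2))    # ~0.11, weakly selective
```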
Zeitoun, Jack H.; Kim, Hyungtae
2017-01-01
Binocular mechanisms for visual processing are thought to enhance spatial acuity by combining matched input from the two eyes. Studies in the primary visual cortex of carnivores and primates have confirmed that eye-specific neuronal response properties are largely matched. In recent years, the mouse has emerged as a prominent model for binocular visual processing, yet little is known about the spatial frequency tuning of binocular responses in mouse visual cortex. Using calcium imaging in awake mice of both sexes, we show that the spatial frequency preference of cortical responses to the contralateral eye is ∼35% higher than responses to the ipsilateral eye. Furthermore, we find that neurons in binocular visual cortex that respond only to the contralateral eye are tuned to higher spatial frequencies. Binocular neurons that are well matched in spatial frequency preference are also matched in orientation preference. In contrast, we observe that binocularly mismatched cells are more mismatched in orientation tuning. Furthermore, we find that contralateral responses are more direction-selective than ipsilateral responses and are strongly biased to the cardinal directions. The contralateral bias of high spatial frequency tuning was found in both awake and anesthetized recordings. The distinct properties of contralateral cortical responses may reflect the functional segregation of direction-selective, high spatial frequency-preferring neurons in earlier stages of the central visual pathway. Moreover, these results suggest that the development of binocularity and visual acuity may engage distinct circuits in the mouse visual system. SIGNIFICANCE STATEMENT Seeing through two eyes is thought to improve visual acuity by enhancing sensitivity to fine edges. Using calcium imaging of cellular responses in awake mice, we find surprising asymmetries in the spatial processing of eye-specific visual input in binocular primary visual cortex. The contralateral visual pathway is tuned to higher spatial frequencies than the ipsilateral pathway. At the highest spatial frequencies, the contralateral pathway strongly prefers to respond to visual stimuli along the cardinal (horizontal and vertical) axes. These results suggest that monocular, and not binocular, mechanisms set the limit of spatial acuity in mice. Furthermore, they suggest that the development of visual acuity and binocularity in mice involves different circuits. PMID:28924011
Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; North, Chris
2012-10-14
With the growing need for visualization to aid users in understanding large, complex datasets, the ability for users to interact and explore these datasets is critical. As visual analytic systems have advanced to leverage powerful computational models and data analytics capabilities, the modes by which users engage and interact with the information are limited. Often, users are taxed with directly manipulating parameters of these models through traditional GUIs (e.g., using sliders to directly manipulate the value of a parameter). However, the purpose of user interaction in visual analytic systems is to enable visual data exploration – where users can focus on their task, as opposed to the tool or system. As a result, users can engage freely in data exploration and decision-making, for the purpose of gaining insight. In this position paper, we discuss how evaluating visual analytic systems can be approached through user interaction analysis, where the goal is to minimize the cognitive translation between the visual metaphor and the mode of interaction (i.e., reducing the “interaction junk”). We motivate this concept through a discussion of traditional GUIs used in visual analytics for direct manipulation of model parameters, and the importance of designing interactions that support visual data exploration.
Observing polymersome dynamics in controlled microscale flows
NASA Astrophysics Data System (ADS)
Kumar, Subhalakshmi; Shenoy, Anish; Schroeder, Charles
2015-03-01
Achieving an understanding of single particle rheology for large yet deformable particles with controlled membrane viscoelasticity is a major challenge in soft materials. In this work, we directly visualize the dynamics of single polymersomes (~ 10 μm in size) in an extensional flow using optical microscopy. We generate polymer vesicular structures composed of polybutadiene-block-polyethylene oxide (PB-b-PEO) copolymers. Single polymersomes are confined near the stagnation point of a planar extensional flow using an automated microfluidic trap, thereby enabling the direct observation of polymersome dynamics under fluid flows with controlled strains and strain rates. In a series of experiments, we investigate the effect of varying elasticity in vesicular membranes on polymersome deformation, along with the impact of decreasing membrane fluidity upon increasing diblock copolymer molecular weight. Overall, we believe that this approach will enable precise characterization of the role of membrane properties on single particle rheology for deformable polymersomes.
Fast visual prediction and slow optimization of preferred walking speed.
O'Connor, Shawn M; Donelan, J Maxwell
2012-05-01
People prefer walking speeds that minimize energetic cost. This may be accomplished by directly sensing metabolic rate and adapting gait to minimize it, but only slowly due to the compounded effects of sensing delays and iterative convergence. Visual and other sensory information is available more rapidly and could help predict which gait changes reduce energetic cost, but only approximately because it relies on prior experience and an indirect means to achieve economy. We used virtual reality to manipulate visually presented speed while 10 healthy subjects freely walked on a self-paced treadmill to test whether the nervous system beneficially combines these two mechanisms. Rather than manipulating the speed of visual flow directly, we coupled it to the walking speed selected by the subject and then manipulated the ratio between these two speeds. We then quantified the dynamics of walking speed adjustments in response to perturbations of the visual speed. For step changes in visual speed, subjects responded with rapid speed adjustments (lasting <2 s) in a direction opposite to the perturbation, consistent with returning the visually presented speed toward their preferred walking speed: when visual speed was suddenly twice (one-half) the walking speed, subjects decreased (increased) their speed. Subjects did not maintain the new speed but instead gradually returned toward the speed preferred before the perturbation (lasting >300 s). The timing and direction of these responses strongly indicate that a rapid predictive process informed by visual feedback helps select preferred speed, perhaps to complement a slower optimization process that seeks to minimize energetic cost.
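The coupling manipulation is easy to state quantitatively: the visually presented speed is the walking speed multiplied by an experimenter-controlled gain, so restoring the preferred visual speed after a gain perturbation requires walking at the preferred speed divided by the gain. The toy numbers below (including the 1.25 m/s preferred speed) are assumptions used only to show the direction of the predicted rapid correction.

```python
def visual_speed(walking_speed, gain):
    """Visually presented flow speed coupled to the subject's own walking speed.
    `gain` is the manipulated ratio (1.0 = veridical; 2.0 = flow twice as fast)."""
    return gain * walking_speed

def speed_restoring_visual_preference(preferred_speed, gain):
    """Walking speed that brings the visual flow back to the preferred speed,
    i.e., the direction of the rapid (<2 s) corrective response reported."""
    return preferred_speed / gain

preferred = 1.25  # m/s, an assumed typical preferred walking speed
for g in (0.5, 1.0, 2.0):
    corrected = speed_restoring_visual_preference(preferred, g)
    print(f"gain {g}: visual {visual_speed(preferred, g):.2f} m/s -> walk at {corrected:.2f} m/s")
```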
Extended Wearing Trial of Trifield Lens Device for “Tunnel Vision”
Woods, Russell L.; Giorgi, Robert G.; Berson, Eliot L.; Peli, Eli
2009-01-01
Severe visual field constriction (tunnel vision) impairs the ability to navigate and walk safely. We evaluated Trifield glasses as a mobility rehabilitation device for tunnel vision in an extended wearing trial. Twelve patients with tunnel vision (5 to 22 degrees wide) due to retinitis pigmentosa or choroideremia participated in the 5-visit wearing trial. To expand the horizontal visual field, one spectacle lens was fitted with two apex-to-apex prisms that vertically bisected the pupil on primary gaze. This provides visual field expansion at the expense of visual confusion (two objects with the same visual direction). Patients were asked to wear these spectacles as much as possible for the duration of the wearing trial (median 8, range 6 to 60, weeks). Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived visual ability were measured. Of 12 patients, 9 chose to continue wearing the Trifield glasses at the end of the wearing trial. Of those 9 patients, at long-term follow-up (35 to 78 weeks), 3 reported still wearing the Trifield glasses. Visual field expansion (median 18, range 9 to 38, degrees) was demonstrated for all patients. No patient demonstrated adaptation to the change in visual direction produced by the Trifield glasses (prisms). For difficulty with obstacles, some differences between successful and non-successful wearers were found. Trifield glasses provided reported benefits in obstacle avoidance to 7 of the 12 patients completing the wearing trial. Crowded environments were particularly difficult for most wearers. Possible reasons for long-term discontinuation and lack of adaptation to perceived direction are discussed. PMID:20444130
Extended wearing trial of Trifield lens device for 'tunnel vision'.
Woods, Russell L; Giorgi, Robert G; Berson, Eliot L; Peli, Eli
2010-05-01
Severe visual field constriction (tunnel vision) impairs the ability to navigate and walk safely. We evaluated Trifield glasses as a mobility rehabilitation device for tunnel vision in an extended wearing trial. Twelve patients with tunnel vision (5-22 degrees wide) due to retinitis pigmentosa or choroideremia participated in the 5-visit wearing trial. To expand the horizontal visual field, one spectacle lens was fitted with two apex-to-apex prisms that vertically bisected the pupil on primary gaze. This provides visual field expansion at the expense of visual confusion (two objects with the same visual direction). Patients were asked to wear these spectacles as much as possible for the duration of the wearing trial (median 8, range 6-60 weeks). Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived visual ability were measured. Of 12 patients, nine chose to continue wearing the Trifield glasses at the end of the wearing trial. Of those nine patients, at long-term follow-up (35-78 weeks), three reported still wearing the Trifield glasses. Visual field expansion (median 18, range 9-38 degrees) was demonstrated for all patients. No patient demonstrated adaptation to the change in visual direction produced by the Trifield glasses (prisms). For reported difficulty with obstacles, some differences between successful and non-successful wearers were found. Trifield glasses provided reported benefits in obstacle avoidance to 7 of the 12 patients completing the wearing trial. Crowded environments were particularly difficult for most wearers. Possible reasons for long-term discontinuation and lack of adaptation to perceived direction are discussed.
An algorithm for automatic reduction of complex signal flow graphs
NASA Technical Reports Server (NTRS)
Young, K. R.; Hoberock, L. L.; Thompson, J. G.
1976-01-01
A computer algorithm is developed that provides efficient means to compute transmittances directly from a signal flow graph or a block diagram. Signal flow graphs are cast as directed graphs described by adjacency matrices. Nonsearch computation, designed for compilers without symbolic capability, is used to identify all arcs that are members of simple cycles for use with Mason's gain formula. The routine does not require the visual acumen of an interpreter to reduce the topology of the graph, and it is particularly useful for analyzing control systems described for computer analyses by means of interactive graphics.
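The reduction described above can be illustrated with a short sketch. The Python fragment below is not the authors' routine: it simply stores a signal flow graph as an adjacency structure, enumerates its simple cycles with networkx, and evaluates Mason's gain formula; the function and variable names are assumptions for exposition.

```python
# Illustrative sketch only (not the NASA routine described above).
import math
from itertools import combinations
import networkx as nx

def mason_gain(edges, source, sink):
    """Overall transmittance of a signal flow graph.
    edges: {(u, v): gain} for each directed branch."""
    G = nx.DiGraph()
    for (u, v), g in edges.items():
        G.add_edge(u, v, gain=g)

    def gain_along(nodes, close=False):
        pairs = list(zip(nodes, nodes[1:]))
        if close:                       # close the loop for a cycle
            pairs.append((nodes[-1], nodes[0]))
        return math.prod(G[u][v]["gain"] for u, v in pairs)

    # Each simple cycle is kept as (set of its nodes, loop gain).
    loops = [(frozenset(c), gain_along(c, close=True)) for c in nx.simple_cycles(G)]

    def delta(excluded=frozenset()):
        """Graph determinant using only loops that do not touch `excluded`."""
        usable = [(n, g) for n, g in loops if not (n & excluded)]
        d, sign = 1.0, -1.0
        for k in range(1, len(usable) + 1):
            for combo in combinations(usable, k):
                node_sets = [n for n, _ in combo]
                if all(not (a & b) for a, b in combinations(node_sets, 2)):
                    d += sign * math.prod(g for _, g in combo)  # non-touching loops
            sign = -sign
        return d

    forward = sum(gain_along(p) * delta(frozenset(p))
                  for p in nx.all_simple_paths(G, source, sink))
    return forward / delta()

# Example: unity feedback with forward gain 8 and feedback -0.5 yields the
# familiar closed-loop transmittance 8 / (1 + 8*0.5) = 1.6.
print(mason_gain({("r", "e"): 1.0, ("e", "y"): 8.0, ("y", "e"): -0.5}, "r", "y"))
```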
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1984-01-01
Previously experienced data collection problems were successfully resolved. A limited effort, directed at improved methods of display of TM Band 6 data, has concentrated on implementation of intensity, hue, and saturation (IHS) displays using the Band 6 data to control hue. These displays tend to give the appearance of high resolution thermal data and make whole scene thermal interpretation easier by color coding thermal data in a manner that aids visual interpretation. More quantitative efforts were directed at utilizing the reflected bands to define land cover classes and then modifying the thermal displays using long wave optical properties associated with cover type.
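As a rough illustration of the intensity-hue-saturation display idea (not the report's actual processing chain), the sketch below maps a thermal band to hue while a reflected-band image supplies intensity; the function name and hue mapping are assumptions.

```python
# Minimal IHS-style composite: intensity from a reflected band, hue from the
# thermal band (Band 6), saturation held constant. Illustrative only.
import numpy as np
from matplotlib.colors import hsv_to_rgb

def thermal_hue_composite(intensity, thermal, sat=0.9):
    """intensity, thermal: 2-D arrays scaled to [0, 1]; returns an RGB image.
    Cooler pixels map toward blue (hue ~0.66), warmer pixels toward red (hue 0)."""
    hue = 0.66 * (1.0 - thermal)
    hsv = np.dstack([hue, np.full_like(intensity, sat), intensity])
    return hsv_to_rgb(hsv)

# Example with random stand-in data in place of actual TM bands:
rgb = thermal_hue_composite(np.random.rand(64, 64), np.random.rand(64, 64))
```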
Misperception of exocentric directions in auditory space
Arthur, Joeanna C.; Philbeck, John W.; Sargent, Jesse; Dopkins, Stephen
2008-01-01
Previous studies have demonstrated large errors (over 30°) in visually perceived exocentric directions (the direction between two objects that are both displaced from the observer’s location; e.g., Philbeck et al., in press). Here, we investigated whether a similar pattern occurs in auditory space. Blindfolded participants either attempted to aim a pointer at auditory targets (an exocentric task) or gave a verbal estimate of the egocentric target azimuth. Targets were located at 20° to 160° azimuth in the right hemispace. For comparison, we also collected pointing and verbal judgments for visual targets. We found that exocentric pointing responses exhibited sizeable undershooting errors, for both auditory and visual targets, that tended to become more strongly negative as azimuth increased (up to −19° for visual targets at 160°). Verbal estimates of the auditory and visual target azimuths, however, showed a dramatically different pattern, with relatively small overestimations of azimuths in the rear hemispace. At least some of the differences between verbal and pointing responses appear to be due to the frames of reference underlying the responses; when participants used the pointer to reproduce the egocentric target azimuth rather than the exocentric target direction relative to the pointer, the pattern of pointing errors more closely resembled that seen in verbal reports. These results show that there are similar distortions in perceiving exocentric directions in visual and auditory space. PMID:18555205
Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael
2011-01-01
Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669
Dasgupta, Aritra; Lee, Joon-Yong; Wilson, Ryan; Lafrance, Robert A; Cramer, Nick; Cook, Kristin; Payne, Samuel
2017-01-01
Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. But an open research question remains: when domain experts analyze their data, can they completely trust the outputs and operations on the machine side? Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative, visual analytics system for biologists, focusing on the relationship between familiarity with an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists in which we compare how the level of trust varies across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust that is comparable to that in a conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on trustworthiness of visual analytic systems.
de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares
2018-01-01
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named “Get Coins,” through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user. PMID:29849549
Gallivan, Jason P; Goodale, Melvyn A
2018-01-01
In 1992, Goodale and Milner proposed a division of labor in the visual pathways of the primate cerebral cortex. According to their account, the ventral pathway, which projects to occipitotemporal cortex, constructs our visual percepts, while the dorsal pathway, which projects to posterior parietal cortex, mediates the visual control of action. Although the framing of the two-visual-system hypothesis has not been without controversy, it is clear that vision for action and vision for perception have distinct computational requirements, and significant support for the proposed neuroanatomic division has continued to emerge over the last two decades from human neuropsychology, neuroimaging, behavioral psychophysics, and monkey neurophysiology. In this chapter, we review much of this evidence, with a particular focus on recent findings from human neuroimaging and monkey neurophysiology, demonstrating a specialized role for parietal cortex in visually guided behavior. But even though the available evidence suggests that dedicated circuits mediate action and perception, in order to produce adaptive goal-directed behavior there must be a close coupling and seamless integration of information processing across these two systems. We discuss such ventral-dorsal-stream interactions and argue that the two pathways play different, yet complementary, roles in the production of skilled behavior. Copyright © 2018 Elsevier B.V. All rights reserved.
Low-level visual attention and its relation to joint attention in autism spectrum disorder.
Jaworski, Jessica L Bean; Eigsti, Inge-Marie
2017-04-01
Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.
Leite, Harlei Miguel de Arruda; de Carvalho, Sarah Negreiros; Costa, Thiago Bulhões da Silva; Attux, Romis; Hornung, Heiko Horst; Arantes, Dalton Soares
2018-01-01
This paper presents a systematic analysis of a game controlled by a Brain-Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP). The objective is to understand BCI systems from the Human-Computer Interface (HCI) point of view, by observing how the users interact with the game and evaluating how the interface elements influence the system performance. The interactions of 30 volunteers with our computer game, named "Get Coins," through a BCI based on SSVEP, have generated a database of brain signals and the corresponding responses to a questionnaire about various perceptual parameters, such as visual stimulation, acoustic feedback, background music, visual contrast, and visual fatigue. Each one of the volunteers played one match using the keyboard and four matches using the BCI, for comparison. In all matches using the BCI, the volunteers achieved the goals of the game. Eight of them achieved a perfect score in at least one of the four matches, showing the feasibility of the direct communication between the brain and the computer. Despite this successful experiment, adaptations and improvements should be implemented to make this innovative technology accessible to the end user.
Octopus vulgaris uses visual information to determine the location of its arm.
Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael
2011-03-22
Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.
Deficit in visual temporal integration in autism spectrum disorders.
Nakano, Tamami; Ota, Haruhisa; Kato, Nobumasa; Kitazawa, Shigeru
2010-04-07
Individuals with autism spectrum disorders (ASD) are superior in processing local features. Frith and Happe conceptualize this cognitive bias as 'weak central coherence', implying that a local enhancement derives from a weakness in integrating local elements into a coherent whole. The suggested deficit has been challenged, however, because individuals with ASD were not found to be inferior to normal controls in holistic perception. In these opposing studies, however, subjects were encouraged to ignore local features and attend to the whole. Therefore, no one has directly tested whether individuals with ASD are able to integrate local elements over time into a whole image. Here, we report a weakness of individuals with ASD in naming familiar objects moved behind a narrow slit, which was worsened by the absence of local salient features. The results indicate that individuals with ASD have a clear deficit in integrating local visual information over time into a global whole, providing direct evidence for the weak central coherence hypothesis.
Radicevic, Zoran; Jelicic Dobrijevic, Ljiljana; Sovilj, Mirjana; Barlov, Ivana
2009-06-01
Aim of the research was to examine similarities and differences between the periods of experiencing visually stimulated directed speech-language information and periods of undirected attention. The examined group comprised N = 64 children, aged 4-5, with different speech-language disorders (developmental dysphasia, hyperactive syndrome with attention disorder, children with borderline intellectual abilities, autistic complex). Theta EEG was registered in children in the period of watching and describing the picture ("task"), and in the period of undirected attention ("passive period"). The children were recorded in standard EEG conditions, at 19 points of EEG registration and in longitudinal bipolar montage. Results in the observed age-operative theta rhythm indicated significant similarities and differences in the prevalence of spatial engagement of certain regions between the two hemispheres at the input and output of processing, which opens the possibility for more detailed analysis of conscious control of speech-language processing and its disorders.
Dong, Han; Sharma, Diksha; Badano, Aldo
2014-12-01
Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, to facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridmantis. The users can download the output images and statistics through a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
Cacciamani, Laura; Likova, Lora T.
2017-01-01
The perirhinal cortex (PRC) is a medial temporal lobe structure that has been implicated in not only visual memory in the sighted, but also tactile memory in the blind (Cacciamani & Likova, 2016). It has been proposed that, in the blind, the PRC may contribute to modulation of tactile memory responses that emerge in low-level “visual” area V1 as a result of training-induced cortical reorganization (Likova, 2012; 2015). While some studies in the sighted have indicated that the PRC is indeed structurally and functionally connected to the visual cortex (Clavagnier et al., 2004; Peterson et al., 2012), the PRC’s direct modulation of V1 is unknown—particularly in those who lack the visual input that typically stimulates this region. In the present study, we tested Likova’s PRC modulation hypothesis; specifically, we used fMRI to assess the PRC’s Granger causal influence on V1 activation in the blind during a tactile memory task. To do so, we trained congenital and acquired blind participants on a unique memory-guided drawing technique previously shown to result in V1 reorganization towards tactile memory representations (Likova, 2012). The tasks (20s each) included: tactile exploration of raised line drawings of faces and objects, tactile memory retrieval via drawing, and a scribble motor/memory control. FMRI before and after a week of the Cognitive-Kinesthetic training on these tasks revealed a significant increase in PRC-to-V1 Granger causality from pre- to post-training during the memory drawing task, but not during the motor/memory control. This increase in causal connectivity indicates that the training strengthened the top-down modulation of visual cortex from the PRC. This is the first study to demonstrate enhanced directed functional connectivity from the PRC to the visual cortex in the blind, implicating the PRC as a potential source of the reorganization towards tactile representations that occurs in V1 in the blind brain (Likova, 2012). PMID:28347878
Marino, Alexandria C.; Mazer, James A.
2016-01-01
During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820
High performance visual display for HENP detectors
NASA Astrophysics Data System (ADS)
McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel
2001-08-01
A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on a BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector and the ability to generate animations and a fly through of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain a real-time visual display for events accumulated during simulations.
How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling
Veale, Richard; Hafed, Ziad M.
2017-01-01
Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a ‘saliency map’ topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority maps for output behaviour. We then delve into how a subcortical structure, the superior colliculus (SC), participates in salience computation. The SC represents a visual saliency map via a centre-surround inhibition mechanism in the superficial layers, which feeds into priority selection mechanisms in the deeper layers, thereby affecting saccadic and microsaccadic eye movements. Lateral interactions in the local SC circuit are particularly important for controlling active populations of neurons. This, in turn, might help explain long-range effects, such as those of peripheral cues on tiny microsaccades. Finally, we show how a combination of in vitro neurophysiology and large-scale computational modelling is able to clarify how salience computation is implemented in the local circuit of the SC. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044023
Modulation of motor control in saccadic behaviors by TMS over the posterior parietal cortex.
Liang, Wei-Kuang; Juan, Chi-Hung
2012-08-01
The right posterior parietal cortex (rPPC) has been found to be critical in shaping visual selection and distractor-induced saccade curvature in the context of predictive as well as nonpredictive visual cues by means of transcranial magnetic stimulation (TMS) interference. However, the dynamic details of how distractor-induced saccade curvatures are affected by rPPC TMS have not yet been investigated. This study aimed to elucidate the key dynamic properties that cause saccades to curve away from distractors with different degrees of curvature in various TMS and target predictability conditions. Stochastic optimal feedback control theory was used to model the dynamics of the TMS saccade data. This allowed estimation of torques, which was used to identify the critical dynamic mechanisms producing saccade curvature. The critical mechanisms of distractor-induced saccade curvatures were found to be the motor commands and torques in the transverse direction. When an unpredictable saccade target occurred with rPPC TMS, there was an initial period of greater distractor-induced torque toward the side opposite the distractor in the transverse direction, immediately followed by a relatively long period of recovery torque that brought the deviated trace back toward the target. The results imply that the mechanisms of distractor-induced saccade curvature may be comprised of two mechanisms: the first causing the initial deviation and the second bringing the deviated trace back toward the target. The pattern of the initial torque in the transverse direction revealed the former mechanism. Conversely, the later mechanism could be well explained as a consequence of the control policy in this model. To summarize, rPPC TMS increased the initial torque away from the distractor as well as the recovery torque toward the target.
Ultra-fast ipsilateral DPOAE adaptation not modulated by attention?
NASA Astrophysics Data System (ADS)
Dalhoff, Ernst; Zelle, Dennis; Gummer, Anthony W.
2018-05-01
Efferent stimulation of outer hair cells is supposed to attenuate cochlear amplification of sound waves and is accompanied by reduced DPOAE amplitudes. Recently, a method using two subsequent f2 pulses during presentation of a longer f1 pulse was introduced to measure fast ipsilateral adaptation effects on separated DPOAE components. Compensating primary-tone onsets for their latencies at the f2-tonotopic place, the average adaptation measured in four normal-hearing subjects was 5.0 dB with a time constant below 5 ms. In the present study, two experiments were performed to determine the origin of this ultra-fast ipsilateral adaptation effect. The first experiment measured ultra-fast ipsilateral adaptation using a two-pulse paradigm at three frequencies in the four subjects, while controlling for visual attention of the subjects. The other experiment also controlled for visual attention, but utilized a sequence of f2 short pulses in the presence of a continuous f1 tone to sample ipsilateral adaptation effects with longer time constants in eight subjects. In the first experiment, no significant change in the ultra-fast adaptation between non-directed attention and visual attention could be detected. In contrast, the second experiment revealed significant changes in the magnitude of the slower ipsilateral adaptation in the visual-attention condition. In conclusion, the lack of an attentional influence indicates that the ultra-fast ipsilateral DPOAE adaptation is not solely mediated by the medial olivocochlear reflex.
Doerschner, K.; Boyaci, H.; Maloney, L. T.
2007-01-01
We investigated limits on the human visual system’s ability to discount directional variation in complex light fields when estimating Lambertian surface color. Directional variation in the light field was represented in the frequency domain using spherical harmonics. The bidirectional reflectance distribution function of a Lambertian surface acts as a low-pass filter on directional variation in the light field. Consequently, the visual system needs to discount only the low-pass component of the incident light corresponding to the first nine terms of a spherical harmonics expansion (Basri & Jacobs, 2001; Ramamoorthi & Hanrahan, 2001) to accurately estimate surface color. We test experimentally whether the visual system discounts directional variation in the light field up to this physical limit. Our results are consistent with the claim that the visual system can compensate for all of the complexity in the light field that affects the appearance of Lambertian surfaces. PMID:18053846
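For orientation, the low-pass result the abstract invokes (from the cited Basri & Jacobs, 2001, and Ramamoorthi & Hanrahan, 2001 papers) can be summarized as the spherical-harmonic expansion of irradiance below; this is a restatement of the cited references, not the article's own derivation.

```latex
% Irradiance on a Lambertian surface with normal n, written in spherical
% harmonics Y_{lm}; L_{lm} are the lighting coefficients and \hat{A}_l the
% Lambertian kernel coefficients (after Basri & Jacobs; Ramamoorthi & Hanrahan).
E(\mathbf{n}) \;=\; \sum_{l=0}^{\infty}\sum_{m=-l}^{l} \hat{A}_l \, L_{lm} \, Y_{lm}(\mathbf{n}),
\qquad
\hat{A}_0 = \pi,\quad \hat{A}_1 = \tfrac{2\pi}{3},\quad \hat{A}_2 = \tfrac{\pi}{4},\quad \hat{A}_3 = 0,\;\dots
```

Because these kernel coefficients fall off so quickly, truncating the expansion at l ≤ 2 keeps 1 + 3 + 5 = 9 coefficients while retaining nearly all of the reflected energy, which is the "first nine terms" referred to in the abstract.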
Ravens, Corvus corax, follow gaze direction of humans around obstacles.
Bugnyar, Thomas; Stöwe, Mareike; Heinrich, Bernd
2004-01-01
The ability to follow gaze (i.e. head and eye direction) has recently been shown for social mammals, particularly primates. In most studies, individuals could use gaze direction as a behavioural cue without understanding that the view of others may be different from their own. Here, we show that hand-raised ravens not only visually co-orient with the look-ups of a human experimenter but also reposition themselves to follow the experimenter's gaze around a visual barrier. Birds were capable of visual co-orientation already as fledglings but consistently tracked gaze direction behind obstacles not before six months of age. These results raise the possibility that sub-adult and adult ravens can project a line of sight for the other person into the distance. To what extent ravens may attribute mental significance to the visual behaviour of others is discussed. PMID:15306330
Mendoza-Halliday, Diego; Martinez-Trujillo, Julio C.
2017-01-01
The primate lateral prefrontal cortex (LPFC) encodes visual stimulus features while they are perceived and while they are maintained in working memory. However, it remains unclear whether perceived and memorized features are encoded by the same or different neurons and population activity patterns. Here we record LPFC neuronal activity while monkeys perceive the motion direction of a stimulus that remains visually available, or memorize the direction if the stimulus disappears. We find neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons encode both to similar degrees while others preferentially or exclusively encode either one. Reading out the combined activity of all neurons, a machine-learning algorithm reliably decodes the motion direction and determines whether it is perceived or memorized. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features. PMID:28569756
Lateral Spread of Orientation Selectivity in V1 is Controlled by Intracortical Cooperativity
Chavane, Frédéric; Sharon, Dahlia; Jancke, Dirk; Marre, Olivier; Frégnac, Yves; Grinvald, Amiram
2011-01-01
Neurons in the primary visual cortex receive subliminal information originating from the periphery of their receptive fields (RF) through a variety of cortical connections. In the cat primary visual cortex, long-range horizontal axons have been reported to preferentially bind to distant columns of similar orientation preferences, whereas feedback connections from higher visual areas provide a more diverse functional input. To understand the role of these lateral interactions, it is crucial to characterize their effective functional connectivity and tuning properties. However, the overall functional impact of cortical lateral connections, whatever their anatomical origin, is unknown since it has never been directly characterized. Using direct measurements of postsynaptic integration in cat areas 17 and 18, we performed multi-scale assessments of the functional impact of visually driven lateral networks. Voltage-sensitive dye imaging showed that local oriented stimuli evoke an orientation-selective activity that remains confined to the cortical feedforward imprint of the stimulus. Beyond a distance of one hypercolumn, the lateral spread of cortical activity gradually lost its orientation preference approximated as an exponential with a space constant of about 1 mm. Intracellular recordings showed that this loss of orientation selectivity arises from the diversity of converging synaptic input patterns originating from outside the classical RF. In contrast, when the stimulus size was increased, we observed orientation-selective spread of activation beyond the feedforward imprint. We conclude that stimulus-induced cooperativity enhances the long-range orientation-selective spread. PMID:21629708
Sexual Orientation-Related Differences in Virtual Spatial Navigation and Spatial Search Strategies.
Rahman, Qazi; Sharp, Jonathan; McVeigh, Meadhbh; Ho, Man-Ling
2017-07-01
Spatial abilities are generally hypothesized to differ between men and women, and people with different sexual orientations. According to the cross-sex shift hypothesis, gay men are hypothesized to perform in the direction of heterosexual women and lesbian women in the direction of heterosexual men on cognitive tests. This study investigated sexual orientation differences in spatial navigation and strategy during a virtual Morris water maze task (VMWM). Forty-four heterosexual men, 43 heterosexual women, 39 gay men, and 34 lesbian/bisexual women (aged 18-54 years) navigated a desktop VMWM and completed measures of intelligence, handedness, and childhood gender nonconformity (CGN). We quantified spatial learning (hidden platform trials), probe trial performance, and cued navigation (visible platform trials). Spatial strategies during hidden and probe trials were classified into visual scanning, landmark use, thigmotaxis/circling, and enfilading. In general, heterosexual men scored better than women and gay men on some spatial learning and probe trial measures and used more visual scan strategies. However, some differences disappeared after controlling for age and estimated IQ (e.g., in visual scanning heterosexual men differed from women but not gay men). Heterosexual women did not differ from lesbian/bisexual women. For both sexes, visual scanning predicted probe trial performance. More feminine CGN scores were associated with lower performance among men and greater performance among women on specific spatial learning or probe trial measures. These results provide mixed evidence for the cross-sex shift hypothesis of sexual orientation-related differences in spatial cognition.
Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki
2008-01-01
The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms in the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. Perceived temporal order of auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (ie heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the biases in the temporal-order judgments of auditory events were caused by concurrent actual self-motion with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' long exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, large-field induced (ie optic flow) self-motion can affect the temporal order of successive external events across various modalities.
Choice reaction time to visual motion during prolonged rotary motion in airline pilots
NASA Technical Reports Server (NTRS)
Stewart, J. D.; Clark, B.
1975-01-01
Thirteen airline pilots were studied to determine the effect of preceding rotary accelerations on the choice reaction time to the horizontal acceleration of a vertical line on a cathode-ray tube. On each trial, one of three levels of rotary and visual acceleration was presented with the rotary stimulus preceding the visual by one of seven periods. The two accelerations were always equal and were presented in the same or opposite directions. The reaction time was found to increase with increases in the time the rotary acceleration preceded the visual acceleration, and to decrease with increased levels of visual and rotary acceleration. The reaction time was found to be shorter when the accelerations were in the same direction than when they were in opposite directions. These results suggest that these findings are a special case of a general effect that the authors have termed 'gyrovisual modulation'.
Neutrophil to Lymphocyte Ratio in Patients with Nonarteritic Anterior Ischemic Optic Neuropathy
Yigit, Musa; Tok, Levent; Tok, Ozlem
2017-01-01
Purpose: To evaluate the neutrophil to lymphocyte ratio (NLR) in patients with nonarteritic anterior ischemic optic neuropathy (NAION). Methods: We investigated 112 subjects comprising 56 patients with NAION and 56 healthy controls at Süleyman Demirel University. Complete blood count, demographic, and clinical data from NAION patients were evaluated in this study. The NLR was calculated in all individuals and compared between the patient and control groups. Cut-off values were also determined. Then, the relationship between NLR and visual outcomes was investigated. Results: The cut-off value for NLR was 1.64. NLR values were significantly higher in NAION patients than in healthy subjects (p < 0.001) and were directly correlated with erythrocyte sedimentation rate levels (r = 0.263, p = 0.006). Also, the NLR value was associated with visual outcomes. Receiver operating characteristic curve analysis revealed a 0.63 area under the curve (confidence interval, 53.7% to 74.1%), 85% sensitivity and 41% specificity at the cut-off NLR value. Conclusions: The NLR may be a biomarker with good sensitivity that is quick, cost effective and easily detected in serum. It can be used in clinical practice to predict a NAION patient's prognosis in terms of visual outcomes. PMID:28367045
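For concreteness, the ratio and the reported cut-off can be expressed in a few lines. In this sketch only the 1.64 threshold comes from the abstract; the example counts are made up for illustration.

```python
# Minimal sketch of the neutrophil-to-lymphocyte ratio and the cut-off
# reported in the abstract above. Example counts are hypothetical.
def nlr(neutrophils_per_uL, lymphocytes_per_uL):
    """Neutrophil-to-lymphocyte ratio from absolute blood counts."""
    return neutrophils_per_uL / lymphocytes_per_uL

NLR_CUTOFF = 1.64  # cut-off value reported in the abstract

value = nlr(4200, 2100)            # hypothetical counts per microliter
print(value, value > NLR_CUTOFF)   # 2.0 True -> above the reported cut-off
```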
Using a virtual world for robot planning
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian
2012-06-01
We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
Effect of organizational strategy on visual memory in patients with schizophrenia.
Kim, Myung-Sun; Namgoong, Yoon; Youn, Tak
2008-08-01
The aim of the present study was to examine how copy organization mediated immediate recall among patients with schizophrenia using the Rey-Osterrieth Complex Figure Test (ROCF). The Boston Qualitative Scoring System (BQSS) was applied for qualitative and quantitative analyses of ROCF performances. Subjects included 20 patients with schizophrenia and 20 age- and gender-matched healthy controls. During the copy condition, the schizophrenia group and the control group differed in fragmentation; during the immediate recall condition, the two groups differed in configural presence and planning; and during the delayed recall condition, they differed in several qualitative measurements, including configural presence, cluster presence/placement, detail presence/placement, fragmentation, planning, and neatness. The two groups also differed in several quantitative measurements, including immediate presence and accuracy, immediate retention, delayed retention, and organization. Although organizational strategies used during the copy condition mediated the difference between the two groups during the immediate recall condition, group also had a significant direct effect on immediate recall. Schizophrenia patients are deficient in visual memory, and a piecemeal approach to the figure and organizational deficit seem to be related to the visual memory deficit. But schizophrenia patients also appeared to have some memory problems, including retention and/or retrieval deficits.
Influence of Visual Prism Adaptation on Auditory Space Representation.
Pochopien, Klaudia; Fahle, Manfred
2017-01-01
Prisms shifting the visual input sideways produce a mismatch between the visual and felt position of one's hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes only indirectly to auditory space perception.
New Technique of High-Performance Torque Control Developed for Induction Machines
NASA Technical Reports Server (NTRS)
Kenny, Barbara H.
2003-01-01
Two forms of high-performance torque control for motor drives have been described in the literature: field orientation control and direct torque control. Field orientation control has been the method of choice for previous NASA electromechanical actuator research efforts with induction motors. Direct torque control has the potential to offer some advantages over field orientation, including ease of implementation and faster response. However, the most common form of direct torque control is not suitable for the high-speed, low-stator-flux-linkage induction machines designed for electromechanical actuators with the presently available sample rates of digital control systems (higher sample rates are required). In addition, this form of direct torque control is not suitable for the addition of a high-frequency carrier signal necessary for the "self-sensing" (sensorless) position estimation technique. This technique enables low- and zero-speed position sensorless operation of the machine. Sensorless operation is desirable to reduce the number of necessary feedback signals and transducers, thus improving the reliability and reducing the mass and volume of the system. This research was directed at developing an alternative form of direct torque control known as a "deadbeat," or inverse model, solution. This form uses pulse-width modulation of the voltage applied to the machine, thus reducing the necessary sample and switching frequency for the high-speed NASA motor. In addition, the structure of the deadbeat form allows the addition of the high-frequency carrier signal so that low- and zero-speed sensorless operation is possible. The new deadbeat solution is based on using the stator and rotor flux as state variables. This choice of state variables leads to a simple graphical representation of the solution as the intersection of a constant torque line with a constant stator flux circle. Previous solutions have been expressed only in complex mathematical terms without a method to clearly visualize the solution. The graphical technique allows a more insightful understanding of the operation of the machine under various conditions.
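The graphical solution described above can be sketched with a standard flux-based torque expression; the symbols and constants below are common textbook quantities, shown for illustration rather than taken from the article.

```latex
% Electromagnetic torque in terms of stator and rotor flux linkages
% (P: pole count; L_m, L_s, L_r: magnetizing, stator, rotor inductances).
T_e \;=\; \frac{3P}{4}\,\frac{L_m}{\sigma L_s L_r}
          \left(\lambda_{dr}\,\lambda_{qs} \;-\; \lambda_{qr}\,\lambda_{ds}\right),
\qquad
\sigma \;=\; 1 - \frac{L_m^{2}}{L_s L_r}.
```

Holding the rotor flux approximately constant over one sample period, a commanded torque defines a straight line in the stator flux plane, while the commanded stator flux magnitude defines a circle; the deadbeat stator flux for the next sample period is their intersection, matching the graphical picture the abstract describes.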
Effects of Visual Information on Wind-Evoked Escape Behavior of the Cricket, Gryllus bimaculatus.
Kanou, Masamichi; Matsuyama, Akane; Takuwa, Hiroyuki
2014-09-01
We investigated the effects of visual information on wind-evoked escape behavior in the cricket, Gryllus bimaculatus. Most agitated crickets were found to retreat into a shelter made of cardboard installed in the test arena within a short time. As this behavior was thought to be a type of escape, we confirmed how a visual image of a shelter affected wind-evoked escape behavior. Irrespective of the brightness of the visual background (black or white) or the absence or presence of a shelter, escape jumps were oriented almost 180° opposite to the source of the air puff stimulus. Therefore, the direction of wind-evoked escape depended solely on the direction of the stimulus air puff. In contrast, the turning direction of the crickets during the escape was affected by the position of the visual image of the shelter. During the wind-evoked escape jump, most crickets turned in the direction in which a shelter was presented. This behavioral nature is presumably necessary for crickets to retreat into a shelter within a short time after their escape jump.
Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination.
Wiemers, Michael; Fischer, Martin H
2016-01-01
Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.
Whitwell, Robert L.; Ganel, Tzvi; Byrne, Caitlin M.; Goodale, Melvyn A.
2015-01-01
Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. “Natural” prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object (“haptics-based object information”) once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets (“grip scaling”) when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF’s grip scaling slopes. In the second experiment, we examined an “unnatural” grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time-pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased RT, slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real time-pantomime grasping task. These effects seem to be independent of those that arise from using the mirror in general as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that the grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to the widths of the target. Thus, using the mirror has real consequences on grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts. PMID:25999834
Kinematic control of robot with degenerate wrist
NASA Technical Reports Server (NTRS)
Barker, L. K.; Moore, M. C.
1984-01-01
Kinematic resolved rate equations allow an operator with visual feedback to dynamically control a robot hand. When the robot wrist is degenerate, the computed joint angle rates exceed operational limits, and unwanted hand movements can result. The generalized matrix inverse solution can also produce unwanted responses. A method is introduced to control the robot hand in the region of the degenerate robot wrist. The method uses a coordinated movement of the first and third joints of the robot wrist to locate the second wrist joint axis for movement of the robot hand in the commanded direction. The method does not entail infinite joint angle rates.
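The resolved-rate framing in this abstract lends itself to a small illustration. The sketch below is not the authors' coordinated first/third-joint method; it shows the generic resolved-rate update the problem starts from, using a damped least-squares inverse (one common safeguard near a singularity) and a rate clip standing in for the operational limits mentioned above. The Jacobian, damping factor, and rate limit are hypothetical values chosen only for the example.

```python
import numpy as np

def resolved_rate_step(jacobian, desired_hand_velocity, damping=0.01, rate_limit=1.0):
    """One resolved-rate update: map a commanded hand velocity to joint rates.

    Near a degenerate (singular) wrist a plain (pseudo)inverse yields very large
    joint rates; a damped least-squares inverse keeps them bounded, and a clip
    stands in for operational joint-rate limits.
    """
    J = np.asarray(jacobian, dtype=float)
    v = np.asarray(desired_hand_velocity, dtype=float)
    JJt = J @ J.T
    damped_inverse = J.T @ np.linalg.inv(JJt + (damping ** 2) * np.eye(JJt.shape[0]))
    joint_rates = damped_inverse @ v
    return np.clip(joint_rates, -rate_limit, rate_limit)

# Hypothetical 6-DOF arm whose last two wrist axes are nearly aligned (degenerate wrist).
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 6))
J[:, 5] = 0.999 * J[:, 4]                           # near-degenerate wrist
v_cmd = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.0])   # commanded hand velocity
print(resolved_rate_step(J, v_cmd))
```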
Dakin, Roslyn; Fellows, Tyee K; Altshuler, Douglas L
2016-08-02
Information about self-motion and obstacles in the environment is encoded by optic flow, the movement of images on the eye. Decades of research have revealed that flying insects control speed, altitude, and trajectory by a simple strategy of maintaining or balancing the translational velocity of images on the eyes, known as pattern velocity. It has been proposed that birds may use a similar algorithm but this hypothesis has not been tested directly. We examined the influence of pattern velocity on avian flight by manipulating the motion of patterns on the walls of a tunnel traversed by Anna's hummingbirds. Contrary to prediction, we found that lateral course control is not based on regulating nasal-to-temporal pattern velocity. Instead, birds closely monitored feature height in the vertical axis, and steered away from taller features even in the absence of nasal-to-temporal pattern velocity cues. For vertical course control, we observed that birds adjusted their flight altitude in response to upward motion of the horizontal plane, which simulates vertical descent. Collectively, our results suggest that birds avoid collisions using visual cues in the vertical axis. Specifically, we propose that birds monitor the vertical extent of features in the lateral visual field to assess distances to the side, and vertical pattern velocity to avoid collisions with the ground. These distinct strategies may derive from greater need to avoid collisions in birds, compared with small insects.
In-vivo immunofluorescence confocal microscopy of herpes simplex virus type 1 keratitis
NASA Astrophysics Data System (ADS)
Kaufman, Stephen C.; Laird, Jeffery A.; Beuerman, Roger W.
1996-05-01
The white-light confocal microscope offers an in vivo, cellular-level resolution view of the cornea. This instrument has proven to be a valuable research and diagnostic tool for the study of infectious keratitis. In this study, we investigate the direct visualization of herpes simplex virus type 1 (HSV-1)-infected corneal epithelium, with in vivo confocal microscopy, using HSV-1 immunofluorescent antibodies. New Zealand white rabbits were infected with the McKrae strain of HSV-1 in one eye; the other eye of each rabbit was used as an uninfected control. Four days later, the rabbits were anesthetized, a cellulose sponge was applied to each cornea, and a drop of direct HSV fluorescein-tagged antibody was placed on each sponge every 3 to 5 minutes for 1 hour. Fluorescence confocal microscopy was then performed. The HSV-infected corneas showed broad regions of hyperfluorescent epithelial cells. The uninfected corneas revealed no background fluorescence. Thus, using the confocal microscope with a fluorescent cube, we were able to visualize HSV-infected corneal epithelial cells tagged with a direct fluorescent antibody. This process may prove to be a useful clinical tool for the in vivo diagnosis of HSV keratitis.
Interfacial instability of wormlike micellar solutions sheared in a Taylor-Couette cell
NASA Astrophysics Data System (ADS)
Mohammadigoushki, Hadi; Muller, Susan J.
2014-10-01
We report experiments on wormlike micellar solutions sheared in a custom-made Taylor-Couette (TC) cell. The computer-controlled TC cell allows us to rotate both cylinders independently. Wormlike micellar solutions containing water, CTAB, and NaNO3 with different compositions are highly elastic and exhibit shear banding within a range of shear rates. We visualized the flow field in the θ-z as well as r-z planes, using multiple cameras. When subject to low shear rates, the flow is stable and azimuthal, but becomes unstable above a certain threshold shear rate. This shear rate coincides with the onset of shear banding. Visualizing the θ-z plane shows that this instability is characterized by stationary bands equally spaced in the z direction. Increasing the shear rate results in larger wavelengths. Above a critical shear rate, experiments reveal chaotic behavior reminiscent of elastic turbulence. We also studied the effect of ramp speed on the onset of instability and report an acceleration below which the critical Weissenberg number for onset of instability is unaffected. Moreover, visualizations in the r-z plane reveal that the interface between the two bands undulates. The shear band evolves towards the outer cylinder upon increasing the shear rate, regardless of which cylinder is rotating.
Spiegel, Daniel P.; Hansen, Bruce C.; Byblow, Winston D.; Thompson, Benjamin
2012-01-01
Transcranial direct current stimulation (tDCS) is a safe, non-invasive technique for transiently modulating the balance of excitation and inhibition within the human brain. It has been reported that anodal tDCS can reduce both GABA mediated inhibition and GABA concentration within the human motor cortex. As GABA mediated inhibition is thought to be a key modulator of plasticity within the adult brain, these findings have broad implications for the future use of tDCS. It is important, therefore, to establish whether tDCS can exert similar effects within non-motor brain areas. The aim of this study was to assess whether anodal tDCS could reduce inhibitory interactions within the human visual cortex. Psychophysical measures of surround suppression were used as an index of inhibition within V1. Overlay suppression, which is thought to originate within the lateral geniculate nucleus (LGN), was also measured as a control. Anodal stimulation of the occipital poles significantly reduced psychophysical surround suppression, but had no effect on overlay suppression. This effect was specific to anodal stimulation as cathodal stimulation had no effect on either measure. These psychophysical results provide the first evidence for tDCS-induced reductions of intracortical inhibition within the human visual cortex. PMID:22563485
Visual attention for a desktop virtual environment with ambient scent
Toet, Alexander; van Schaik, Martin G.
2013-01-01
In the current study participants explored a desktop virtual environment (VE) representing a suburban neighborhood with signs of public disorder (neglect, vandalism, and crime), while being exposed to either room air (control group), or subliminal levels of tar (unpleasant; typically associated with burned or waste material) or freshly cut grass (pleasant; typically associated with natural or fresh material) ambient odor. They reported all signs of disorder they noticed during their walk together with their associated emotional response. Based on recent evidence that odors reflexively direct visual attention to (either semantically or affectively) congruent visual objects, we hypothesized that participants would notice more signs of disorder in the presence of ambient tar odor (since this odor may bias attention to unpleasant and negative features), and fewer signs of disorder in the presence of ambient grass odor (since this odor may bias visual attention toward the vegetation in the environment and away from the signs of disorder). Contrary to our expectations, the results provide no indication that the presence of an ambient odor affected the participants’ visual attention for signs of disorder or their emotional response. However, the paradigm used in the present study does not allow us to draw any conclusions in this respect. We conclude that a closer affective, semantic, or spatiotemporal link between the contents of a desktop VE and ambient scents may be required to effectively establish diagnostic associations that guide a user’s attention. In the absence of these direct links, ambient scent may be more diagnostic for the physical environment of the observer as a whole than for the particular items in that environment (or, in this case, items represented in the VE). PMID:24324453
Arcaro, Michael J; Thaler, Lore; Quinlan, Derek J; Monaco, Simona; Khan, Sarah; Valyear, Kenneth F; Goebel, Rainer; Dutton, Gordon N; Goodale, Melvyn A; Kastner, Sabine; Culham, Jody C
2018-05-09
Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions to occipitotemporal cortex that include most early visual cortex and complete blindness in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions like catching moving balls. Comparisons of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveal that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon. Copyright © 2018 Elsevier Ltd. All rights reserved.
Solnik, Stanislaw; Qiao, Mu; Latash, Mark L.
2017-01-01
This study tested two hypotheses on the nature of unintentional force drifts elicited by removing visual feedback during accurate force production tasks. The role of working memory (memory hypothesis) was explored in tasks with continuous force production, intermittent force production, and rest intervals over the same time interval. The assumption of unintentional drifts in referent coordinate for the fingertips was tested using manipulations of visual feedback: Young healthy subjects performed accurate steady-state force production tasks by pressing with the two index fingers on individual force sensors with visual feedback on the total force, sharing ratio, both, or none. Predictions based on the memory hypothesis have been falsified. In particular, we observed consistent force drifts to lower force values during continuous force production trials only. No force drift or drifts to higher forces were observed during intermittent force production trials and following rest intervals. The hypotheses based on the idea of drifts in referent finger coordinates have been confirmed. In particular, we observed superposition of two drift processes: A drift of total force to lower magnitudes and a drift of the sharing ratio to 50:50. When visual feedback on total force only was provided, the two finger forces showed drifts in opposite directions. We interpret the findings as evidence for the control of motor actions with changes in referent coordinates for participating effectors. Unintentional drifts in performance are viewed as natural relaxation processes in the involved systems; their typical time reflects stability in the direction of the drift. The magnitude of the drift was higher in the right (dominant) hand, which is consistent with the dynamic dominance hypothesis. PMID:28168396
Visual short-term memory load reduces retinotopic cortex response to contrast.
Konstantinou, Nikos; Bahrami, Bahador; Rees, Geraint; Lavie, Nilli
2012-11-01
Load Theory of attention suggests that high perceptual load in a task leads to reduced sensory visual cortex response to task-unrelated stimuli resulting in "load-induced blindness" [e.g., Lavie, N. Attention, distraction and cognitive control under load. Current Directions in Psychological Science, 19, 143-148, 2010; Lavie, N. Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82, 2005]. Consideration of the findings that visual STM (VSTM) involves sensory recruitment [e.g., Pasternak, T., & Greenlee, M. Working memory in primate sensory systems. Nature Reviews Neuroscience, 6, 97-107, 2005] within Load Theory led us to a new hypothesis regarding the effects of VSTM load on visual processing. If VSTM load draws on sensory visual capacity, then similar to perceptual load, high VSTM load should also reduce visual cortex response to incoming stimuli leading to a failure to detect them. We tested this hypothesis with fMRI and behavioral measures of visual detection sensitivity. Participants detected the presence of a contrast increment during the maintenance delay in a VSTM task requiring maintenance of color and position. Increased VSTM load (manipulated by increased set size) led to reduced retinotopic visual cortex (V1-V3) responses to contrast as well as reduced detection sensitivity, as we predicted. Additional visual detection experiments established a clear tradeoff between the amount of information maintained in VSTM and detection sensitivity, while ruling out alternative accounts for the effects of VSTM load in terms of differential spatial allocation strategies or task difficulty. These findings extend Load Theory to demonstrate a new form of competitive interactions between early visual cortex processing and visual representations held in memory under load and provide a novel line of support for the sensory recruitment hypothesis of VSTM.
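The detection-sensitivity measure reported above is a standard signal-detection quantity. As a rough illustration (the abstract does not spell out the computation), the following sketch derives d' from hit and false-alarm counts using the usual z-transform, with a log-linear correction for extreme proportions; the trial counts are hypothetical.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' from trial counts (standard signal-detection theory).

    A log-linear correction (add 0.5 to each cell) avoids infinite z-scores when
    hit or false-alarm rates reach 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for a low-load and a high-load VSTM condition.
print(d_prime(42, 8, 10, 40))   # low load: higher sensitivity
print(d_prime(30, 20, 18, 32))  # high load: lower sensitivity
```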
Neuroanatomical and Cognitive Mediators of Age-Related Differences in Episodic Memory
Head, Denise; Rodrigue, Karen M.; Kennedy, Kristen M.; Raz, Naftali
2009-01-01
Aging is associated with declines in episodic memory. In this study, the authors used a path analysis framework to explore the mediating role of differences in brain structure, executive functions, and processing speed in age-related differences in episodic memory. Measures of regional brain volume (prefrontal gray and white matter, caudate, hippocampus, visual cortex), executive functions (working memory, inhibitory control, task switching, temporal processing), processing speed, and episodic memory were obtained in a sample of young and older adults. As expected, age was linked to reduction in regional brain volumes and cognitive performance. Moreover, neural and cognitive factors completely mediated age differences in episodic memory. Whereas hippocampal shrinkage directly affected episodic memory, prefrontal volumetric reductions influenced episodic memory via limitations in working memory and inhibitory control. Age-related slowing predicted reduced efficiency in temporal processing, working memory, and inhibitory control. Lastly, poorer temporal processing directly affected episodic memory. No direct effects of age on episodic memory remained once these factors were taken into account. These analyses highlight the value of a multivariate approach with the understanding of complex relationships in cognitive and brain aging. PMID:18590361
The effect of saccade metrics on the corollary discharge contribution to perceived eye location
Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.
2015-01-01
Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955
How Ants Use Vision When Homing Backward.
Schwarz, Sebastian; Mangan, Michael; Zeil, Jochen; Webb, Barbara; Wystrach, Antoine
2017-02-06
Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3-5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Carpenter-Smith, Theodore R.; Futamura, Robert G.; Parker, Donald E.
1995-01-01
The present study focused on the development of a procedure to assess perceived self-motion induced by visual surround motion - vection. Using an apparatus that permitted independent control of visual and inertial stimuli, prone observers were translated along their head x-axis (fore/aft). The observers' task was to report the direction of self-motion during passive forward and backward translations of their bodies coupled with exposure to various visual surround conditions. The proportion of 'forward' responses was used to calculate each observer's point of subjective equality (PSE) for each surround condition. The results showed that the moving visual stimulus produced a significant shift in the PSE when data from the moving surround condition were compared with the stationary surround and no-vision condition. Further, the results indicated that vection increased monotonically with surround velocities between 4 and 40/s. It was concluded that linear vection can be measured in terms of changes in the amplitude of whole-body inertial acceleration required to elicit equivalent numbers of 'forward' and 'backward' self-motion reports.
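One common way to obtain a point of subjective equality from the proportion of 'forward' responses, as described above, is to fit a logistic psychometric function and read off its 50% point. The sketch below assumes that approach; the acceleration levels and response proportions are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, pse, slope):
    """Logistic psychometric function: probability of a 'forward' report."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

# Hypothetical data: signed inertial acceleration amplitudes (positive = forward)
# and the observed proportion of 'forward' responses at each level.
accel = np.array([-0.4, -0.2, -0.1, 0.0, 0.1, 0.2, 0.4])
p_forward = np.array([0.05, 0.20, 0.35, 0.55, 0.70, 0.90, 0.98])

(pse, slope), _ = curve_fit(psychometric, accel, p_forward, p0=[0.0, 0.1])
print(f"Point of subjective equality: {pse:.3f} (same units as the stimulus axis)")
```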
The trait of sensory processing sensitivity and neural responses to changes in visual scenes
Xu, Xiaomeng; Aron, Arthur; Aron, Elaine; Cao, Guikang; Feng, Tingyong; Weng, Xuchu
2011-01-01
This exploratory study examined the extent to which individual differences in sensory processing sensitivity (SPS), a temperament/personality trait characterized by social, emotional and physical sensitivity, are associated with neural response in visual areas in response to subtle changes in visual scenes. Sixteen participants completed the Highly Sensitive Person questionnaire, a standard measure of SPS. Subsequently, they were tested on a change detection task while undergoing functional magnetic resonance imaging (fMRI). SPS was associated with significantly greater activation in brain areas involved in high-order visual processing (i.e. right claustrum, left occipitotemporal, bilateral temporal and medial and posterior parietal regions) as well as in the right cerebellum, when detecting minor (vs major) changes in stimuli. These findings remained strong and significant after controlling for neuroticism and introversion, traits that are often correlated with SPS. These results provide the first evidence of neural differences associated with SPS, the first direct support for the sensory aspect of this trait that has been studied primarily for its social and affective implications, and preliminary evidence for heightened sensory processing in individuals high in SPS. PMID:20203139
Experimental parametric study of jet vortex generators for flow separation control
NASA Technical Reports Server (NTRS)
Selby, Gregory
1991-01-01
A parametric wind-tunnel study was performed with jet vortex generators to determine their effectiveness in controlling flow separation associated with low-speed turbulent flow over a two-dimensional rearward-facing ramp. Results indicate that flow-separation control can be accomplished, with the level of control achieved being a function of jet speed, jet orientation (with respect to the free-stream direction), and orifice pattern (double row of jets vs. single row). Compared to slot blowing, jet vortex generators can provide an equivalent level of flow control over a larger spanwise region (for constant jet flow area and speed). Dye flow visualization tests in a water tunnel indicated that the most effective jet vortex generator configurations produced streamwise co-rotating vortices.
Visual Imagery without Visual Perception?
ERIC Educational Resources Information Center
Bertolo, Helder
2005-01-01
The question regarding visual imagery and visual perception remains an open issue. Many studies have tried to understand if the two processes share the same mechanisms or if they are independent, using different neural substrates. Most research has been directed towards the need for activation of primary visual areas during imagery. Here we review…
2016-09-01
The control described here is a Windows Presentation Foundation (WPF) control developed using the .NET framework in Microsoft Visual Studio. As a WPF control, it can be used in any WPF application as a graphical visual element. The purpose of the control is to visually display time-related events as vertical lines on a … available on the control. Subject terms: Windows Presentation Foundation, WPF, control, C#, .NET framework, Microsoft Visual Studio.
Visuomotor Transformation in the Fly Gaze Stabilization System
Huston, Stephen J; Krapp, Holger G
2008-01-01
For sensory signals to control an animal's behavior, they must first be transformed into a format appropriate for use by its motor systems. This fundamental problem is faced by all animals, including humans. Beyond simple reflexes, little is known about how such sensorimotor transformations take place. Here we describe how the outputs of a well-characterized population of fly visual interneurons, lobula plate tangential cells (LPTCs), are used by the animal's gaze-stabilizing neck motor system. The LPTCs respond to visual input arising from both self-rotations and translations of the fly. The neck motor system however is involved in gaze stabilization and thus mainly controls compensatory head rotations. We investigated how the neck motor system is able to selectively extract rotation information from the mixed responses of the LPTCs. We recorded extracellularly from fly neck motor neurons (NMNs) and mapped the directional preferences across their extended visual receptive fields. Our results suggest that—like the tangential cells—NMNs are tuned to panoramic retinal image shifts, or optic flow fields, which occur when the fly rotates about particular body axes. In many cases, tangential cells and motor neurons appear to be tuned to similar axes of rotation, resulting in a correlation between the coordinate systems the two neural populations employ. However, in contrast to the primarily monocular receptive fields of the tangential cells, most NMNs are sensitive to visual motion presented to either eye. This results in the NMNs being more selective for rotation than the LPTCs. Thus, the neck motor system increases its rotation selectivity by a comparatively simple mechanism: the integration of binocular visual motion information. PMID:18651791
Hesse, Constanze; Schenk, Thomas
2014-05-01
It has been suggested that while movements directed at visible targets are processed within the dorsal stream, movements executed after delay rely on the visual representations of the ventral stream (Milner & Goodale, 2006). This interpretation is supported by the observation that a patient with ventral stream damage (D.F.) has trouble performing accurate movements after a delay, but performs normally when the target is visible during movement programming. We tested D.F.'s visuomotor performance in a letter-posting task whilst varying the amount of visual feedback available. Additionally, we also varied whether D.F. received tactile feedback at the end of each trial (posting through a letter box vs posting on a screen) and whether environmental cues were available during the delay period (removing the target only vs suppressing vision completely with shutter glasses). We found that in the absence of environmental cues patient D.F. was unaffected by the introduction of delay and performed as accurately as healthy controls. However, when environmental cues and vision of the moving hand were available during and after the delay period, D.F.'s visuomotor performance was impaired. Thus, while healthy controls benefit from the availability of environmental landmarks and/or visual feedback of the moving hand, such cues seem less beneficial to D.F. Taken together our findings suggest that ventral stream damage does not always impact the ability to make delayed movements but compromises the ability to use environmental landmarks and visual feedback efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
The role of executive functioning in memory performance in pediatric focal epilepsy
Sepeta, Leigh N.; Casaletto, Kaitlin Blackstone; Terwilliger, Virginia; Facella-Ervolini, Joy; Sady, Maegan; Mayo, Jessica; Gaillard, William D.; Berl, Madison M.
2016-01-01
Objective: Learning and memory are essential for academic success and everyday functioning, but the pattern of memory skills and its relationship to executive functioning in children with focal epilepsy is not fully delineated. We address a gap in the literature by examining the relationship between memory and executive functioning in a pediatric focal epilepsy population. Methods: Seventy children with focal epilepsy and 70 typically developing children matched on age, intellectual functioning, and gender underwent neuropsychological assessment, including measures of intelligence (WASI/DAS), as well as visual (CMS Dot Locations) and verbal episodic memory (WRAML Story Memory and CVLT-C). Executive functioning was measured directly (WISC-IV Digit Span Backward; CELF-IV Recalling Sentences) and by parent report (Behavior Rating Inventory of Executive Function (BRIEF)). Results: Children with focal epilepsy had lower delayed free recall scores than controls across visual and verbal memory tasks (p = 0.02; partial η2 = .12). In contrast, recognition memory performance was similar for patients and controls (p = 0.36; partial η2 = .03). Children with focal epilepsy demonstrated difficulties in working memory (p = 0.02; partial η2 = .08) and planning/organization (p = 0.02) compared to controls. Working memory predicted 9–19% of the variance in delayed free recall for verbal and visual memory; organization predicted 9–10% of the variance in verbal memory. Patients with both left and right focal epilepsy demonstrated more difficulty on verbal versus visual tasks (p = 0.002). Memory performance did not differ by location of seizure foci (temporal vs. extra-temporal, frontal vs. extra-frontal). Significance: Children with focal epilepsy demonstrated memory ability within age-level expectations, but delayed free recall was inefficient compared to typically developing controls. Memory difficulties were not related to general cognitive impairment or seizure localization. Executive functioning accounted for significant variance in memory performance, suggesting that poor executive control negatively influences memory retrieval. PMID:28111742
Task set induces dynamic reallocation of resources in visual short-term memory.
Sheremata, Summer L; Shomstein, Sarah
2017-08-01
Successful interaction with the environment requires the ability to flexibly allocate resources to different locations in the visual field. Recent evidence suggests that visual short-term memory (VSTM) resources are distributed asymmetrically across the visual field based upon task demands. Here, we propose that context, rather than the stimulus itself, determines asymmetrical distribution of VSTM resources. To test whether context modulates the reallocation of resources to the right visual field, task set, defined by memory load, was manipulated to influence visual short-term memory performance. Performance was measured for single-feature objects embedded within predominantly single- or two-feature memory blocks. Therefore, context was varied to determine whether task set directly predicts changes in visual field biases. In accord with the dynamic reallocation of resources hypothesis, task set, rather than aspects of the physical stimulus, drove improvements in performance in the right visual field. Our results show, for the first time, that preparation for upcoming memory demands directly determines how resources are allocated across the visual field.
ERIC Educational Resources Information Center
Marshall, Lindsey; Meachem, Lester
2007-01-01
In this scoping study we have investigated the integration of subject-specific software into the structure of visual communications courses. There is a view that the response within visual communications courses to the rapid developments in technology has been linked to necessity rather than by design. Through perceptions of staff with day-to-day…
Name recognition in autism: EEG evidence of altered patterns of brain activity and connectivity.
Nowicka, Anna; Cygan, Hanna B; Tacikowski, Paweł; Ostaszewski, Paweł; Kuś, Rafał
2016-01-01
Impaired orienting to social stimuli is one of the core early symptoms of autism spectrum disorder (ASD). However, in contrast to faces, name processing has rarely been studied in individuals with ASD. Here, we investigated brain activity and functional connectivity associated with recognition of names in the high-functioning ASD group and in the control group. EEG was recorded in 15 young males with ASD and 15 matched one-to-one control individuals. EEG data were analyzed with the event-related potential (ERP), event-related desynchronization and event-related synchronization (ERD/S), as well as coherence and direct transfer function (DTF) methods. Four categories of names were presented visually: one's own, close-other's, famous, and unknown. Differences between the ASD and control groups were found for ERP, coherence, and DTF. In individuals with ASD, P300 (a positive ERP component) to own-name and to a close-other's name were similar whereas in control participants, P300 to own-name was enhanced when compared to all other names. Analysis of coherence and DTF revealed disruption of fronto-posterior task-related connectivity in individuals with ASD within the beta range frequencies. Moreover, DTF indicated the directionality of those impaired connections-they were going from parieto-occipital to frontal regions. DTF also showed inter-group differences in short-range connectivity: weaker connections within the frontal region and stronger connections within the occipital region in the ASD group in comparison to the control group. Our findings suggest a lack of the self-preference effect and impaired functioning of the attentional network during recognition of visually presented names in individuals with ASD.
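As a rough illustration of the coherence analysis mentioned above (not the authors' pipeline), the sketch below computes magnitude-squared coherence between two synthetic "channels" sharing a beta-band component. The directed transfer function additionally requires fitting a multivariate autoregressive model, which is beyond this snippet; the sampling rate, frequency band, and signals are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Two synthetic channels sharing a 20 Hz (beta-band) component plus independent noise,
# standing in for a frontal and a parieto-occipital electrode.
shared = np.sin(2 * np.pi * 20 * t)
frontal = shared + 0.8 * rng.standard_normal(t.size)
parieto_occipital = 0.7 * shared + 0.8 * rng.standard_normal(t.size)

freqs, coh = coherence(frontal, parieto_occipital, fs=fs, nperseg=512)
beta = (freqs >= 13) & (freqs <= 30)
print(f"Mean beta-band coherence: {coh[beta].mean():.2f}")
```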
Halder, S; Käthner, I; Kübler, A
2016-02-01
Auditory brain-computer interfaces are an assistive technology that can restore communication for motor impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users that may lose or have lost gaze control. We attempted to show that motor impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom with additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of five end-users learned to select symbols using this method. Averaged over all five end-users the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues, can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training and specifically end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
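The information transfer rates quoted above can be related to accuracy with the standard Wolpaw formula, bits per selection = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)), scaled by selections per minute. The sketch below applies it under assumed values (a 27-symbol speller and roughly 40 s per selection, neither of which is stated in the abstract); under those assumptions, 92% accuracy lands near the reported best-case rates.

```python
import math

def itr_bits_per_min(accuracy, n_classes, seconds_per_selection):
    """Wolpaw information transfer rate in bits per minute."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * (60.0 / seconds_per_selection)

# Assumed values: a 27-symbol speller at 92% accuracy, ~40 s per selection.
print(f"{itr_bits_per_min(0.92, 27, 40):.2f} bits/min")
```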
Kraemer, David J.M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.
2016-01-01
Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In two experiments, participants watched videos of routes through four virtual cities and were subsequently tested on their memory for observed landmarks and on their ability to make judgments regarding the relative directions of the different landmarks along the route. In the first experiment, self-report questionnaires measuring visual and verbal cognitive styles were administered to examine correlations between cognitive styles, landmark recognition, and judgments of relative direction. Results demonstrate a tradeoff in which the verbal cognitive style is more beneficial for recognizing individual landmarks than for judging relative directions between them, whereas the visual cognitive style is more beneficial for judging relative directions than for landmark recognition. In a second experiment, we manipulated the use of verbal and visual strategies by varying task instructions given to separate groups of participants. Results confirm that a verbal strategy benefits landmark memory, whereas a visual strategy benefits judgments of relative direction. The manipulation of strategy by altering task instructions appears to trump individual differences in cognitive style. Taken together, we find that processing different details during route encoding, whether due to individual proclivities (Experiment 1) or task instructions (Experiment 2), results in benefits for different components of navigation relevant information. These findings also highlight the value of considering multiple sources of individual differences as part of spatial cognition investigations. PMID:27668486
Wisniewski, Amy B; Prendeville, Mary T; Dobs, Adrian S
2005-04-01
This study examined the impact of sex hormones on functional cerebral hemispheric lateralization and cognition in a group of male-to-female transsexuals receiving cross-sex hormone therapy compared to eugonadal men with a male gender identity. Cerebral lateralization was measured with a handedness questionnaire and a visual-split-field paradigm and cognitive tests sensitive to sex hormone exposure (identical pictures, 3-D mental rotation, building memory) were also administered. Endocrine measures on the day of participation for transsexual and control subjects included total testosterone, free testosterone, estradiol, gonadotropins, and sex hormone binding globulin concentrations. Compared to controls, male-to-female transsexuals had elevated estradiol and sex hormone binding globulin concentrations and suppressed testosterone concentrations. Transsexual subjects showed a trend toward less exclusive right-handedness than controls. No group differences were observed on the visual-split-field or cognitive tasks. No direct associations were observed between endocrine measures and the laterality measures and cognitive performance. Previous observations of female-typical patterns in cerebral lateralization and cognitive performance in male-to-female transsexuals were not found in the current study.
Otolith Dysfunction Alters Exploratory Movement in Mice
Blankenship, Philip A.; Cherep, Lucia A.; Donaldson, Tia N.; Brockman, Sarah N.; Trainer, Alexandria D.; Yoder, Ryan M.; Wallace, Douglas G.
2017-01-01
The organization of rodent exploratory behavior appears to depend on self-movement cue processing. As of yet, however, no studies have directly examined the vestibular system’s contribution to the organization of exploratory movement. The current study sequentially segmented open field behavior into progressions and stops in order to characterize differences in movement organization between control and otoconia-deficient tilted mice under conditions with and without access to visual cues. Under completely dark conditions, tilted mice exhibited similar distance traveled and stop times overall, but had significantly more circuitous progressions, larger changes in heading between progressions, and less stable clustering of home bases, relative to control mice. In light conditions, control and tilted mice were similar on all measures except for the change in heading between progressions. This pattern of results is consistent with otoconia-deficient tilted mice using visual cues to compensate for impaired self-movement cue processing. This work provides the first empirical evidence that signals from the otolithic organs mediate the organization of exploratory behavior, based on a novel assessment of spatial orientation. PMID:28235587
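The path measures referred to above (circuitous progressions and heading changes between progressions) can be quantified in several ways; the sketch below shows one plausible implementation, not necessarily the authors': circuity as distance travelled over straight-line displacement, and heading change as the wrapped angular difference between successive progression vectors. The coordinates are made up for the example.

```python
import numpy as np

def circuity(path):
    """Ratio of distance travelled to straight-line displacement for one progression."""
    path = np.asarray(path, dtype=float)
    travelled = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
    displacement = np.linalg.norm(path[-1] - path[0])
    return travelled / displacement if displacement > 0 else np.inf

def heading_change(prev_path, next_path):
    """Absolute change in heading (degrees) between two successive progressions."""
    def heading(path):
        v = np.asarray(path[-1], dtype=float) - np.asarray(path[0], dtype=float)
        return np.degrees(np.arctan2(v[1], v[0]))
    diff = heading(next_path) - heading(prev_path)
    return abs((diff + 180.0) % 360.0 - 180.0)   # wrap to [0, 180]

# Hypothetical progressions (x, y coordinates in arbitrary units).
p1 = [(0, 0), (1, 0.2), (2, 0.1), (3, 0)]
p2 = [(3, 0), (3.5, 1.0), (4, 2.0)]
print(circuity(p1), heading_change(p1, p2))
```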
Aging and goal-directed emotional attention: distraction reverses emotional biases.
Knight, Marisa; Seymour, Travis L; Gaunt, Joshua T; Baker, Christopher; Nesmith, Kathryn; Mather, Mara
2007-11-01
Previous findings reveal that older adults favor positive over negative stimuli in both memory and attention (for a review, see Mather & Carstensen, 2005). This study used eye tracking to investigate the role of cognitive control in older adults' selective visual attention. Younger and older adults viewed emotional-neutral and emotional-emotional pairs of faces and pictures while their gaze patterns were recorded under full or divided attention conditions. Replicating previous eye-tracking findings, older adults allocated less of their visual attention to negative stimuli in negative-neutral stimulus pairings in the full attention condition than younger adults did. However, as predicted by a cognitive-control-based account of the positivity effect in older adults' information processing tendencies (Mather & Knight, 2005), older adults' tendency to avoid negative stimuli was reversed in the divided attention condition. Compared with younger adults, older adults' limited attentional resources were more likely to be drawn to negative stimuli when they were distracted. These findings indicate that emotional goals can have unintended consequences when cognitive control mechanisms are not fully available.