Axonal Conduction Delays, Brain State, and Corticogeniculate Communication
Stoelzel, Carl R; Bereshpolova, Yulia; Alonso, Jose-Manuel; Swadlow, Harvey A
2017-06-28
Thalamocortical conduction times are short, but layer 6 corticothalamic axons display an enormous range of conduction times, some exceeding 40–50 ms. Here, we investigate (1) how axonal conduction times of corticogeniculate (CG) neurons are related to the visual information conveyed to the thalamus, and (2) how alert versus nonalert awake brain states affect visual processing across the spectrum of CG conduction times. In awake female Dutch-Belted rabbits, we found 58% of CG neurons to be visually responsive, and 42% to be unresponsive. All responsive CG neurons had simple, orientation-selective receptive fields, and generated sustained responses to stationary stimuli. CG axonal conduction times were strongly related to modulated firing rates (F1 values) generated by drifting grating stimuli, and their associated interspike interval distributions, suggesting a continuum of visual responsiveness spanning the spectrum of axonal conduction times. CG conduction times were also significantly related to visual response latency, contrast sensitivity (C-50 values), directional selectivity, and optimal stimulus velocity. Increasing alertness did not cause visually unresponsive CG neurons to become responsive and did not change the response linearity (F1/F0 ratios) of visually responsive CG neurons. However, for visually responsive CG neurons, increased alertness nearly doubled the modulated response amplitude to optimal visual stimulation (F1 values), significantly shortened response latency, and dramatically increased response reliability. These effects of alertness were uniform across the broad spectrum of CG axonal conduction times. SIGNIFICANCE STATEMENT Corticothalamic neurons of layer 6 send a dense feedback projection to thalamic nuclei that provide input to sensory neocortex. While sensory information reaches the cortex after brief thalamocortical axonal delays, corticothalamic axons can exhibit conduction delays of <2 ms to 40–50 ms. 
Here, in the corticogeniculate visual system of awake rabbits, we investigate the functional significance of this axonal diversity, and the effects of shifting alert/nonalert brain states on corticogeniculate processing. We show that axonal conduction times are strongly related to multiple visual response properties, suggesting a continuum of visual responsiveness spanning the spectrum of corticogeniculate axonal conduction times. We also show that transitions between awake brain states powerfully affect corticogeniculate processing, in some ways more strongly than in layer 4. PMID:28559382
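The F1 and F0 measures used in this abstract are standard quantities: F0 is the mean firing rate, and F1 is the amplitude of the response component modulated at the grating's drift frequency; F1/F0 > 1 is the conventional criterion for response linearity ("simple-like" behavior). A minimal sketch of the usual Fourier-component estimator (not the authors' analysis code; the function and variable names are illustrative):

```python
import numpy as np

def f1_f0(spike_times, drift_hz, duration):
    """Mean rate (F0) and response amplitude at the stimulus drift
    frequency (F1), estimated as a Fourier component of the spike train.
    `spike_times` in seconds, over one recording of length `duration`."""
    spike_times = np.asarray(spike_times, dtype=float)
    f0 = len(spike_times) / duration            # mean rate, spikes/s
    phases = 2.0 * np.pi * drift_hz * spike_times
    # First-harmonic amplitude (factor 2 for one-sided amplitude).
    f1 = 2.0 * np.abs(np.sum(np.exp(-1j * phases))) / duration
    return f0, f1

# Perfectly regular firing at 10 spikes/s carries no power at a 2 Hz
# drift frequency, so F1 is ~0 while F0 is 10.
f0, f1 = f1_f0(np.arange(0, 10, 0.1), drift_hz=2.0, duration=10.0)
```

In practice F1 is computed per stimulus cycle and averaged, but the estimator above captures the quantity the abstract refers to.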
Peripheral visual response time and visual display layout
NASA Technical Reports Server (NTRS)
Haines, R. F.
1974-01-01
Experiments were performed on a group of 42 subjects in a study of their peripheral visual response time to visual signals under positive acceleration, during prolonged bedrest, at passive 70 deg head-up body tilt, under exposure to high air temperatures and high luminance levels, and under normal stress-free laboratory conditions. Diagrams are plotted for mean response times to white, red, yellow, green, and blue stimuli under different conditions.
Li, Yi; Chen, Yuren
2016-12-30
To make driving assistance systems more humanized, this study focused on predicting and assisting drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in the driver's field of view. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements had an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrity of a curve are significant factors for drivers' perception-response time.
Neocortical Rebound Depolarization Enhances Visual Perception
Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji
2015-01-01
Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866
Frýbort, Pavel; Kokštejn, Jakub; Musálek, Martin; Süss, Vladimír
2016-06-01
A soccer player's capability to control and manage his behaviour in a game situation is a prerequisite, reflecting not only swift and accurate tactical decision-making, but also prompt implementation of a motor task during intermittent exercise conditions. The purpose of this study was to analyse the relationship between varying exercise intensity and the visual-motor response time and the accuracy of motor response in an offensive game situation in soccer. The participants (n = 42) were male, semi-professional, soccer players (M age 18.0 ± 0.9 years) and trained five times a week. Each player performed four different modes of exercise intensity on the treadmill (motor inactivity, aerobic, intermittent and anaerobic activity). After the end of each exercise, visual-motor response time and accuracy of motor response were assessed. Players' motion was captured by digital video camera. ANOVA indicated no significant difference (p = 0.090) in the accuracy of motor response between the four exercise intensity modes. Practical significance (Z-test = 0.31) was found in visual-motor response time between exercise with dominant involvement of aerobic metabolism, and intense intermittent exercise. A medium size effect (Z-test = 0.34) was also found in visual-motor response time between exercise with dominant involvement of aerobic metabolism and exercise with dominant involvement of anaerobic metabolism, which was confirmed by ANOVA (897.02 ± 57.46 ms vs. 940.95 ± 71.14 ms; p = 0.002). The results showed that different modes of exercise intensity do not adversely affect the accuracy of motor responses; however, high-intensity exercise has a negative effect on visual-motor response time in comparison to moderate intensity exercise. 
Key points: Different exercise intensity modes did not affect the accuracy of motor response. Anaerobic, highly intensive short-term exercise significantly lengthened visual-motor response time in comparison with aerobic exercise. Further research should focus on the assessment of VMRT from a player's real field-position view rather than a perspective view. PMID:27274671
The effect of spectral filters on visual search in stroke patients.
Beasley, Ian G; Davies, Leon N
2013-01-01
Visual search impairment can occur following stroke. The utility of optimal spectral filters on visual search in stroke patients has not been considered to date. The present study measured the effect of optimal spectral filters on visual search response time and accuracy, using a task requiring serial processing. A stroke and control cohort undertook the task three times: (i) using an optimally selected spectral filter; (ii) the subjects were randomly assigned to two groups with group 1 using an optimal filter for two weeks, whereas group 2 used a grey filter for two weeks; (iii) the groups were crossed over with group 1 using a grey filter for a further two weeks and group 2 given an optimal filter, before undertaking the task for the final time. Initial use of an optimal spectral filter improved visual search response time but not error scores in the stroke cohort. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores. In fact, response times increased with the filter, regardless of its type, for stroke and control subjects; this outcome may be due to contrast reduction or a reflection of task design, given that significant practice effects were noted.
Responses to Targets in the Visual Periphery in Deaf and Normal-Hearing Adults
ERIC Educational Resources Information Center
Rothpletz, Ann M.; Ashmead, Daniel H.; Tharpe, Anne Marie
2003-01-01
The purpose of this study was to compare the response times of deaf and normal-hearing individuals to the onset of target events in the visual periphery in distracting and nondistracting conditions. Visual reaction times to peripheral targets placed at 3 eccentricities to the left and right of a center fixation point were measured in prelingually…
Marino, Robert A; Levy, Ron; Munoz, Douglas P
2015-08-01
Express saccades represent the fastest possible eye movements to visual targets with reaction times that approach minimum sensory-motor conduction delays. Previous work in monkeys has identified two specific neural signals in the superior colliculus (SC: a midbrain sensorimotor integration structure involved in gaze control) that are required to execute express saccades: 1) previsual activity consisting of a low-frequency increase in action potentials in sensory-motor neurons immediately before the arrival of a visual response; and 2) a transient visual-sensory response consisting of a high-frequency burst of action potentials in visually responsive neurons resulting from the appearance of a visual target stimulus. To better understand how these two neural signals interact to produce express saccades, we manipulated the arrival time and magnitude of visual responses in the SC by altering target luminance and we examined the corresponding influences on SC activity and express saccade generation. We recorded from saccade neurons with visual-, motor-, and previsual-related activity in the SC of monkeys performing the gap saccade task while target luminance was systematically varied between 0.001 and 42.5 cd/m² against a black background (∼0.0001 cd/m²). Our results demonstrated that 1) express saccade latencies were linked directly to the arrival time in the SC of visual responses produced by abruptly appearing visual stimuli; 2) express saccades were generated toward both dim and bright targets whenever sufficient previsual activity was present; and 3) target luminance altered the likelihood of producing an express saccade. When an express saccade was generated, visuomotor neurons increased their activity immediately before the arrival of the visual response in the SC and saccade initiation. 
Furthermore, the visual and motor responses of visuomotor neurons merged into a single burst of action potentials, while the visual response of visual-only neurons was unaffected. A linear combination model was used to test which SC signals best predicted the likelihood of producing an express saccade. In addition to visual response magnitude and previsual activity of saccade neurons, the model identified presaccadic activity (activity occurring during the 30-ms epoch immediately before saccade initiation) as a third important signal for predicting express saccades. We conclude that express saccades can be predicted by visual, previsual, and presaccadic signals recorded from visuomotor neurons in the intermediate layers of the SC. PMID:26063770
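The "linear combination model" in this abstract is not specified in detail here; one common way to combine several neural signals into a trial-by-trial prediction of express-saccade occurrence is logistic regression. A minimal sketch on simulated data (the signal names, weights, and simulation are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-trial SC signals (z-scored, arbitrary units):
# visual-burst magnitude, previsual activity, presaccadic activity.
X = rng.normal(size=(n, 3))

# Simulated ground truth: each signal raises express-saccade probability,
# presaccadic activity most strongly (weights are illustrative).
true_w = np.array([1.0, 1.5, 2.0])
p = 1.0 / (1.0 + np.exp(-(X @ true_w - 0.5)))
y = (rng.random(n) < p).astype(float)        # 1 = express saccade occurred

# Fit the linear combination by logistic regression (gradient ascent).
w = np.zeros(3)
b = 0.0
for _ in range(3000):
    z = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability
    w += 0.5 * X.T @ (y - z) / n             # log-likelihood gradient
    b += 0.5 * np.mean(y - z)
```

The fitted weights then rank the three signals by how strongly each predicts an express saccade, mirroring the model comparison described in the abstract.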
Comparing capacity coefficient and dual task assessment of visual multitasking workload
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaha, Leslie M.
Capacity coefficient analysis could offer a theoretically grounded alternative to subjective measures and dual task assessment of cognitive workload. Workload capacity, or workload efficiency, is a human information processing modeling construct defined as the amount of information that can be processed by the visual cognitive system in a specified amount of time. In this paper, I explore the relationship between capacity coefficient analysis of workload efficiency and dual task response time measures. To capture multitasking performance, I examine how the relatively simple assumptions underlying the capacity construct generalize beyond single visual decision-making tasks. The fundamental tools for measuring workload efficiency are the integrated hazard and reverse hazard functions of response times, which are defined by log transforms of the response time distribution. These functions are used in the capacity coefficient analysis to provide a functional assessment of the amount of work completed by the cognitive system over the entire range of response times. For the study of visual multitasking, capacity coefficient analysis enables a comparison of visual information throughput as the number of tasks increases from one to two to any number of simultaneous tasks. I illustrate the use of capacity coefficients for visual multitasking on sample data from dynamic multitasking in the modified Multi-attribute Task Battery.
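The integrated-hazard machinery described above can be made concrete. For an OR (first-terminating) design, the capacity coefficient of Townsend and colleagues compares the integrated hazard of dual-task response times to the sum of the single-task hazards, with H(t) = -log(1 - F(t)). A minimal sketch using empirical CDFs (function names are illustrative; this is not the paper's analysis code):

```python
import numpy as np

def integrated_hazard(rts, t):
    """Empirical integrated hazard H(t) = -log(1 - F(t)), where F is the
    empirical CDF of the response-time sample `rts`."""
    rts = np.sort(np.asarray(rts, dtype=float))
    F = np.searchsorted(rts, t, side="right") / len(rts)
    F = min(F, 1.0 - 1e-12)                  # keep the log finite
    return -np.log1p(-F)

def capacity_or(rt_dual, rt_a, rt_b, t):
    """OR-design capacity coefficient C(t) = H_dual / (H_a + H_b).
    C ~ 1: unlimited capacity; C < 1: limited; C > 1: super capacity."""
    return integrated_hazard(rt_dual, t) / (
        integrated_hazard(rt_a, t) + integrated_hazard(rt_b, t)
    )

# Sanity check: an independent parallel race between two exponential
# channels predicts H_dual = H_a + H_b, i.e. C(t) ~ 1.
rng = np.random.default_rng(0)
rt_a = rng.exponential(0.2, 20000)
rt_b = rng.exponential(0.2, 20000)
rt_dual = np.minimum(rng.exponential(0.2, 20000),
                     rng.exponential(0.2, 20000))
c = capacity_or(rt_dual, rt_a, rt_b, t=0.2)
```

Evaluating C(t) across the full range of t, rather than at a single point, gives the functional workload assessment the abstract describes.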
Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli
Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.
2010-01-01
Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
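The model class described here, a generalized linear model whose conditional intensity combines a stimulus filter with a spike-history filter, can be sketched in a few lines. Simulating from such a model shows how the history term shapes fine spike timing beyond what the stimulus alone drives. The filter shapes and rates below are illustrative assumptions, not fitted LGN values:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.001                                   # 1 ms bins
T = 5000                                     # 5 s of simulated time
pad = 20                                     # longest filter length

# Illustrative model components:
stim = rng.normal(size=T)                    # white-noise stand-in stimulus
k = 0.3 * np.exp(-np.arange(20) / 5.0)       # 20 ms stimulus filter
h = -4.0 * np.exp(-np.arange(10) / 2.0)      # suppressive history filter
b = np.log(20.0)                             # baseline log rate (~20 sp/s)

s = np.concatenate([np.zeros(pad), stim])    # zero-padded stimulus
y = np.zeros(T + pad)                        # spike counts per bin

for t in range(pad, T + pad):
    # Conditional intensity: exp(baseline + stimulus drive + history).
    drive = (b
             + k @ s[t - len(k):t][::-1]     # recent stimulus, newest first
             + h @ y[t - len(h):t][::-1])    # own recent spikes (refractoriness)
    y[t] = rng.poisson(np.exp(drive) * dt)

spikes = y[pad:]
```

Fitting k, h, and b to recorded spike trains by maximum likelihood, rather than fixing them as here, is what yields the predictive model the abstract evaluates.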
Kooiker, M J G; Pel, J J M; van der Steen, J
2014-06-01
Children with visual impairments are very heterogeneous in terms of the extent of visual and developmental etiology. The aim of the present study was to investigate a possible correlation between prevalence of clinical risk factors of visual processing impairments and characteristics of viewing behavior. We tested 149 children with visual information processing impairments (90 boys, 59 girls; mean age (SD)=7.3 (3.3)) and 127 children without visual impairments (63 boys and 64 girls, mean age (SD)=7.9 (2.8)). Visual processing impairments were classified based on the time it took to complete orienting responses to various visual stimuli (form, contrast, motion detection, motion coherence, color and a cartoon). Within the risk group, children were divided into a fast, medium or slow group based on the response times to a highly salient stimulus. The relationship between group specific response times and clinical risk factors was assessed. The fast responding children in the risk group were significantly slower than children in the control group. Within the risk group, the prevalence of cerebral visual impairment, brain damage and intellectual disabilities was significantly higher in slow responding children compared to faster responding children. The presence of nystagmus, perceptual dysfunctions, mean visual acuity and mean age did not significantly differ between the subgroups. Orienting responses are related to risk factors for visual processing impairments known to be prevalent in visual rehabilitation practice. The proposed method may contribute to assessing the effectiveness of visual information processing in children.
Visual Information Processing and Response Time in Traffic-Signal Cognition
1992-03-01
Master's thesis, March 1992. Cited references include "The Influence of the Time Duration of Yellow Traffic Signals on Driver Response," ITE Journal, November 1980.
ERIC Educational Resources Information Center
Patching, Geoffrey R.; Englund, Mats P.; Hellstrom, Ake
2012-01-01
Despite the importance of both response probability and response time for testing models of choice, there is a dearth of chronometric studies examining systematic asymmetries that occur over time- and space-orders in the method of paired comparisons. In this study, systematic asymmetries in discriminating the magnitude of paired visual stimuli are…
Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David
2014-01-22
Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.
Representation of vestibular and visual cues to self-motion in ventral intraparietal (VIP) cortex
Chen, Aihua; Deangelis, Gregory C.; Angelaki, Dora E.
2011-01-01
Convergence of vestibular and visual motion information is important for self-motion perception. One cortical area that combines vestibular and optic flow signals is the ventral intraparietal area (VIP). We characterized unisensory and multisensory responses of macaque VIP neurons to translations and rotations in three dimensions. Approximately half of VIP cells show significant directional selectivity in response to optic flow, half show tuning to vestibular stimuli, and one-third show multisensory responses. Visual and vestibular direction preferences of multisensory VIP neurons could be congruent or opposite. When visual and vestibular stimuli were combined, VIP responses could be dominated by either input, unlike medial superior temporal area (MSTd) where optic flow tuning typically dominates or the visual posterior sylvian area (VPS) where vestibular tuning dominates. Optic flow selectivity in VIP was weaker than in MSTd but stronger than in VPS. In contrast, vestibular tuning for translation was strongest in VPS, intermediate in VIP, and weakest in MSTd. To characterize response dynamics, direction-time data were fit with a spatiotemporal model in which temporal responses were modeled as weighted sums of velocity, acceleration, and position components. Vestibular responses in VIP reflected balanced contributions of velocity and acceleration, whereas visual responses were dominated by velocity. Timing of vestibular responses in VIP was significantly faster than in MSTd, whereas timing of optic flow responses did not differ significantly among areas. These findings suggest that VIP may be proximal to MSTd in terms of vestibular processing but hierarchically similar to MSTd in terms of optic flow processing. PMID:21849564
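The spatiotemporal fitting procedure described, modeling a neuron's temporal response as a weighted sum of velocity, acceleration, and position components, amounts to a linear least-squares problem. A minimal sketch on synthetic data (the motion profile and weights are illustrative assumptions, not fitted values from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0, 400)               # 2 s transient translation
dt = t[1] - t[0]

# Gaussian velocity profile, as in typical motion-platform stimuli.
vel = np.exp(-((t - 1.0) ** 2) / (2 * 0.2 ** 2))
pos = np.cumsum(vel) * dt                    # position = integral of velocity
acc = np.gradient(vel, t)                    # acceleration = derivative

# Synthetic response dominated by velocity, with smaller acceleration
# and position contributions (weights are illustrative).
r = 1.2 * vel + 0.4 * acc + 0.1 * pos + rng.normal(0.0, 0.05, t.size)

# Least-squares weights of the velocity/acceleration/position components.
A = np.column_stack([vel, acc, pos])
w, *_ = np.linalg.lstsq(A, r, rcond=None)
```

The relative sizes of the recovered weights summarize the response dynamics, e.g. the balanced velocity and acceleration contributions the abstract reports for vestibular responses in VIP.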
The effect of linguistic and visual salience in visual world studies.
Cavicchio, Federica; Melcher, David; Poesio, Massimo
2014-01-01
Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material, including verbs, prepositions and adjectives, can influence fixations to potential referents. More recent research has started to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm by manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up, attention-grabbing visual aspects) interact. We recorded participants' eye movements during a MapTask, asking them to look from landmark to landmark displayed upon a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but did not show a significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated with a linguistically salient entity not itself present on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was linguistically salient only. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.
Effect of prolonged bedrest and plus Gz acceleration on peripheral visual response time
NASA Technical Reports Server (NTRS)
Haines, R. F.
1973-01-01
Peripheral visual response time changes during +Gz acceleration following fourteen days of bedrest are considered, as well as what effect prolonged bedrest has upon this response. Eighteen test lights, placed 10 deg of arc apart along the horizontal meridian of the subject's field of view, were presented in a random sequence. The subject was instructed to press a button as soon as a light appeared. Response time testing occurred periodically during bedrest and continuously during centrifugation testing. The results indicate that: (1) mean response time is significantly longer to stimuli imaged in the far periphery than to stimuli imaged closer to the line of sight; (2) mean response time at each stimulus position tends to be longer at plateau g than during the preacceleration baseline period; (3) mean response time tends to lengthen as the g level is increased; (4) peripheral visual response time during +Gx acceleration at 2, 3.2, and 3.8 g was not a reliable advance indicator that blackout was going to occur; and (5) the subject's field of view collapsed rapidly just before blackout. Bedrest data showed that the distribution of response times to stimuli imaged across the subject's horizontal retinal meridian remained remarkably constant from day to day during both the bedrest and recovery periods.
VanMeerten, Nicolaas J; Dubke, Rachel E; Stanwyck, John J; Kang, Seung Suk; Sponheim, Scott R
2016-01-01
People with schizophrenia show deficits in processing visual stimuli, but the neural abnormalities underlying these deficits are unclear, and it is unknown whether such functional brain abnormalities are present in other severe mental disorders or in individuals who carry genetic liability for schizophrenia. To better characterize the brain responses underlying visual search deficits, and to test their specificity to schizophrenia, we gathered behavioral and electrophysiological responses during visual search (i.e., the Span of Apprehension [SOA] task) from 38 people with schizophrenia, 31 people with bipolar disorder, 58 biological relatives of people with schizophrenia, 37 biological relatives of people with bipolar disorder, and 65 non-psychiatric control participants. By subtracting neural responses associated with purely sensory aspects of the stimuli, we found that people with schizophrenia exhibited reduced early posterior task-related neural responses (i.e., the Span Endogenous Negativity [SEN]), while other groups showed normative responses. People with schizophrenia exhibited longer reaction times than controls during visual search but nearly identical accuracy. Those individuals with schizophrenia who had larger SENs performed more efficiently (i.e., shorter reaction times) on the SOA task, suggesting that modulation of early visual cortical responses facilitated their visual search. People with schizophrenia also exhibited a diminished P300 response compared to other groups. Unaffected first-degree relatives of people with bipolar disorder and schizophrenia showed an amplified N1 response over posterior brain regions in comparison to other groups. Diminished early posterior brain responses are associated with impaired visual search in schizophrenia and appear to be specifically associated with the neuropathology of schizophrenia. Published by Elsevier B.V.
Sensitivity to timing and order in human visual cortex
Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.
2014-01-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116
NASA Technical Reports Server (NTRS)
Hosman, R. J. A. W.; Vandervaart, J. C.
1984-01-01
An experiment to investigate visual roll attitude and roll rate perception is described. The experiment was also designed to assess the improvements in perception due to cockpit motion. After the onset of the motion, subjects were to make accurate and quick estimates of the final magnitude of the roll angle step response by pressing the appropriate button on a keyboard device. The differing time histories of roll angle, roll rate, and roll acceleration caused by a step response stimulate the different perception processes related to the central visual field, peripheral visual field, and vestibular organs in different, yet exactly known, ways. Experiments with either of the visual displays or cockpit motion, and some combinations of these, were run to assess the roles of the different perception processes. Results show that the differences in response time are much more pronounced than the differences in perception accuracy.
Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam
2011-08-03
Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains-which can have timing as precise as 1 ms-is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
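The interplay described above — responses occur only in brief windows where excitation exceeds a delayed suppressive input — can be illustrated with a toy model. This is a minimal sketch, not the authors' fitted nonlinear model; the delay, suppression weight, and stimulus are illustrative assumptions.

```python
import numpy as np

def thalamic_response(stimulus, delay=5, w_sup=0.8):
    """Toy excitation-minus-delayed-suppression model: the output is the
    rectified difference between an excitatory drive and a delayed, scaled
    copy of that drive. Parameters are illustrative, not fitted values."""
    excitation = np.clip(stimulus, 0, None)           # rectified excitatory drive
    suppression = np.roll(excitation, delay) * w_sup  # delayed suppressive input
    suppression[:delay] = 0.0                         # clear wrap-around samples
    return np.clip(excitation - suppression, 0, None)

# A sustained stimulus step yields a brief, precisely timed response at
# onset, because suppression catches up after `delay` samples.
stim = np.concatenate([np.zeros(10), np.ones(50)])
resp = thalamic_response(stim)
```

The onset window here lasts exactly `delay` samples at full amplitude, after which the response settles at the small residual `1 - w_sup`, mimicking how delayed suppression carves precise response windows out of a slowly varying stimulus.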
Visual Motion Processing Subserves Faster Visuomotor Reaction in Badminton Players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2017-06-01
Athletes participating in ball or racquet sports have to respond to visual stimuli under critical time pressure. Previous studies used visual contrast stimuli to determine visual perception and visuomotor reaction in athletes and nonathletes; however, ball and racquet sports are characterized by motion rather than contrast visual cues. Because visual contrast and motion signals are processed in different cortical regions, this study aimed to determine differences in perception and processing of visual motion between athletes and nonathletes. Twenty-five skilled badminton players and 28 age-matched nonathletic controls participated in this study. Using a 64-channel EEG system, we investigated visual motion perception/processing in the motion-sensitive middle temporal (MT) cortical area in response to radial motion of different velocities. In a simple visuomotor reaction task, visuomotor transformation in Brodmann area 6 (BA6) and BA4 as well as muscular activation (EMG onset) and visuomotor reaction time (VMRT) were investigated. Stimulus- and response-locked potentials were determined to differentiate between perceptual and motor-related processes. As compared with nonathletes, athletes showed earlier EMG onset times (217 vs 178 ms, P < 0.001), accompanied by a faster VMRT (274 vs 243 ms, P < 0.001). Furthermore, athletes showed an earlier stimulus-locked peak activation of MT (200 vs 182 ms, P = 0.002) and BA6 (161 vs 137 ms, P = 0.009). Response-locked peak activation in MT was later in athletes (-7 vs 26 ms, P < 0.001), whereas no group differences were observed in BA6 and BA4. Multiple regression analyses with stimulus- and response-locked cortical potentials predicted EMG onset (r = 0.83) and VMRT (r = 0.77). The athletes' superior visuomotor performance in response to visual motion is primarily related to visual perception and, to a minor degree, to motor-related processes.
Behavioural benefits of multisensory processing in ferrets.
Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R
2017-01-01
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
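The race model inequality analysis used above rests on Miller's bound: under probability summation alone, the multisensory RT CDF cannot exceed the sum of the unisensory CDFs. A minimal sketch of that test follows; the RT samples and quantile grid are hypothetical, not the ferret data.

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, quantiles=np.linspace(0.05, 0.95, 10)):
    """Miller's race model inequality: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
    Returns, at each quantile of the multisensory distribution, how far the
    multisensory CDF exceeds the race-model bound (positive values suggest
    neural integration rather than probability summation)."""
    ts = np.quantile(rt_av, quantiles)
    cdf = lambda rts, t: np.mean(np.asarray(rts) <= t)
    bound = np.array([min(1.0, cdf(rt_a, t) + cdf(rt_v, t)) for t in ts])
    p_av = np.array([cdf(rt_av, t) for t in ts])
    return p_av - bound

# Hypothetical RTs (ms): audiovisual responses faster than either modality alone.
rt_a = list(range(300, 400))
rt_v = list(range(310, 410))
rt_av = list(range(240, 340))
viol = race_model_violation(rt_av, rt_a, rt_v)
```

Positive values of `viol`, especially at the fast (early) quantiles, are the standard signature that the multisensory speed-up exceeds what a race between independent unisensory processes could produce.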
Bansal, Arjun K.; Singer, Jedediah M.; Anderson, William S.; Golby, Alexandra; Madsen, Joseph R.
2012-01-01
The cerebral cortex needs to maintain information for long time periods while at the same time being capable of learning and adapting to changes. The degree of stability of physiological signals in the human brain in response to external stimuli over temporal scales spanning hours to days remains unclear. Here, we quantitatively assessed the stability across sessions of visually selective intracranial field potentials (IFPs) elicited by brief flashes of visual stimuli presented to 27 subjects. The interval between sessions ranged from hours to multiple days. We considered electrodes that showed robust visual selectivity to different shapes; these electrodes were typically located in the inferior occipital gyrus, the inferior temporal cortex, and the fusiform gyrus. We found that IFP responses showed a strong degree of stability across sessions. This stability was evident in averaged responses as well as single-trial decoding analyses, at the image exemplar level as well as at the category level, across different parts of visual cortex, and for three different visual recognition tasks. These results establish a quantitative evaluation of the degree of stationarity of visually selective IFP responses within and across sessions and provide a baseline for studies of cortical plasticity and for the development of brain-machine interfaces. PMID:22956795
Integrated evaluation of visually induced motion sickness in terms of autonomic nervous regulation.
Kiryu, Tohru; Tada, Gen; Toyama, Hiroshi; Iijima, Atsuhiko
2008-01-01
To evaluate visually induced motion sickness, we integrated subjective and objective responses in terms of autonomic nervous regulation. Twenty-seven subjects viewed a 2-min-long first-person-view video section five times (10 min in total) continuously. Measured biosignals (the RR interval, respiration, and blood pressure) were used to estimate indices related to autonomic nervous activity (ANA). We then determined trigger points and sensation sections based on the time-varying behavior of the ANA-related indices. We found that a suitable combination of biosignals could represent the symptoms of visually induced motion sickness. Based on this combination, integrating trigger points and subjective scores allowed us to represent the time distribution of subjective responses during visual exposure, and helped us to understand what types of camera motion cause visually induced motion sickness.
What Are the Shapes of Response Time Distributions in Visual Search?
ERIC Educational Resources Information Center
Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.
2011-01-01
Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays…
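RT distributions of the kind analyzed above are commonly summarized with an ex-Gaussian fit (a Gaussian stage plus an exponential tail). A minimal sketch using SciPy's `exponnorm` parameterization, where the shape parameter is K = tau/sigma; the parameter values and synthetic data are illustrative, not the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic RTs (seconds): Gaussian component (mu, sigma) plus an
# exponential tail (tau) -- the ex-Gaussian shape typical of search RTs.
mu, sigma, tau = 0.45, 0.05, 0.15   # illustrative values only
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# scipy expresses the ex-Gaussian as exponnorm with K = tau / sigma,
# loc = mu, scale = sigma; fit() recovers all three by maximum likelihood.
K, loc, scale = stats.exponnorm.fit(rts)
tau_hat = K * scale
```

Reporting `loc` (mu), `scale` (sigma), and `tau_hat` separately is what makes the distributional analysis richer than a mean RT: shifts of the whole distribution and stretching of the slow tail dissociate.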
ERIC Educational Resources Information Center
Sung, Kyongje
2008-01-01
Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…
Seeing the hand while reaching speeds up on-line responses to a sudden change in target position
Reichenbach, Alexandra; Thielscher, Axel; Peer, Angelika; Bülthoff, Heinrich H; Bresciani, Jean-Pierre
2009-01-01
Goal-directed movements are executed under the permanent supervision of the central nervous system, which continuously processes sensory afferents and triggers on-line corrections if movement accuracy seems to be compromised. For arm reaching movements, visual information about the hand plays an important role in this supervision, notably improving reaching accuracy. Here, we tested whether visual feedback of the hand affects the latency of on-line responses to an external perturbation when reaching for a visual target. Two types of perturbation were used: visual perturbation consisted in changing the spatial location of the target and kinesthetic perturbation in applying a force step to the reaching arm. For both types of perturbation, the hand trajectory and the electromyographic (EMG) activity of shoulder muscles were analysed to assess whether visual feedback of the hand speeds up on-line corrections. Without visual feedback of the hand, on-line responses to visual perturbation exhibited the longest latency. This latency was reduced by about 10% when visual feedback of the hand was provided. On the other hand, the latency of on-line responses to kinesthetic perturbation was independent of the availability of visual feedback of the hand. In a control experiment, we tested the effect of visual feedback of the hand on visual and kinesthetic two-choice reaction times – for which coordinate transformation is not critical. Two-choice reaction times were never facilitated by visual feedback of the hand. Taken together, our results suggest that visual feedback of the hand speeds up on-line corrections when the position of the visual target with respect to the body must be re-computed during movement execution. This facilitation probably results from the possibility to map hand- and target-related information in a common visual reference frame. PMID:19675067
A comparative study of visual reaction time in table tennis players and healthy controls.
Bhabhor, Mahesh K; Vidja, Kalpesh; Bhanderi, Priti; Dodhia, Shital; Kathrotia, Rajesh; Joshi, Varsha
2013-01-01
Visual reaction time is the time required to respond to a visual stimulus. The present study measured visual reaction time in 209 subjects: 50 table tennis (TT) players and 159 healthy controls. Visual reaction time was measured with the direct RT computerized software in both groups. Simple visual reaction time was measured: during testing, the visual stimulus was presented eighteen times, and the average reaction time was taken as the final reaction time. The study shows that table tennis players had faster reaction times than healthy controls. On multivariate analysis, TT players had a 74.121 ms (95% CI 49.4 to 98.8 ms) faster reaction time than non-TT players of the same age and BMI, and playing TT had a more profound influence on visual reaction time than BMI. Our study concluded that persons involved in sports have better reaction times than controls. These results support the view that playing table tennis is beneficial to eye-hand reaction time and improves concentration and alertness.
Effects of visual attention on chromatic and achromatic detection sensitivities.
Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko
2014-05-01
Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on the sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. In experiment 1, we confirmed that peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths with the central attention task, so that the shape of the spectral sensitivity function was changed by visual attention. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. In experiment 2, we found that detection thresholds increased to a greater degree in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction in the dual-task condition. In experiment 3, we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.
An integrated theory of attention and decision making in visual signal detection.
Smith, Philip L; Ratcliff, Roger
2009-04-01
The simplest attentional task, detecting a cued stimulus in an otherwise empty visual field, produces complex patterns of performance. Attentional cues interact with backward masks and with spatial uncertainty, and there is a dissociation in the effects of these variables on accuracy and on response time. A computational theory of performance in this task is described. The theory links visual encoding, masking, spatial attention, visual short-term memory (VSTM), and perceptual decision making in an integrated dynamic framework. The theory assumes that decisions are made by a diffusion process driven by a neurally plausible, shunting VSTM. The VSTM trace encodes the transient outputs of early visual filters in a durable form that is preserved for the time needed to make a decision. Attention increases the efficiency of VSTM encoding, either by increasing the rate of trace formation or by reducing the delay before trace formation begins. The theory provides a detailed, quantitative account of attentional effects in spatial cuing tasks at the level of response accuracy and the response time distributions. (c) 2009 APA, all rights reserved
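The diffusion process at the heart of such decision theories can be simulated directly. A minimal single-trial sketch follows, with generic textbook parameters (bound, noise, non-decision time) rather than the fitted values of the integrated theory, and without the shunting VSTM front end.

```python
import numpy as np

def simulate_ddm(drift, bound=1.0, noise=1.0, dt=0.001, t0=0.3, max_t=5.0, rng=None):
    """One trial of a basic diffusion decision model: evidence accumulates
    with the given drift plus Gaussian noise until it reaches +bound
    (e.g. 'signal present') or -bound ('absent'). Returns (choice, RT).
    Parameter values are generic illustrations, not fitted estimates."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= bound else 0), t0 + t  # t0 = non-decision time

rng = np.random.default_rng(1)
trials = [simulate_ddm(drift=1.5, rng=rng) for _ in range(200)]
accuracy = sum(choice for choice, _ in trials) / len(trials)
```

In this framework, attention or VSTM quality would act on `drift` (the rate of evidence accumulation), which jointly shapes both accuracy and the full RT distribution — the dissociation the theory is built to explain.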
Evidence for an attentional component of inhibition of return in visual search.
Pierce, Allison M; Crouse, Monique D; Green, Jessica J
2017-11-01
Inhibition of return (IOR) is typically described as an inhibitory bias against returning attention to a recently attended location as a means of promoting efficient visual search. Most studies examining IOR, however, either do not use visual search paradigms or do not effectively isolate attentional processes, making it difficult to conclusively link IOR to a bias in attention. Here, we recorded ERPs during a simple visual search task designed to isolate the attentional component of IOR to examine whether an inhibitory bias of attention is observed and, if so, how it influences visual search behavior. Across successive visual search displays, we found evidence of both a broad, hemisphere-wide inhibitory bias of attention along with a focal, target location-specific facilitation. When the target appeared in the same visual hemifield in successive searches, responses were slower and the N2pc component was reduced, reflecting a bias of attention away from the previously attended side of space. When the target occurred at the same location in successive searches, responses were facilitated and the P1 component was enhanced, likely reflecting spatial priming of the target. These two effects are combined in the response times, leading to a reduction in the IOR effect for repeated target locations. Using ERPs, however, these two opposing effects can be isolated in time, demonstrating that the inhibitory biasing of attention still occurs even when response-time slowing is ameliorated by spatial priming. © 2017 Society for Psychophysiological Research.
Do Visual Processing Deficits Cause Problem on Response Time Task for Dyslexics?
ERIC Educational Resources Information Center
Sigmundsson, H.
2005-01-01
This study set out to explore the prediction that dyslexics would be likely to have particular problems, compared to a control group, on a response time task when 'driving' a car simulator. The reason for doing so stems from the considerable body of research on visual processing difficulties manifested by dyslexics. The task was…
Reimer, Christina B; Schubert, Torsten
2017-09-15
Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. 
Task 1 was the same as in Experiments 1 and 2, respectively. In both experiments, the d' analysis revealed that response selection did not affect target detection. Overall, Experiments 1-4 indicated that neither the response selection difficulty in the auditory Task 1 (i.e., two-choice vs. four-choice) nor the type of presentation of the search display in Task 2 (i.e., not masked vs. masked) impaired parallel processing of response selection and conjunction search. We concluded that in general, response selection and visual attention (i.e., feature binding) rely on distinct capacity limitations.
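A d' analysis of the kind used above, asking whether detection sensitivity changes across conditions, reduces to comparing z-transformed hit and false-alarm rates. A minimal sketch with a standard log-linear correction; the trial counts in the usage line are hypothetical.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction (add 0.5 to each count) so that rates of
    exactly 0 or 1 stay finite."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    fr = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(fr)

# Hypothetical counts: 90% hits, 10% false alarms over 50 trials each.
sensitivity = d_prime(45, 5, 5, 45)
```

Comparing d' rather than raw accuracy separates a genuine loss of target detectability from a mere shift in response bias, which is why it is the right measure for asking whether response selection load impairs masked search.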
Tohmi, Manavu; Kitaura, Hiroki; Komagata, Seiji; Kudoh, Masaharu; Shibuki, Katsuei
2006-11-08
Experience-dependent plasticity in the visual cortex was investigated using transcranial flavoprotein fluorescence imaging in mice anesthetized with urethane. On- and off-responses in the primary visual cortex were elicited by visual stimuli. Fluorescence responses and field potentials elicited by grating patterns decreased similarly as contrasts of visual stimuli were reduced. Fluorescence responses also decreased as spatial frequency of grating stimuli increased. Compared with intrinsic signal imaging in the same mice, fluorescence imaging showed faster responses with approximately 10 times larger signal changes. Retinotopic maps in the primary visual cortex and area LM were constructed using fluorescence imaging. After monocular deprivation (MD) of 4 d starting from postnatal day 28 (P28), deprived eye responses were suppressed compared with nondeprived eye responses in the binocular zone but not in the monocular zone. Imaging faithfully recapitulated a critical period for plasticity with maximal effects of MD observed around P28 and not in adulthood even under urethane anesthesia. Visual responses were compared before and after MD in the same mice, in which the skull was covered with clear acrylic dental resin. Deprived eye responses decreased after MD, whereas nondeprived eye responses increased. Effects of MD during a critical period were tested 2 weeks after reopening of the deprived eye. Significant ocular dominance plasticity was observed in responses elicited by moving grating patterns, but no long-lasting effect was found in visual responses elicited by light-emitting diode light stimuli. The present results indicate that transcranial flavoprotein fluorescence imaging is a powerful tool for investigating experience-dependent plasticity in the mouse visual cortex.
Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys
Liu, Bing
2017-01-01
Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
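The stimulus–response correlation approach used above to recover a space–time filter can be illustrated in one dimension: with a white-noise stimulus, correlating stimulus and response across lags recovers the temporal kernel. This is a simplified sketch; the exponential ground-truth kernel and sample counts are assumptions, not the pursuit data.

```python
import numpy as np

def reverse_correlation(stimulus, response, lags=30):
    """Estimate a temporal filter by correlating the response with the
    stimulus at a range of lags. Assumes a white, unit-variance stimulus;
    otherwise the raw correlation must be deconvolved by the stimulus
    autocorrelation."""
    n = len(stimulus)
    return np.array([np.dot(response[lag:], stimulus[:n - lag]) / (n - lag)
                     for lag in range(lags)])

rng = np.random.default_rng(2)
true_filter = np.exp(-np.arange(30) / 5.0)   # assumed ground-truth kernel
stim = rng.standard_normal(20000)
resp = np.convolve(stim, true_filter)[:len(stim)]  # response = filtered stimulus
est = reverse_correlation(stim, resp)
```

With enough samples the estimate `est` converges to `true_filter`; the 3D space–time version is the same computation with spatial position added as extra correlation dimensions.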
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses
Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli
2015-01-01
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. 
Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858
Effect of Cognitive Demand on Functional Visual Field Performance in Senior Drivers with Glaucoma.
Gangeddula, Viswa; Ranchet, Maud; Akinwuntan, Abiodun E; Bollinger, Kathryn; Devos, Hannes
2017-01-01
Purpose: To investigate the effect of cognitive demand on functional visual field performance in drivers with glaucoma. Method: This study included 20 drivers with open-angle glaucoma and 13 age- and sex-matched controls. Visual field performance was evaluated under different degrees of cognitive demand: a static visual field condition (C1), a dynamic visual field condition (C2), and a dynamic visual field condition with active driving (C3), using an interactive, desktop driving simulator. The number of correct responses (accuracy) and response times on the visual field task were compared between groups and between conditions using Kruskal-Wallis tests. General linear models were employed to compare cognitive workload, recorded in real time through pupillometry, between groups and conditions. Results: Adding cognitive demand (C2 and C3) to the static visual field test (C1) adversely affected accuracy and response times in both groups (p < 0.05). However, drivers with glaucoma performed worse than control drivers when the static condition changed to a dynamic condition [C2 vs. C1 accuracy; glaucoma: median difference (Q1-Q3) 3 (2-6.50) vs. controls: 2 (0.50-2.50); p = 0.05] and to a dynamic condition with active driving [C3 vs. C1 accuracy; glaucoma: 2 (2-6) vs. controls: 1 (0.50-2); p = 0.02]. Overall, drivers with glaucoma exhibited greater cognitive workload than controls (p = 0.02). Conclusion: Cognitive demand disproportionately affects functional visual field performance in drivers with glaucoma. Our results may inform the development of a performance-based visual field test for drivers with glaucoma. PMID:28912712
Katwal, Santosh B; Gore, John C; Marois, Rene; Rogers, Baxter P
2013-09-01
We present novel graph-based visualizations of self-organizing maps for unsupervised functional magnetic resonance imaging (fMRI) analysis. A self-organizing map is an artificial neural network model that transforms high-dimensional data into a low-dimensional (often a 2-D) map using unsupervised learning. However, a postprocessing scheme is necessary to correctly interpret similarity between neighboring node prototypes (feature vectors) on the output map and delineate clusters and features of interest in the data. In this paper, we used graph-based visualizations to capture fMRI data features based upon 1) the distribution of data across the receptive fields of the prototypes (density-based connectivity); and 2) temporal similarities (correlations) between the prototypes (correlation-based connectivity). We applied this approach to identify task-related brain areas in an fMRI reaction time experiment involving a visuo-manual response task, and we correlated the time-to-peak of the fMRI responses in these areas with reaction time. Visualization of self-organizing maps outperformed independent component analysis and voxelwise univariate linear regression analysis in identifying and classifying relevant brain regions. We conclude that the graph-based visualizations of self-organizing maps help in advanced visualization of cluster boundaries in fMRI data enabling the separation of regions with small differences in the timings of their brain responses.
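The correlation-based connectivity idea can be illustrated with a minimal sketch: a tiny numpy-only SOM trained on toy "voxel" time series, followed by Pearson correlations between neighboring prototypes. All data, map dimensions, and schedules here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "voxel time series": two clusters of correlated signals plus noise.
t = np.linspace(0, 1, 50)
cluster_a = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal((20, 50))
cluster_b = np.cos(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal((20, 50))
data = np.vstack([cluster_a, cluster_b])

# Minimal 2-D self-organizing map with decaying learning rate and radius.
grid_h, grid_w, dim = 4, 4, data.shape[1]
protos = 0.1 * rng.standard_normal((grid_h, grid_w, dim))
coords = np.dstack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"))

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)            # decaying learning rate
    sigma = 2.0 * (1 - epoch / 30) + 0.5   # decaying neighborhood radius
    for x in data[rng.permutation(len(data))]:
        d = np.linalg.norm(protos - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
        g = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma**2))
        protos += lr * g[..., None] * (x - protos)

# Correlation-based connectivity: Pearson r between horizontally and
# vertically adjacent prototypes; high r suggests one functional cluster.
def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

edges = {}
for i in range(grid_h):
    for j in range(grid_w):
        if j + 1 < grid_w:
            edges[(i, j), (i, j + 1)] = corr(protos[i, j], protos[i, j + 1])
        if i + 1 < grid_h:
            edges[(i, j), (i + 1, j)] = corr(protos[i, j], protos[i + 1, j])

strong = sum(r > 0.8 for r in edges.values())
print(f"{strong}/{len(edges)} edges with r > 0.8")
```

On real fMRI data, edges with high correlation between neighboring prototypes would be drawn as graph connections on the output map, and cluster boundaries would appear where the correlation drops.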
2017-10-01
networks of the brain responsible for visual processing, mood regulation, motor coordination, sensory processing, and language command, but increased connectivity in... For each subject, the rsFMRI voxel time-series were temporally shifted to account for differences in slice acquisition times...
Single-exposure visual memory judgments are reflected in inferotemporal cortex
Meyer, Travis
2018-01-01
Our visual memory percepts of whether we have encountered specific objects or scenes before are hypothesized to manifest as decrements in neural responses in inferotemporal cortex (IT) with stimulus repetition. To evaluate this proposal, we recorded IT neural responses as two monkeys performed a single-exposure visual memory task designed to measure the rates of forgetting with time. We found that a weighted linear read-out of IT was a better predictor of the monkeys’ forgetting rates and reaction time patterns than a strict instantiation of the repetition suppression hypothesis, expressed as a total spike count scheme. Behavioral predictions could be attributed to visual memory signals that were reflected as repetition suppression and were intermingled with visual selectivity, but only when combined across the most sensitive neurons. PMID:29517485
Adaptive Kalman filtering for real-time mapping of the visual field
Ward, B. Douglas; Janik, John; Mazaheri, Yousef; Ma, Yan; DeYoe, Edgar A.
2013-01-01
This paper demonstrates the feasibility of real-time mapping of the visual field for clinical applications. Specifically, three aspects of this problem were considered: (1) experimental design, (2) statistical analysis, and (3) display of results. Proper experimental design is essential to achieving a successful outcome, particularly for real-time applications. A random-block experimental design was shown to have less sensitivity to measurement noise, as well as greater robustness to error in modeling of the hemodynamic impulse response function (IRF) and greater flexibility than common alternatives. In addition, random encoding of the visual field allows for the detection of voxels that are responsive to multiple, not necessarily contiguous, regions of the visual field. Due to its recursive nature, the Kalman filter is ideally suited for real-time statistical analysis of visual field mapping data. An important feature of the Kalman filter is that it can be used for nonstationary time series analysis. The capability of the Kalman filter to adapt, in real time, to abrupt changes in the baseline arising from subject motion inside the scanner and other external system disturbances is important for the success of clinical applications. The clinician needs real-time information to evaluate the success or failure of the imaging run and to decide whether to extend, modify, or terminate the run. Accordingly, the analytical software provides real-time displays of (1) brain activation maps for each stimulus segment, (2) voxel-wise spatial tuning profiles, (3) time plots of the variability of response parameters, and (4) time plots of activated volume. PMID:22100663
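The recursive, nonstationary-tracking property described above can be sketched with a scalar Kalman filter whose random-walk state model lets the estimate re-converge after an abrupt baseline shift (e.g., subject motion). The noise variances and signal values are illustrative assumptions, not the paper's actual filter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated voxel signal: noisy baseline with an abrupt shift at t = 100.
n = 200
truth = np.where(np.arange(n) < 100, 10.0, 14.0)
y = truth + rng.standard_normal(n)

# Scalar Kalman filter with a random-walk state model:
#   x_t = x_{t-1} + w_t,  w ~ N(0, q);   y_t = x_t + v_t,  v ~ N(0, r)
q, r = 0.05, 1.0     # process / measurement noise variances (assumed)
x, p = 0.0, 100.0    # initial state estimate and its variance
estimates = []
for z in y:
    p = p + q                    # predict: variance grows by process noise
    k = p / (p + r)              # Kalman gain
    x = x + k * (z - x)          # update with the innovation
    p = (1 - k) * p
    estimates.append(x)

estimates = np.array(estimates)
print(estimates[95], estimates[-1])  # near 10 before the shift, near 14 after
```

Because each update uses only the previous estimate and the newest sample, the per-sample cost is constant, which is what makes this kind of analysis viable in real time during a scan.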
Neuronal basis of covert spatial attention in the frontal eye field.
Thompson, Kirk G; Biscoe, Keri L; Sato, Takashi R
2005-10-12
The influential "premotor theory of attention" proposes that developing oculomotor commands mediate covert visual spatial attention. A likely source of this attentional bias is the frontal eye field (FEF), an area of the frontal cortex involved in converting visual information into saccade commands. We investigated the link between FEF activity and covert spatial attention by recording from FEF visual and saccade-related neurons in monkeys performing covert visual search tasks without eye movements. Here we show that the source of attention signals in the FEF is enhanced activity of visually responsive neurons. At the time attention is allocated to the visual search target, nonvisually responsive saccade-related movement neurons are inhibited. Therefore, in the FEF, spatial attention signals are independent of explicit saccade command signals. We propose that spatially selective activity in FEF visually responsive neurons corresponds to the mental spotlight of attention via modulation of ongoing visual processing.
Reimer, Christina B; Strobach, Tilo; Schubert, Torsten
2017-12-01
Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features resulting in a serial search process around the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP), which reflects lateralized visual attention processes. If the response selection processes in Task 1 influence the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was concurrently deployed to response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
ERIC Educational Resources Information Center
Kim, Yong-Jin; Chang, Nam-Kee
2001-01-01
Investigates the changes in neuronal response across a four-time repetition of audio-visual learning. Obtains EEG data from the prefrontal (Fp1, Fp2) lobe of 20 subjects at the 8th grade level. Concludes that habituation of the neuronal response shows up in repetitive audio-visual learning and that brain hemisphericity can be changed by…
Watch what you type: the role of visual feedback from the screen and hands in skilled typewriting.
Snyder, Kristy M; Logan, Gordon D; Yamaguchi, Motonori
2015-01-01
Skilled typing is controlled by two hierarchically structured processing loops (Logan & Crump, 2011): The outer loop, which produces words, commands the inner loop, which produces keystrokes. Here, we assessed the interplay between the two loops by investigating how visual feedback from the screen (responses either were or were not echoed on the screen) and the hands (the hands either were or were not covered with a box) influences the control of skilled typing. Our results indicated, first, that the reaction time of the first keystroke was longer when responses were not echoed than when they were. Also, the interkeystroke interval (IKSI) was longer when the hands were covered than when they were visible, and the IKSI for responses that were not echoed was longer when explicit error monitoring was required (Exp. 2) than when it was not required (Exp. 1). Finally, explicit error monitoring was more accurate when response echoes were present than when they were absent, and implicit error monitoring (i.e., posterror slowing) was not influenced by visual feedback from the screen or the hands. These findings suggest that the outer loop adjusts the inner-loop timing parameters to compensate for reductions in visual feedback. We suggest that these adjustments are preemptive control strategies designed to execute keystrokes more cautiously when visual feedback from the hands is absent, to generate more cautious motor programs when visual feedback from the screen is absent, and to enable enough time for the outer loop to monitor keystrokes when visual feedback from the screen is absent and explicit error reports are required.
A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.
Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei
2014-09-19
Visually-induced near-infrared spectroscopy (NIRS) responses were utilized to design a brain-computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segments had a fixed duration of 3 s, whereas the resting segments were chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited in this study, and subjects were requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flicker sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by a user's gazed target can be discerned from those of non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
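The averaging logic can be sketched with synthetic data: because each stimulus's onset sequence is independent, epochs time-locked to a non-gazed stimulus average the evoked response away, while epochs locked to the gazed stimulus reinforce it. All signal shapes, rates, and timings below are illustrative assumptions, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 10                      # sample rate (Hz), assumed
dur = 600                    # recording length (s), assumed
n = fs * dur
epoch = fs * 10              # 10 s analysis window after each onset

def random_onsets():
    """Onsets separated by 3 s flicker plus a random 15-20 s rest."""
    onsets, t = [], 0.0
    while t < dur - 25:
        onsets.append(int(t * fs))
        t += 3 + rng.uniform(15, 20)
    return onsets

onsets = [random_onsets() for _ in range(4)]   # four visual stimuli

# Hemodynamic-like response evoked only by the gazed stimulus (index 0).
hrf = np.exp(-((np.arange(epoch) / fs - 5) ** 2) / 4)
signal = 0.5 * rng.standard_normal(n)          # background noise
for s in onsets[0]:
    signal[s:s + epoch] += hrf

# Average epochs aligned to each stimulus's own onset sequence.
def epoch_average(sig, starts):
    return np.mean([sig[s:s + epoch] for s in starts], axis=0)

scores = [epoch_average(signal, o).max() for o in onsets]
print(np.argmax(scores))  # index 0 (the gazed stimulus) should score highest
```

The same mechanism underlies the reported result that accuracy exceeded 90% once 10 or more epochs were averaged: each additional epoch suppresses the non-time-locked components by roughly the square root of the epoch count.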
A physiologically based nonhomogeneous Poisson counter model of visual identification.
Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus; Kyllingsbæk, Søren
2018-04-30
A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and meant for modeling visual identification of objects that are mutually confusable and hard to see. The model assumes that the visual system's initial sensory response consists in tentative visual categorizations, which are accumulated by leaky integration of both transient and sustained components comparable with those found in spike density patterns of early sensory neurons. The sensory response (tentative categorizations) feeds independent Poisson counters, each of which accumulates tentative object categorizations of a particular type to guide overt identification performance. We tested the model's ability to predict the effect of stimulus duration on observed distributions of responses in a nonspeeded (pure accuracy) identification task with eight response alternatives. The time courses of correct and erroneous categorizations were well accounted for when the event-rates of competing Poisson counters were allowed to vary independently over time in a way that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model provided an explanation for Bloch's law. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
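A toy simulation of the racing-counters idea follows: each counter accumulates events from a nonhomogeneous rate with a transient onset burst decaying into a sustained level, and accuracy grows with stimulus duration. The rates, durations, and Bernoulli thinning are illustrative assumptions; the published model uses leaky integration of the sensory response rather than this simplification.

```python
import numpy as np

rng = np.random.default_rng(2)

def rate(t, sustained, transient=40.0, tau=0.05):
    """Event rate (Hz): transient onset component decaying into a sustained level."""
    return sustained + transient * np.exp(-t / tau)

def simulate_trial(duration, sustained_rates, dt=0.001):
    """Accumulate counts for each Poisson counter; report the winning category."""
    counts = np.zeros(len(sustained_rates), dtype=int)
    for t in np.arange(0.0, duration, dt):
        for i, s in enumerate(sustained_rates):
            # Bernoulli thinning approximates the nonhomogeneous Poisson process.
            if rng.random() < rate(t, s) * dt:
                counts[i] += 1
    best = np.flatnonzero(counts == counts.max())
    return int(rng.choice(best))    # random tie-break among leaders

# Counter 0 is the "correct" categorization (highest sustained rate).
sustained = [30.0, 10.0, 10.0, 10.0]
long_acc = np.mean([simulate_trial(0.5, sustained) == 0 for _ in range(100)])
short_acc = np.mean([simulate_trial(0.02, sustained) == 0 for _ in range(100)])
print(short_acc, long_acc)  # accuracy should grow with stimulus duration
```

Letting each counter's event rate vary independently over time, as in the quoted model, is what allows the simulated time courses of correct and erroneous categorizations to mimic the dynamics of receptive field selectivity.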
Kemmer, Laura; Coulson, Seana; Kutas, Marta
2014-02-01
Despite indications in the split-brain and lesion literatures that the right hemisphere is capable of some syntactic analysis, few studies have investigated right hemisphere contributions to syntactic processing in people with intact brains. Here we used the visual half-field paradigm in healthy adults to examine each hemisphere's processing of correct and incorrect grammatical number agreement marked either lexically, e.g., antecedent/reflexive pronoun ("The grateful niece asked herself/*themselves…") or morphologically, e.g., subject/verb ("Industrial scientists develop/*develops…"). For reflexives, response times and accuracy of grammaticality decisions suggested similar processing regardless of visual field of presentation. In the subject/verb condition, we observed similar response times and accuracies for central and right visual field (RVF) presentations. For left visual field (LVF) presentation, response times were longer and accuracy rates were reduced relative to RVF presentation. An event-related brain potential (ERP) study using the same materials revealed similar ERP responses to the reflexive pronouns in the two visual fields, but very different ERP effects to the subject/verb violations. For lexically marked violations on reflexives, P600 was elicited by stimuli in both the LVF and RVF; for morphologically marked violations on verbs, P600 was elicited only by RVF stimuli. These data suggest that both hemispheres can process lexically marked pronoun agreement violations, and do so in a similar fashion. Morphologically marked subject/verb agreement errors, however, showed a distinct LH advantage. Copyright © 2013 Elsevier B.V. All rights reserved.
[Sound improves distinction of low intensities of light in the visual cortex of a rabbit].
Polianskiĭ, V B; Alymkulov, D E; Evtikhin, D V; Chernyshev, B V
2011-01-01
Electrodes were implanted into the cranium above the primary visual cortex of four rabbits (Oryctolagus cuniculus). At the first stage, visual evoked potentials (VEPs) were recorded in response to substitution of threshold visual stimuli (0.28 and 0.31 cd/m2). Then a sound (2000 Hz, 84 dB, duration 40 ms) was added simultaneously to every visual stimulus. Single sounds (without visual stimuli) did not produce a VEP response. It was found that the amplitude of VEP component N1 (85-110 ms) in response to complex stimuli (visual and sound) increased 1.6 times as compared to "simple" visual stimulation. At the second stage, paired substitutions of 8 different visual stimuli (range 0.38-20.2 cd/m2) by each other were performed. Sensory spaces of intensity were reconstructed on the basis of factor analysis. Sensory spaces of the complexes were reconstructed in a similar way for simultaneous visual and sound stimulation. Comparison of the vectors representing the stimuli in the spaces showed that the addition of a sound led to a 1.4-fold expansion of the space occupied by the smaller intensities (0.28; 1.02; 3.05; 6.35 cd/m2). Also, the addition of the sound led to an arrangement of the intensities in ascending order. At the same time, the sound narrowed the space of the larger intensities (8.48; 13.7; 16.8; 20.2 cd/m2) 1.33-fold. It is suggested that the addition of a sound improves the distinction of smaller intensities and impairs the distinction of larger intensities. The sensory spaces revealed by complex stimuli were two-dimensional. This fact can be a consequence of the integration of sound and light into a unified complex under simultaneous stimulation.
The informativity of sound modulates crossmodal facilitation of visual discrimination: a fMRI study.
Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin
2017-01-18
Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced uncertainty about the timing of the visual display and improved perceptual responses (informative sound). However, the neural mechanism by which the informativity of sound affects crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of sound informativity in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed enhanced activation induced by sound informativity, including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that activity in the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.
A Simple Network Architecture Accounts for Diverse Reward Time Responses in Primary Visual Cortex
Hussain Shuler, Marshall G.; Shouval, Harel Z.
2015-01-01
Many actions performed by animals and humans depend on an ability to learn, estimate, and produce temporal intervals of behavioral relevance. Exemplifying such learning of cued expectancies is the observation of reward-timing activity in the primary visual cortex (V1) of rodents, wherein neural responses to visual cues come to predict the time of future reward as behaviorally experienced in the past. These reward-timing responses exhibit significant heterogeneity in at least three qualitatively distinct classes: sustained increase or sustained decrease in firing rate until the time of expected reward, and a class of cells that reach a peak in firing at the expected delay. We elaborate upon our existing model by including inhibitory and excitatory units while imposing simple connectivity rules to demonstrate what role these inhibitory elements and the simple architectures play in sculpting the response dynamics of the network. We find that simply adding inhibition is not sufficient for obtaining the different distinct response classes, and that a broad distribution of inhibitory projections is necessary for obtaining peak-type responses. Furthermore, although changes in connection strength that modulate the effects of inhibition onto excitatory units have a strong impact on the firing rate profile of these peaked responses, the network exhibits robustness in its overall ability to predict the expected time of reward. Finally, we demonstrate how the magnitude of expected reward can be encoded at the expected delay in the network and how peaked responses express this reward expectancy. SIGNIFICANCE STATEMENT Heterogeneity in single-neuron responses is a common feature of neuronal systems, although sometimes, in theoretical approaches, it is treated as a nuisance and seldom considered as conveying a different aspect of a signal. In this study, we focus on the heterogeneous responses in the primary visual cortex of rodents trained with a predictable delayed reward time. 
We describe under what conditions this heterogeneity can arise by self-organization and what information it can convey. This study, while focusing on a specific system, provides insight into how heterogeneity can arise in general, while also shedding light on mechanisms of reinforcement learning under realistic biological assumptions. PMID:26377457
Looking and touching: What extant approaches reveal about the structure of early word knowledge
Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2014-01-01
The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants’ responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. PMID:25444711
Visual development in primates: Neural mechanisms and critical periods
Kiorpes, Lynne
2015-01-01
Despite many decades of research into the development of visual cortex, it remains unclear what neural processes set limitations on the development of visual function and define its vulnerability to abnormal visual experience. This selected review examines the development of visual function and its neural correlates, and highlights the fact that in most cases receptive field properties of infant neurons are substantially more mature than infant visual function. One exception is temporal resolution, which can be accounted for by resolution of neurons at the level of the LGN. In terms of spatial vision, properties of single neurons alone are not sufficient to account for visual development. Different visual functions develop over different time courses. Their onset may be limited by the existence of neural response properties that support a given perceptual ability, but the subsequent time course of maturation to adult levels remains unexplained. Several examples are offered suggesting that taking account of weak signaling by infant neurons, correlated firing, and pooled responses of populations of neurons brings us closer to an understanding of the relationship between neural and behavioral development. PMID:25649764
NASA Astrophysics Data System (ADS)
Park, Byeongjin; Sohn, Hoon
2018-04-01
The practicality of laser ultrasonic scanning is limited because scanning at a high spatial resolution demands a prohibitively long scanning time. Inspired by binary search, an accelerated defect visualization technique is developed to visualize defects with a reduced scanning time. The pitch-catch distance between the excitation point and the sensing point is also fixed during scanning to maintain a high signal-to-noise ratio in the measured ultrasonic responses. The approximate defect boundary is identified by examining the interactions between ultrasonic waves and the defect observed at scanning points that are sparsely selected by a binary search algorithm. Here, a time-domain laser ultrasonic response is transformed into a spatial ultrasonic domain response using a basis pursuit approach so that the interactions between ultrasonic waves and the defect can be better identified in the spatial ultrasonic domain. Then, the area inside the identified defect boundary is visualized as the defect. The performance of the proposed defect visualization technique is validated through an experiment on a semiconductor chip. The proposed technique accelerates the defect visualization process in three respects: (1) the number of measurements necessary for defect visualization is dramatically reduced by the binary search algorithm; (2) the number of averages necessary to achieve a high signal-to-noise ratio is reduced by keeping the wave propagation distance short; and (3) with the proposed technique, defects can be identified at a lower spatial resolution than that required by full-field wave propagation imaging.
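The first acceleration (sparse, binary-search selection of scan points) can be sketched in one dimension. The probe function and defect coordinates below are hypothetical stand-ins for an actual ultrasonic measurement, not the paper's procedure.

```python
# Hypothetical 1-D illustration of the binary-search idea: instead of
# scanning every point, probe sparsely to locate a defect boundary.
def probe(x, defect_start=37.3, defect_end=62.8):
    """Stand-in for an ultrasonic measurement: True if x lies inside the defect."""
    return defect_start <= x <= defect_end

def find_boundary(lo, hi, inside_at_hi, tol=0.1):
    """Binary-search the transition between 'outside' (at lo) and 'inside' (at hi)."""
    n_measure = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        n_measure += 1
        if probe(mid) == inside_at_hi:
            hi = mid        # transition lies at or below mid
        else:
            lo = mid        # transition lies above mid
    return (lo + hi) / 2, n_measure

# Find the left edge of the defect: outside at x=0, inside at x=50.
edge, n = find_boundary(0.0, 50.0, inside_at_hi=True)
print(edge, n)  # edge ≈ 37.26 after only 9 probes
```

At a 0.1 resolution, a point-by-point scan of this 50-unit span would need roughly 500 measurements; the binary search reaches the same boundary precision in 9, which is the logarithmic saving the abstract describes.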
The priming function of in-car audio instruction.
Keyes, Helen; Whitmore, Antony; Naneva, Stanislava; McDermott, Daragh
2018-05-01
Studies to date have focused on the priming power of visual road signs, but not the priming potential of audio road-scene instruction. Here, the relative priming power of visual, audio, and multisensory road-scene instructions was assessed. In a lab-based study, participants responded to target road-scene turns following visual, audio, or multisensory road-turn primes that were congruent or incongruent with the target direction, or control primes. All types of instruction (visual, audio, and multisensory) successfully primed responses to a road scene. Responses to multisensory-primed targets (both audio and visual) were faster than responses to either audio or visual primes alone. Incongruent audio primes did not affect performance negatively in the manner of incongruent visual or multisensory primes. The results suggest that audio instructions have the potential to prime drivers to respond quickly and safely to their road environment. Peak performance will be observed if audio and visual road-instruction primes can be timed to co-occur.
Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.
Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M
2010-01-01
Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be tuned more strongly to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model, consisting of a grid of correlation-type EMDs, with videos of natural motion patterns, including prey, predators, and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees visual angle and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
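A minimal opponent correlation-type (Hassenstein-Reichardt) EMD grid of the kind this abstract describes, with a spatial offset between neighboring samples and a first-order low-pass delay with time constant tau, might look like the sketch below. The drifting-sinusoid stimulus and all parameter values are illustrative, not the authors' model or video stimuli.

```python
import numpy as np

def emd_response(stimulus, dt=0.001, tau=0.1):
    """Summed output of an opponent correlation-type (Reichardt) EMD grid.

    stimulus: array (time, space) of luminance at adjacent sampling
    points; tau is the delay filter's time constant in seconds.
    Positive output indicates net rightward (increasing-index) motion.
    """
    # First-order low-pass filter acts as the delay line
    alpha = dt / (tau + dt)
    delayed = np.zeros_like(stimulus)
    for i in range(1, stimulus.shape[0]):
        delayed[i] = delayed[i - 1] + alpha * (stimulus[i] - delayed[i - 1])
    # Each detector correlates the delayed signal at one point with the
    # direct signal at its neighbor; subtracting the mirrored pair gives
    # direction selectivity
    rightward = delayed[:, :-1] * stimulus[:, 1:]
    leftward = delayed[:, 1:] * stimulus[:, :-1]
    return float(np.sum(rightward - leftward))

# A rightward-drifting sinusoid should yield a positive opponent response
t = np.arange(0.0, 1.0, 0.001)[:, None]
x = np.arange(20)[None, :]
moving_right = np.sin(2 * np.pi * (2.0 * t - 0.1 * x))
```

The sign of the summed output flips with the direction of drift, which is the property the model exploits to compare responses across motion classes.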
Visual Working Memory Enhances the Neural Response to Matching Visual Input.
Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp
2017-07-12
Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content.
A method for real-time visual stimulus selection in the study of cortical object perception.
Leeds, Daniel D; Tarr, Michael J
2016-06-01
The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond.
Modulation of Neuronal Responses by Exogenous Attention in Macaque Primary Visual Cortex.
Wang, Feng; Chen, Minggui; Yan, Yin; Zhaoping, Li; Li, Wu
2015-09-30
Visual perception is influenced by attention deployed voluntarily or triggered involuntarily by salient stimuli. Modulation of visual cortical processing by voluntary or endogenous attention has been extensively studied, but much less is known about how involuntary or exogenous attention affects responses of visual cortical neurons. Using implanted microelectrode arrays, we examined the effects of exogenous attention on neuronal responses in the primary visual cortex (V1) of awake monkeys. A bright annular cue was flashed either around the receptive fields of recorded neurons or in the opposite visual field to capture attention. A subsequent grating stimulus probed the cue-induced effects. In a fixation task, when the cue-to-probe stimulus onset asynchrony (SOA) was <240 ms, the cue induced a transient increase of neuronal responses to the probe at the cued location during 40-100 ms after the onset of neuronal responses to the probe. This facilitation diminished and disappeared after repeated presentations of the same cue but recurred for a new cue of a different color. In another task to detect the probe, relative shortening of monkey's reaction times for the validly cued probe depended on the SOA in a way similar to the cue-induced V1 facilitation, and the behavioral and physiological cueing effects remained after repeated practice. Flashing two cues simultaneously in the two opposite visual fields weakened or diminished both the physiological and behavioral cueing effects. Our findings indicate that exogenous attention significantly modulates V1 responses and that the modulation strength depends on both novelty and task relevance of the stimulus. Significance statement: Visual attention can be involuntarily captured by a sudden appearance of a conspicuous object, allowing rapid reactions to unexpected events of significance. The current study discovered a correlate of this effect in monkey primary visual cortex. 
An abrupt, salient flash enhanced neuronal responses to, and shortened the animal's reaction time for, a subsequent visual probe stimulus at the same location. However, the enhancement of the neural responses diminished after repeated exposures to this flash if the animal was not required to react to the probe. Moreover, a second, simultaneous flash at another location weakened the neuronal and behavioral effects of the first one. These findings revealed, beyond the observations reported so far, the effects of exogenous attention in the brain.
Encoding model of temporal processing in human visual cortex.
Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit
2017-12-19
How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominantly with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Different from the general linear model of fMRI that predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings propose a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations in millisecond resolution in any part of the brain.
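The sustained/transient distinction at the heart of this abstract can be illustrated with a toy two-channel step: a sustained channel that follows the stimulus time course and a transient channel that responds only at onsets and offsets. The channel definitions here are simplified stand-ins, not the paper's encoding model.

```python
import numpy as np

def two_channel_responses(stim, dt=0.01):
    """Toy two-temporal-channel step: the sustained channel follows the
    stimulus time course; the transient channel is the rectified
    temporal derivative, responding only at onsets and offsets."""
    sustained = stim.astype(float)
    transient = np.abs(np.diff(stim, prepend=stim[0])) / dt
    return sustained, transient

# A 300 ms boxcar stimulus sampled at 100 Hz: the sustained channel
# integrates over the whole presentation, the transient channel fires
# only at the onset and the offset
stim = np.zeros(100)
stim[20:50] = 1.0
sustained, transient = two_channel_responses(stim)
```

Convolving each channel's output with a hemodynamic response function would give the kind of channel-specific fMRI predictions the encoding approach compares against measured responses.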
Validating a visual version of the metronome response task.
Laflamme, Patrick; Seli, Paul; Smilek, Daniel
2018-02-12
The metronome response task (MRT)-a sustained-attention task that requires participants to produce a response in synchrony with an audible metronome-was recently developed to index response variability in the context of studies on mind wandering. In the present studies, we report on the development and validation of a visual version of the MRT (the visual metronome response task; vMRT), which uses the rhythmic presentation of visual, rather than auditory, stimuli. Participants completed the vMRT (Studies 1 and 2) and the original (auditory-based) MRT (Study 2) while also responding to intermittent thought probes asking them to report the depth of their mind wandering. The results showed that (1) individual differences in response variability during the vMRT are highly reliable; (2) prior to thought probes, response variability increases with increasing depth of mind wandering; (3) response variability is highly consistent between the vMRT and the original MRT; and (4) both response variability and depth of mind wandering increase with increasing time on task. Our results indicate that the original MRT findings are consistent across the visual and auditory modalities, and that the response variability measured in both tasks indexes a non-modality-specific tendency toward behavioral variability. The vMRT will be useful in the place of the MRT in experimental contexts in which researchers' designs require a visual-based primary task.
A comparative study on visual choice reaction time for different colors in females.
Balakrishnan, Grrishma; Uppinakudru, Gurunandan; Girwar Singh, Gaur; Bangera, Shobith; Dutt Raghavendra, Aswini; Thangavel, Dinesh
2014-01-01
Reaction time is one of the important methods to study a person's central information processing speed and coordinated peripheral movement response. Visual choice reaction time is a type of reaction time and is very important for drivers, pilots, security guards, and so forth. Previous studies have mainly examined simple reaction time, and there are very few studies on visual choice reaction time. The aim of our study was to compare the visual choice reaction times for red, green, and yellow colors in 60 healthy undergraduate female volunteers. After giving adequate practice, visual choice reaction time was recorded for red, green, and yellow colors using a reaction time machine (RTM 608, Medicaid, Chandigarh). Repeated-measures ANOVA and Bonferroni multiple comparisons were used for analysis, and P < 0.05 was considered statistically significant. The results showed that both red and green had significantly shorter visual choice reaction times than yellow (P < 0.0001 and P = 0.0002, respectively). This could be because the mental processing time for yellow is longer than that for red and green.
DVA as a Diagnostic Test for Vestibulo-Ocular Reflex Function
NASA Technical Reports Server (NTRS)
Wood, Scott J.; Appelbaum, Meghan
2010-01-01
The vestibulo-ocular reflex (VOR) stabilizes vision on earth-fixed targets by eliciting eye movements in response to changes in head position. How well the eyes perform this task can be functionally measured by the dynamic visual acuity (DVA) test. We designed a passive, horizontal DVA test to specifically study the acuity and reaction time when looking in different target locations. Visual acuity was compared among 12 subjects using a standard Landolt C wall chart, a computerized static (no rotation) acuity test, and a dynamic acuity test while oscillating at 0.8 Hz (+/-60 deg/s). In addition, five trials with yaw oscillation randomly presented a visual target in one of nine different locations with the size and presentation duration of the visual target varying across trials. The results showed a significant difference between the static and dynamic threshold acuities as well as a significant difference between the visual targets presented in the horizontal plane versus those in the vertical plane when comparing accuracy of vision and reaction time of the response. Visual acuity increased proportional to the size of the visual target and increased between 150 and 300 msec duration. We conclude that dynamic visual acuity varies with target location, with acuity optimized for targets in the plane of rotation. This DVA test could be used as a functional diagnostic test for visual-vestibular and neuro-cognitive impairments by assessing both accuracy and reaction time to acquire visual targets.
Johnson, Christopher M; Pate, Mariah B; Postma, Gregory N
2018-04-01
Standard KTP (potassium titanyl phosphate) laser wavelength-specific protective eyewear often impairs visualization of tissue changes during laser treatment. This sometimes necessitates eyewear removal to evaluate tissue effects, which wastes time and poses safety concerns. The objective was to determine if "virtual" or "electronic" chromoendoscopy filters, as found on some endoscopy platforms, could alleviate the restricted visualization inherent to protective eyewear. A KTP laser was applied to porcine laryngeal tissue and recorded via video laryngoscopy with 1 optical (Olympus Narrow Band Imaging) and 8 digital (Pentax Medical I-scan) chromoendoscopy filters. Videos were viewed by 11 otolaryngologists wearing protective eyewear. Using a discrete visual analog scale, they rated each filter on its ability to improve visualization. No filter impaired visualization; 5 of 9 improved visualization. Based on statistical significance, the number of positive responses, and the lack of negative responses, narrow band imaging and the I-scan tone enhancement filter for leukoplakia performed best. These filters could shorten procedure time and improve safety; therefore, further clinical evaluation is warranted.
Binocular summation and peripheral visual response time
NASA Technical Reports Server (NTRS)
Gilliland, K.; Haines, R. F.
1975-01-01
Six males were administered a peripheral visual response time test to the onset of brief small stimuli imaged in 10-deg arc separation intervals across the dark adapted horizontal retinal meridian under both binocular and monocular viewing conditions. This was done in an attempt to verify the existence of peripheral binocular summation using a response time measure. The results indicated that from 50-deg arc right to 50-deg arc left of the line of sight binocular summation is a reasonable explanation for the significantly faster binocular data. The stimulus position by viewing eye interaction was also significant. A discussion of these and other analyses is presented along with a review of related literature.
Response time to colored stimuli in the full visual field
NASA Technical Reports Server (NTRS)
Haines, R. F.; Dawson, L. M.; Galvan, T.; Reid, L. M.
1975-01-01
Peripheral visual response time was measured in seven dark adapted subjects to the onset of small (45' arc diam), brief (50 msec), colored (blue, yellow, green, red) and white stimuli imaged at 72 locations within their binocular field of view. The blue, yellow, and green stimuli were matched for brightness at about 2.6 log10 units above their absolute light threshold, and they appeared at an unexpected time and location. These data were obtained to provide response time and no-response data for use in various design disciplines involving instrument panel layout. The results indicated that the retina possesses relatively concentric regions within each of which mean response time can be expected to be of approximately the same duration. These regions are centered near the fovea and extend farther horizontally than vertically. Mean foveal response time was fastest for yellow and slowest for blue. Three and one-half percent of the total 56,410 trials presented resulted in no-responses. Regardless of stimulus color, the lowest percentage of no-responses occurred within 30 deg arc from the fovea and the highest within 40 deg to 80 deg arc below the fovea.
Looking and touching: what extant approaches reveal about the structure of early word knowledge.
Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2015-09-01
The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants' responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life.
Sanfratello, Lori; Aine, Cheryl; Stephen, Julia
2018-05-25
Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test at the physiological level differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in the intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal responses. Furthermore, a negative correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli.
Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn
2018-04-01
Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution from each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences about the mechanisms underlying cue response, which could help to develop effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults and to investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single-task and dual-task (concurrent digit span recall) conditions. Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated the saccadic deficit. Visual cues significantly increased saccade frequency in both PD and controls under both single-task and dual-task conditions. Attention rather than visual function was central to saccade frequency and gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development.
Zago, Myrka; Bosco, Gianfranco; Maffei, Vincenzo; Iosa, Marco; Ivanenko, Yuri P; Lacquaniti, Francesco
2004-04-01
Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. Here we present evidence in favor of a different view: the brain makes the best estimate about target motion based on measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from expected dynamics (kinetics). We projected a virtual target moving vertically downward on a wide screen with different randomized laws of motion. In the first series of experiments, subjects were asked to intercept this target by punching a real ball that fell hidden behind the screen and arrived in synchrony with the visual target. Subjects systematically timed their motor responses consistent with the assumption of gravity effects on an object's mass, even when the visual target did not accelerate. With training, the gravity model was not switched off but adapted to nonaccelerating targets by shifting the time of motor activation. In the second series of experiments, there was no real ball falling behind the screen. Instead the subjects were required to intercept the visual target by clicking a mouse button. In this case, subjects timed their responses consistent with the assumption of uniform motion in the absence of forces, even when the target actually accelerated. Overall, the results are in accord with the theory that motor responses evoked by visual kinematics are modulated by a prior of the target dynamics. The prior appears surprisingly resistant to modifications based on performance errors.
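The two timing assumptions contrasted in this abstract can be written down directly: time-to-contact under uniform motion versus under a gravity prior, for a target at distance d moving at initial speed v. The numeric values below are purely illustrative.

```python
import math

def ttc_uniform(d, v):
    """Time to contact assuming uniform motion (no forces): t = d / v."""
    return d / v

def ttc_gravity(d, v, g=9.81):
    """Time to contact assuming constant downward acceleration g:
    the positive root of d = v*t + 0.5*g*t**2."""
    return (-v + math.sqrt(v * v + 2.0 * g * d)) / g

# Illustrative numbers: a target 1 m away, initially moving at 2 m/s
t_uniform = ttc_uniform(1.0, 2.0)   # 0.5 s
t_gravity = ttc_gravity(1.0, 2.0)   # shorter, since gravity speeds the target up
```

A gravity prior thus always predicts earlier contact than the uniform-motion estimate for a descending target, which is the direction of the systematic timing bias the experiments measured.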
Stephen, Julia M; Ranken, Doug F; Aine, Cheryl J
2006-01-01
The sensitivity of visual areas to different temporal frequencies, as well as the functional connections between these areas, was examined using magnetoencephalography (MEG). Alternating circular sinusoids (0, 3.1, 8.7 and 14 Hz) were presented to foveal and peripheral locations in the visual field to target ventral and dorsal stream structures, respectively. It was hypothesized that higher temporal frequencies would preferentially activate dorsal stream structures. To determine the effect of frequency on the cortical response we analyzed the late time interval (220-770 ms) using a multi-dipole spatio-temporal analysis approach to provide source locations and timecourses for each condition. As an exploratory aspect, we performed cross-correlation analysis on the source timecourses to determine which sources responded similarly within conditions. Contrary to predictions, dorsal stream areas were not activated more frequently during high temporal frequency stimulation. However, across cortical sources the frequency-following response showed a difference, with significantly higher power at the second harmonic for the 3.1 and 8.7 Hz stimulation and at the first and second harmonics for the 14 Hz stimulation with this pattern seen robustly in area V1. Cross-correlations of the source timecourses showed that both low- and high-order visual areas, including dorsal and ventral stream areas, were significantly correlated in the late time interval. The results imply that frequency information is transferred to higher-order visual areas without translation. Despite the less complex waveforms seen in the late interval of time, the cross-correlation results show that visual, temporal and parietal cortical areas are intricately involved in late-interval visual processing.
Sung, Kyongje
2008-12-01
Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.
NASA Astrophysics Data System (ADS)
Shimoyama, Koji; Jeong, Shinkyu; Obayashi, Shigeru
A new approach for multi-objective robust design optimization was proposed and applied to a real-world design problem with a large number of objective functions. The present approach, assisted by response surface approximation and visual data-mining, yielded two major gains regarding computational time and data interpretation. The Kriging model for response surface approximation can markedly reduce the computational time for predictions of robustness. In addition, the use of self-organizing maps as a data-mining technique allows visualization of complicated design information between optimality and robustness in a comprehensible two-dimensional form. Therefore, the extraction and interpretation of trade-off relations between optimality and robustness of design, and also the location of sweet spots in the design space, can be performed in a comprehensive manner.
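A minimal simple-Kriging (Gaussian-process) predictor with a Gaussian correlation model conveys the surrogate idea: fit once on a handful of expensive evaluations, then predict cheaply anywhere. This is an illustrative stand-in, not the Kriging model or design problem used in the study; real Kriging codes also fit the length scale and a trend term.

```python
import numpy as np

def kriging_predict(X, y, Xq, length=0.5, noise=1e-6):
    """Simple-Kriging prediction with a Gaussian correlation model.

    X: (n, d) sampled designs, y: (n,) responses, Xq: (m, d) query
    points. Illustrative sketch only.
    """
    def corr(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length ** 2))

    K = corr(X, X) + noise * np.eye(len(X))   # regularized correlation matrix
    w = np.linalg.solve(K, y)                 # fit weights once...
    return corr(Xq, X) @ w                    # ...then predictions are cheap

# Surrogate for a hypothetical expensive objective f(x) = sin(x)
X = np.linspace(0.0, np.pi, 8)[:, None]
y = np.sin(X[:, 0])
pred = kriging_predict(X, y, np.array([[np.pi / 2.0]]))
```

In a robust-optimization loop, the surrogate would replace the expensive solver when estimating robustness measures over many perturbed designs.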
Effect of glaucoma on eye movement patterns and laboratory-based hazard detection ability
Black, Alex A.; Wood, Joanne M.
2017-01-01
Purpose: The mechanisms underlying the elevated crash rates of older drivers with glaucoma are poorly understood. A key driving skill is timely detection of hazards; however, the hazard detection ability of drivers with glaucoma has been largely unexplored. This study assessed the eye movement patterns and visual predictors of performance on a laboratory-based hazard detection task in older drivers with glaucoma. Methods: Participants included 30 older drivers with glaucoma (71±7 years; average better-eye mean deviation (MD) = −3.1±3.2 dB; average worse-eye MD = −11.9±6.2 dB) and 25 age-matched controls (72±7 years). Visual acuity, contrast sensitivity, visual fields, useful field of view (UFoV; processing speeds), and motion sensitivity were assessed. Participants completed a computerised Hazard Perception Test (HPT) while their eye movements were recorded using a desk-mounted Tobii TX300 eye-tracking system. The HPT comprises a series of real-world traffic videos recorded from the driver's perspective; participants responded to road hazards appearing in the videos, and hazard response times were determined. Results: Participants with glaucoma exhibited an average of 0.42 seconds delay in hazard response time (p = 0.001), smaller saccades (p = 0.010), and delayed first fixation on hazards (p<0.001) compared to controls. Importantly, larger saccades were associated with faster hazard responses in the glaucoma group (p = 0.004), but not in the control group (p = 0.19). Across both groups, significant visual predictors of hazard response times included motion sensitivity, UFoV, and worse-eye MD (p<0.05). Conclusions: Older drivers with glaucoma had delayed hazard response times compared to controls, with associated changes in eye movement patterns. The association between larger saccades and faster hazard response time in the glaucoma group may represent a compensatory behaviour to facilitate improved performance.
Real-time detection and discrimination of visual perception using electrocorticographic signals
NASA Astrophysics Data System (ADS)
Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.
2018-06-01
Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiment II and III, the real-time decoder correctly detected 73.7% responses to face, kanji and black computer stimuli and 74.8% responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance maximized for combined spatial-temporal information. 
The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected from their ECoG responses in real time within 500 ms of stimulus onset.
A Simple Network Architecture Accounts for Diverse Reward Time Responses in Primary Visual Cortex.
Huertas, Marco A; Hussain Shuler, Marshall G; Shouval, Harel Z
2015-09-16
Many actions performed by animals and humans depend on an ability to learn, estimate, and produce temporal intervals of behavioral relevance. Exemplifying such learning of cued expectancies is the observation of reward-timing activity in the primary visual cortex (V1) of rodents, wherein neural responses to visual cues come to predict the time of future reward as behaviorally experienced in the past. These reward-timing responses exhibit significant heterogeneity in at least three qualitatively distinct classes: sustained increase or sustained decrease in firing rate until the time of expected reward, and a class of cells that reach a peak in firing at the expected delay. We elaborate upon our existing model by including inhibitory and excitatory units while imposing simple connectivity rules to demonstrate what role these inhibitory elements and the simple architectures play in sculpting the response dynamics of the network. We find that simply adding inhibition is not sufficient for obtaining the different distinct response classes, and that a broad distribution of inhibitory projections is necessary for obtaining peak-type responses. Furthermore, although changes in connection strength that modulate the effects of inhibition onto excitatory units have a strong impact on the firing rate profile of these peaked responses, the network exhibits robustness in its overall ability to predict the expected time of reward. Finally, we demonstrate how the magnitude of expected reward can be encoded at the expected delay in the network and how peaked responses express this reward expectancy. Heterogeneity in single-neuron responses is a common feature of neuronal systems, although sometimes, in theoretical approaches, it is treated as a nuisance and seldom considered as conveying a different aspect of a signal. In this study, we focus on the heterogeneous responses in the primary visual cortex of rodents trained with a predictable delayed reward time. 
We describe under what conditions this heterogeneity can arise by self-organization, and what information it can convey. This study, while focusing on a specific system, provides insight into how heterogeneity can arise in general while also shedding light on mechanisms of reinforcement learning under realistic biological assumptions.
Oculomotor Evidence for Top-Down Control following the Initial Saccade
Siebold, Alisha; van Zoest, Wieske; Donk, Mieke
2011-01-01
The goal of the current study was to investigate how salience-driven and goal-driven processes unfold during visual search over multiple eye movements. Eye movements were recorded while observers searched for a target, which was located on (Experiment 1) or defined as (Experiment 2) a specific orientation singleton. This singleton could either be the most, medium, or least salient element in the display. Results were analyzed as a function of response time separately for initial and second eye movements. Irrespective of the search task, initial saccades elicited shortly after the onset of the search display were primarily salience-driven, whereas initial saccades elicited after approximately 250 ms were completely unaffected by salience. Initial saccades were increasingly guided in line with task requirements as response times increased. Second saccades were completely unaffected by salience and were consistently goal-driven, irrespective of response time. These results suggest that stimulus salience affects the visual system only briefly after a visual image enters the brain and has no effect thereafter. PMID:21931603
Different patterns of modality dominance across development.
Barnhart, Wesley R; Rivera, Samuel; Robinson, Christopher W
2018-01-01
The present study sought to better understand how children, young adults, and older adults attend and respond to multisensory information. In Experiment 1, young adults were presented with two spoken words, two pictures, or two word-picture pairings and they had to determine if the two stimuli/pairings were exactly the same or different. Pairing the words and pictures together slowed down visual but not auditory response times and delayed the latency of first fixations, both of which are consistent with a proposed mechanism underlying auditory dominance. Experiment 2 examined the development of modality dominance in children, young adults, and older adults. Cross-modal presentation attenuated visual accuracy and slowed down visual response times in children, whereas older adults showed the opposite pattern, with cross-modal presentation attenuating auditory accuracy and slowing down auditory response times. Cross-modal presentation also delayed first fixations in children and young adults. Mechanisms underlying modality dominance and multisensory processing are discussed.
Natural sleep modifies the rat electroretinogram.
Galambos, R; Juhász, G; Kékesi, A K; Nyitrai, G; Szilágyi, N
1994-01-01
We show here electroretinograms (ERGs) recorded from freely moving rats during sleep and wakefulness. Bilateral ERGs were evoked by flashes delivered through a light-emitting diode implanted under the skin above one eye and recorded through electrodes inside each orbit near the optic nerve. Additional electrodes over each visual cortex monitored the brain waves and collected flash-evoked cortical potentials to compare with the ERGs. Connections to the stimulating and recording instruments through a plug on the head made data collection possible at any time without physically disturbing the animal. The three major findings are (i) the ERG amplitude during slow-wave sleep can be 2 or more times that of the waking response; (ii) the ERG patterns in slow-wave and REM sleep are different; and (iii) the sleep-related ERG changes closely mimic those taking place at the same time in the responses evoked from the visual cortex. We conclude that the mechanisms that alter the visual cortical-evoked responses during sleep operate also and similarly at the retinal level. PMID:8197199
Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
Summary The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
Age-related changes in event-cued visual and auditory prospective memory proper.
Uttl, Bob
2006-06-01
We rely upon prospective memory proper (ProMP) to bring back to awareness previously formed plans and intentions at the right place and time, and to enable us to act upon those plans and intentions. To examine age-related changes in ProMP, younger and older participants made decisions about simple stimuli (ongoing task) and at the same time were required to respond to a ProM cue, either a picture (visually cued ProM test) or a sound (auditorily cued ProM test), embedded in a simultaneously presented series of similar stimuli (either pictures or sounds). The cue display size or loudness increased across trials until a response was made. The cue size and cue loudness at the time of response indexed ProMP. The main results showed that both visual and auditory ProMP declined with age, and that such declines were mediated by age declines in sensory functions (visual acuity and hearing level), processing resources, working memory, intelligence, and ongoing task resource allocation.
Bressler, David W.; Silver, Michael A.
2010-01-01
Spatial attention improves visual perception and increases the amplitude of neural responses in visual cortex. In addition, spatial attention tasks and fMRI have been used to discover topographic visual field representations in regions outside visual cortex. We therefore hypothesized that requiring subjects to attend to a retinotopic mapping stimulus would facilitate the characterization of visual field representations in a number of cortical areas. In our study, subjects attended either a central fixation point or a wedge-shaped stimulus that rotated about the fixation point. Response reliability was assessed by computing coherence between the fMRI time series and a sinusoid with the same frequency as the rotating wedge stimulus. When subjects attended to the rotating wedge instead of ignoring it, the reliability of retinotopic mapping signals increased by approximately 50% in early visual cortical areas (V1, V2, V3, V3A/B, V4) and ventral occipital cortex (VO1) and by approximately 75% in lateral occipital (LO1, LO2) and posterior parietal (IPS0, IPS1 and IPS2) cortical areas. Additionally, one 5-minute run of retinotopic mapping in the attention-to-wedge condition produced responses as reliable as the average of three to five (early visual cortex) or more than five (lateral occipital, ventral occipital, and posterior parietal cortex) attention-to-fixation runs. These results demonstrate that allocating attention to the retinotopic mapping stimulus substantially reduces the amount of scanning time needed to determine the visual field representations in occipital and parietal topographic cortical areas. Attention significantly increased response reliability in every cortical area we examined and may therefore be a general mechanism for improving the fidelity of neural representations of sensory stimuli at multiple levels of the cortical processing hierarchy. PMID:20600961
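The reliability measure used above, coherence between a voxel's fMRI time series and a sinusoid at the rotating wedge's stimulation frequency, can be sketched as follows. This is a minimal illustration of the standard spectral-coherence definition used in traveling-wave retinotopy (amplitude at the stimulus frequency relative to the root sum of squared amplitudes across all non-DC frequencies); the function name and arguments are assumptions, not the authors' analysis code.

```python
import numpy as np

def coherence(ts, stim_freq_cycles):
    """Coherence of a time series with a sinusoid at the stimulus frequency:
    spectral amplitude at that frequency divided by the root sum of squared
    amplitudes over all non-DC frequencies. Ranges from ~0 (noise) to 1
    (pure sinusoid at the stimulus frequency)."""
    ts = np.asarray(ts, dtype=float) - np.mean(ts)  # remove DC component
    amps = np.abs(np.fft.rfft(ts))                  # one-sided amplitude spectrum
    return amps[stim_freq_cycles] / np.sqrt(np.sum(amps[1:] ** 2))
```

For a pure sinusoid at the stimulus frequency the measure approaches 1, and added noise (or attending away from the wedge, per the abstract's findings) lowers it.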
Boosting pitch encoding with audiovisual interactions in congenital amusia.
Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne
2015-01-01
The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. 
These findings are a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective within an appropriate range of unimodal performance.
Variability of visual responses of superior colliculus neurons depends on stimulus velocity.
Mochol, Gabriela; Wójcik, Daniel K; Wypych, Marek; Wróbel, Andrzej; Waleszczyk, Wioletta J
2010-03-03
Visually responding neurons in the superficial, retinorecipient layers of the cat superior colliculus receive input from two primarily parallel information-processing channels, Y and W, which is reflected in their velocity response profiles. We quantified the time-dependent variability of responses of these neurons to stimuli moving with different velocities using the Fano factor (FF), calculated in discrete time windows. The FF for cells responding to low-velocity stimuli, and thus receiving W inputs, increased with the increase in firing rate. In contrast, the dynamics of activity of cells responding to fast-moving stimuli, processed by the Y pathway, correlated negatively with FF, whether the response was excitatory or suppressive. These observations were tested against several types of surrogate data. Whereas a Poisson description failed to reproduce the variability of all collicular responses, the inclusion of secondary structure in the generating point process recovered most of the observed features of responses to fast-moving stimuli. Neither model could reproduce the variability of low-velocity responses, which suggests that, in this case, more complex time dependencies need to be taken into account. Our results indicate that the Y and W channels may differ in the reliability of responses to visual stimulation. Apart from previously reported morphological and physiological differences between cells belonging to the Y and W channels, this is a new feature distinguishing these two pathways.
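The windowed Fano factor described above, the variance of spike counts across trials divided by their mean, computed separately in each discrete time window, can be sketched as below. This is a generic illustration of the measure, not the paper's code; the function name and return convention are assumptions.

```python
import numpy as np

def fano_factor(spike_trains, t_start, t_stop, window):
    """Time-resolved Fano factor from repeated trials.

    spike_trains: list of 1-D arrays of spike times (s), one per trial.
    Returns (window centers, FF per window); FF is NaN where the mean
    spike count is zero.
    """
    n_bins = int(round((t_stop - t_start) / window))
    edges = t_start + window * np.arange(n_bins + 1)
    # trials x windows matrix of spike counts
    counts = np.array([np.histogram(st, bins=edges)[0] for st in spike_trains])
    mean = counts.mean(axis=0)
    var = counts.var(axis=0, ddof=1)
    ff = np.where(mean > 0, var / np.where(mean > 0, mean, 1.0), np.nan)
    centers = edges[:-1] + window / 2.0
    return centers, ff
```

A Poisson process yields FF ≈ 1 in every window; perfectly repeatable trains yield FF = 0, so the measure tracks trial-to-trial reliability over time, as exploited in the abstract.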
Else, Jane E.; Ellis, Jason; Orme, Elizabeth
2015-01-01
Art is one of life’s great joys, whether it is beautiful, ugly, sublime or shocking. Aesthetic responses to visual art involve sensory, cognitive and visceral processes. Neuroimaging studies have yielded a wealth of information regarding aesthetic appreciation and beauty using visual art as stimuli, but few have considered the effect of expertise on visual and visceral responses. To study the time course of visual, cognitive and emotional processes in response to visual art, we investigated the event-related potentials (ERPs) elicited whilst viewing and rating the visceral affect of three categories of visual art. Two groups, artists and non-artists, viewed representational, abstract and indeterminate 20th-century art. Early components, particularly the N1, related to attention and effort, and the P2, linked to higher-order visual processing, were enhanced for artists when compared to non-artists. This effect was present for all types of art, but further enhanced for abstract art (AA), which was rated as having the lowest visceral affect by the non-artists. The later slow-wave processes (500–1000 ms), associated with arousal and sustained attention, also showed clear differences between the two groups in response to both type of art and visceral affect. AA increased arousal and sustained attention in artists, whilst it decreased them in non-artists. These results suggest that aesthetic response to visual art is affected by both expertise and semantic content. PMID:27242497
The relation between visualization size, grouping, and user performance.
Gramazio, Connor C; Schloss, Karen B; Laidlaw, David H
2014-12-01
In this paper we make the following contributions: (1) we describe how the grouping, quantity, and size of visual marks affects search time based on the results from two experiments; (2) we report how search performance relates to self-reported difficulty in finding the target for different display types; and (3) we present design guidelines based on our findings to facilitate the design of effective visualizations. Both Experiment 1 and 2 asked participants to search for a unique target in colored visualizations to test how the grouping, quantity, and size of marks affects user performance. In Experiment 1, the target square was embedded in a grid of squares and in Experiment 2 the target was a point in a scatterplot. Search performance was faster when colors were spatially grouped than when they were randomly arranged. The quantity of marks had little effect on search time for grouped displays ("pop-out"), but increasing the quantity of marks slowed reaction time for random displays. Regardless of color layout (grouped vs. random), response times were slowest for the smallest mark size and decreased as mark size increased to a point, after which response times plateaued. In addition to these two experiments we also include potential application areas, as well as results from a small case study where we report preliminary findings that size may affect how users infer how visualizations should be used. We conclude with a list of design guidelines that focus on how to best create visualizations based on grouping, quantity, and size of visual marks.
Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.
Morrill, Ryan J; Hasenstaub, Andrea R
2018-03-14
The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information are integrated at the earliest stages of sensory cortical processing.
Hemispheric differences in visual search of simple line arrays.
Polich, J; DeFrancesco, D P; Garon, J F; Cohen, W
1990-01-01
The effects of perceptual organization on hemispheric visual-information processing were assessed with stimulus arrays composed of short lines arranged in columns. A visual-search task was employed in which subjects judged whether all the lines were vertical (same) or whether a single horizontal line was present (different). Stimulus-display organization was manipulated in two experiments by variation of line density, linear organization, and array size. In general, left-visual-field/right-hemisphere presentations demonstrated more rapid and accurate responses when the display was perceived as a whole. Right-visual-field/left-hemisphere superiorities were observed when the display organization coerced assessment of individual array elements because the physical qualities of the stimulus did not effect a gestalt whole. Response times increased somewhat with increases in array size, although these effects interacted with other stimulus variables. Error rates tended to follow the reaction-time patterns. The results suggest that laterality differences in visual search are governed by stimulus properties which contribute to, or inhibit, the perception of a display as a gestalt. The implications of these findings for theoretical interpretations of hemispheric specialization are discussed.
Noise and contrast comparison of visual and infrared images of hazards as seen inside an automobile
NASA Astrophysics Data System (ADS)
Meitzler, Thomas J.; Bryk, Darryl; Sohn, Eui J.; Lane, Kimberly; Bednarz, David; Jusela, Daniel; Ebenstein, Samuel; Smith, Gregory H.; Rodin, Yelena; Rankin, James S., II; Samman, Amer M.
2000-06-01
The purpose of this experiment was to quantitatively measure driver performance for detecting potential road hazards in visual and infrared (IR) imagery of road scenes containing varying combinations of contrast and noise. This pilot test is a first step toward comparing various IR and visual sensors and displays for the purpose of an enhanced vision system to go inside the driver compartment. Visible and IR road imagery obtained was displayed on a large screen and on a PC monitor and subject response times were recorded. Based on the response time, detection probabilities were computed and compared to the known time of occurrence of a driving hazard. The goal was to see what combinations of sensor, contrast and noise enable subjects to have a higher detection probability of potential driving hazards.
ERIC Educational Resources Information Center
Moores, Elisabeth; Cassim, Rizan; Talcott, Joel B.
2011-01-01
Difficulties in visual attention are increasingly being linked to dyslexia. To date, the majority of studies have inferred functionality of attention from response times to stimuli presented for an indefinite duration. However, in paradigms that use reaction times to investigate the ability to orient attention, a delayed reaction time could also…
Time course of discrimination between emotional facial expressions: the role of visual saliency.
Calvo, Manuel G; Nummenmaa, Lauri
2011-08-01
Saccadic and manual responses were used to investigate the speed of discrimination between happy and non-happy facial expressions in two-alternative forced-choice tasks. The minimum latencies of correct saccadic responses indicated that the earliest time point at which discrimination occurred ranged between 200 and 280 ms, depending on the type of expression. Corresponding minimum latencies for manual responses ranged between 440 and 500 ms. For both response modalities, visual saliency of the mouth region was a critical factor in facilitating discrimination: the more salient the mouth was in happy face targets in comparison with non-happy distracters, the faster discrimination was. Global image characteristics (e.g., luminance) and semantic factors (i.e., categorical similarity and affective valence of expression) made minor or no contribution to discrimination efficiency. This suggests that visual saliency of distinctive facial features, rather than the significance of the expression, is used to make both early and later expression discrimination decisions.
Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto
2012-01-01
Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more precisely than classical cross-sectional images based on a two-dimensional (2D) approach. Eighty participants were assigned to each experimental condition: 2D cross-sectional visualization vs. 3D volumetric visualization. Both groups were matched for age, gender, visual-spatial ability, and previous knowledge of neuroanatomy. Accuracy in identifying brain structures, execution time, and level of confidence in the response were taken as outcome measures. Moreover, interactive effects between the experimental conditions (2D vs. 3D) and factors such as level of competence (novice vs. expert), image modality (morphological and functional), and difficulty of the structures were analyzed. The percentage of correct answers (hit rate) and level of confidence in responses were significantly higher in the 3D visualization condition than in the 2D. In addition, the response time was significantly lower for the 3D visualization condition in comparison with the 2D. The interaction between the experimental condition (2D vs. 3D) and difficulty was significant, and the 3D condition facilitated the location of difficult images more than the 2D condition. 3D volumetric visualization helps to identify brain structures such as the hippocampus and amygdala more accurately and rapidly than conventional 2D visualization. This paper discusses the implications of these results with regard to the learning process involved in neuroimaging interpretation.
Reward speeds up and increases consistency of visual selective attention: a lifespan comparison.
Störmer, Viola; Eppinger, Ben; Li, Shu-Chen
2014-06-01
Children and older adults often show less favorable reward-based learning and decision making, relative to younger adults. It is unknown, however, whether reward-based processes that influence relatively early perceptual and attentional processes show similar lifespan differences. In this study, we investigated whether stimulus-reward associations affect selective visual attention differently across the human lifespan. Children, adolescents, younger adults, and older adults performed a visual search task in which the target colors were associated with either high or low monetary rewards. We discovered that high reward value speeded up response times across all four age groups, indicating that reward modulates attentional selection across the lifespan. This speed-up in response time was largest in younger adults, relative to the other three age groups. Furthermore, only younger adults benefited from high reward value in increasing response consistency (i.e., reduction of trial-by-trial reaction time variability). Our findings suggest that reward-based modulations of relatively early and implicit perceptual and attentional processes are operative across the lifespan, and the effects appear to be greater in adulthood. The age-specific effect of reward on reducing intraindividual response variability in younger adults likely reflects mechanisms underlying the development and aging of reward processing, such as lifespan age differences in the efficacy of dopaminergic modulation. Overall, the present results indicate that reward shapes visual perception across different age groups by biasing attention to motivationally salient events.
To speak or not to speak - A multiple resource perspective
NASA Technical Reports Server (NTRS)
Tsang, P. S.; Hartzell, E. J.; Rothschild, R. A.
1985-01-01
The desirability of employing speech response in a dynamic dual task situation was discussed from a multiple resource perspective. A secondary task technique was employed to examine the time-sharing performance of five dual tasks with various degrees of resource overlap according to the structure-specific resource model of Wickens (1980). The primary task was a visual/manual tracking task which required spatial processing. The secondary task was either another tracking task or a spatial transformation task with one of four input (visual or auditory) and output (manual or speech) configurations. The results show that the dual task performance was best when the primary tracking task was paired with the visual/speech transformation task. This finding was explained by an interaction of the stimulus-central processing-response compatibility of the transformation task and the degree of resource competition between the time-shared tasks. Implications on the utility of speech response were discussed.
NASA Astrophysics Data System (ADS)
Park, Byeongjin; Sohn, Hoon
2017-07-01
Laser ultrasonic scanning, especially full-field wave propagation imaging, is attractive for damage visualization thanks to its noncontact nature, sensitivity to local damage, and high spatial resolution. However, its practicality is limited because scanning at a high spatial resolution demands a prohibitively long scanning time. Inspired by binary search, an accelerated damage visualization technique is developed to visualize damage with a reduced scanning time. The pitch-catch distance between the excitation point and the sensing point is also fixed during scanning to maintain a high signal-to-noise ratio (SNR) of measured ultrasonic responses. The approximate damage boundary is identified by examining the interactions between ultrasonic waves and damage observed at the scanning points that are sparsely selected by a binary search algorithm. Here, a time-domain laser ultrasonic response is transformed into a spatial ultrasonic domain response using a basis pursuit approach so that the interactions between ultrasonic waves and damage, such as reflections and transmissions, can be better identified in the spatial ultrasonic domain. Then, the area inside the identified damage boundary is visualized as damage. The performance of the proposed damage visualization technique is validated using a numerical simulation performed on an aluminum plate with a notch and experiments performed on an aluminum plate with a crack and a wind turbine blade with delamination.
The proposed damage visualization technique accelerates the damage visualization process in three respects: (1) the number of measurements necessary for damage visualization is dramatically reduced by the binary search algorithm; (2) the number of averages needed to achieve a high SNR is reduced by keeping the wave propagation distance short; and (3) the same damage can be identified at a lower spatial resolution than full-field wave propagation imaging requires.
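The binary-search selection of scanning points can be sketched in a few lines. Here `is_damaged` is a hypothetical probe (not from the paper's implementation) that reports whether a wave/damage interaction is observed at a given scan position along one line:

```python
def find_boundary(is_damaged, lo, hi, tol=1):
    """Locate the damage boundary along one scan line by binary search.

    Assumes is_damaged(x) -> bool flips exactly once between positions
    lo (undamaged) and hi (damaged); returns the first damaged position
    to within `tol` scan steps.
    """
    while hi - lo > tol:
        mid = (lo + hi) // 2
        if is_damaged(mid):
            hi = mid   # boundary lies at or below mid
        else:
            lo = mid   # boundary lies above mid
    return hi
```

With N candidate positions per scan line this needs only O(log N) measurements instead of N, which is the source of the reported reduction in scanning time.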
Artes, Paul H; McLeod, David; Henson, David B
2002-01-01
To report on differences between the latency distributions of responses to stimuli and to false-positive catch trials in suprathreshold perimetry. To describe an algorithm for defining response time windows and to report on its performance in discriminating between true- and false-positive responses on the basis of response time (RT). A sample of 435 largely inexperienced patients underwent suprathreshold visual field examination on a perimeter that was modified to record RTs. Data were analyzed from 60,500 responses to suprathreshold stimuli and from 523 false-positive responses to catch trials. False-positive responses had much more variable latencies than responses to suprathreshold stimuli. An algorithm defining RT windows on the basis of z-transformed individual latency samples correctly identified more than 70% of false-positive responses to catch trials, whereas fewer than 3% of responses to suprathreshold stimuli were classified as false-positive responses. Latency analysis can be used to detect a substantial proportion of false-positive responses in suprathreshold perimetry. Rejection of such responses may increase the reliability of visual field screening by reducing variability and bias in a small but clinically important proportion of patients.
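The windowing idea can be illustrated with a minimal sketch (the study's exact algorithm may differ): a response is flagged as a likely false positive when its latency falls outside a z-score window built from that patient's own response-time sample.

```python
import statistics

def rt_window(rts, z=2.0):
    """Build a plausible response-time window (mean +/- z*SD) from an
    individual's own latency sample, in ms."""
    mu = statistics.mean(rts)
    sd = statistics.stdev(rts)
    return mu - z * sd, mu + z * sd

def likely_false_positive(rt, window):
    """Responses with latencies outside the window are treated as
    probable false positives and can be rejected."""
    lo, hi = window
    return not (lo <= rt <= hi)
```

Because false-positive responses have much more variable latencies than true responses, most of them fall outside such a window while very few genuine responses do.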
Telgen, Sebastian; Parvin, Darius; Diedrichsen, Jörn
2014-10-08
Motor learning tasks are often classified into adaptation tasks, which involve the recalibration of an existing control policy (the mapping that determines both feedforward and feedback commands), and skill-learning tasks, requiring the acquisition of new control policies. We show here that this distinction also applies to two different visuomotor transformations during reaching in humans: Mirror-reversal (left-right reversal over a mid-sagittal axis) of visual feedback versus rotation of visual feedback around the movement origin. During mirror-reversal learning, correct movement initiation (feedforward commands) and online corrections (feedback responses) were only generated at longer latencies. The earliest responses were directed into a nonmirrored direction, even after two training sessions. In contrast, for visual rotation learning, no dependency of directional error on reaction time emerged, and fast feedback responses to visual displacements of the cursor were immediately adapted. These results suggest that the motor system acquires a new control policy for mirror reversal, which initially requires extra processing time, while it recalibrates an existing control policy for visual rotations, exploiting established fast computational processes. Importantly, memory for visual rotation decayed between sessions, whereas memory for mirror reversals showed offline gains, leading to better performance at the beginning of the second session than at the end of the first. With shifts in time-accuracy tradeoff and offline gains, mirror-reversal learning shares common features with other skill-learning tasks. We suggest that different neuronal mechanisms underlie the recalibration of an existing versus acquisition of a new control policy and that offline gains between sessions are a characteristic of the latter. Copyright © 2014 the authors 0270-6474/14/3413768-12$15.00/0.
Visual functions in amblyopia as determinants of response to treatment.
Singh, Vinita; Agrawal, Siddharth
2013-01-01
To describe the visual functions in amblyopia as determinants of response to treatment. Sixty-nine patients with unilateral and bilateral amblyopia (114 amblyopic eyes) 3 to 15 years old (mean age: 8.80 ± 2.9 years), 40 males (58%) and 29 females (42%), were included in this study. All patients were treated by conventional occlusion 6 hours per day for mild to moderate amblyopia (visual acuity 0.70 or better) and full-time for 4 weeks followed by 6 hours per day for severe amblyopia (visual acuity 0.8 or worse). During occlusion, near activities requiring hand-eye coordination were advised. The follow-up examination was done at 3 and 6 months. Improvement in visual acuity was evaluated on the logMAR chart and correlated with the visual functions. Statistical analysis was done using Wilcoxon rank sum test (Mann-Whitney U test) and Kruskal-Wallis analysis. There was a statistically significant association of poor contrast sensitivity with the grade of amblyopia (P < .001). The grade of amblyopia (P < .01), accommodation (P < .01), stereopsis (P = .01), and mesopic visual acuity (P < .03) were found to have a correlation with response to amblyopia therapy. The grade of amblyopia (initial visual acuity) and accommodation are strong determinants of response to amblyopia therapy, whereas stereopsis and mesopic visual acuity have some value as determinants. Copyright 2013, SLACK Incorporated.
Clark, Kait; Appelbaum, L Gregory; van den Berg, Berry; Mitroff, Stephen R; Woldorff, Marty G
2015-04-01
Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. 
Copyright © 2015 the authors 0270-6474/15/355351-09$15.00/0.
Naicker, Preshanta; Anoopkumar-Dukie, Shailendra; Grant, Gary D; Modenese, Luca; Kavanagh, Justin J
2017-02-01
Anticholinergic medications largely exert their effects due to actions on the muscarinic receptor, which mediates the functions of acetylcholine in the peripheral and central nervous systems. In the central nervous system, acetylcholine plays an important role in the modulation of movement. This study investigated the effects of over-the-counter medications with varying degrees of central anticholinergic properties on fixation stability, saccadic response time and the dynamics associated with this eye movement during a temporally-cued visual reaction time task, in order to establish the significance of central cholinergic pathways in influencing eye movements during reaction time tasks. Twenty-two participants were recruited into the placebo-controlled, double-blind, four-way crossover investigation in humans. Eye tracking technology recorded eye movements while participants reacted to visual stimuli following temporally informative and uninformative cues. The task was performed pre-ingestion as well as 0.5 and 2 h post-ingestion of promethazine hydrochloride (strong centrally acting anticholinergic), hyoscine hydrobromide (moderate centrally acting anticholinergic), hyoscine butylbromide (anticholinergic devoid of central properties) and a placebo. Promethazine decreased fixation stability during the reaction time task. In addition, promethazine was the only drug to increase saccadic response time during temporally informative and uninformative cued trials, whereby effects on response time were more pronounced following temporally informative cues. Promethazine also decreased saccadic amplitude and increased saccadic duration during the temporally-cued reaction time task. Collectively, the results of the study highlight the significant role that central cholinergic pathways play in the control of eye movements during tasks that involve stimulus identification and motor responses following temporal cues.
Seeing is believing: information content and behavioural response to visual and chemical cues
Gonzálvez, Francisco G.; Rodríguez-Gironés, Miguel A.
2013-01-01
Predator avoidance and foraging often pose conflicting demands. Animals can decrease mortality risk by searching for predators, but searching decreases foraging time and hence intake. We used this principle to investigate how prey should use information to detect, assess and respond to predation risk from an optimal foraging perspective. A mathematical model showed that solitary bees should increase flower examination time in response to predator cues and that the rate of false alarms should be negatively correlated with the relative value of the flower explored. The predatory ant, Oecophylla smaragdina, and the harmless ant, Polyrhachis dives, differ in the profile of volatiles they emit and in their visual appearance. As predicted, the solitary bee Nomia strigata spent more time examining virgin flowers in the presence of predator cues than in their absence. Furthermore, the proportion of flowers rejected decreased from morning to noon, as the relative value of virgin flowers increased. In addition, bees responded differently to visual and chemical cues. While chemical cues induced bees to search around flowers, bees detecting visual cues hovered in front of them. These strategies may allow prey to identify the nature of visual cues and to locate the source of chemical cues. PMID:23698013
Pagan, Marino
2014-01-01
Finding sought objects requires the brain to combine visual and target signals to determine when a target is in view. To investigate how the brain implements these computations, we recorded neural responses in inferotemporal cortex (IT) and perirhinal cortex (PRH) as macaque monkeys performed a delayed-match-to-sample target search task. Our data suggest that visual and target signals were combined within or before IT in the ventral visual pathway and then passed on to PRH, where they were reformatted into a more explicit target match signal over ∼10–15 ms. Accounting for these dynamics in PRH did not require proposing dynamic computations within PRH itself but, rather, could be attributed to instantaneous PRH computations performed upon an input representation from IT that changed with time. We found that the dynamics of the IT representation arose from two commonly observed features: individual IT neurons whose response preferences were not simply rescaled with time and variable response latencies across the population. Our results demonstrate that these types of time-varying responses have important consequences for downstream computation and suggest that dynamic representations can arise within a feedforward framework as a consequence of instantaneous computations performed upon time-varying inputs. PMID:25122904
Streepey, Jefferson W; Kenyon, Robert V; Keshner, Emily A
2007-01-01
We previously reported responses to induced postural instability in young healthy individuals viewing visual motion with a narrow (25 degrees in both directions) and wide (90 degrees and 55 degrees in the horizontal and vertical directions) field of view (FOV) as they stood on different sized blocks. Visual motion was achieved using an immersive virtual environment that moved realistically with head motion (natural motion) and translated sinusoidally at 0.1 Hz in the fore-aft direction (augmented motion). We observed that a subset of the subjects (steppers) could not maintain continuous stance on the smallest block when the virtual environment was in motion. We completed a posteriori analyses on the postural responses of the steppers and non-steppers that may inform us about the mechanisms underlying these differences in stability. We found that when viewing augmented motion with a wide FOV, there was a greater effect on the head and whole body center of mass and ankle angle root mean square (RMS) values of the steppers than of the non-steppers. FFT analyses revealed greater power at the frequency of the visual stimulus in the steppers compared to the non-steppers. Whole body COM time lags relative to the augmented visual scene revealed that the time-delay between the scene and the COM was significantly increased in the steppers. The increased responsiveness to visual information suggests a greater visual field-dependency of the steppers and suggests that the thresholds for shifting from a reliance on visual information to somatosensory information can differ even within a healthy population.
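The two measures central to that analysis, RMS of the sway trace and FFT power at the 0.1 Hz stimulus frequency, can be sketched as follows (an illustrative reconstruction, not the authors' code; signal and parameter names are hypothetical):

```python
import numpy as np

def sway_measures(signal, fs, stim_hz=0.1):
    """Return (RMS, spectral power at the stimulus frequency) for a
    sway trace.

    signal: 1-D array (e.g., COM displacement over time)
    fs:     sampling rate in Hz
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove DC offset
    rms = np.sqrt(np.mean(x ** 2))
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / x.size
    k = np.argmin(np.abs(freqs - stim_hz))    # bin nearest the stimulus
    return rms, power[k]
```

Greater power at the 0.1 Hz bin in the steppers than the non-steppers is what indicates stronger entrainment of body sway to the sinusoidal visual scene motion.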
Deficient cortical face-sensitive N170 responses and basic visual processing in schizophrenia.
Maher, S; Mashhoon, Y; Ekstrom, T; Lukas, S; Chen, Y
2016-01-01
Face detection, an ability to identify a visual stimulus as a face, is impaired in patients with schizophrenia. It is unclear whether impaired face processing in this psychiatric disorder results from face-specific domains or stems from more basic visual domains. In this study, we examined cortical face-sensitive N170 response in schizophrenia, taking into account deficient basic visual contrast processing. We equalized visual contrast signals among patients (n=20) and controls (n=20) and between face and tree images, based on their individual perceptual capacities (determined using psychophysical methods). We measured N170, a putative temporal marker of face processing, during face detection and tree detection. In controls, N170 amplitudes were significantly greater for faces than trees across all three visual contrast levels tested (perceptual threshold, two times perceptual threshold and 100%). In patients, however, N170 amplitudes did not differ between faces and trees, indicating diminished face selectivity (indexed by the differential responses to face vs. tree). These results indicate a lack of face-selectivity in temporal responses of brain machinery putatively responsible for face processing in schizophrenia. This neuroimaging finding suggests that face-specific processing is compromised in this psychiatric disorder. Copyright © 2015 Elsevier B.V. All rights reserved.
Tachibanaki, Shuji; Arinobu, Daisuke; Shimauchi-Matsukawa, Yoshie; Tsushima, Sawae; Kawamura, Satoru
2005-06-28
Cone photoreceptors show briefer photoresponses than rod photoreceptors. Our previous study showed that visual pigment phosphorylation, a quenching mechanism of light-activated visual pigment, is much more rapid in cones than in rods. Here, we measured the early time course of this rapid phosphorylation with good time resolution and directly compared it with the photoresponse time course in cones. At the time of photoresponse recovery, almost two phosphates were incorporated into a bleached cone pigment molecule, which indicated that the visual pigment phosphorylation coincides with the photoresponse recovery. The rapid phosphorylation in cones is attributed to very high activity of visual pigment kinase [G protein-coupled receptor kinase (GRK) 7] in cones. Because of this high activity, cone pigment is readily phosphorylated at very high bleach levels, which probably explains why cone photoresponses recover quickly even after a very bright light and do not saturate under intense background light. The high GRK7 activity is brought about by high content of a highly potent enzyme. The expression level of GRK7 was 10 times higher than that of rod kinase (GRK1), and the specific activity of a single GRK7 molecule was approximately 10 times higher than that of GRK1. The specific activity of GRK7 is the highest among the GRKs so far known. Our result seems to explain the response characteristics of cone photoreceptors in many aspects, including the nonsaturation of the cone responses during daylight vision.
Murthy, Aditya; Ray, Supriya; Shorter, Stephanie M; Schall, Jeffrey D; Thompson, Kirk G
2009-05-01
The dynamics of visual selection and saccade preparation by the frontal eye field was investigated in macaque monkeys performing a search-step task combining the classic double-step saccade task with visual search. Reward was earned for producing a saccade to a color singleton. On random trials the target and one distractor swapped locations before the saccade and monkeys were rewarded for shifting gaze to the new singleton location. A race model accounts for the probabilities and latencies of saccades to the initial and final singleton locations and provides a measure of the duration of a covert compensation process, the target-step reaction time. When the target stepped out of a movement field, noncompensated saccades to the original location were produced when movement-related activity grew rapidly to a threshold. Compensated saccades to the final location were produced when the growth of the original movement-related activity was interrupted within target-step reaction time and was replaced by activation of other neurons producing the compensated saccade. When the target stepped into a receptive field, visual neurons selected the new target location regardless of the monkeys' response. When the target stepped out of a receptive field most visual neurons maintained the representation of the original target location, but a minority of visual neurons showed reduced activity. Chronometric analyses of the neural responses to the target step revealed that the modulation of visually responsive neurons and movement-related neurons occurred early enough to shift attention and saccade preparation from the old to the new target location. These findings indicate that visual activity in the frontal eye field signals the location of targets for orienting, whereas movement-related activity instantiates saccade preparation.
Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings
NASA Technical Reports Server (NTRS)
Reda, Daniel C.; Muratore, J. J.; Heineck, James T.
1994-01-01
Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings were explored experimentally. For the time-response experiments, coatings were exposed to transient, compressible flows created during the startup and off-design operation of an injector-driven supersonic wind tunnel. Flow transients were visualized with a focusing schlieren system and recorded with a 100 frame/s color video camera.
Visual field asymmetries in visual evoked responses
Hagler, Donald J.
2014-01-01
Behavioral responses to visual stimuli exhibit visual field asymmetries, but cortical folding and the close proximity of visual cortical areas make electrophysiological comparisons between different stimulus locations problematic. Retinotopy-constrained source estimation (RCSE) uses distributed dipole models simultaneously constrained by multiple stimulus locations to provide separation between individual visual areas that is not possible with conventional source estimation methods. Magnetoencephalography and RCSE were used to estimate time courses of activity in V1, V2, V3, and V3A. Responses to left and right hemifield stimuli were not significantly different. Peak latencies for peripheral stimuli were significantly shorter than those for perifoveal stimuli in V1, V2, and V3A, likely related to the greater proportion of magnocellular input to V1 in the periphery. Consistent with previous results, sensor magnitudes for lower field stimuli were about twice as large as for upper field, which is only partially explained by the proximity to sensors for lower field cortical sources in V1, V2, and V3. V3A exhibited both latency and amplitude differences for upper and lower field responses. There were no differences for V3, consistent with previous suggestions that dorsal and ventral V3 are two halves of a single visual area, rather than distinct areas V3 and VP. PMID:25527151
Resolving human object recognition in space and time
Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude
2014-01-01
A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here, we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively later. Using representational similarity analysis, we combine human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing, with sources in V1 and IT. Finally, human MEG signals were correlated to single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision. PMID:24464044
Time-varying bispectral analysis of visually evoked multi-channel EEG
NASA Astrophysics Data System (ADS)
Chandran, Vinod
2012-12-01
Theoretical foundations of higher order spectral analysis are revisited to examine the use of time-varying bicoherence on non-stationary signals using a classical short-time Fourier approach. A methodology is developed to apply this to evoked EEG responses where a stimulus-locked time reference is available. Short-time windowed ensembles of the response at the same offset from the reference are considered as ergodic cyclostationary processes within a non-stationary random process. Bicoherence can be estimated reliably with known levels at which it is significantly different from zero and can be tracked as a function of offset from the stimulus. When this methodology is applied to multi-channel EEG, it is possible to obtain information about phase synchronization at different regions of the brain as the neural response develops. The methodology is applied to analyze evoked EEG responses to flash visual stimuli to the left and right eye separately. The EEG electrode array is segmented based on bicoherence evolution with time using the mean absolute difference as a measure of dissimilarity. Segment maps confirm the importance of the occipital region in visual processing and demonstrate a link between the frontal and occipital regions during the response. Maps are constructed using bicoherence at bifrequencies that include the alpha band frequency of 8 Hz as well as 4 and 20 Hz. Differences are observed between responses from the left eye and the right eye, and also between subjects. The methodology shows potential as a neurological functional imaging technique that can be further developed for diagnosis and monitoring using scalp EEG which is less invasive and less expensive than magnetic resonance imaging.
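The core estimator can be sketched as a generic ensemble bicoherence over stimulus-locked epochs (a standard short-time-Fourier formulation, not necessarily the author's exact implementation):

```python
import numpy as np

def bicoherence(epochs, f1, f2):
    """Ensemble bicoherence at bifrequency bins (f1, f2).

    epochs: (n_trials, n_samples) array of stimulus-locked segments,
    all taken at the same offset from the stimulus reference.
    Returns a value in [0, 1]; values near 1 indicate quadratic phase
    coupling between the components at bins f1, f2 and f1 + f2.
    """
    win = np.hanning(epochs.shape[1])
    X = np.fft.rfft(epochs * win, axis=1)
    prod = X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])
    num = np.abs(np.mean(prod))
    den = np.sqrt(np.mean(np.abs(X[:, f1] * X[:, f2]) ** 2) *
                  np.mean(np.abs(X[:, f1 + f2]) ** 2))
    return num / den
```

Tracking this quantity as a function of offset from the stimulus, per electrode, yields the time-varying bicoherence maps described above; the significance threshold ("known levels") depends on the number of trials in the ensemble.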
Sellers, Kristin K; Bennett, Davis V; Fröhlich, Flavio
2015-02-19
Neuronal firing responses in visual cortex reflect the statistics of visual input and emerge from the interaction with endogenous network dynamics. Artificial visual stimuli presented to animals in which the network dynamics were constrained by anesthetic agents or trained behavioral tasks have provided fundamental understanding of how individual neurons in primary visual cortex respond to input. In contrast, very little is known about the mesoscale network dynamics and their relationship to microscopic spiking activity in the awake animal during free viewing of naturalistic visual input. To address this gap in knowledge, we recorded local field potential (LFP) and multiunit activity (MUA) simultaneously in all layers of primary visual cortex (V1) of awake, freely viewing ferrets presented with naturalistic visual input (nature movie clips). We found that naturalistic visual stimuli modulated the entire oscillation spectrum; low frequency oscillations were mostly suppressed whereas higher frequency oscillations were enhanced. On average across all cortical layers, stimulus-induced changes in delta and alpha power negatively correlated with the MUA responses, whereas sensory-evoked increases in gamma power positively correlated with MUA responses. The time-course of the band-limited power in these frequency bands provided evidence for a model in which naturalistic visual input switched V1 between two distinct, endogenously present activity states defined by the power of low (delta, alpha) and high (gamma) frequency oscillatory activity. Therefore, the two mesoscale activity states delineated in this study may define the degree of engagement of the circuit with the processing of sensory input. Copyright © 2014 Elsevier B.V. All rights reserved.
Chromatic and Achromatic Spatial Resolution of Local Field Potentials in Awake Cortex
Jansen, Michael; Li, Xiaobing; Lashgari, Reza; Kremkow, Jens; Bereshpolova, Yulia; Swadlow, Harvey A.; Zaidi, Qasim; Alonso, Jose-Manuel
2015-01-01
Local field potentials (LFPs) have become an important measure of neuronal population activity in the brain and could provide robust signals to guide the implant of visual cortical prosthesis in the future. However, it remains unclear whether LFPs can detect weak cortical responses (e.g., cortical responses to equiluminant color) and whether they have enough visual spatial resolution to distinguish different chromatic and achromatic stimulus patterns. By recording from awake behaving macaques in primary visual cortex, here we demonstrate that LFPs respond robustly to pure chromatic stimuli and exhibit ∼2.5 times lower spatial resolution for chromatic than achromatic stimulus patterns, a value that resembles the ratio of achromatic/chromatic resolution measured with psychophysical experiments in humans. We also show that, although the spatial resolution of LFP decays with visual eccentricity as is also the case for single neurons, LFPs have higher spatial resolution and show weaker response suppression to low spatial frequencies than spiking multiunit activity. These results indicate that LFP recordings are an excellent approach to measure spatial resolution from local populations of neurons in visual cortex including those responsive to color. PMID:25416722
Timing of target discrimination in human frontal eye fields.
O'Shea, Jacinta; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent
2004-01-01
Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.
ICASE/LaRC Symposium on Visualizing Time-Varying Data
NASA Technical Reports Server (NTRS)
Banks, D. C. (Editor); Crockett, T. W. (Editor); Stacy, K. (Editor)
1996-01-01
Time-varying datasets present difficult problems for both analysis and visualization. For example, the data may be terabytes in size, distributed across mass storage systems at several sites, with time scales ranging from femtoseconds to eons. In response to these challenges, ICASE and NASA Langley Research Center, in cooperation with ACM SIGGRAPH, organized the first symposium on visualizing time-varying data. The purpose was to bring the producers of time-varying data together with visualization specialists to assess open issues in the field, present new solutions, and encourage collaborative problem-solving. These proceedings contain the peer-reviewed papers which were presented at the symposium. They cover a broad range of topics, from methods for modeling and compressing data to systems for visualizing CFD simulations and World Wide Web traffic. Because the subject matter is inherently dynamic, a paper proceedings cannot adequately convey all aspects of the work. The accompanying video proceedings provide additional context for several of the papers.
2018-02-12
usability preference. Results under the second focus showed that the frequency with which participants expected status updates differed depending upon the...assistance requests for both navigational route and building selection depending on the type of exogenous visual cues displayed? 3) Is there a difference...in response time to visual reports for both navigational route and building selection depending on the type of exogenous visual cues displayed? 4
Focused and shifting attention in children with heavy prenatal alcohol exposure.
Mattson, Sarah N; Calarco, Katherine E; Lang, Aimée R
2006-05-01
Attention deficits are a hallmark of the teratogenic effects of alcohol. However, characterization of these deficits remains inconclusive. Children with heavy prenatal alcohol exposure and nonexposed controls were evaluated using a paradigm consisting of three conditions: visual focus, auditory focus, and auditory-visual shift of attention. For the focus conditions, participants responded manually to visual or auditory targets. For the shift condition, participants alternated responses between visual targets and auditory targets. For the visual focus condition, alcohol-exposed children had lower accuracy and slower reaction time for all intertarget intervals (ITIs), while on the auditory focus condition, alcohol-exposed children were less accurate but displayed slower reaction time only on the longest ITI. Finally, for the shift condition, the alcohol-exposed group was accurate but had slowed reaction times. These results indicate that children with heavy prenatal alcohol exposure have pervasive deficits in visual focused attention and deficits in maintaining auditory attention over time. However, no deficits were noted in the ability to disengage and reengage attention when required to shift attention between visual and auditory stimuli, although reaction times to shift were slower. Copyright (c) 2006 APA, all rights reserved.
Reaction times to weak test lights [psychophysics biological model]
NASA Technical Reports Server (NTRS)
Wandell, B. A.; Ahumada, P.; Welsh, D.
1984-01-01
Maloney and Wandell (1984) describe a model of the response of a single visual channel to weak test lights. The initial channel response is a linearly filtered version of the stimulus. The filter output is randomly sampled over time. Each time a sample occurs, there is some probability, increasing with the magnitude of the sampled response, that a discrete detection event is generated. Maloney and Wandell derive the statistics of the detection events. In this paper, a test is conducted of the hypothesis that reaction-time responses to the presence of a weak test light are initiated at the first detection event. This makes it possible to extend the application of the model to lights that are slightly above threshold, but still within the linear operating range of the visual system. A parameter-free prediction of the model proposed by Maloney and Wandell for lights detected by this statistic is tested. The data are in agreement with the prediction.
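The first-detection-event rule described above lends itself to a small simulation. The sketch below is illustrative only: the sampling rate, detection probability, and trial cutoff are assumptions, not values from Maloney and Wandell. It draws Poisson-timed samples of the channel output and returns the time of the first detection event as the predicted reaction time.

```python
import random

def simulate_rt(rate_hz=100.0, p_detect=0.05, t_max=2.0, seed=0):
    """One simulated trial under a first-detection-event rule.

    The channel output is sampled at Poisson-distributed moments
    (rate_hz samples per second on average); each sample produces a
    discrete detection event with probability p_detect.  The predicted
    RT is the time of the first detection event, or None if no event
    occurs before t_max.  All parameter values are assumptions.
    """
    rng = random.Random(seed)
    t = 0.0
    while True:
        t += rng.expovariate(rate_hz)  # waiting time to the next sample
        if t >= t_max:
            return None                # trial ends without a detection
        if rng.random() < p_detect:
            return t                   # first detection event -> RT

rts = [simulate_rt(seed=s) for s in range(1000)]
detected = [rt for rt in rts if rt is not None]
```

With these values the thinned detection rate is 5 events/s, so nearly all simulated trials terminate before the cutoff and the RT distribution is approximately exponential, which is what makes the model's parameter-free prediction testable.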
Perceptual and Physiological Responses to Jackson Pollock's Fractals
Taylor, Richard P.; Spehar, Branka; Van Donkelaar, Paul; Hagerhall, Caroline M.
2011-01-01
Fractals have been very successful in quantifying the visual complexity exhibited by many natural patterns, and have captured the imagination of scientists and artists alike. Our research has shown that the poured patterns of the American abstract painter Jackson Pollock are also fractal. This discovery raises an intriguing possibility – are the visual characteristics of fractals responsible for the long-term appeal of Pollock's work? To address this question, we have conducted 10 years of scientific investigation of human response to fractals and here we present, for the first time, a review of this research that examines the inter-relationship between the various results. The investigations include eye tracking, visual preference, skin conductance, and EEG measurement techniques. We discuss the artistic implications of the positive perceptual and physiological responses to fractal patterns. PMID:21734876
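Box counting is the standard way to quantify the fractal scaling such studies rely on. The following is a minimal sketch of the idea, not the authors' analysis pipeline: it estimates the dimension of a 2-D point set from the slope of log N(s) against log(1/s).

```python
import math

def box_count_dimension(points, scales):
    """Estimate the box-counting dimension of a set of (x, y) points.

    For each box size s, count the occupied boxes N(s), then fit the
    slope of log N(s) versus log(1/s) by least squares.  A straight
    line should give a dimension near 1; poured Pollock patterns have
    been reported to fall in the fractal range between 1 and 2.
    """
    logs = []
    for s in scales:
        boxes = {(int(x / s), int(y / s)) for x, y in points}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den  # fitted slope = estimated dimension

# Sanity check on a diagonal line segment (expected dimension close to 1)
line = [(i / 10000.0, i / 10000.0) for i in range(10000)]
dim = box_count_dimension(line, scales=[0.1, 0.05, 0.02, 0.01])
```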
Applicability of Deep-Learning Technology for Relative Object-Based Navigation
2017-09-01
...possible selections for navigating an unmanned ground vehicle (UGV) is through real-time visual odometry. To navigate in such an environment, the UGV needs to be able to detect, identify, and relate the static
O'Connor, Constance M; Reddon, Adam R; Odetunde, Aderinsola; Jindal, Shagun; Balshine, Sigal
2015-12-01
Predation is one of the primary drivers of fitness for prey species. Therefore, there should be strong selection for accurate assessment of predation risk, and whenever possible, individuals should use all available information to fine-tune their response to the current threat of predation. Here, we used a controlled laboratory experiment to assess the responses of individual Neolamprologus pulcher, a social cichlid fish, to a live predator stimulus, to the odour of damaged conspecifics, or to both indicators of predation risk combined. We found that fish in the presence of the visual predator stimulus showed typical antipredator behaviour. Namely, these fish decreased activity and exploration, spent more time seeking shelter, and more time near conspecifics. Surprisingly, there was no effect of the chemical cue alone, and fish showed a reduced response to the combination of the visual predator stimulus and the odour of damaged conspecifics relative to the visual predator stimulus alone. These results demonstrate that N. pulcher adjust their anti-predator behaviour to the information available about current predation risk, and we suggest a possible role for the use of social information in the assessment of predation risk in a cooperatively breeding fish.
Functional visual acuity in patients with successfully treated amblyopia: a pilot study.
Hoshi, Sujin; Hiraoka, Takahiro; Kotsuka, Junko; Sato, Yumiko; Izumida, Shinya; Kato, Atsuko; Ueno, Yuta; Fukuda, Shinichi; Oshika, Tetsuro
2017-06-01
The aim of this study was to use conventional visual acuity measurements to quantify the functional visual acuity (FVA) in eyes with successfully treated amblyopia, and to compare the findings with those for contralateral normal eyes. Nineteen patients (7 boys, 12 girls; age 7.5 ± 2.2 years) with successfully treated unilateral amblyopia and the same conventional decimal visual acuity in both eyes (better than 1.0) were enrolled. FVA, the visual maintenance ratio (VMR), maximum and minimum visual acuity, and the average response time were recorded for both eyes of all patients using an FVA measurement system. The differences in FVA values between eyes were analyzed. The mean LogMAR FVA scores, VMR (p < 0.001 for both), and the LogMAR maximum (p < 0.005) and minimum visual acuity (p < 0.001) were significantly poorer for the eyes with treated amblyopia than for the contralateral normal eyes. There was no significant difference in the average response time. Our results indicate that FVA and VMR were poorer for eyes with treated amblyopia than for normal eyes, even though the treatment for amblyopia was considered successful on the basis of conventional visual acuity measurements. These results suggest that visual function is impaired in eyes with amblyopia, regardless of treatment success, and that FVA measurements can provide highly valuable diagnosis and treatment information that is not readily provided by conventional visual acuity measurements.
Age-related slowing of response selection and production in a visual choice reaction time task
Woods, David L.; Wyma, John M.; Yund, E. William; Herron, Timothy J.; Reed, Bruce
2015-01-01
Aging is associated with delayed processing in choice reaction time (CRT) tasks, but the processing stages most impacted by aging have not been clearly identified. Here, we analyzed CRT latencies in a computerized serial visual feature-conjunction task. Participants responded to a target letter (probability 40%) by pressing one mouse button, and responded to distractor letters differing either in color, shape, or both features from the target (probabilities 20% each) by pressing the other mouse button. Stimuli were presented randomly to the left and right visual fields and stimulus onset asynchronies (SOAs) were adaptively reduced following correct responses using a staircase procedure. In Experiment 1, we tested 1466 participants who ranged in age from 18 to 65 years. CRT latencies increased significantly with age (r = 0.47, 2.80 ms/year). Central processing time (CPT), isolated by subtracting simple reaction times (SRT) (obtained in a companion experiment performed on the same day) from CRT latencies, accounted for more than 80% of age-related CRT slowing, with most of the remaining increase in latency due to slowed motor responses. Participants were faster and more accurate when the stimulus location was spatially compatible with the mouse button used for responding, and this effect increased slightly with age. Participants took longer to respond to distractors with target color or shape than to distractors with no target features. However, the additional time needed to discriminate the more target-like distractors did not increase with age. In Experiment 2, we replicated the findings of Experiment 1 in a second population of 178 participants (ages 18–82 years). CRT latencies did not differ significantly in the two experiments, and similar effects of age, distractor similarity, and stimulus-response spatial compatibility were found. 
The results suggest that the age-related slowing in visual CRT latencies is largely due to delays in response selection and production. PMID:25954175
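Two pieces of this design are simple enough to sketch: the subtraction that isolates central processing time, and an adaptive staircase on SOA of the kind described. The step size and floor below are illustrative assumptions; the abstract does not specify the exact staircase rule.

```python
def central_processing_time(crt_ms, srt_ms):
    """CPT isolated by subtracting simple RT from choice RT (both in ms)."""
    return crt_ms - srt_ms

def update_soa(soa_ms, correct, step=0.05, floor_ms=100.0):
    """One step of a hypothetical staircase on stimulus onset asynchrony:
    shorten the SOA after a correct response, lengthen it after an
    error, never dropping below a floor.  Step size and floor are
    assumed values for illustration."""
    if correct:
        return max(floor_ms, soa_ms * (1.0 - step))
    return soa_ms * (1.0 + step)
```

For example, a 600 ms CRT paired with a 300 ms SRT gives a 300 ms CPT, and a run of correct responses walks the SOA down toward the floor, keeping the task adaptively difficult.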
A Method to Quantify Visual Information Processing in Children Using Eye Tracking
Kooiker, Marlou J.G.; Pel, Johan J.M.; van der Steen-Kant, Sanny P.; van der Steen, Johannes
2016-01-01
Visual problems that occur early in life can have major impact on a child's development. Without verbal communication and only based on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children under the age of 4 years and in children with intellectual disabilities. Here we describe a quantitative method that overcomes these problems. The method uses a remote eye tracker and a four choice preferential looking paradigm to measure eye movement responses to different visual stimuli. The child sits without head support in front of a monitor with integrated infrared cameras. In one of four monitor quadrants a visual stimulus is presented. Each stimulus has a specific visual modality with respect to the background, e.g., form, motion, contrast or color. From the reflexive eye movement responses to these specific visual modalities, output parameters such as reaction times, fixation accuracy and fixation duration are calculated to quantify a child's viewing behavior. With this approach, the quality of visual information processing can be assessed without the use of communication. By comparing results with reference values obtained in typically developing children from 0-12 years, the method provides a characterization of visual information processing in visually impaired children. The quantitative information provided by this method can be advantageous for the field of clinical visual assessment and rehabilitation in multiple ways. The parameter values provide a good basis to: (i) characterize early visual capacities and consequently to enable early interventions; (ii) compare risk groups and follow visual development over time; and (iii), construct an individual visual profile for each child. PMID:27500922
Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M
2006-10-25
The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.
A New Perspective on Visual Word Processing Efficiency
Houpt, Joseph W.; Townsend, James T.; Donkin, Christopher
2013-01-01
As a fundamental part of our daily lives, visual word processing has received much attention in the psychological literature. Despite the well-established accuracy advantage for perceiving letters in a word or in a pseudoword over letters alone or in random sequences, a comparable effect in response times has been elusive. Some researchers continue to question whether the advantage due to word context is perceptual. We use the capacity coefficient, a well-established, response-time-based measure of efficiency, to provide evidence of word processing as a particularly efficient perceptual process, complementing those results from the accuracy domain. PMID:24334151
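The capacity coefficient compares cumulative hazard functions estimated from RT distributions. A bare-bones sketch of the OR (first-terminating) version follows, using raw empirical survivor functions; this is a simplification of the estimators used in the workload-capacity literature.

```python
import math

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -log S(t), with S(t) the
    proportion of responses slower than t.  Valid only for t below the
    largest RT in the sample, so that S(t) stays positive."""
    s = sum(1 for rt in rts if rt > t) / len(rts)
    return -math.log(s)

def capacity_coefficient(rts_both, rts_a, rts_b, t):
    """OR-task capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t)).
    C(t) > 1 indicates super-capacity (the kind of especially efficient
    processing claimed for words), C(t) = 1 unlimited capacity, and
    C(t) < 1 limited capacity."""
    return cumulative_hazard(rts_both, t) / (
        cumulative_hazard(rts_a, t) + cumulative_hazard(rts_b, t))
```

As a sanity check, identical RT distributions in all three conditions give C(t) = 0.5, i.e. limited capacity; a word-context advantage shows up as C(t) pushed above 1.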
Neurolinguistic Programming Examined: Imagery, Sensory Mode, and Communication.
ERIC Educational Resources Information Center
Fromme, Donald K.; Daniell, Jennifer
1984-01-01
Tested Neurolinguistic Programming (NLP) assumptions by examining intercorrelations among response times of students (N=64) for extracting visual, auditory, and kinesthetic information from alphabetic images. Large positive intercorrelations were obtained, the only outcome not compatible with NLP. Good visualizers were significantly better in…
Individual differences in attention influence perceptual decision making.
Nunez, Michael D; Srinivasan, Ramesh; Vandekerckhove, Joachim
2015-01-01
Sequential sampling decision-making models have been successful in accounting for reaction time (RT) and accuracy data in two-alternative forced choice tasks. These models have been used to describe the behavior of populations of participants, and explanatory structures have been proposed to account for between individual variability in model parameters. In this study we show that individual differences in behavior from a novel perceptual decision making task can be attributed to (1) differences in evidence accumulation rates, (2) differences in variability of evidence accumulation within trials, and (3) differences in non-decision times across individuals. Using electroencephalography (EEG), we demonstrate that these differences in cognitive variables, in turn, can be explained by attentional differences as measured by phase-locking of steady-state visual evoked potential (SSVEP) responses to the signal and noise components of the visual stimulus. Parameters of a cognitive model (a diffusion model) were obtained from accuracy and RT distributions and related to phase-locking indices (PLIs) of SSVEPs with a single step in a hierarchical Bayesian framework. Participants who were able to suppress the SSVEP response to visual noise in high frequency bands were able to accumulate correct evidence faster and had shorter non-decision times (preprocessing or motor response times), leading to more accurate responses and faster response times. We show that the combination of cognitive modeling and neural data in a hierarchical Bayesian framework relates physiological processes to the cognitive processes of participants, and that a model with a new (out-of-sample) participant's neural data can predict that participant's behavior more accurately than models without physiological data.
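The diffusion model at the center of this analysis can be sketched with a simple Euler simulation. Parameter values below are illustrative, not the estimates reported in the study.

```python
import random

def simulate_ddm_trial(drift, threshold=1.0, ndt=0.3, noise=1.0,
                       dt=0.001, seed=0):
    """Simulate one two-boundary diffusion-model trial.

    Evidence x starts at 0 and accumulates with rate `drift` plus
    Gaussian noise until it crosses +threshold (correct) or -threshold
    (error).  The returned RT adds the non-decision time `ndt`, which
    stands in for preprocessing and motor response time.
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return x >= threshold, ndt + t

# A faster evidence-accumulation rate yields more accurate, faster
# responses, mirroring the individual differences described above.
trials = [simulate_ddm_trial(drift=2.0, seed=s) for s in range(200)]
accuracy = sum(correct for correct, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```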
Robotic Attention Processing And Its Application To Visual Guidance
NASA Astrophysics Data System (ADS)
Barth, Matthew; Inoue, Hirochika
1988-03-01
This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high-speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than a human's, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.
Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho
2016-01-01
Evidence indicates that individuals with Williams syndrome (WS) exhibit atypical attentional characteristics when viewing faces. However, the dynamics of visual attention captured by faces remain unclear, especially when explicit attentional forces are present. To clarify this, we introduced a visual search paradigm and assessed how the relative strength of visual attention captured by a face and explicit attentional control changes as search progresses. Participants (WS and controls) searched for a target (butterfly) within an array of distractors, which sometimes contained an upright face. We analyzed reaction time and the location of the first fixation, which reflect the attentional profile at the initial stage, as well as fixation durations. These features represent aspects of attention at later stages of visual search. The strength of visual attention captured by faces and explicit attentional control (toward the butterfly) was characterized by the frequency of first fixations on a face or butterfly and by the duration of face or butterfly fixations. Although reaction time was longer in all groups when faces were present, and visual attention was not dominated by faces in any group during the initial stages of the search, when faces were present, attention to faces dominated in the WS group during the later search stages. Furthermore, for the WS group, reaction time correlated with eye-movement measures at different stages of searching, such that longer reaction times were associated with longer face-fixations, specifically at the initial stage of searching. Moreover, longer reaction times were associated with longer face-fixations at the later stages of searching, while shorter reaction times were associated with longer butterfly fixations. The relative strength of attention captured by faces in people with WS is not observed at the initial stage of searching but becomes dominant as the search progresses.
Furthermore, although behavioral responses are associated with some aspects of eye movements, they are not as sensitive as eye-movement measurements themselves at detecting atypical attentional characteristics in people with WS.
Visual responses of corn silk flies (Diptera: Ulidiidae)
USDA-ARS?s Scientific Manuscript database
Corn silk flies are major pests impacting fresh market sweet corn production in Florida and Georgia. Control depends solely on well-timed applications of insecticides to protect corn ear development. Surveillance depends on visual inspection of ears with no effective trapping methods currently ava...
Timing in audiovisual speech perception: A mini review and new psychophysical data.
Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory
2016-02-01
Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
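The classification procedure can be sketched as a reverse-correlation average over the random transparency masks. This is a schematic of the logic only, not the authors' exact pipeline; smoothing, z-scoring, and the spatial dimension of the masks are omitted.

```python
def classification_map(masks, reported_apa):
    """Frame-wise classification sketch for the masking paradigm.

    masks: per-trial lists of frame visibilities in [0, 1]
    (1 = mouth region fully visible in that frame).
    reported_apa: per-trial booleans (True = participant reported
    /apa/, i.e. the visual /aka/ failed to influence perception).
    Frames whose visibility is higher on non-/apa/ trials than on
    /apa/ trials carried perceptually relevant visual speech
    information, so they receive positive weights.
    """
    n_frames = len(masks[0])
    influenced = [m for m, apa in zip(masks, reported_apa) if not apa]
    not_influenced = [m for m, apa in zip(masks, reported_apa) if apa]

    def mean_visibility(trials, frame):
        return sum(m[frame] for m in trials) / len(trials)

    return [mean_visibility(influenced, f) - mean_visibility(not_influenced, f)
            for f in range(n_frames)]
```

Computing this map separately at each audiovisual offset is what yields the spatiotemporal picture of which (possibly temporally leading) visual frames drive the McGurk percept.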
Repetition priming of face recognition in a serial choice reaction-time task.
Roberts, T; Bruce, V
1989-05-01
Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985), and with objects (Warren & Morton, 1982), are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives an intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from the target. The theoretical implications of these results are discussed.
Direct visuomotor mapping for fast visually-evoked arm movements.
Reynolds, Raymond F; Day, Brian L
2012-12-01
In contrast to conventional reaction time (RT) tasks, saccadic RTs to visual targets are very fast and unaffected by the number of possible targets. This can be explained by the sub-cortical circuitry underlying eye movements, which involves direct mapping between retinal input and motor output in the superior colliculus. Here we asked if the choice-invariance established for the eyes also applies to a special class of fast visuomotor responses of the upper limb. Using a target-pointing paradigm, we observed very fast reaction times (<150 ms) which were completely unaffected as the number of possible target choices was increased from 1 to 4. When we introduced a condition of altered stimulus-response mapping, RT went up and a cost of choice was observed. These results can be explained by direct mapping between visual input and motor output, compatible with a sub-cortical pathway for visual control of the upper limb.
Exploring conflict- and target-related movement of visual attention.
Wendt, Mike; Garling, Marco; Luna-Rodriguez, Aquiles; Jacobsen, Thomas
2014-01-01
Intermixing trials of a visual search task with trials of a modified flanker task, the authors investigated whether the presentation of conflicting distractors at only one side (left or right) of a target stimulus triggers shifts of visual attention towards the contralateral side. Search time patterns provided evidence for lateral attention shifts only when participants performed the flanker task under an instruction assumed to widen the focus of attention, demonstrating that instruction-based control settings of an otherwise identical task can impact performance in an unrelated task. Contrasting conditions with response-related and response-unrelated distractors showed that shifting attention does not depend on response conflict and may be explained as stimulus-conflict-related withdrawal or target-related deployment of attention.
Submillisecond unmasked subliminal visual stimuli evoke electrical brain responses.
Sperdin, Holger F; Spierer, Lucas; Becker, Robert; Michel, Christoph M; Landis, Theodor
2015-04-01
Subliminal perception is strongly associated with the processing of meaningful or emotional information and has mostly been studied using visual masking. In this study, we used high-density 256-channel EEG coupled with a liquid crystal display (LCD) tachistoscope to characterize the spatio-temporal dynamics of the brain response to visual checkerboard stimuli (Experiment 1) or blank stimuli (Experiment 2) presented without a mask for 1 ms (visible), 500 µs (partially visible), and 250 µs (subliminal), applying time-wise, assumption-free nonparametric randomization statistics to the strength and topography of the high-density scalp-recorded electric field. Stimulus visibility was assessed in a third separate behavioral experiment. Results revealed that unmasked checkerboards presented subliminally for 250 µs evoked weak but detectable visual evoked potential (VEP) responses. When the checkerboards were replaced by blank stimuli, there was no longer evidence for an evoked response. Furthermore, the checkerboard VEPs were modulated topographically between 243 and 296 ms post-stimulus onset as a function of stimulus duration, indicative of the engagement of distinct configurations of active brain networks. A distributed electrical source analysis localized this modulation within the right superior parietal lobule near the precuneus. These results show the presence of a brain response to submillisecond unmasked subliminal visual stimuli, independently of their emotional saliency or meaningfulness, and open an avenue for new investigations of subliminal stimulation without the use of visual masking.
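The time-wise nonparametric randomization statistics referred to above amount to permutation tests run at each time point. A sketch for a single time point, on a difference of means, is shown below; the study's statistics additionally operate on global field strength and scalp topography rather than a single channel mean, so treat this as the underlying logic only.

```python
import random

def permutation_test(group_a, group_b, n_perm=2000, seed=0):
    """Two-sided randomization test on a difference of means.

    The observed |mean(a) - mean(b)| is compared against the null
    distribution obtained by repeatedly shuffling the pooled values
    and re-splitting them into two groups of the original sizes.
    The (+1)/(+1) correction keeps the p-value strictly positive.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)
```

Applied time-wise across an epoch, this yields one p-value per sample, which is then typically corrected for the many tests across time.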
NASA Technical Reports Server (NTRS)
Reda, Daniel C.; Muratore, Joseph J., Jr.; Heineck, James T.
1993-01-01
Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings were explored experimentally. For the time-response experiments, coatings were exposed to transient, compressible flows created during the startup and off-design operation of an injector-driven supersonic wind tunnel. Flow transients were visualized with a focusing Schlieren system and recorded with a 1000 frame/sec color video camera. Liquid crystal responses to these changing-shear environments were then recorded with the same video system, documenting color-play response times equal to, or faster than, the time interval between sequential frames (i.e., 1 millisecond). For the flow-direction experiments, a planar test surface was exposed to equal-magnitude and known-direction surface shear stresses generated by both normal and tangential subsonic jet-impingement flows. Under shear, the sense of the angular displacement of the liquid crystal dispersed (reflected) spectrum was found to be a function of the instantaneous direction of the applied shear. This technique thus renders dynamic flow reversals or flow divergences visible over entire test surfaces at image recording rates up to 1 kHz. Extensions of the technique to visualize relatively small changes in surface shear stress direction appear feasible.
VisualEyes: a modular software system for oculomotor experimentation.
Guo, Yi; Kim, Eun H.; Alvarez, Tara L.
2011-03-25
Eye movement studies have provided a strong foundation for understanding how the brain acquires visual information in both the normal and dysfunctional brain. However, developing a platform to present stimuli and store eye movements can require substantial programming, time, and costs. Many systems do not offer the flexibility to program numerous stimuli for a variety of experimental needs. The VisualEyes System, however, has a flexible architecture, allowing the operator to choose any background and foreground stimulus, program one or two screens for tandem or opposing eye movements, and stimulate the left and right eye independently. This system can significantly reduce the programming development time needed to conduct an oculomotor study. The VisualEyes System will be discussed in three parts: 1) the oculomotor recording device used to acquire eye movement responses, 2) the VisualEyes software, written in LabVIEW, which generates an array of stimuli and stores responses as text files, and 3) offline data analysis. Eye movements can be recorded by several types of instrumentation, such as a limbus tracking system, a scleral search coil, or a video image system. Typical eye movement stimuli, such as saccadic steps, vergence ramps, and vergence steps, will be shown with their corresponding responses. In this video report, we demonstrate the flexibility of the system to create numerous visual stimuli and record eye movements, which can be utilized by basic scientists and clinicians to study healthy as well as clinical populations.
ERIC Educational Resources Information Center
Mitchell, Claudia
2008-01-01
At the risk of seeming to make exaggerated claims for visual methodologies, what I set out to do is lay bare some of the key elements of working with the visual as a set of methodologies and practices. In particular, I address educational research in South Africa at a time when questions of the social responsibility of the academic researcher…
Time delays in flight simulator visual displays
NASA Technical Reports Server (NTRS)
Crane, D. F.
1980-01-01
It is pointed out that the effects of delays of less than 100 msec in visual displays on pilot dynamic response and system performance are of particular interest at this time because improvements in the latest computer-generated imagery (CGI) systems are expected to reduce CGI display delays to this range. Attention is given to data which quantify the effects of display delays in the range of 0-100 msec on system stability and performance, and on pilot dynamic response, for a particular choice of aircraft dynamics, display, controller, and task. The conventional control-system design methods reviewed, the pilot response data presented, and the data for long delays all suggest lead-filter compensation of the display delay. Pilot-aircraft system crossover frequency information guides the compensation filter specification.
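Lead-filter compensation works because a pure transport delay contributes a phase lag of τω radians at frequency ω, while a first-order lead filter contributes positive phase that can offset that lag near the pilot-aircraft crossover frequency. The following Python sketch illustrates the arithmetic; the delay, crossover frequency, and lead ratio are illustrative values, not numbers taken from the report.

```python
import math

def lead_phase_deg(a, T, w):
    """Phase (degrees) contributed by the lead filter H(s) = (1 + a*T*s)/(1 + T*s)
    at frequency w (rad/s); a > 1 gives positive (lead) phase."""
    return math.degrees(math.atan(a * T * w) - math.atan(T * w))

def delay_phase_deg(tau, w):
    """Phase (degrees, negative) introduced by a pure transport delay of tau
    seconds at frequency w (rad/s): exp(-s*tau) has phase -tau*w radians."""
    return -math.degrees(tau * w)

# Illustrative case: a 50 ms display delay evaluated at a 4 rad/s crossover.
tau, w = 0.050, 4.0
lag = delay_phase_deg(tau, w)   # roughly -11.5 degrees of lag

# A lead filter with ratio a reaches its maximum phase lead at w = 1/(T*sqrt(a));
# choosing T this way centers the lead peak on the crossover frequency.
a = 3.0
T = 1.0 / (w * math.sqrt(a))
lead = lead_phase_deg(a, T, w)  # +30 degrees, more than recovering the lag
```

The trade-off hidden in this sketch is that the lead filter also amplifies high-frequency gain by a factor of `a`, which is why compensation is tuned to the crossover region rather than applied aggressively.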
Visual search performance among persons with schizophrenia as a function of target eccentricity.
Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M
2010-03-01
The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also affected to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric; their performance was more similar to that of healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia.
Song, Inkyung; Keil, Andreas
2015-01-01
Neutral cues, after being reliably paired with noxious events, prompt defensive engagement and amplified sensory responses. To examine the neurophysiology underlying these adaptive changes, we quantified the contrast-response function of visual cortical population activity during differential aversive conditioning. Steady-state visual evoked potentials (ssVEPs) were recorded while participants discriminated the orientation of rapidly flickering grating stimuli. During each trial, luminance contrast of the gratings was slowly increased and then decreased. Right-tilted gratings (CS+) were paired with loud white noise but left-tilted gratings (CS−) were not. The contrast-following waveform envelope of ssVEPs showed selective amplification of the CS+ only during the high-contrast stage of the viewing epoch. Findings support the notion that motivational relevance, learned in a time frame of minutes, affects vision through a response gain mechanism.
Common neural substrates for visual working memory and attention.
Mayer, Jutta S; Bittner, Robert A; Nikolić, Danko; Bledowski, Christoph; Goebel, Rainer; Linden, David E J
2007-06-01
Humans are severely limited in their ability to memorize visual information over short periods of time. Selective attention has been implicated as a limiting factor. Here we used functional magnetic resonance imaging to test the hypothesis that this limitation is due to common neural resources shared by visual working memory (WM) and selective attention. We combined visual search and delayed discrimination of complex objects and independently modulated the demands on selective attention and WM encoding. Participants were presented with a search array and performed easy or difficult visual search in order to encode one or three complex objects into visual WM. Overlapping activation for attention-demanding visual search and WM encoding was observed in distributed posterior and frontal regions. In the right prefrontal cortex and bilateral insula, blood-oxygen-level-dependent (BOLD) activation increased additively with WM load and attentional demand. Conversely, several visual, parietal, and premotor areas showed overlapping activation for the two task components and were severely reduced in their WM load response under the condition with high attentional demand. Regions in the left prefrontal cortex were selectively responsive to WM load. Areas selectively responsive to high attentional demand were found within the right prefrontal and bilateral occipital cortex. These results indicate that encoding into visual WM and visual selective attention rely to a high degree on access to common neural resources. We propose that competition for resources shared by visual attention and WM encoding can limit processing capabilities in distributed posterior brain regions.
"Visual" Cortex Responds to Spoken Language in Blind Children.
Bedny, Marina; Richardson, Hilary; Saxe, Rebecca
2015-08-19
Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. 
Owsley, Cynthia
2013-09-20
Older adults commonly report difficulties in visual tasks of everyday living that involve visual clutter, secondary task demands, and time-sensitive responses. These difficulties often cannot be attributed to visual sensory impairment. Techniques for measuring visual processing speed under divided attention conditions and among visual distractors have been developed and have established construct validity in that those older adults performing poorly in these tests are more likely to exhibit daily visual task performance problems. Research suggests that computer-based training exercises can increase visual processing speed in older adults and that these gains transfer to enhancement of health and functioning and a slowing in functional and health decline as people grow older.
Action video game training reduces the Simon Effect.
Hutchinson, Claire V; Barrett, Doug J K; Nitka, Aleksander; Raynes, Kerry
2016-04-01
A number of studies have shown that training on action video games improves various aspects of visual cognition including selective attention and inhibitory control. Here, we demonstrate that action video game play can also reduce the Simon Effect, and, hence, may have the potential to improve response selection during the planning and execution of goal-directed action. Non-game-players were randomly assigned to one of four groups; two trained on a first-person-shooter game (Call of Duty) on either Microsoft Xbox or Nintendo DS, one trained on a visual training game for Nintendo DS, and a control group who received no training. Response times were used to contrast performance before and after training on a behavioral assay designed to manipulate stimulus-response compatibility (the Simon Task). The results revealed significantly faster response times and a reduced cost of stimulus-response incompatibility in the groups trained on the first-person-shooter game. No benefit of training was observed in the control group or the group trained on the visual training game. These findings are consistent with previous evidence that action game play elicits plastic changes in the neural circuits that serve attentional control, and suggest training may facilitate goal-directed action by improving players' ability to resolve conflict during response selection and execution.
Exploration of complex visual feature spaces for object perception
Leeds, Daniel D.; Pyles, John A.; Tarr, Michael J.
2014-01-01
The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm3 brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. 
Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation.
Sigurdardottir, Heida M.; Sheinberg, David L.
2015-01-01
The lateral intraparietal area (LIP) of the dorsal visual stream is thought to play an important role in visually directed orienting, or the guidance of where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand how and to what extent short-term and long-term experience with visual orienting can determine the nature of responses of LIP neurons to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a preferred peripheral spatial location of a neuron. For some objects the training lasted for less than a single day, while for other objects the training lasted for several months. We found that neural responses to visual objects are affected both by such short-term and long-term experience, but that the length of the learning period determines exactly how this neural plasticity manifests itself. Short-term learning over the course of a single training session affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the neural responses to newly learned objects start to resemble those of familiar over-learned objects that share their meaning or arbitrary association. Long-term learning, on the other hand, affects the earliest and apparently bottom-up responses to visual objects. These responses tend to be greater for objects that have repeatedly been associated with looking toward, rather than away from, LIP neurons’ preferred spatial locations. Responses to objects can nonetheless be distinct even though the objects have both been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore also indicate that a complete experience-driven override of LIP object responses is difficult or impossible.
Lönnstedt, Oona M; Munday, Philip L; McCormick, Mark I; Ferrari, Maud C O; Chivers, Douglas P
2013-09-01
Carbon dioxide (CO2) levels in the atmosphere and surface ocean are rising at an unprecedented rate due to sustained and accelerating anthropogenic CO2 emissions. Previous studies have documented that exposure to elevated CO2 causes impaired antipredator behavior by coral reef fish in response to chemical cues associated with predation. However, whether ocean acidification will impair visual recognition of common predators is currently unknown. This study examined whether sensory compensation in the presence of multiple sensory cues could reduce the impacts of ocean acidification on antipredator responses. When exposed to seawater enriched with levels of CO2 predicted for the end of this century (880 μatm CO2), prey fish completely lost their response to conspecific alarm cues. While the visual response to a predator was also affected by high CO2, it was not entirely lost. Fish exposed to elevated CO2 spent less time in shelter than current-day controls and did not exhibit antipredator signaling behavior (bobbing) when multiple predator cues were present. They did, however, reduce feeding rate and activity levels to the same level as controls. The results suggest that the response of fish to visual cues may partially compensate for the lack of response to chemical cues. Fish subjected to elevated CO2 levels, and exposed to chemical and visual predation cues simultaneously, responded with the same intensity as controls exposed to visual cues alone. However, these responses were still weaker than those of control fish simultaneously exposed to chemical and visual predation cues. Consequently, visual cues improve antipredator behavior of CO2-exposed fish, but do not fully compensate for the loss of response to chemical cues. The reduced ability to correctly respond to a predator will have ramifications for survival in encounters with predators in the field, which could have repercussions for population replenishment in acidified oceans.
A real-time plantar pressure feedback device for foot unloading.
Femery, Virginie G; Moretto, Pierre G; Hespel, Jean-Michel G; Thévenon, André; Lensel, Ghislaine
2004-10-01
Objective: To develop and test a plantar pressure control device that provides both visual and auditory feedback and is suitable for correcting plantar pressure distribution patterns in persons susceptible to neuropathic foot ulceration. Design: Pilot test. Setting: Sports medicine laboratory in a university in France. Participant: One healthy man in his mid-thirties. Interventions: Not applicable. Main Outcome Measures: A device was developed based on real-time feedback, incorporating an acoustic alarm and visual signals, adjusted to a specific pressure load. Plantar pressure was measured during walking, at 6 sensor locations over 27 steps, under 2 different conditions: (1) natural and (2) unloaded in response to device feedback. Results: The subject was able to modify his gait in response to the auditory and visual signals. He did not compensate for the decrease of peak pressure under the first metatarsal by increasing the duration of the load shift under this area. Gait pattern modification centered on a mediolateral load shift. The auditory signal provided a warning system alerting the user to potentially harmful plantar pressures; the visual signal indicated the degree of pressure. Conclusions: People who have lost nociceptive perception, as in cases of diabetic neuropathy, may be able to change their walking pattern in response to the feedback provided by this device. The visual signal may also have diagnostic value in determining plantar pressures in such patients. This pilot test indicates that further studies are warranted.
Nanda, U; Eisen, S; Zadeh, R S; Owen, D
2011-06-01
There is a growing body of evidence on the impact of the environment on health and well-being. This study focuses on the impact of visual artworks on the well-being of psychiatric patients in a multi-purpose lounge of an acute care psychiatric unit. Well-being was measured by the rate of pro re nata (PRN) medication issued by nurses in response to visible signs of patient anxiety and agitation. Nurses were interviewed to get qualitative feedback on the patient response. Findings revealed that the ratio of PRN/patient census was significantly lower on the days when a realistic nature photograph was displayed, compared to the control condition (no art) and abstract art. Nurses reported that some patients displayed agitated behaviour in response to the abstract image. This study makes a case for the impact of visual art on mental well-being. The research findings were also translated into the time and money invested in PRN incidents, and cost savings of almost US$30,000 a year were projected. This research makes a case that simple environmental interventions like visual art can save the hospital costs of medication, and staff and pharmacy time, by providing a visual distraction that can alleviate anxiety and agitation in patients.
Chromatic and Achromatic Spatial Resolution of Local Field Potentials in Awake Cortex.
Jansen, Michael; Li, Xiaobing; Lashgari, Reza; Kremkow, Jens; Bereshpolova, Yulia; Swadlow, Harvey A; Zaidi, Qasim; Alonso, Jose-Manuel
2015-10-01
Local field potentials (LFPs) have become an important measure of neuronal population activity in the brain and could provide robust signals to guide the implant of visual cortical prosthesis in the future. However, it remains unclear whether LFPs can detect weak cortical responses (e.g., cortical responses to equiluminant color) and whether they have enough visual spatial resolution to distinguish different chromatic and achromatic stimulus patterns. By recording from awake behaving macaques in primary visual cortex, here we demonstrate that LFPs respond robustly to pure chromatic stimuli and exhibit ∼2.5 times lower spatial resolution for chromatic than achromatic stimulus patterns, a value that resembles the ratio of achromatic/chromatic resolution measured with psychophysical experiments in humans. We also show that, although the spatial resolution of LFP decays with visual eccentricity as is also the case for single neurons, LFPs have higher spatial resolution and show weaker response suppression to low spatial frequencies than spiking multiunit activity. These results indicate that LFP recordings are an excellent approach to measure spatial resolution from local populations of neurons in visual cortex including those responsive to color.
Maddock, Richard J; Buonocore, Michael H; Lavoie, Shawn P; Copeland, Linda E; Kile, Shawn J; Richards, Anne L; Ryan, John M
2006-11-22
Proton magnetic resonance spectroscopy (¹H-MRS) studies showing increased lactate during neural activation support a broader role for lactate in brain energy metabolism than was traditionally recognized. Proton MRS measures of brain lactate responses have been used to study regional brain metabolism in clinical populations. This study examined whether variations in blood glucose influence the lactate response to visual stimulation in the visual cortex. Six subjects were scanned twice, receiving either saline or 21% glucose intravenously. Using ¹H-MRS at 1.5 T with a long echo time (TE = 288 ms), the lactate doublet was visible at 1.32 ppm in the visual cortex of all subjects. Lactate increased significantly from rest to visual stimulation. Hyperglycemia had no effect on this increase. The order of the slice-selective gradients defining the spectroscopy voxel had a pronounced effect on the extent of contamination by signal originating outside the voxel. The results of this preliminary study demonstrate a method for observing a consistent activity-stimulated increase in brain lactate at 1.5 T and show that variations in blood glucose across the normal range have little effect on this response.
Oscillatory frontal theta responses are increased upon bisensory stimulation.
Sakowitz, O W; Schürmann, M; Başar, E
2000-05-01
To investigate the functional correlation of oscillatory EEG components with the interaction of sensory modalities following simultaneous audio-visual stimulation, we compared auditory evoked potentials (AEPs) and visual evoked potentials (VEPs) with bimodal evoked potentials (BEPs; simultaneous auditory and visual stimulation) in an experimental study of 15 subjects. BEPs were taken to be brain responses to complex stimuli and thus a marker of intermodal associative functioning. Frequency-domain analysis of these EPs showed marked theta-range components in response to bimodal stimulation. These theta components could not be explained by linear addition of the unimodal responses in the time domain. Topographically, the increased theta response was markedly frontal, in proximity to multimodal association cortices. Methodologically, we demonstrate that, even if various behavioral correlates of brain oscillations exist, common patterns can be extracted by means of a systems-theoretical approach. Serving as an example of functionally relevant brain oscillations, theta responses can be interpreted as an indicator of associative information processing.
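The superadditivity test described above (checking whether the bimodal response carries theta-band energy beyond the sum of the unimodal responses) can be illustrated with a small sketch. This is a hypothetical Python/NumPy example on synthetic signals, not the authors' analysis pipeline; the `theta_power` helper, the 4-7 Hz band limits, and the toy waveforms are illustrative assumptions.

```python
import numpy as np

fs = 250.0                       # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # one-second epoch

def theta_power(x, fs, lo=4.0, hi=7.0):
    """Mean spectral power of epoch x within the theta band (lo-hi Hz)."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Synthetic illustration: the unimodal responses carry no theta, while the
# bimodal response contains a 5 Hz component absent from their linear sum.
aep = np.sin(2 * np.pi * 10 * t)                    # toy auditory response
vep = np.sin(2 * np.pi * 12 * t)                    # toy visual response
bep = aep + vep + 0.8 * np.sin(2 * np.pi * 5 * t)   # bimodal: extra theta

# The nonlinearity test: bimodal theta power exceeds that of the linear sum.
superadditive = theta_power(bep, fs) > theta_power(aep + vep, fs)
```

On real data the same comparison would be made on averaged evoked potentials per condition, with the linear sum AEP + VEP serving as the null prediction.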
Visualizing multiattribute Web transactions using a freeze technique
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Cotting, Daniel; Dayal, Umeshwar; Machiraju, Vijay; Garg, Pankaj
2003-05-01
Web transactions are multidimensional and have a number of attributes: client, URL, response times, and numbers of messages. One of the key questions is how to lay out multiple relationships simultaneously in a single graph, such as the relationships between web client response times and URLs in a web access application. In this paper, we describe a freeze technique that enhances a physics-based visualization system for web transactions. The idea is to freeze one set of objects before laying out the next set during construction of the graph. As a result, the force computation time is substantially reduced. The technique consists of three steps: automated classification, a freeze operation, and a graph layout. These three steps are iterated until the final graph is generated. This iterated-freeze technique has been prototyped in several e-service applications at Hewlett-Packard Laboratories, where it has been used to visually analyze large volumes of service and sales transactions at online web sites.
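The freeze operation lends itself to a compact sketch. The following is a minimal, hypothetical Python illustration (not the authors' implementation): each group of nodes is placed by a simple force simulation while all previously laid-out groups stay frozen, so forces are computed for, and applied to, only the active group; the function name, force model, and constants are all assumptions for illustration.

```python
import random

def layout_with_freeze(groups, iterations=200, spring=0.01, repulse=0.5):
    """Physics-based layout in which each group of nodes is laid out in turn
    while all previously placed groups stay frozen: frozen nodes still exert
    repulsive forces but are never moved, so only the active group is updated."""
    random.seed(0)               # deterministic for the sake of the example
    pos = {}
    frozen = []
    for group in groups:
        active = list(group)
        for n in active:         # random initial placement of the new group
            pos[n] = [random.uniform(-1, 1), random.uniform(-1, 1)]
        for _ in range(iterations):
            for n in active:     # forces computed only for active nodes
                fx = fy = 0.0
                for m in frozen + active:
                    if m == n:
                        continue
                    dx = pos[n][0] - pos[m][0]
                    dy = pos[n][1] - pos[m][1]
                    d2 = dx * dx + dy * dy + 1e-9
                    fx += repulse * dx / d2      # pairwise repulsion
                    fy += repulse * dy / d2
                fx -= spring * pos[n][0]         # weak spring toward origin
                fy -= spring * pos[n][1]
                pos[n][0] += 0.01 * fx
                pos[n][1] += 0.01 * fy
        frozen.extend(active)    # freeze this group before the next one
    return pos
```

The saving is visible in the inner loops: frozen-frozen node pairs are never revisited, so each layout pass scales with the size of the active group rather than the whole graph.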
Visual context modulates potentiation of grasp types during semantic object categorization.
Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J
2014-06-01
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.
Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.
Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale
2015-10-01
Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness, and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched normal-hearing (NH) listeners performed a speeded response task on basic auditory, visual, and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding, which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation, and CI use facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strongly audio-visually based rehabilitation strategies after implant switch-on.
Inhibition-of-return at multiple locations in visual space.
Wright, R D; Richard, C M
1996-09-01
Inhibition-of-return is thought to be a visual search phenomenon characterized by delayed responses to targets presented at recently cued or recently fixated locations. We studied this inhibition effect following the simultaneous presentation of multiple location cues. The results indicated that response inhibition can be associated with as many as four locations at the same time. This suggests that a purely oculomotor account of inhibition-of-return is oversimplified. In short, although oculomotor processes appear to play a role in inhibition-of-return they may not tell the whole story about how it occurs because we can only program and execute eye movements to one location at a time.
Cognitive Load in Voice Therapy Carry-Over Exercises.
Iwarsson, Jenny; Morris, David Jackson; Balling, Laura Winther
2017-01-01
The cognitive load generated by online speech production may vary with the nature of the speech task. This article examines 3 speech tasks used in voice therapy carry-over exercises, in which a patient is required to adopt and automatize new voice behaviors, ultimately in daily spontaneous communication. Twelve subjects produced speech in 3 conditions: rote speech (weekdays), sentences in a set form, and semispontaneous speech. Subjects simultaneously performed a secondary visual discrimination task for which response times were measured. On completion of each speech task, subjects rated their experience on a questionnaire. Response times from the secondary, visual task were found to be shortest for the rote speech, longer for the semispontaneous speech, and longest for the sentences within the set framework. Principal components derived from the subjective ratings were found to be linked to response times on the secondary visual task. Acoustic measures reflecting fundamental frequency distribution and vocal fold compression varied across the speech tasks. The results indicate that consideration should be given to the selection of speech tasks during the process leading to automation of revised speech behavior and that self-reports may be a reliable index of cognitive load.
NASA Technical Reports Server (NTRS)
Allen, R. W.; Jex, H. R.
1972-01-01
In order to test various components of a regenerative life support system and to obtain data on the physiological and psychological effects of long-duration exposure to confinement in a space station atmosphere, four carefully screened young men were sealed in a space station simulator for 90 days. A tracking test battery was administered during the above experiment. The battery included a clinical test (critical instability task) related to the subject's dynamic time delay, and a conventional steady tracking task, during which dynamic response (describing functions) and performance measures were obtained. Good correlation was noted between the clinical critical instability scores and more detailed tracking parameters such as dynamic time delay and gain-crossover frequency. The comprehensive database on human operator tracking behavior obtained in this study demonstrates that sophisticated visual-motor response properties can be efficiently and reliably measured over extended periods of time.
ERIC Educational Resources Information Center
Gawryszewski, Luiz G.; Carreiro, Luiz Renato R.; Magalhaes, Fabio V.
2005-01-01
A non-informative cue (C) elicits an inhibition of manual reaction time (MRT) to a visual target (T). We report an experiment to examine if the spatial distribution of this inhibitory effect follows Polar or Cartesian coordinate systems. C appeared at one out of 8 isoeccentric (7°) positions, the C-T angular distances (in polar…
Maximizing Impact: Pairing interactive web visualizations with traditional print media
NASA Astrophysics Data System (ADS)
Read, E. K.; Appling, A.; Carr, L.; De Cicco, L.; Read, J. S.; Walker, J. I.; Winslow, L. A.
2016-12-01
Our Nation's rapidly growing store of environmental data makes new demands on researchers: to take on increasingly broad-scale, societally relevant analyses and to rapidly communicate findings to the public. Interactive web-based data visualizations now commonly supplement or comprise journalism, and science journalism has followed suit. To maximize the impact of US Geological Survey (USGS) science, the USGS Office of Water Information Data Science team builds tools and products that combine traditional static research products (e.g., print journal articles) with web-based, interactive data visualizations that target non-scientific audiences. We developed a lightweight, open-source framework for web visualizations to reduce time to production. The framework provides templates for a data visualization workflow and the packaging of text, interactive figures, and images into an appealing web interface with standardized look and feel, usage tracking, and responsiveness. By partnering with subject matter experts to focus on timely, societally relevant issues, we use these tools to produce appealing visual stories targeting specific audiences, including managers, the general public, and scientists, on diverse topics including drought, microplastic pollution, and fisheries response to climate change. We will describe the collaborative and technical methodologies used, present examples of how the framework has worked in practice, and discuss challenges and opportunities for the future.
Improving Target Detection in Visual Search Through the Augmenting Multi-Sensory Cues
2013-01-01
Keywords: target detection, visual search. Merlo, James; Mercado, Joseph E.; Van Erp, Jan B. F.; Hancock, Peter A. (University of Central Florida, 12201 Research Parkway…). Abstract fragment: "…were controlled by a purpose-created, LabView-based software computer program that synchronised the respective displays and recorded response times and…"
Explaining the Colavita visual dominance effect.
Spence, Charles
2009-01-01
The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years in order to try and explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.
Audio-visual synchrony and feature-selective attention co-amplify early visual processing.
Keitel, Christian; Müller, Matthias M
2016-05-01
Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both, attending to the colour of a stimulus and its synchrony with the tone, enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
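The spectral quantification of the two frequency-tagged steady-state responses can be illustrated with a single-bin discrete Fourier amplitude measure on synthetic data. This is a minimal sketch, not the authors' analysis pipeline: the sampling rate, window length, and amplitudes are invented, and a 100-s window is chosen so that both tag rates (3.14 and 3.63 Hz) fall exactly on DFT bins.

```python
import math

def amplitude_at(signal, freq, fs):
    """Single-bin discrete Fourier amplitude at `freq` (Hz) for a real
    signal sampled at `fs` Hz -- spectral quantification in miniature."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * k / fs) for k, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * k / fs) for k, x in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n   # factor 2: fold negative frequencies

fs = 100.0                                # assumed sampling rate (Hz)
t = [k / fs for k in range(10000)]        # 100 s of samples
# Synthetic "EEG": two steady-state responses at the stimulation rates.
eeg = [0.8 * math.sin(2 * math.pi * 3.14 * ti)
       + 0.5 * math.sin(2 * math.pi * 3.63 * ti) for ti in t]
print(round(amplitude_at(eeg, 3.14, fs), 2),
      round(amplitude_at(eeg, 3.63, fs), 2))   # → 0.8 0.5
```

Because 3.14 Hz × 100 s = 314 whole cycles (and 3.63 Hz gives 363), the two components are orthogonal over the window and each recovered amplitude matches its input exactly; with windows that truncate a cycle, leakage would bias the estimates.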
The VERRUN and VERNAL software systems for steady-state visual evoked response experimentation
NASA Technical Reports Server (NTRS)
Levison, W. H.; Zacharias, G. L.
1984-01-01
Two digital computer programs were developed for use in experiments involving steady-state visual evoked response (VER): VERRUN, whose primary functions are to generate a sum-of-sines (SOS) stimulus and to digitize and store electro-cortical response; and VERNAL, which provides both time- and frequency-domain metrics of the evoked response. These programs were coded in FORTRAN for operation on the PDP-11/34, using the RSX-11 Operating System, and the PDP-11/23, using the RT-11 Operating System. Users' and programmers' guides to these programs are provided, and guidelines for model analysis of VER data are suggested.
NASA Astrophysics Data System (ADS)
Li, W.; Shao, H.
2017-12-01
For geospatial cyberinfrastructure enabled web services, the ability of rapidly transmitting and sharing spatial data over the Internet plays a critical role to meet the demands of real-time change detection, response and decision-making. Especially for the vector datasets which serve as irreplaceable and concrete material in data-driven geospatial applications, their rich geometry and property information facilitates the development of interactive, efficient and intelligent data analysis and visualization applications. However, the big-data issues of vector datasets have hindered their wide adoption in web services. In this research, we propose a comprehensive optimization strategy to enhance the performance of vector data transmitting and processing. This strategy combines: 1) pre- and on-the-fly generalization, which automatically determines proper simplification level through the introduction of appropriate distance tolerance (ADT) to meet various visualization requirements, and at the same time speed up simplification efficiency; 2) a progressive attribute transmission method to reduce data size and therefore the service response time; 3) compressed data transmission and dynamic adoption of a compression method to maximize the service efficiency under different computing and network environments. A cyberinfrastructure web portal was developed for implementing the proposed technologies. After applying our optimization strategies, substantial performance enhancement is achieved. We expect this work to widen the use of web service providing vector data to support real-time spatial feature sharing, visual analytics and decision-making.
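The tolerance-driven generalization described above (simplifying a polyline until no dropped vertex deviates from the result by more than a distance tolerance) is conventionally implemented with the Douglas-Peucker algorithm. A minimal sketch under that assumption; the tolerance value and toy polyline are illustrative, and the paper's ADT selection logic is not reproduced:

```python
import math

def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the infinite line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len = math.hypot(dx, dy)
    if seg_len == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / seg_len

def simplify(points, tolerance):
    """Douglas-Peucker: drop vertices closer than `tolerance`
    to the chord between the endpoints, recursing on the rest."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord.
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] <= tolerance:
        return [points[0], points[-1]]          # whole span collapses
    left = simplify(points[:idx + 1], tolerance)
    right = simplify(points[idx:], tolerance)
    return left[:-1] + right                    # avoid duplicating the split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(simplify(line, 1.0))   # → [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

On this toy polyline a 1.0-unit tolerance collapses eight vertices to four; in the scheme described above, the tolerance would instead be derived automatically from the target visualization scale.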
Nunez, Michael D; Vandekerckhove, Joachim; Srinivasan, Ramesh
2017-02-01
Perceptual decision making can be accounted for by drift-diffusion models, a class of decision-making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift-diffusion model produces evidence accumulation rate and non-decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per-trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task-relevant signal stimulus. Single-trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single-trial evidence accumulation and preprocessing times. Within-trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single-trial measures of attention lead to better out-of-sample predictions of accuracy and correct reaction time distributions for individual subjects.
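A drift-diffusion trial of the kind fitted here can be sketched by Euler-Maruyama simulation. The linear mapping from a single-trial EEG measure to the drift rate mirrors the regression structure described above, but the coefficients, bound, noise, and non-decision time below are invented for illustration, not fitted values:

```python
import math
import random

def simulate_ddm_trial(eeg_measure, drift0=0.8, beta=0.4, bound=1.0,
                       ndt=0.35, dt=0.001, noise=1.0, rng=random):
    """Simulate one drift-diffusion trial whose drift rate is a linear
    function of a single-trial EEG measure (illustrative coefficients)."""
    drift = drift0 + beta * eeg_measure          # EEG regressor -> drift rate
    x, t = 0.0, 0.0
    while abs(x) < bound:                        # accumulate noisy evidence
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    correct = x >= bound                         # upper bound = correct choice
    return correct, t + ndt                      # RT = decision + non-decision time

random.seed(7)
trials = [simulate_ddm_trial(1.0) for _ in range(200)]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(round(accuracy, 2), round(mean_rt, 2))
```

Raising the EEG measure (e.g., a larger attention-related N200) raises the per-trial drift rate, which in turn shifts the simulated accuracy and reaction-time distribution, which is the direction of effect the hierarchical model above is designed to estimate.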
Additive and Interactive Effects on Response Time Distributions in Visual Word Recognition
ERIC Educational Resources Information Center
Yap, Melvin J.; Balota, David A.
2007-01-01
Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the…
Task modulation of the effects of brightness on reaction time and response force.
Jaśkowski, Piotr; Włodarczyk, Dariusz
2006-08-01
Van der Molen and Keuss [van der Molen, M.W., Keuss, P.J.G., 1979. The relationship between reaction time and intensity in discrete auditory tasks. Quarterly Journal of Experimental Psychology 31, 95-102; van der Molen, M.W., Keuss, P.J.G., 1981. Response selection and the processing of auditory intensity. Quarterly Journal of Experimental Psychology 33, 177-184] showed that paradoxically long reaction times (RT) occur with extremely loud auditory stimuli when the task is difficult (e.g. needs a response choice). It was argued that this paradoxical behavior of RT is due to active suppression of response prompting to prevent false responses. In the present experiments, we demonstrated that such an effect can also occur for visual stimuli provided that they are large enough. Additionally, we showed that response force exerted by participants on response keys monotonically grew with intensity for large stimuli but was independent of intensity for small visual stimuli. Bearing in mind that only large stimuli are believed to be arousing this pattern of results supports the arousal interpretation of the negative effect of loud stimuli on RT given by van der Molen and Keuss.
Sigurdardottir, Heida M; Sheinberg, David L
2015-07-01
The lateral intraparietal area (LIP) is thought to play an important role in the guidance of where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand to what extent short-term and long-term experience with visual orienting determines the responses of LIP to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a preferred spatial location of a neuron. The training could last for less than a single day or for several months. We found that neural responses to objects are affected by such experience, but that the length of the learning period determines how this neural plasticity manifests. Short-term learning affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the responses to newly learned objects resemble those of familiar objects that share their meaning or arbitrary association. Long-term learning affects the earliest bottom-up responses to visual objects. These responses tend to be greater for objects that have been associated with looking toward, rather than away from, LIP neurons' preferred spatial locations. Responses to objects can nonetheless be distinct, although they have been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore indicate that a complete experience-driven override of LIP object responses may be difficult or impossible. We relate these results to behavioral work on visual attention.
Visual search accelerates during adolescence.
Burggraaf, Rudolf; van der Geest, Jos N; Frens, Maarten A; Hooge, Ignace T C
2018-05-01
We studied changes in visual-search performance and behavior during adolescence. Search performance was analyzed in terms of reaction time and response accuracy. Search behavior was analyzed in terms of the objects fixated and the duration of these fixations. A large group of adolescents (N = 140; age: 12-19 years; 47% female, 53% male) participated in a visual-search experiment in which their eye movements were recorded with an eye tracker. The experiment consisted of 144 trials (50% with a target present), and participants had to decide whether a target was present. Each trial showed a search display with 36 Gabor patches placed on a hexagonal grid. The target was a vertically oriented element with a high spatial frequency. Nontargets differed from the target in spatial frequency, orientation, or both. Search performance and behavior changed during adolescence; with increasing age, fixation duration and reaction time decreased. Response accuracy, number of fixations, and selection of elements to fixate upon did not change with age. Thus, the speed of foveal discrimination increases with age, while the efficiency of peripheral selection does not change. We conclude that the way visual information is gathered does not change during adolescence, but the processing of visual information becomes faster.
Xue, Qingwan; Markkula, Gustav; Yan, Xuedong; Merat, Natasha
2018-06-18
Previous studies have shown the effect of a lead vehicle's speed, deceleration rate and headway distance on drivers' brake response times. However, how drivers perceive this information and use it to determine when to apply braking is still not quite clear. To better understand the underlying mechanisms, a driving simulator experiment was performed where each participant experienced nine deceleration scenarios. Previously reported effects of the lead vehicle's speed, deceleration rate and headway distance on brake response time were firstly verified in this paper, using a multilevel model. Then, as an alternative to measures of speed, deceleration rate and distance, two visual looming-based metrics (the angular expansion rate θ̇ of the lead vehicle on the driver's retina, and inverse tau τ⁻¹, the ratio between θ̇ and the optical size θ), considered to be more in line with typical human psycho-perceptual responses, were adopted to quantify situation urgency. These metrics were used in two previously proposed mechanistic models predicting brake onset: either when looming surpasses a threshold, or when the accumulated evidence (looming and other cues) reaches a threshold. Results showed that the looming threshold model did not capture the distribution of brake response times. However, regardless of looming metric, the accumulator models fitted the distribution of brake response times better than the pure threshold models. Accumulator models including brake lights provided a better model fit than looming-only versions. For all versions of the mechanistic models, models using τ⁻¹ as the measure of looming fitted better than those using θ̇, indicating that the visual cues drivers use during rear-end collision avoidance may be closer to τ⁻¹.
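The two looming metrics and the accumulator-to-bound idea can be made concrete with a small numerical sketch: the optical size of the lead vehicle is θ = 2·atan(w/2d), its expansion rate θ̇ is approximated by a finite difference, τ⁻¹ = θ̇/θ, and evidence above a looming threshold is integrated until a bound is reached. The vehicle width, gain, threshold, and bound below are illustrative choices, not fitted values from the study, and brake-light evidence is omitted:

```python
import math

VEHICLE_WIDTH = 1.8  # m, assumed width of the lead vehicle

def optical_angle(distance):
    # Optical size theta (rad) of the lead vehicle at a given headway distance.
    return 2.0 * math.atan(VEHICLE_WIDTH / (2.0 * distance))

def brake_onset_accumulator(distances, dt=0.1, threshold=0.02, gain=20.0):
    """Integrate above-threshold inverse-tau evidence over time and report
    the time at which the accumulator crosses 1.0 (predicted brake onset).
    `threshold` and `gain` are illustrative, not fitted, values."""
    evidence = 0.0
    prev_theta = optical_angle(distances[0])
    for i, d in enumerate(distances[1:], start=1):
        theta = optical_angle(d)
        theta_dot = (theta - prev_theta) / dt     # finite-difference looming rate
        inv_tau = theta_dot / theta               # tau^-1 = theta_dot / theta
        evidence += gain * max(0.0, inv_tau - threshold) * dt
        if evidence >= 1.0:
            return i * dt                         # brake response time (s)
        prev_theta = theta
    return None                                   # accumulator never reached bound

# Lead vehicle closing at a constant 5 m/s from 40 m headway, sampled at 10 Hz.
gaps = [40.0 - 5.0 * 0.1 * k for k in range(70)]
print(brake_onset_accumulator(gaps))              # → 0.5
```

Because the accumulator integrates urgency over time rather than firing at the instant a threshold is crossed, it naturally produces a distribution of response times when noise or extra cues (such as brake lights) are added, which is the property the study found to fit drivers' braking data better.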
Martin, Thomas J.; Grigg, Amanda; Kim, Susy A.; Ririe, Douglas G.; Eisenach, James C.
2014-01-01
Background: The 5-choice serial reaction time task (5CSRTT) is commonly used to assess attention in rodents. We sought to develop a variant of the 5CSRTT that would speed training to objective success criteria, and to test whether this variant could determine attention capability in each subject. New method: Fischer 344 rats were trained to perform a variant of the 5CSRTT in which the duration of visual cue presentation (cue duration) was titrated between trials based upon performance. The cue duration was decreased when the subject made a correct response, or increased with incorrect responses or omissions. Additionally, test day challenges were provided, consisting of lengthening the intertrial interval and inclusion of a visual distracting stimulus. Results: Rats readily titrated the cue duration to less than 1 s in 25 training sessions or fewer (mean ± SEM, 22.9 ± 0.7), and the median cue duration (MCD) was calculated as a measure of attention threshold. Increasing the intertrial interval increased premature responses, decreased the number of trials completed, and increased the MCD. Decreasing the intertrial interval and the time allotted for consuming the food reward demonstrated that a minimum of 3.5 s is required for rats to consume two food pellets and successfully attend to the next trial. Visual distraction in the form of a 3 Hz flashing light increased the MCD and both premature and time-out responses. Comparison with existing method: The titration variant of the 5CSRTT is a useful method that dynamically measures attention threshold across a wide range of subject performance, and significantly decreases the time required for training. Task challenges produce similar effects in the titration method as reported for the classical procedure. Conclusions: The titration 5CSRTT method is an efficient training procedure for assessing attention and can be utilized to assess the limit in performance ability across subjects and various schedule manipulations.
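The between-trial titration rule described above (shorten the cue after a correct response, lengthen it after an error or omission) is a one-up/one-down staircase, and the MCD is the median of the resulting cue-duration track. A minimal sketch with an invented starting duration, step size, and limits:

```python
import statistics

def titrate_cue_duration(outcomes, start=5.0, step=0.25,
                         floor=0.1, ceiling=10.0):
    """One-up/one-down staircase sketch of the titration 5CSRTT:
    a correct response shortens the visual cue by one step; an
    incorrect response or omission lengthens it (values illustrative)."""
    duration = start
    track = [duration]
    for outcome in outcomes:              # 'correct' | 'incorrect' | 'omission'
        if outcome == 'correct':
            duration = max(floor, duration - step)
        else:
            duration = min(ceiling, duration + step)
        track.append(duration)
    return track

track = titrate_cue_duration(['correct'] * 10
                             + ['omission', 'correct', 'incorrect', 'correct'])
print(statistics.median(track))           # → 3.25, the MCD-style threshold
```

Once performance plateaus, the track oscillates around the subject's limit, so the median of the track converges on an attention threshold without requiring a fixed cue duration to be chosen in advance, which is the efficiency gain the method claims.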
Variability and Correlations in Primary Visual Cortical Neurons Driven by Fixational Eye Movements
McFarland, James M.; Cumming, Bruce G.
2016-01-01
The ability to distinguish between elements of a sensory neuron's activity that are stimulus independent versus driven by the stimulus is critical for addressing many questions in systems neuroscience. This is typically accomplished by measuring neural responses to repeated presentations of identical stimuli and identifying the trial-variable components of the response as noise. In awake primates, however, small “fixational” eye movements (FEMs) introduce uncontrolled trial-to-trial differences in the visual stimulus itself, potentially confounding this distinction. Here, we describe novel analytical methods that directly quantify the stimulus-driven and stimulus-independent components of visual neuron responses in the presence of FEMs. We apply this approach, combined with precise model-based eye tracking, to recordings from primary visual cortex (V1), finding that standard approaches that ignore FEMs typically miss more than half of the stimulus-driven neural response variance, creating substantial biases in measures of response reliability. We show that these effects are likely not isolated to the particular experimental conditions used here, such as the choice of visual stimulus or spike measurement time window, and thus will be a more general problem for V1 recordings in awake primates. We also demonstrate that measurements of the stimulus-driven and stimulus-independent correlations among pairs of V1 neurons can be greatly biased by FEMs. These results thus illustrate the potentially dramatic impact of FEMs on measures of signal and noise in visual neuron activity and also demonstrate a novel approach for controlling for these eye-movement-induced effects. SIGNIFICANCE STATEMENT Distinguishing between the signal and noise in a sensory neuron's activity is typically accomplished by measuring neural responses to repeated presentations of an identical stimulus. 
For recordings from the visual cortex of awake animals, small “fixational” eye movements (FEMs) inevitably introduce trial-to-trial variability in the visual stimulus, potentially confounding such measures. Here, we show that FEMs often have a dramatic impact on several important measures of response variability for neurons in primary visual cortex. We also present an analytical approach for quantifying signal and noise in visual neuron activity in the presence of FEMs. These results thus highlight the importance of controlling for FEMs in studies of visual neuron function, and demonstrate novel methods for doing so.
Reaction time in pilots during intervals of high sustained g.
Truszczynski, Olaf; Lewkowicz, Rafal; Wojtkowiak, Mieczyslaw; Biernacki, Marcin P
2014-11-01
An important problem for pilots is visual disturbances occurring under +Gz acceleration. Assessment of the degree of intensification of these disturbances is generally accepted as the acceleration tolerance level (ATL) criterion determined in human centrifuges. The aim of this research was to evaluate the visual-motor responses of pilots during rapidly increasing acceleration contained in cyclic intervals of +6 Gz to the maximum ATL. The study involved 40 male pilots ages 32-41 yr. The task was a quick and faultless response to the light stimuli presented on a light bar during exposure to acceleration until reaching the ATL. Simple response time (SRT) measurements were performed using a visual-motor analysis system throughout the exposures which allowed assessment of a pilot's ATL. There were 29 pilots who tolerated the initial phase of interval acceleration and achieved +6 Gz, completing the test at ATL. Relative to the control measurements, the obtained results indicate a significant effect of the applied acceleration on response time. SRT during +6 Gz exposure was not significantly longer compared with the reaction time between each of the intervals. SRT and erroneous reactions indicated no statistically significant differences between the "lower" and "higher" ATL groups. SRT measurements over the +6-Gz exposure intervals did not vary between "lower" and "higher" ATL groups and, therefore, are not useful in predicting pilot performance. The gradual exposure to the maximum value of +6 Gz with exposure to the first three intervals on the +6-Gz plateau effectively differentiated pilots.
Driving time modulates accommodative response and intraocular pressure.
Vera, Jesús; Diaz-Piedra, Carolina; Jiménez, Raimundo; Morales, José M; Catena, Andrés; Cardenas, David; Di Stasi, Leandro L
2016-10-01
Driving is a task mainly reliant on the visual system. Most of the time while driving, our eyes are constantly focusing and refocusing between the road and the dashboard, or near and far traffic. Thus, prolonged driving time should produce visual fatigue. Here, for the first time, we investigated the effects of driving time, a common inducer of driver fatigue, on two ocular parameters: the accommodative response (AR) and the intraocular pressure (IOP). A pre/post-test design was used to assess the impact of driving time on both indices. Twelve participants (out of 17 recruited) completed the study (5 women, 24.42 ± 2.84 years old). The participants were healthy and active drivers with no visual impairment or pathology. They drove for 2 h in a virtual driving environment. We assessed AR and IOP before and after the driving session, and also collected subjective measures of arousal and fatigue. We found that IOP and AR decreased (i.e., the accommodative lag increased) after the driving session (p = 0.03 and p < 0.001, respectively). Moreover, the nearest distances tested (20 cm, 25 cm, and 33 cm) induced the highest decreases in AR (corrected p-values < 0.05). Consistent with these findings, the subjective levels of arousal decreased and levels of fatigue increased after the driving session (all p-values < 0.001). These results represent an innovative step towards an objective, valid, and reliable assessment of fatigue-impaired driving based on visual fatigue signs.
Study and response time for the visual recognition of 'similarity' and identity
NASA Technical Reports Server (NTRS)
Derks, P. L.; Bauer, T. M.
1974-01-01
Four subjects compared successively presented pairs of line patterns for a match between any lines in the pattern (similarity) and for a match between all lines (identity). The encoding or study times for pattern recognition from immediate memory and the latency in responses to comparison stimuli were examined. Qualitative differences within and between subjects were most evident in study times.
The development of visual speech perception in Mandarin Chinese-speaking children.
Chen, Liang; Lei, Jianghua
2017-01-01
The present study aimed to investigate the development of visual speech perception in Chinese-speaking children. Children aged 7, 13 and 16 were asked to visually identify both consonant and vowel sounds in Chinese as quickly and accurately as possible. Results revealed (1) an increase in accuracy of visual speech perception between ages 7 and 13 after which the accuracy rate either stagnates or drops; and (2) a U-shaped development pattern in speed of perception with peak performance in 13-year olds. Results also showed that across all age groups, the overall levels of accuracy rose, whereas the response times fell for simplex finals, complex finals and initials. These findings suggest that (1) visual speech perception in Chinese is a developmental process that is acquired over time and is still fine-tuned well into late adolescence; (2) factors other than cross-linguistic differences in phonological complexity and degrees of reliance on visual information are involved in development of visual speech perception.
Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody.
Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R
2008-03-01
Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.
A Nonlinear Model for Transient Responses from Light-Adapted Wolf Spider Eyes
DeVoe, Robert D.
1967-01-01
A quantitative model is proposed to test the hypothesis that the dynamics of nonlinearities in retinal action potentials from light-adapted wolf spider eyes may be due to delayed asymmetries in responses of the visual cells. For purposes of calculation, these delayed asymmetries are generated in an analogue by a time-variant resistance. It is first shown that for small incremental stimuli, the linear behavior of such a resistance describes peaking and low frequency phase lead in frequency responses of the eye to sinusoidal modulations of background illumination. It also describes the overshoots in linear step responses. It is next shown that the analogue accounts for nonlinear transient and short term DC responses to large positive and negative step stimuli and for the variations in these responses with changes in degree of light adaptation. Finally, a physiological model is proposed in which the delayed asymmetries in response are attributed to delayed rectification by the visual cell membrane. In this model, cascaded chemical reactions may serve to transduce visual stimuli into membrane resistance changes. PMID:6056011
Xiao, Jianbo
2015-01-01
Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. 
The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869
Sustained Attention in Real Classroom Settings: An EEG Study
Ko, Li-Wei; Komarov, Oleksii; Hairston, W. David; Jung, Tzyy-Ping; Lin, Chin-Teng
2017-01-01
Sustained attention is a process that enables the maintenance of response persistence and continuous effort over extended periods of time. Performing attention-related tasks in real life involves the need to ignore a variety of distractions and inhibit attention shifts to irrelevant activities. This study investigates electroencephalography (EEG) spectral changes during a sustained attention task within a real classroom environment. Eighteen healthy students were instructed to recognize as fast as possible special visual targets that were displayed during regular university lectures. Sorting their EEG spectra with respect to response times, which indicated the level of visual alertness to randomly introduced visual stimuli, revealed significant changes in the brain oscillation patterns. The results of power-frequency analysis demonstrated a relationship between variations in the EEG spectral dynamics and impaired performance in the sustained attention task. Across subjects and sessions, prolongation of the response time was preceded by an increase in the delta and theta EEG powers over the occipital region, and decrease in the beta power over the occipital and temporal regions. Meanwhile, implementation of the complex attention task paradigm into a real-world classroom setting makes it possible to investigate specific mutual links between brain activities and factors that cause impaired behavioral performance, such as development and manifestation of classroom mental fatigue. The findings of the study set a basis for developing a system capable of estimating the level of visual attention during real classroom activities by monitoring changes in the EEG spectra. PMID:28824396
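The spectral measures reported in the study above (delta, theta and beta powers over occipital and temporal regions) reduce to band-limited power estimates of an EEG trace. A minimal periodogram-based sketch follows; this is generic signal processing, not the authors' pipeline, and the band edges used are conventional values assumed here:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean power of an EEG trace in the band [f_lo, f_hi) Hz via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return float(power[mask].mean())

# Conventional band edges (Hz), assumed here: delta 1-4, theta 4-8, beta 13-30.
BANDS = {"delta": (1, 4), "theta": (4, 8), "beta": (13, 30)}
```

In an analysis like the one described, `band_power` would be computed per channel on epochs preceding each response, then related to response time.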
Real-Time Visualization of Network Behaviors for Situational Awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Best, Daniel M.; Bohn, Shawn J.; Love, Douglas V.
Plentiful, complex, and dynamic data make understanding the state of an enterprise network difficult. Although visualization can help analysts understand baseline behaviors in network traffic and identify off-normal events, visual analysis systems often do not scale well to operational data volumes (in the hundreds of millions to billions of transactions per day) nor to analysis of emergent trends in real-time data. We present a system that combines multiple, complementary visualization techniques coupled with in-stream analytics, behavioral modeling of network actors, and a high-throughput processing platform called MeDICi. This system provides situational understanding of real-time network activity to help analysts take proactive response steps. We have developed these techniques using requirements gathered from the government users for which the tools are being developed. By linking multiple visualization tools to a streaming analytic pipeline, and designing each tool to support a particular kind of analysis (from high-level awareness to detailed investigation), analysts can understand the behavior of a network across multiple levels of abstraction.
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. Here we circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signal related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.
Impaired Visual Attention in Children with Dyslexia.
ERIC Educational Resources Information Center
Heiervang, Einar; Hugdahl, Kenneth
2003-01-01
A cue-target visual attention task was administered to 25 children (ages 10-12) with dyslexia. Results showed a general pattern of slower responses in the children with dyslexia compared to controls. Subjects also had longer reaction times in the short and long cue-target interval conditions (covert and overt shift of attention).
Mahr, Angela; Wentura, Dirk
2014-02-01
Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory-visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.
Delle Monache, Sergio; Lacquaniti, Francesco; Bosco, Gianfranco
2017-09-01
The ability to catch objects when transiently occluded from view suggests their motion can be extrapolated. Intraparietal cortex (IPS) plays a major role in this process along with other brain structures, depending on the task. For example, interception of objects under Earth's gravity effects may depend on time-to-contact predictions derived from integration of visual signals processed by hMT/V5+ with a priori knowledge of gravity residing in the temporoparietal junction (TPJ). To investigate this issue further, we disrupted TPJ, hMT/V5+, and IPS activities with transcranial magnetic stimulation (TMS) while subjects intercepted computer-simulated projectile trajectories perturbed randomly with either hypo- or hypergravity effects. In experiment 1, trajectories were occluded either 750 or 1,250 ms before landing. Three subject groups underwent triple-pulse TMS (tpTMS, 3 pulses at 10 Hz) on one target area (TPJ | hMT/V5+ | IPS) and on the vertex (control site), timed at either trajectory perturbation or occlusion. In experiment 2, trajectories were entirely visible and participants received tpTMS on TPJ and hMT/V5+ with the same timing as in experiment 1. tpTMS of TPJ, hMT/V5+, and IPS affected interceptive timing differently: TPJ stimulation preferentially affected responses to 1-g motion, hMT/V5+ stimulation affected all response types, and IPS stimulation induced opposite effects on 0-g and 2-g responses, being ineffective on 1-g responses. Only IPS stimulation was effective when applied after target disappearance, implying this area might elaborate memory representations of occluded target motion. Results are compatible with the idea that IPS, TPJ, and hMT/V5+ contribute to distinct aspects of visual motion extrapolation, perhaps through parallel processing. NEW & NOTEWORTHY Visual extrapolation represents a potential neural solution to afford motor interactions with the environment in the face of missing information.
We investigated relative contributions by temporoparietal junction (TPJ), hMT/V5+, and intraparietal cortex (IPS), cortical areas potentially involved in these processes. Parallel organization of visual extrapolation processes emerged with respect to the target's motion causal nature: TPJ was primarily involved for visual motion congruent with gravity effects, IPS for arbitrary visual motion, whereas hMT/V5+ contributed at earlier processing stages. Copyright © 2017 the American Physiological Society.
Instrument Display Visual Angles for Conventional Aircraft and the MQ-9 Ground Control Station
NASA Technical Reports Server (NTRS)
Bendrick, Gregg A.; Kamine, Tovy Haber
2008-01-01
Aircraft instrument panels should be designed such that primary displays are in optimal viewing location to minimize pilot perception and response time. Human Factors engineers define three zones (i.e. "cones") of visual location: 1) "Easy Eye Movement" (foveal vision); 2) "Maximum Eye Movement" (peripheral vision with saccades), and 3) "Head Movement" (head movement required). Instrument display visual angles were measured to determine how well conventional aircraft (T-34, T-38, F-15B, F-16XL, F/A-18A, U-2D, ER-2, King Air, G-III, B-52H, DC-10, B747-SCA) and the MQ-9 ground control station (GCS) complied with these standards, and how they compared with each other. Methods: Selected instrument parameters included: attitude, pitch, bank, power, airspeed, altitude, vertical speed, heading, turn rate, slip/skid, AOA, flight path, latitude, longitude, course, bearing, range and time. Vertical and horizontal visual angles for each component were measured from the pilot's eye position in each system. Results: The vertical visual angles of displays in conventional aircraft lay within the cone of "Easy Eye Movement" for all but three of the parameters measured, and almost all of the horizontal visual angles fell within this range. All conventional vertical and horizontal visual angles lay within the cone of "Maximum Eye Movement". However, most instrument vertical visual angles of the MQ-9 GCS lay outside the cone of "Easy Eye Movement", though all were within the cone of "Maximum Eye Movement". All the horizontal visual angles for the MQ-9 GCS were within the cone of "Easy Eye Movement". Discussion: Most instrument displays in conventional aircraft lay within the cone of "Easy Eye Movement", though mission-critical instruments sometimes displaced less important instruments outside this area. Many of the MQ-9 GCS systems lay outside this area. Specific training for MQ-9 pilots may be needed to avoid increased response time and potential error during flight.
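The three viewing cones described in the record above reduce to simple geometry: a display element's visual angle is the arctangent of its offset from the line of sight over the viewing distance. A minimal sketch; the ±15° and ±35° cone boundaries below are typical human-factors values assumed for illustration, not figures taken from the study:

```python
import math

# Assumed cone boundaries (degrees), typical human-factors conventions.
EASY_EYE_DEG = 15.0  # "Easy Eye Movement": foveal vision
MAX_EYE_DEG = 35.0   # "Maximum Eye Movement": peripheral vision with saccades

def visual_angle_deg(offset_mm: float, distance_mm: float) -> float:
    """Visual angle subtended by a display offset from the line of sight."""
    return math.degrees(math.atan2(offset_mm, distance_mm))

def viewing_zone(offset_mm: float, distance_mm: float) -> str:
    """Classify a display location into one of the three visual cones."""
    angle = abs(visual_angle_deg(offset_mm, distance_mm))
    if angle <= EASY_EYE_DEG:
        return "easy eye movement"
    if angle <= MAX_EYE_DEG:
        return "maximum eye movement"
    return "head movement"
```

For example, an instrument 100 mm off-axis at a 700 mm viewing distance subtends about 8°, well inside the easy-eye-movement cone.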
Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task.
Fitzpatrick, C M; Caballero-Puntiverio, M; Gether, U; Habekost, T; Bundesen, C; Vangkilde, S; Woldbye, D P D; Andreasen, J T; Petersen, A
2017-03-01
The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds and other parameters of attentional capacity. We aimed to bridge these research fields by modifying the 5-CSRTT's design and by mathematically modelling data to derive attentional parameters analogous to human TVA-based measures. C57BL/6 mice were tested in two 1-h sessions on consecutive days with a version of the 5-CSRTT where stimulus duration (SD) probe length was varied based on information from previous TVA studies. Thereafter, a scopolamine hydrobromide (HBr; 0.125 or 0.25 mg/kg) pharmacological challenge was undertaken, using a Latin square design. Mean score values were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds and motor response baselines. This study shows for the first time how 5-CSRTT performance in mice can be mathematically modelled to yield estimates of attentional capacity that are directly comparable to estimates from human studies.
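TVA-style modelling of the kind used in the record above fits a psychometric curve relating stimulus duration to report accuracy. The sketch below shows one common exponential-race form with three parameters loosely mirroring those fitted in the study (processing speed, visual threshold, motor/guessing baseline); the exact functional form is an assumption, not the authors' model:

```python
import math

def tva_mean_score(exposure_ms: float, v: float, t0: float, baseline: float) -> float:
    """TVA-style psychometric function (assumed exponential-race form).

    exposure_ms: stimulus duration; v: processing speed (items/s);
    t0: visual threshold (ms) below which nothing is encoded;
    baseline: motor-response/guessing floor on accuracy.
    """
    effective_s = max(exposure_ms - t0, 0.0) / 1000.0
    return baseline + (1.0 - baseline) * (1.0 - math.exp(-v * effective_s))
```

Fitting `v`, `t0` and `baseline` per animal to mean scores across stimulus durations yields individual-level estimates comparable across sessions, as in the study.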
Romano, Mary; Iacovello, Daniela; Cascone, Nikhil C; Contestabile, Maria Teresa
2011-01-01
To document the clinical, functional, and in vivo microanatomic characteristics of a patient with Gorlin-Goltz syndrome with a novel nonsense mutation in PTCH (patched). Optical coherence tomography (OCT), fluorescein angiography, electrophysiologic testing, visual field, magnetic resonance imaging, and mutation screening of the PTCH gene. Visual acuity was 20/20 in the right eye and 20/25 in the left. Fundus examination revealed myelinated nerve fibers in the left eye and bilateral epiretinal membranes with lamellar macular hole also documented with macular OCT. A reduction of the retinal nerve fiber layers in both eyes was found with nerve fiber OCT. Fluorescein angiography showed bilaterally foveal hyperfluorescence and the visual field revealed inferior hemianopia in the right eye. Pattern visual evoked potentials registered a reduction of amplitude in both eyes and latency was delayed in the left eye. Pattern electroretinogram showed a reduction in P50 and N95 amplitudes and a delay in P50 peak time in the left eye. Flash electroretinogram was reduced in rod response, maximal response, and oscillatory potentials in both eyes. Cone response was normal and 30-Hz flicker was slightly reduced in both eyes. Mutation screening identified a novel nonsense mutation in PTCH. A novel nonsense mutation in the PTCH gene was found. We report the occurrence of epiretinal membranes and the persistence of myelinated nerve fibers. Electrophysiologic and visual field alterations, supporting a neuroretinal dysfunction, were also documented.
Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex
Singer, Wolf; Maass, Wolfgang
2009-01-01
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
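The "simple linear classification" invoked in the record above amounts to a weighted sum of ensemble firing rates followed by a threshold. A toy sketch on synthetic data (not the study's recordings or pipeline; neuron count, trial count, and the least-squares readout are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble of 100 "neurons" responding to one of two stimuli per trial.
n_neurons, n_trials = 100, 200
tuning = rng.normal(0.0, 1.0, n_neurons)           # per-neuron stimulus preference
labels = rng.integers(0, 2, n_trials)              # stimulus identity per trial
rates = rng.normal(0.0, 1.0, (n_trials, n_neurons))
rates += np.outer(2 * labels - 1, tuning)          # shift each trial by its stimulus

# Ridge-regularized least-squares linear readout: a weighted sum plus bias,
# the kind of computation a single downstream neuron could implement.
X = np.hstack([rates, np.ones((n_trials, 1))])
y = 2 * labels - 1.0                               # targets in {-1, +1}
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(n_neurons + 1), X.T @ y)

pred = (X @ w > 0).astype(int)
accuracy = float((pred == labels).mean())
```

With distinct response distributions per stimulus, such a readout decodes stimulus identity nearly perfectly, mirroring the finding that nonlinear kernels added little over linear decoding.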
Chen, Xia; Fu, Junhong; Cheng, Wenbo; Song, Desheng; Qu, Xiaolei; Yang, Zhuo; Zhao, Kanxing
2017-01-01
Visual deprivation during the critical period induces long-lasting changes in cortical circuitry by adaptively modifying neurotransmission and synaptic connectivity at synapses. Spike timing-dependent plasticity (STDP) is considered a strong candidate for experience-dependent changes. However, the visual deprivation forms that affect timing-dependent long-term potentiation (LTP) and long-term depression (LTD) remain unclear. Here, we demonstrated the temporal window changes of tLTP and tLTD, elicited by coincidental pre- and post-synaptic firing, following different modes of 6-day visual deprivation. Markedly broader temporal windows were found in robust tLTP and tLTD in the V1M of the deprived visual cortex in mice after 6-day MD and DE. The underlying mechanism for the changes seen with visual deprivation in juvenile mice using 6 days of dark exposure or monocular lid suture involves an increased fraction of NR2b-containing NMDAR and the consequent prolongation of NMDAR-mediated response duration. Moreover, a decrease in NR2A protein expression at the synapse is attributable to the reduction of the NR2A/2B ratio in the deprived cortex. PMID:28520739
Goetz, Georges; Smith, Richard; Lei, Xin; Galambos, Ludwig; Kamins, Theodore; Mathieson, Keith; Sher, Alexander; Palanker, Daniel
2015-01-01
Purpose: To evaluate the contrast sensitivity of a degenerate retina stimulated by a photovoltaic subretinal prosthesis, and assess the impact of low contrast sensitivity on transmission of visual information. Methods: We measure ex vivo the full-field contrast sensitivity of healthy rat retina stimulated with white light, and the contrast sensitivity of degenerate rat retina stimulated with a subretinal prosthesis at frequencies exceeding flicker fusion (>20 Hz). Effects of eye movements on retinal ganglion cell (RGC) activity are simulated using a linear–nonlinear model of the retina. Results: Retinal ganglion cells adapt to high frequency stimulation of constant intensity, and respond transiently to changes in illumination of the implant, exhibiting responses to ON-sets, OFF-sets, and both ON- and OFF-sets of light. The percentage of cells with an OFF response decreases with progression of the degeneration, indicating that OFF responses are likely mediated by photoreceptors. Prosthetic vision exhibits reduced contrast sensitivity and dynamic range, with 65% contrast changes required to elicit responses, as compared to the 3% (OFF) to 7% (ON) changes with visible light. The maximum number of action potentials elicited with prosthetic stimulation is at most half of its natural counterpart for the ON pathway. Our model predicts that for most visual scenes, contrast sensitivity of prosthetic vision is insufficient for triggering RGC activity by fixational eye movements. Conclusions: Contrast sensitivity of prosthetic vision is 10 times lower than normal, and dynamic range is two times below natural. Low contrast sensitivity and lack of OFF responses hamper delivery of visual information via a subretinal prosthesis. PMID:26540657
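The linear–nonlinear retinal model mentioned in the record above is a standard cascade: a temporal filter applied to the light intensity, followed by a static nonlinearity producing a firing rate. The sketch below is a generic LN cascade under an assumed biphasic kernel and rectifying nonlinearity, not the authors' fitted model:

```python
import numpy as np

def ln_model(stimulus, kernel, gain=1.0, threshold=0.0):
    """Generic linear-nonlinear cascade: temporal filtering, then rectification.

    stimulus: 1-D light-intensity trace; kernel: temporal receptive field.
    Returns a nonnegative firing-rate trace the same length as the stimulus.
    """
    drive = np.convolve(stimulus, kernel, mode="full")[: len(stimulus)]
    return gain * np.maximum(drive - threshold, 0.0)  # static rectifying nonlinearity

# Assumed biphasic temporal kernel: fast excitation, slower delayed suppression.
t = np.arange(0, 0.3, 0.001)                          # 300 ms at 1 kHz
kernel = np.exp(-t / 0.02) - 0.5 * np.exp(-t / 0.06)

# A step of light yields a transient ON response that adapts back to zero,
# qualitatively matching the transient RGC responses described above.
stimulus = np.concatenate([np.zeros(200), np.ones(600)])
rate = ln_model(stimulus, kernel)
```

Because the kernel's delayed negative lobe outweighs its positive lobe at steady state, the modelled cell responds only to changes in illumination, which is how eye-movement effects on RGC activity can be simulated.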
Oberstein, Sharon L; Boon, Mei Ying; Chu, Byoung Sun; Wood, Joanne M
2016-09-01
Eye-care practitioners are often required to make recommendations regarding their patients' visual fitness for driving, including patients with visual impairment. This study aimed to understand the perspectives and management strategies adopted by optometrists regarding driving for their patients with central visual impairment. Optometrists were invited to participate in an online survey (from April to June 2012). Items were designed to explore the views and practices adopted by optometrists regarding driving for patients with central visual impairment (visual acuity [VA] poorer than 6/12, normal visual fields, cognitive and physical health), including conditional driver's licences and bioptic telescopes. Closed- and open-ended questions were used. The response rate was 14 per cent (n = 300 valid responses). Most respondents (83 per cent) reported that they advised their patients with visual impairment to 'always' or 'sometimes' stop driving. Most were confident in interpreting the visual licensing standards (78 per cent) and advising on legal responsibilities concerning driving (99 per cent). Respondents were familiar with VA requirements for unconditional licensing (98 per cent); however, the median response VA of 6/15 as the poorest VA suggested for conditional licences differed from international practice and Australian medical guidelines released a month prior to the survey's launch. Few respondents reported prescribing bioptic telescopes (two per cent). While 97 per cent of respondents stated that they discussed conditional licences with their patients with visual impairment, relatively few (28 per cent) reported having completed conditional licence applications for such individuals in the previous year. Those who had completed applications were more experienced in years of practice (p = 0.02) and spent more time practising in rural locations (p = 0.03) than those who had not.
The majority of Australian optometrists were receptive to the possibilities of driving options for individuals with central visual impairment, although management approaches varied with respect to conditional licensing. © 2016 Optometry Australia.
[Multifocal visual electrophysiology in visual function evaluation].
Peng, Shu-Ya; Chen, Jie-Min; Liu, Rui-Jue; Zhou, Shu; Liu, Dong-Mei; Xia, Wen-Tao
2013-08-01
Multifocal visual electrophysiology, consisting of multifocal electroretinography (mfERG) and multifocal visual evoked potentials (mfVEP), can objectively evaluate retinal function and the status of the retino-cortical conduction pathway by stimulating many local retinal regions and recording each local response simultaneously. With advantages such as short testing time and high sensitivity, it has been widely used in clinical ophthalmology, especially in the diagnosis of retinal disease and glaucoma. It is also a new objective technique in clinical forensic medicine, particularly for visual function evaluation in ocular trauma. This article summarizes the stimulation methods, electrode positions, analysis methods, and visual function evaluation with mfERG and mfVEP, and discusses the value of multifocal visual electrophysiology in forensic medicine.
Alvarez, George A.; Nakayama, Ken; Konkle, Talia
2016-01-01
Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. 
These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing. PMID:27832600
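The representational similarity analysis used in the record above compares two dissimilarity matrices: one built from neural response patterns, one from behavior (here, pairwise search times). A minimal sketch of the core computation; the correlation-distance RDM and Pearson comparison of off-diagonal entries are common RSA choices assumed here, not necessarily the authors' exact variants:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - correlation between
    condition response patterns (rows of `patterns`)."""
    return 1.0 - np.corrcoef(patterns)

def rsa_score(neural_patterns, behavior_matrix):
    """Pearson correlation between the off-diagonal entries of a neural RDM
    and a behavioral dissimilarity matrix (e.g., pairwise search times)."""
    iu = np.triu_indices(len(behavior_matrix), k=1)
    a, b = rdm(neural_patterns)[iu], np.asarray(behavior_matrix)[iu]
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

A high `rsa_score` for a region means the similarity structure of its responses to isolated categories predicts how hard those category pairs are to discriminate in search, which is the brain/behavior correlation the study reports.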
Insect cyborgs: a new frontier in flight control systems
NASA Astrophysics Data System (ADS)
Reissman, Timothy; Crawford, Jackie H.; Garcia, Ephrahim
2007-04-01
The development of a micro-UAV via a cybernetic organism, primarily the Manduca sexta moth, is presented. An observer that gathers output data on the moth's system response is provided by means of an image-following system. The visual tracking was implemented to gather the required information about the time history of the moth's six degrees of freedom. This was performed with three cameras tracking a white line as a marker on the moth's thorax to maximize contrast between the moth and the marker. Evaluation of the implemented six-degree-of-freedom visual tracking system finds precision greater than 0.1 mm within three standard deviations and accuracy on the order of 1 mm. Acoustic and visual response systems are presented to lay the groundwork for creating a stochastic response catalog of the organisms to varied stimuli.
Pattern reversal responses in man and cat: a comparison.
Schuurmans, R P; Berninger, T
1984-01-01
In 42 enucleated and arterially perfused cat eyes, graded potentials were recorded from the retina (ERG) and from the optic nerve (ONR) in response to checker-board stimuli, reversing at a low temporal frequency in a square wave mode. The ERG and ONR responses show an almost perfect duplication of the response to each reversal of the pattern and exhibit, in contrast to luminance responses, striking similarities in response characteristics such as amplitude, wave shape and time course. Furthermore, the amplitude versus check size plots coincide in both responses. In cat, pattern reversal responses can be recorded from 74 to 9 min of arc, correlating to the cat's visual resolution. In man, almost identical responses can be recorded for the pattern ERG. However, in accordance with the difference in visual resolution in man and cat, a parallel shift for the human pattern reversal ERG response to higher spatial frequencies is observed.
Aging and feature search: the effect of search area.
Burton-Danner, K; Owsley, C; Jackson, G R
2001-01-01
The preattentive system involves the rapid parallel processing of visual information in the visual scene so that attention can be directed to meaningful objects and locations in the environment. This study used the feature search methodology to examine whether there are aging-related deficits in parallel-processing capabilities when older adults are required to visually search a large area of the visual field. Like young subjects, older subjects displayed flat, near-zero slopes for the Reaction Time x Set Size function when searching over a broad area (30 degrees radius) of the visual field, implying parallel processing of the visual display. These same older subjects exhibited impairment in another task, also dependent on parallel processing, performed over the same broad field area; this task, called the useful field of view test, has more complex task demands. Results imply that aging-related breakdowns of parallel processing over a large visual field area are not likely to emerge when required responses are simple, there is only one task to perform, and there is no limitation on visual inspection time.
Garaizar, Pablo; Vadillo, Miguel A.; López-de-Ipiña, Diego; Matute, Helena
2014-01-01
Because of the features provided by an abundance of specialized experimental software packages, personal computers have become prominent and powerful tools in cognitive research. Most of these programs have mechanisms to control the precision and accuracy with which visual stimuli are presented as well as the response times. However, external factors, often related to the technology used to display the visual information, can have a noticeable impact on the actual performance and may be easily overlooked by researchers. The aim of this study is to measure the precision and accuracy of the timing mechanisms of some of the most popular software packages used in a typical laboratory scenario, in order to assess whether presentation times configured by researchers differ from measured times by more than would be expected from hardware limitations alone. Despite the apparent precision and accuracy of the results, important issues related to timing setups in the presentation of visual stimuli were found, and they should be taken into account by researchers in their experiments. PMID:24409318
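The two quantities being assessed here can be computed directly from externally measured presentation durations: accuracy as the mean deviation from the configured duration, precision as the spread of the measurements. A minimal sketch with hypothetical measurements (the display refresh rate and the duration values are illustrative assumptions, not data from the study):

```python
import statistics

# Hypothetical measured durations (ms) for a stimulus configured to last
# 100 ms on a 60 Hz display; the ~117 ms values mimic one-frame overruns.
configured_ms = 100.0
measured_ms = [100.1, 116.8, 100.2, 99.9, 116.7, 100.0, 100.3, 116.9]

# Accuracy: mean deviation of measured from configured duration.
errors = [m - configured_ms for m in measured_ms]
accuracy_ms = statistics.mean(errors)

# Precision: spread of the measured durations.
precision_ms = statistics.stdev(measured_ms)

# Deviations larger than half a refresh interval cannot be explained by
# frame quantization alone, so count them as suspect presentations.
frame_ms = 1000.0 / 60.0
suspect = sum(1 for e in errors if e > frame_ms / 2)

print(accuracy_ms, precision_ms, suspect)
```

In this toy data, three presentations overran by roughly one frame, which inflates both the mean error and the spread — exactly the kind of issue the study attributes to display technology rather than to the experiment software.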
Face-selective neurons maintain consistent visual responses across months
McMahon, David B. T.; Jones, Adam P.; Bondar, Igor V.; Leopold, David A.
2014-01-01
Face perception in both humans and monkeys is thought to depend on neurons clustered in discrete, specialized brain regions. Because primates are frequently called upon to recognize and remember new individuals, the neuronal representation of faces in the brain might be expected to change over time. The functional properties of neurons in behaving animals are typically assessed over time periods ranging from minutes to hours, which amounts to a snapshot compared to a lifespan of a neuron. It therefore remains unclear how neuronal properties observed on a given day predict that same neuron's activity months or years later. Here we show that the macaque inferotemporal cortex contains face-selective cells that show virtually no change in their patterns of visual responses over time periods as long as one year. Using chronically implanted microwire electrodes guided by functional MRI targeting, we obtained distinct profiles of selectivity for face and nonface stimuli that served as fingerprints for individual neurons in the anterior fundus (AF) face patch within the superior temporal sulcus. Longitudinal tracking over a series of daily recording sessions revealed that face-selective neurons maintain consistent visual response profiles across months-long time spans despite the influence of ongoing daily experience. We propose that neurons in the AF face patch are specialized for aspects of face perception that demand stability as opposed to plasticity. PMID:24799679
Art for reward's sake: visual art recruits the ventral striatum.
Lacey, Simon; Hagtvedt, Henrik; Patrick, Vanessa M; Anderson, Amy; Stilla, Randall; Deshpande, Gopikrishna; Hu, Xiaoping; Sato, João R; Reddy, Srinivas; Sathian, K
2011-03-01
A recent study showed that people evaluate products more positively when they are physically associated with art images than similar non-art images. Neuroimaging studies of visual art have investigated artistic style and esthetic preference but not brain responses attributable specifically to the artistic status of images. Here we tested the hypothesis that the artistic status of images engages reward circuitry, using event-related functional magnetic resonance imaging (fMRI) during viewing of art and non-art images matched for content. Subjects made animacy judgments in response to each image. Relative to non-art images, art images activated, on both subject- and item-wise analyses, reward-related regions: the ventral striatum, hypothalamus and orbitofrontal cortex. Neither response times nor ratings of familiarity or esthetic preference for art images correlated significantly with activity that was selective for art images, suggesting that these variables were not responsible for the art-selective activations. Investigation of effective connectivity, using time-varying, wavelet-based, correlation-purged Granger causality analyses, further showed that the ventral striatum was driven by visual cortical regions when viewing art images but not non-art images, and was not driven by regions that correlated with esthetic preference for either art or non-art images. These findings are consistent with our hypothesis, leading us to propose that the appeal of visual art involves activation of reward circuitry based on artistic status alone and independently of its hedonic value. Copyright © 2010 Elsevier Inc. All rights reserved.
Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg
2016-01-01
Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463
Senot, Patrice; Zago, Myrka; Lacquaniti, Francesco; McIntyre, Joseph
2005-12-01
Intercepting an object requires a precise estimate of its time of arrival at the interception point (time to contact or "TTC"). It has been proposed that knowledge about gravitational acceleration can be combined with first-order, visual-field information to provide a better estimate of TTC when catching falling objects. In this experiment, we investigated the relative role of visual and nonvisual information on motor-response timing in an interceptive task. Subjects were immersed in a stereoscopic virtual environment and asked to intercept with a virtual racket a ball falling from above or rising from below. The ball moved with different initial velocities and could accelerate, decelerate, or move at a constant speed. Depending on the direction of motion, the acceleration or deceleration of the ball could therefore be congruent or not with the acceleration that would be expected due to the force of gravity acting on the ball. Although the best success rate was observed for balls moving at a constant velocity, we systematically found a cross-effect of ball direction and acceleration on success rate and response timing. Racket motion was triggered on average 25 ms earlier when the ball fell from above than when it rose from below, whatever the ball's true acceleration. As visual-flow information was the same in both cases, this shift indicates an influence of the ball's direction relative to gravity on response timing, consistent with the anticipation of the effects of gravity on the flight of the ball.
D’Angiulli, Amedeo; Griffiths, Gordon; Marmolejo-Ramos, Fernando
2015-01-01
The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors; part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-related potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., <300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300–699 ms) and late (i.e., 700–1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, the post-auditory visualization involved right-hemispheric activity following a posterior-to-anterior pathway sequence: occipital, parietal, and temporal areas; conversely, matching visualization involved left-hemispheric activity following an anterior-to-posterior pathway sequence: frontal, temporal, parietal, and occipital areas. 
These results suggest that, for both concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodying representations. PMID:26175697
The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.
Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal
2016-01-01
Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (valid or invalid) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were in all cue conditions slower when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surrounding of gaze cue stimuli.
Distinct fMRI Responses to Self-Induced versus Stimulus Motion during Free Viewing in the Macaque
Russ, Brian E.; Kaneko, Takaaki; Saleem, Kadharbatcha S.; Berman, Rebecca A.; Leopold, David A.
2016-01-01
Visual motion responses in the brain are shaped by two distinct sources: the physical movement of objects in the environment and motion resulting from one's own actions. The latter source, termed visual reafference, stems from movements of the head and body, and in primates from the frequent saccadic eye movements that mark natural vision. To study the relative contribution of reafferent and stimulus motion during natural vision, we measured fMRI activity in the brains of two macaques as they freely viewed >50 hours of naturalistic video footage depicting dynamic social interactions. We used eye movements obtained during scanning to estimate the level of reafferent retinal motion at each moment in time. We also estimated the net stimulus motion by analyzing the video content during the same time periods. Mapping the responses to these distinct sources of retinal motion, we found a striking dissociation in the distribution of visual responses throughout the brain. Reafferent motion drove fMRI activity in the early retinotopic areas V1, V2, V3, and V4, particularly in their central visual field representations, as well as lateral aspects of the caudal inferotemporal cortex (area TEO). However, stimulus motion dominated fMRI responses in the superior temporal sulcus, including areas MT, MST, and FST as well as more rostral areas. We discuss this pronounced separation of motion processing in the context of natural vision, saccadic suppression, and the brain's utilization of corollary discharge signals. SIGNIFICANCE STATEMENT Visual motion arises not only from events in the external world, but also from the movements of the observer. For example, even if objects are stationary in the world, the act of walking through a room or shifting one's eyes causes motion on the retina. This “reafferent” motion propagates into the brain as signals that must be interpreted in the context of real object motion. 
The delineation of whole-brain responses to stimulus versus self-generated retinal motion signals is critical for understanding visual perception and is of pragmatic importance given the increasing use of naturalistic viewing paradigms. The present study uses fMRI to demonstrate that the brain exhibits a fundamentally different pattern of responses to these two sources of retinal motion. PMID:27629710
Visual motion perception predicts driving hazard perception ability.
Lacherez, Philippe; Au, Sandra; Wood, Joanne M
2014-02-01
The aim was to examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. A total of 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests including visual acuity, contrast sensitivity and automated visual fields and two tests of motion perception including sensitivity for movement of a drifting Gabor stimulus and sensitivity for displacement in a random dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured participants' response times to hazards embedded in video recordings of real-world driving, which has been shown to be linked to crash risk. Dmin for the random dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. In addition, while the relationship involving the HPT and motion sensitivity for the random dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards independent of other areas of visual function and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception to develop better interventions to improve road safety. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search
ERIC Educational Resources Information Center
Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.
2010-01-01
Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…
An Integrated Theory of Attention and Decision Making in Visual Signal Detection
ERIC Educational Resources Information Center
Smith, Philip L.; Ratcliff, Roger
2009-01-01
The simplest attentional task, detecting a cued stimulus in an otherwise empty visual field, produces complex patterns of performance. Attentional cues interact with backward masks and with spatial uncertainty, and there is a dissociation in the effects of these variables on accuracy and on response time. A computational theory of performance in…
ERIC Educational Resources Information Center
Hout, Michael C.; Goldinger, Stephen D.
2012-01-01
When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no…
The Role of Derivative Suffix Productivity in the Visual Word Recognition of Complex Words
ERIC Educational Resources Information Center
Lázaro, Miguel; Sainz, Javier; Illera, Víctor
2015-01-01
In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…
NASA Astrophysics Data System (ADS)
Neriani, Kelly E.; Herbranson, Travis J.; Reis, George A.; Pinkus, Alan R.; Goodyear, Charles D.
2006-05-01
While vast numbers of image enhancing algorithms have already been developed, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to evaluate six algorithms that were specifically designed to enhance the contrast of digital images. The image enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato [1] and the multiscale Retinex algorithm described in Rahman, Jobson and Woodell [2]. The methodology used in the assessment has been developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. Objective performance metrics, response time and error rate, were used to compare algorithm enhanced images versus two baseline conditions, original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm. Observers searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of the study and future directions are discussed.
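Of the techniques named above, global histogram equalization is the simplest: it remaps gray levels so the cumulative distribution of the output is approximately uniform, stretching crowded intensity ranges apart. A minimal NumPy sketch (not the specific variants evaluated in the study, which are not detailed here):

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Classic global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # CDF value at the lowest gray level present
    # Map each gray level so the output CDF is approximately uniform.
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]

# Low-contrast example: values crowded into [100, 140].
rng = np.random.default_rng(1)
low_contrast = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
enhanced = equalize_histogram(low_contrast)
```

After equalization the narrow input range is spread across the full 0-255 scale, which is the contrast gain the study's degraded-image baseline is designed to probe. (A constant image would make the denominator zero; production code should guard that case.)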
Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.
Reimers, Stian; Stewart, Neil
2016-09-01
Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
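The key quantity in this kind of study is the visual-to-auditory onset lag measured by the external device. Once both onset streams are logged, summarizing the lag distribution is a few lines; a sketch with hypothetical onset times (not the study's data):

```python
import statistics

# Hypothetical onset times (ms) logged by an external timing device for
# paired visual and auditory stimuli that were coded to start together.
visual_onsets   = [0.0, 500.2, 1000.1, 1500.4, 2000.0]
auditory_onsets = [12.3, 538.9, 1011.7, 1543.0, 2013.2]

# Positive lag: the audio started after its paired visual onset.
lags = [a - v for v, a in zip(visual_onsets, auditory_onsets)]
mean_lag = statistics.mean(lags)      # systematic offset
lag_spread = statistics.stdev(lags)   # trial-to-trial variability

print(mean_lag, lag_spread)
```

A large mean lag with small spread can be compensated for in code; a large spread (as the study found across browsers and machines) cannot, which is why it is the more damaging result for Web-based auditory experiments.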
Which technology to investigate visual perception in sport: video vs. virtual reality.
Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit
2015-02-01
Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, like video and methods based on virtual environments, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study is to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared a handball goalkeeper's performance using two standardized methodologies: video clip and virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers show where the ball ends) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggested that the analysis of visual information uptake for handball goalkeepers was better performed by using a 'virtual reality'-based methodology. Technical and methodological aspects of these findings are discussed further. Copyright © 2014 Elsevier B.V. All rights reserved.
Implicit short- and long-term memory direct our gaze in visual search.
Kruijne, Wouter; Meeter, Martijn
2016-04-01
Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, targets that are presented more often than others may facilitate search, even long after the frequency bias is no longer present (long-term priming). In this study, we investigated whether such short-term and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both long- and short-term priming effects. Long-term priming persisted long after the bias was removed, and was found even in participants who were unaware of a color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing.
Zhaoping, Li; Zhe, Li
2012-01-01
From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target. PMID:22719829
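The race-model benchmark described in this abstract — the RT predicted for a redundant-feature target from a race between the two corresponding single-feature RTs — can be sketched with simulated data. A minimal illustration, assuming arbitrary normal RT distributions rather than the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-feature RT samples (ms) for a color-singleton (C)
# and an orientation-singleton (O) target; the distributions below are
# illustrative assumptions, not data from the study.
rt_c = rng.normal(450.0, 60.0, 10_000)
rt_o = rng.normal(470.0, 70.0, 10_000)

# Independent race model: a redundant CO target is detected as soon as
# either feature process finishes, so the predicted RT is the minimum
# of independently paired C and O samples.
rt_race = np.minimum(rng.permutation(rt_c), rng.permutation(rt_o))

# The race prediction is faster on average than either single feature.
# Observed CO RTs even shorter than this prediction would implicate
# conjunctively tuned (CO) cells, as the abstract argues.
print(rt_c.mean(), rt_o.mean(), rt_race.mean())
```

Statistical facilitation alone thus predicts some redundancy gain; only a gain beyond the race prediction requires a conjunctive mechanism.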
Masterson, Travis D; Kirwan, C Brock; Davidson, Lance E; LeCheminant, James D
2016-03-01
The extent to which neural responsiveness to visual food stimuli is influenced by time of day has not been well examined. Using a crossover design, 15 healthy women were scanned using fMRI while presented with low- and high-energy pictures of food, once in the morning (6:30-8:30 am) and once in the evening (5:00-7:00 pm). Diets were identical on both days of the fMRI scans and were verified using weighed food records. Visual analog scales were used to record subjective perception of hunger and preoccupation with food prior to each fMRI scan. Six areas of the brain showed lower activation in the evening to both high- and low-energy foods, including structures in reward pathways (P < 0.05). Nine brain regions showed significantly higher activation for high-energy foods compared to low-energy foods (P < 0.05). High-energy food stimuli tended to produce greater fMRI responses than low-energy food stimuli in specific areas of the brain, regardless of time of day. However, evening scans showed a lower response to both low- and high-energy food pictures in some areas of the brain. Subjectively, participants reported no difference in hunger by time of day (F = 1.84, P = 0.19), but reported they could eat more (F = 4.83, P = 0.04) and were more preoccupied with thoughts of food (F = 5.51, P = 0.03) in the evening compared to the morning. These data underscore the role that time of day may have on neural responses to food stimuli. These results may also have clinical implications for fMRI measurement in order to prevent a time-of-day bias.
Johari, Karim; Behroozmand, Roozbeh
2017-05-01
The predictive coding model suggests that neural processing of sensory information is facilitated for temporally-predictable stimuli. This study investigated how temporal processing of visually-presented sensory cues modulates movement reaction time and neural activities in speech and hand motor systems. Event-related potentials (ERPs) were recorded in 13 subjects while they were visually-cued to prepare to produce a steady vocalization of a vowel sound or press a button in a randomized order, and to initiate the cued movement following the onset of a go signal on the screen. The experiment was conducted in two counterbalanced blocks in which the time interval between the visual cue and the go signal was temporally-predictable (fixed delay of 1000 ms) or unpredictable (variable between 1000 and 2000 ms). Results of the behavioral response analysis indicated that movement reaction time was significantly decreased for temporally-predictable stimuli in both speech and hand modalities. We identified premotor ERP activities, with a left-lateralized parietal distribution for hand and a frontocentral distribution for speech, that were significantly suppressed in response to temporally-predictable compared with unpredictable stimuli. The premotor ERPs were elicited approximately 100 ms before movement onset and were significantly correlated with speech and hand motor reaction times only in response to temporally-predictable stimuli. These findings suggest that the motor system establishes a predictive code to facilitate movement in response to temporally-predictable sensory stimuli. Our data suggest that the premotor ERP activities are robust neurophysiological biomarkers of such predictive coding mechanisms. These findings provide novel insights into the temporal processing mechanisms of speech and hand motor systems.
Quétard, Boris; Quinton, Jean-Charles; Colomb, Michèle; Pezzulo, Giovanni; Barca, Laura; Izaute, Marie; Appadoo, Owen Kevin; Mermillod, Martial
2015-09-01
Detecting a pedestrian while driving in fog is one situation where a prior expectation about the target's presence is integrated with noisy visual input. We focus on how these sources of information influence oculomotor behavior and how they are integrated within an underlying decision-making process. Participants had to judge whether high- or low-density fog scenes displayed on a computer screen contained a pedestrian or a deer by executing a mouse movement toward the response button (mouse-tracking). A variable road sign was added to the scene to manipulate expectations about target identity. We then analyzed the timing and amplitude of the deviation of mouse trajectories toward the incorrect response and, using an eye tracker, the detection time (before fixating the target) and the identification time (fixations on the target). Results revealed that expectation of the correct target results in earlier decisions with less deviation toward the alternative response, an effect partially explained by the facilitation of target identification.
Stojmenova, Kristina; Sodnik, Jaka
2018-07-04
There are three standardized versions of the Detection Response Task (DRT): two using visual stimuli (remote DRT and head-mounted DRT) and one using tactile stimuli. In this article, we present a study that proposes and validates a type of auditory signal to be used as the DRT stimulus, and we evaluate the proposed auditory version of this method by comparing it with the standardized visual and tactile versions. This was a within-subject design study performed in a driving simulator with 24 participants. Each participant performed eight 2-min-long driving sessions in which they had to perform three different tasks: driving, responding to DRT stimuli, and performing a cognitive task (n-back task). Presence of additional cognitive load and type of DRT stimuli were defined as independent variables. DRT response times and hit rates, n-back task performance, and pupil size were observed as dependent variables. Significant changes in pupil size for trials with a cognitive task compared to trials without showed that cognitive load was induced properly. Each DRT version showed a significant increase in response times and a decrease in hit rates for trials with a secondary cognitive task compared to trials without. The auditory and tactile versions yielded results similar to each other, and both showed significantly larger response-time and hit-rate differences than the visual version. There were no significant differences in n-back performance between trials without DRT stimuli and trials with them, or among trials with different DRT stimulus modalities. The results from this study show that the auditory DRT version, using the signal implementation suggested in this article, is sensitive to the effects of cognitive load on driver's attention and is significantly better than the remote visual and tactile versions for auditory-vocal cognitive (n-back) secondary tasks.
Carvalho, Paulo S. M.; Noltie, Douglas B.; Tillitt, D.E.
2004-01-01
Retinal structure and concentrations of retinoids involved in phototransduction changed during early development of rainbow trout Oncorhynchus mykiss, correlating with improvements in visual function. A test chamber was used to evaluate the presence of optokinetic or optomotor responses and to assess the functionality of the integrated cellular, physiological and biochemical components of the visual system. The results indicated that in rainbow trout optomotor responses start at 10 days post-hatch, and demonstrated for the first time that increases in acuity, sensitivity to low light, and motion detection abilities occur from this stage until exogenous feeding starts. Retinal structures such as cone ellipsoids increased in length as photopic visual acuity improved, and rod densities increased concurrently with improvements in scotopic thresholds (2.2 log10 units). An increase in the concentration of the chromophore all-trans-retinal correlated with improvements in all behavioural measures of visual function during the same developmental phase.
Conspicuity of target lights: The influence of color
NASA Technical Reports Server (NTRS)
Connors, M. M.
1975-01-01
The conspicuity (attention-getting quality) of foveally-equated, colored lights seen against a star background was investigated. Subjects who were periodically engaged in a distracting cockpit task were required to search a large visual field and report the appearance of a target light as quickly as possible. Targets were red, yellow, white, green, and blue, and appeared either as steady or as flashing lights. Results indicate that red targets were missed more frequently and responded to more slowly than lights of other hues. Yellow targets were acquired more slowly than white, green, or blue targets; responses to white targets were significantly slower than responses to green or blue targets. In general, flashing lights were superior to steady lights, but this was not found for all hues. For red, the 2 Hz flash was superior to all other flash rates and to the steady light, none of which differed significantly from each other. Over all hues, conspicuity was found to peak at 2-3 Hz. Response time was generally fastest for targets appearing between 3° and 8° from the center of the visual field. However, this pattern was not repeated for every hue. Conspicuity response times suggest a complex relationship between hue and position in the visual field that is explained only partially by retinal sensitivity.
Two memories for geographical slant: separation and interdependence of action and awareness
NASA Technical Reports Server (NTRS)
Creem, S. H.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)
1998-01-01
The present study extended previous findings of geographical slant perception, in which verbal judgments of the incline of hills were greatly overestimated but motoric (haptic) adjustments were much more accurate. In judging slant from memory following a brief or extended time delay, subjects' verbal judgments were greater than those given when viewing hills. Motoric estimates differed depending on the length of the delay and place of response. With a short delay, motoric adjustments made in the proximity of the hill did not differ from those evoked during perception. When given a longer delay or when taken away from the hill, subjects' motoric responses increased along with the increase in verbal reports. These results suggest two different memorial influences on action. With a short delay at the hill, memory for visual guidance is separate from the explicit memory informing the conscious response. With short or long delays away from the hill, short-term visual guidance memory no longer persists, and both motor and verbal responses are driven by an explicit representation. These results support recent research involving visual guidance from memory, where actions become influenced by conscious awareness, and provide evidence for communication between the "what" and "how" visual processing systems.
Farivar, Reza; Thompson, Benjamin; Mansouri, Behzad; Hess, Robert F
2011-12-20
Factors such as strabismus or anisometropia during infancy can disrupt normal visual development and result in amblyopia, characterized by reduced visual function in an otherwise healthy eye and often associated with persistent suppression of inputs from the amblyopic eye by those from the dominant eye. It has become evident from fMRI studies that the cortical response to stimulation of the amblyopic eye is also affected. We were interested in comparing the hemodynamic response function (HRF) of early visual cortex to amblyopic vs. dominant eye stimulation. In the first experiment, we found that stimulation of the amblyopic eye resulted in a signal that was both attenuated and delayed in its time to peak. We postulated that this delay may be due to suppressive effects of the dominant eye and, in our second experiment, measured the cortical response to amblyopic eye stimulation under two conditions--where the dominant eye was open and seeing a static pattern (high suppression) or where the dominant eye was patched and closed (low suppression). We found that the HRF in response to amblyopic eye stimulation depended on whether the dominant eye was open. This effect was manifested as both a delayed HRF under the suppressed condition and a reduction in amplitude.
Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T
2015-01-01
To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
Aesthetic Response and Cosmic Aesthetic Distance
NASA Astrophysics Data System (ADS)
Madacsi, D.
2013-04-01
For Homo sapiens, the experience of a primal aesthetic response to nature was perhaps a necessary precursor to the arousal of an artistic impulse. Among the likely visual candidates for primal initiators of aesthetic response, arguments can be made in favor of the flower, the human face and form, and the sky and light itself as primordial aesthetic stimulants. Although visual perception of the sensory world of flowers and human faces and forms is mediated by light, it was most certainly in the sky that humans first could respond to the beauty of light per se. It is clear that as a species we do not yet identify and comprehend as nature, or part of nature, the entire universe beyond our terrestrial environs, the universe from which we remain inexorably separated by space and time. However, we now enjoy a technologically-enabled opportunity to probe the ultimate limits of visual aesthetic distance and the origins of human aesthetic response as we remotely explore deep space via the Hubble Space Telescope and its successors.
Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro
2010-08-01
In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded, because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information about the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals.
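The two vectorizing procedures compared in this abstract can be sketched directly. A minimal illustration with synthetic spike trains (the neuron count, firing rates, and 1-s response period below are assumptions for the sketch, not the recorded data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic spike trains: sorted spike times (s) within a 1-s response
# period for five simultaneously recorded neurons (Poisson-like counts).
spikes = [np.sort(rng.uniform(0.0, 1.0, rng.poisson(20))) for _ in range(5)]

def multineuronal_vector(spike_trains, n_neurons):
    """Response vector: total spike count of each of n_neurons neurons."""
    return np.array([len(st) for st in spike_trains[:n_neurons]])

def time_segmental_vector(spike_train, n_segments, period=1.0):
    """Response vector: one neuron's spike counts in n_segments equal bins."""
    counts, _ = np.histogram(spike_train, bins=n_segments, range=(0.0, period))
    return counts

v_multi = multineuronal_vector(spikes, 3)    # counts from 3 neurons
v_seg = time_segmental_vector(spikes[0], 3)  # 3 segments of 1 neuron

# Both procedures yield a response vector of the same dimensionality,
# but partition the recording differently (across cells vs. across time).
print(v_multi, v_seg)
```

A stimulus decoder could then be trained on either vector type to compare, as the study does, how information grows with vector dimension.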
Locomotion Enhances Neural Encoding of Visual Stimuli in Mouse V1
2017-01-01
Neurons in mouse primary visual cortex (V1) are selective for particular properties of visual stimuli. Locomotion causes a change in cortical state that leaves their selectivity unchanged but strengthens their responses. Both locomotion and the change in cortical state are thought to be initiated by projections from the mesencephalic locomotor region, the latter through a disinhibitory circuit in V1. By recording simultaneously from a large number of single neurons in alert mice viewing moving gratings, we investigated the relationship between locomotion and the information contained within the neural population. We found that locomotion improved encoding of visual stimuli in V1 by two mechanisms. First, locomotion-induced increases in firing rates enhanced the mutual information between visual stimuli and single neuron responses over a fixed window of time. Second, stimulus discriminability was improved, even for fixed population firing rates, because of a decrease in noise correlations across the population. These two mechanisms contributed differently to improvements in discriminability across cortical layers, with changes in firing rates most important in the upper layers and changes in noise correlations most important in layer V. Together, these changes resulted in a threefold to fivefold reduction in the time needed to precisely encode grating direction and orientation. These results support the hypothesis that cortical state shifts during locomotion to accommodate an increased load on the visual system when mice are moving. SIGNIFICANCE STATEMENT This paper contains three novel findings about the representation of information in neurons within the primary visual cortex of the mouse. First, we show that locomotion reduces by at least a factor of 3 the time needed for information to accumulate in the visual cortex that allows the distinction of different visual stimuli. 
Second, we show that the effect of locomotion is to increase information in cells of all layers of the visual cortex. Third, we show that the means by which information is enhanced by locomotion differs between the upper layers, where the major effect is the increasing of firing rates, and in layer V, where the major effect is the reduction in noise correlations. PMID:28264980
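The first mechanism reported here, higher firing rates increasing the mutual information between stimulus and single-neuron response over a fixed window, can be illustrated with a toy plug-in estimator. The rates and the locomotion gain below are assumptions for the sketch, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    xs, ys = np.unique(x), np.unique(y)
    joint = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px = joint.sum(axis=1, keepdims=True)  # marginal P(X)
    py = joint.sum(axis=0, keepdims=True)  # marginal P(Y)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Toy experiment: two grating directions; spike counts in a fixed window
# are Poisson with a stimulus-dependent mean, and locomotion is modeled
# as a multiplicative gain on the firing rate.
stim = rng.integers(0, 2, 4000)
counts_rest = rng.poisson(2 + 2 * stim)       # stationary: means 2 vs 4
counts_run = rng.poisson(2 * (2 + 2 * stim))  # running: means 4 vs 8

print(mutual_information(stim, counts_rest), mutual_information(stim, counts_run))
```

With the gain applied, the two stimulus-conditional count distributions separate more relative to Poisson noise, so the running condition carries more bits per window, consistent with the paper's first mechanism.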
Default Mode Network (DMN) Deactivation during Odor-Visual Association
Karunanayaka, Prasanna R.; Wilson, Donald A.; Tobia, Michael J.; Martinez, Brittany; Meadowcroft, Mark; Eslinger, Paul J.; Yang, Qing X.
2017-01-01
Default mode network (DMN) deactivation has been shown to be functionally relevant for goal-directed cognition. In this study, we investigated the DMN’s role during olfactory processing using two complementary functional magnetic resonance imaging (fMRI) paradigms with identical timing, visual-cue stimulation and response monitoring protocols. Twenty-nine healthy, non-smoking, right-handed adults (mean age = 26 ± 4 yrs., 16 females) completed an odor-visual association fMRI paradigm that had two alternating trial conditions: odor+visual and visual-only. During odor+visual trials, a visual cue was presented simultaneously with an odor, while during visual-only trial conditions the same visual cue was presented alone. Eighteen of the 29 participants (mean age = 27.0 ± 6.0 yrs., 11 females) also took part in a control no-odor fMRI paradigm that consisted of visual-only trial conditions identical to the visual-only trials in the odor-visual association paradigm. We used Independent Component Analysis (ICA), extended unified structural equation modeling (euSEM), and psychophysiological interaction (PPI) analysis to investigate the interplay between the DMN and the olfactory network. In the odor-visual association paradigm, DMN deactivation was evoked by both the odor+visual and visual-only trial conditions. In contrast, the visual-only trials in the no-odor paradigm did not evoke consistent DMN deactivation. In the odor-visual association paradigm, the euSEM and PPI analyses identified a directed connectivity between the DMN and olfactory network that differed significantly between odor+visual and visual-only trial conditions. The results support a strong interaction between the DMN and the olfactory network and highlight the DMN’s role in task-evoked brain activity and behavioral responses during olfactory processing. PMID:27785847
Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data
Zhaoping, Li; Zhe, Li
2015-01-01
It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton’s location can be measured by the shortness of the reaction time in a visual search for singletons. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention. PMID:26441341
The Use of Virtual Reality in Psychology: A Case Study in Visual Perception
Wilson, Christopher J.; Soranzo, Alessandro
2015-01-01
Recent proliferation of available virtual reality (VR) tools has seen increased use in psychological research. This is due to a number of advantages afforded over traditional experimental apparatus such as tighter control of the environment and the possibility of creating more ecologically valid stimulus presentation and response protocols. At the same time, higher levels of immersion and visual fidelity afforded by VR do not necessarily evoke presence or elicit a “realistic” psychological response. The current paper reviews some current uses for VR environments in psychological research and discusses some ongoing questions for researchers. Finally, we focus on the area of visual perception, where both the advantages and challenges of VR are particularly salient. PMID:26339281
Study of target and non-target interplay in spatial attention task.
Sweeti; Joshi, Deepak; Panigrahi, B K; Anand, Sneh; Santhosh, Jayasree
2018-02-01
Selective visual attention is the ability to attend to targets while inhibiting distractors. This paper studies the interplay of targets and non-targets in a spatial attention task in which the subject attends to a target object in one visual hemifield and ignores a distractor in the other. The paper performs averaged event-related potential (ERP) analysis and time-frequency analysis. ERP analysis supports left-hemisphere superiority in late potentials for targets in the right visual hemifield. Time-frequency analysis yields two parameters: event-related spectral perturbation (ERSP) and inter-trial coherence (ITC). These parameters behave the same for targets in either hemifield but differ between targets and non-targets. The study thus helps visualise the differences between targets in the left and right visual hemifields, and between targets and non-targets in each hemifield. These results could be used to monitor subjects' performance in brain-computer interface (BCI) applications and neurorehabilitation.
Erlikhman, Gennady; Gurariy, Gennadiy; Mruczek, Ryan E.B.; Caplovitz, Gideon P.
2016-01-01
Oftentimes, objects are only partially and transiently visible as parts of them become occluded during observer or object motion. The visual system can integrate such object fragments across space and time into perceptual wholes or spatiotemporal objects. This integrative and dynamic process may involve both ventral and dorsal visual processing pathways, along which shape and spatial representations are thought to arise. We measured the fMRI BOLD response to spatiotemporal objects and used multi-voxel pattern analysis (MVPA) to decode shape information across 20 topographic regions of visual cortex. Object identity could be decoded throughout visual cortex, including intermediate (V3A, V3B, hV4, LO1-2) and dorsal (TO1-2 and IPS0-1) visual areas. Shape-specific information, therefore, may not be limited to early and ventral visual areas, particularly when it is dynamic and must be integrated. Contrary to the classic view that the representation of objects is the purview of the ventral stream, intermediate and dorsal areas may play a distinct and critical role in the construction of object representations across space and time. PMID:27033688
Implicit visual learning and the expression of learning.
Haider, Hilde; Eberhardt, Katharina; Kunde, Alexander; Rose, Michael
2013-03-01
Although the existence of implicit motor learning is now widely accepted, the findings concerning perceptual implicit learning are ambiguous. Some researchers have observed perceptual learning whereas other authors have not. A review of the literature provides different reasons for this ambiguous picture, such as differences in the underlying learning processes, selective attention, or differences in the difficulty of expressing this knowledge. In three experiments, we investigated implicit visual learning within the original serial reaction time task. We used different response devices (keyboard vs. mouse) in order to manipulate selective attention towards response dimensions. Results showed that visual and motor sequence learning differed in terms of RT benefits, but not in terms of the amount of knowledge assessed after training. Furthermore, visual sequence learning was modulated by selective attention. However, the findings of all three experiments suggest that selective attention did not alter implicit but rather explicit learning processes.
Response-dependent dynamics of cell-specific inhibition in cortical networks in vivo
El-Boustani, Sami; Sur, Mriganka
2014-01-01
In the visual cortex, inhibitory neurons alter the computations performed by target cells via a combination of two fundamental operations, division and subtraction. The origins of these operations have been variously ascribed to differences in neuron classes, synapse location or receptor conductances. Here, by utilizing specific visual stimuli and single optogenetic probe pulses, we show that the function of parvalbumin-expressing and somatostatin-expressing neurons in mice in vivo is governed by the overlap of response timing between these neurons and their targets. In particular, somatostatin-expressing neurons respond at longer latencies to small visual stimuli compared with their target neurons and provide subtractive inhibition. With large visual stimuli, however, they respond at short latencies coincident with their target cells and switch to provide divisive inhibition. These results indicate that inhibition mediated by these neurons is a dynamic property of cortical circuits rather than an immutable property of neuronal classes. PMID:25504329
Auditory and visual interhemispheric communication in musicians and non-musicians.
Woelfle, Rebecca; Grahn, Jessica A
2013-01-01
The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer.
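The crossed-uncrossed difference (CUD) measure described above is simple arithmetic over reaction times; as a minimal sketch (the reaction-time values below are made up for illustration, not data from this study):

```python
import statistics

def crossed_uncrossed_difference(crossed_rts_ms, uncrossed_rts_ms):
    """Poffenberger-style estimate of interhemispheric transfer time:
    CUD = mean crossed RT - mean uncrossed RT (in ms)."""
    return statistics.mean(crossed_rts_ms) - statistics.mean(uncrossed_rts_ms)

# Illustrative values only; real CUDs are typically only a few ms
crossed = [255, 260, 258, 262]
uncrossed = [252, 256, 255, 259]
print(crossed_uncrossed_difference(crossed, uncrossed))  # 3.25
```

A positive CUD reflects the extra time needed when the stimulus and the responding hand are handled by opposite hemispheres, so information must cross the corpus callosum.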
Menon, Vimla; Chaudhuri, Zia; Saxena, Rohit; Gill, Kulwant; Sachdeva, M M
2005-12-01
Amblyopia is one of the most common causes of visual impairment in adults and children, and visual loss may be permanent if not treated in time. Though many studies have been done on occlusion therapy, the mainstay of treatment for unilateral amblyopia, discrepancies exist in the literature about quantification of treatment and follow-up measures. The present study was undertaken to evaluate the factors responsible for a successful outcome of treatment, and the optimum time required to achieve it, in children with unilateral amblyopia. Baseline characteristics were analyzed for 63 verbal patients with unilateral amblyopia (strabismic, anisometropic, or mixed) referred to the Strabismus and Amblyopia Clinic at the Dr Rajendra Prasad Centre for Ophthalmic Sciences, New Delhi, between September 2001 and December 2002, who improved to the desired level of visual acuity after treatment, in order to identify factors that directly or indirectly influenced optimum visual rehabilitation and the average duration of therapy required. The evaluation included assessment of baseline best-corrected visual acuity (BCVA) and refractive status in both eyes, age at presentation, type of amblyopia, fixation pattern in the amblyopic eye, inter-eye visual acuity difference, and compliance, evaluated through a parental diary system. Baseline BCVA in the amblyopic eye was similar in all three groups. Patients with anisometropic amblyopia showed a quicker response to therapy. Compliance with treatment was the major factor affecting the overall time required for a successful outcome in most cases. The overall time required for treatment to be successful (including the maintenance period) was about 1,089 h. This hospital-based study showed that the average duration of occlusion therapy required to achieve stable isoacuity was 7.2 months, with an average occlusion of 6-7 h/day.
Compliance with therapy was the most important factor affecting its duration. With increasing emphasis on paediatric eye diseases, amblyopia is at last getting its due importance as a treatable cause of paediatric visual impairment, one that can have lifelong repercussions, both in terms of individual disability and of financial burden to society, if not treated in time. As the therapy is simple and effective if started early, mass awareness, visual screening, and counselling would go a long way in treating patients and thus decreasing the prevalence of amblyopia in the country.
Properties of visual evoked potentials to onset of movement on a television screen.
Kubová, Z; Kuba, M; Hubacek, J; Vít, F
1990-08-01
In 80 subjects the dependence of movement-onset visual evoked potentials on several stimulus parameters was examined, and these responses were compared with pattern-reversal visual evoked potentials to verify the effectiveness of pattern movement for visual evoked potential acquisition. Horizontally moving vertical gratings were generated on a television screen. The typical movement-onset reactions were characterized by a single marked negative peak, with a peak time between 140 and 200 ms. In all subjects a stimulus duration of 100 ms was sufficient for acquisition of movement-onset-related visual evoked potentials; in some cases 20 ms sufficed. The higher velocity (5.6 degrees/s) produced higher amplitudes of movement-onset visual evoked potentials than did the lower velocity (2.8 degrees/s). In 80% of subjects, the more distinct reactions were found in leads from lateral occipital areas (in 60% from the right hemisphere), with no correlation with handedness. Unlike pattern-reversal visual evoked potentials, the movement-onset responses tended to be larger to extramacular stimulation (annular target of 5 degrees-9 degrees) than to macular stimulation (circular target of 5 degrees diameter).
Rossion, Bruno; Torfs, Katrien; Jacques, Corentin; Liu-Shuang, Joan
2015-01-16
We designed a fast periodic visual stimulation approach to identify an objective signature of face categorization incorporating both visual discrimination (from nonface objects) and generalization (across widely variable face exemplars). Scalp electroencephalographic (EEG) data were recorded in 12 human observers viewing natural images of objects at a rapid frequency of 5.88 images/s for 60 s. Natural images of faces were interleaved every five stimuli, i.e., at 1.18 Hz (5.88/5). Face categorization was indexed by a high signal-to-noise ratio response, specifically at an oddball face stimulation frequency of 1.18 Hz and its harmonics. This face-selective periodic EEG response was highly significant for every participant, even for a single 60-s sequence, and was generally localized over the right occipitotemporal cortex. The periodicity constraint and the large selection of stimuli ensured that this selective response to natural face images was free of low-level visual confounds, as confirmed by the absence of any oddball response for phase-scrambled stimuli. Without any subtraction procedure, time-domain analysis revealed a sequence of differential face-selective EEG components between 120 and 400 ms after oddball face image onset, progressing from medial occipital (P1-faces) to occipitotemporal (N1-faces) and anterior temporal (P2-faces) regions. Overall, this fast periodic visual stimulation approach provides a direct signature of natural face categorization and opens an avenue for efficiently measuring categorization responses of complex visual stimuli in the human brain. © 2015 ARVO.
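The frequency-tagging analysis behind this "high signal-to-noise ratio response at 1.18 Hz and its harmonics" can be sketched on a synthetic signal; the sampling rate, response amplitudes, and noise level below are assumptions for illustration, not parameters from the study:

```python
import numpy as np

fs = 512.0              # assumed sampling rate in Hz (not given in the abstract)
dur = 60.0              # one 60-s stimulation sequence, as in the study
base_f, odd_f = 5.88, 1.18   # base and oddball (face) frequencies from the study
t = np.arange(0, dur, 1 / fs)

# Synthetic EEG-like trace: small responses at both tagged frequencies + noise
rng = np.random.default_rng(0)
sig = (0.5 * np.sin(2 * np.pi * base_f * t)
       + 0.3 * np.sin(2 * np.pi * odd_f * t)
       + rng.normal(0.0, 1.0, t.size))

amp = np.abs(np.fft.rfft(sig)) / t.size        # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)        # bin spacing = 1/60 Hz

def snr_at(f, n_neigh=20, skip=2):
    """Amplitude at the bin nearest f, divided by the mean amplitude of
    surrounding bins (skipping immediate neighbours to limit leakage)."""
    i = int(np.argmin(np.abs(freqs - f)))
    neigh = np.r_[amp[i - skip - n_neigh:i - skip],
                  amp[i + skip + 1:i + skip + 1 + n_neigh]]
    return amp[i] / neigh.mean()

# Tagged frequencies stand out clearly from the noise floor (SNR >> 1)
print(snr_at(odd_f) > 3, snr_at(base_f) > 3)
```

The long recording gives very narrow frequency bins (1/60 Hz), which is what lets a periodic response concentrate in a single bin and rise far above the broadband noise.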
Kubanek, J; Wang, C; Snyder, L H
2013-11-01
We often look at and sometimes reach for visible targets. Looking at a target is fast and relatively easy. By comparison, reaching for an object is slower and is associated with a larger cost. We hypothesized that, as a result of these differences, abrupt visual onsets may drive the circuits involved in saccade planning more directly and with less intermediate regulation than the circuits involved in reach planning. To test this hypothesis, we recorded discharge activity of neurons in the parietal oculomotor system (area LIP) and in the parietal somatomotor system (area PRR) while monkeys performed a visually guided movement task and a choice task. We found that in the visually guided movement task LIP neurons show a prominent transient response to target onset. PRR neurons also show a transient response, although this response is reduced in amplitude, is delayed, and has a slower rise time compared with LIP. A more striking difference is observed in the choice task. The transient response of PRR neurons is almost completely abolished and replaced with a slow buildup of activity, while the LIP response is merely delayed and reduced in amplitude. Our findings suggest that the oculomotor system is more closely and obligatorily coupled to the visual system, whereas the somatomotor system operates in a more discriminating manner.
Interactive visualization of vegetation dynamics
Reed, B.C.; Swets, D.; Bard, L.; Brown, J.; Rowland, James
2001-01-01
Satellite imagery provides a mechanism for observing seasonal dynamics of the landscape that have implications for near real-time monitoring of agriculture, forest, and range resources. This study illustrates a technique for visualizing timely information on key events during the growing season (e.g., onset, peak, duration, and end of growing season), as well as the status of the current growing season with respect to the recent historical average. Using time-series analysis of normalized difference vegetation index (NDVI) data from the advanced very high resolution radiometer (AVHRR) satellite sensor, seasonal dynamics can be derived. We have developed a set of Java-based visualization and analysis tools to make comparisons between the seasonal dynamics of the current year with those from the past twelve years. In addition, the visualization tools allow the user to query underlying databases such as land cover or administrative boundaries to analyze the seasonal dynamics of areas of their own interest. The Java-based tools (data exploration and visualization analysis or DEVA) use a Web-based client-server model for processing the data. The resulting visualization and analysis, available via the Internet, is of value to those responsible for land management decisions, resource allocation, and at-risk population targeting.
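The abstract does not spell out how onset, peak, duration, and end of season are derived from the NDVI time series (published phenology work behind such tools uses more robust methods, e.g. delayed moving averages); a deliberately simplified threshold-based sketch, with a hypothetical NDVI trajectory:

```python
def season_metrics(ndvi, threshold=0.4):
    """Toy seasonal metrics from one year of NDVI composites.
    Indices are compositing periods (e.g., biweekly AVHRR composites).
    Real phenology algorithms are more robust to noise and multiple
    greenup cycles; this is only a sketch."""
    above = [i for i, v in enumerate(ndvi) if v >= threshold]
    if not above:
        return None                     # no growing season detected
    onset, end = above[0], above[-1]
    peak = max(range(len(ndvi)), key=lambda i: ndvi[i])
    return {"onset": onset, "peak": peak, "end": end,
            "duration": end - onset + 1}

# Hypothetical single-season NDVI trajectory (one value per period)
ndvi = [0.2, 0.25, 0.3, 0.45, 0.6, 0.7, 0.65, 0.5, 0.35, 0.3, 0.25, 0.2]
print(season_metrics(ndvi))
# {'onset': 3, 'peak': 5, 'end': 7, 'duration': 5}
```

Comparing such metrics for the current year against a multi-year average, as the DEVA tools do, then amounts to differencing the per-period values or the derived onset/peak/end indices.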
A unique role of endogenous visual-spatial attention in rapid processing of multiple targets
Guzman, Emmanuel; Grabowecky, Marcia; Palafox, German; Suzuki, Satoru
2012-01-01
Visual spatial attention can be exogenously captured by a salient stimulus or can be endogenously allocated by voluntary effort. Whether these two attention modes serve distinctive functions is debated, but for processing of single targets the literature suggests superiority of exogenous attention (it is faster acting and serves more functions). We report that endogenous attention uniquely contributes to processing of multiple targets. For speeded visual discrimination, response times are faster for multiple redundant targets than for single targets due to probability summation and/or signal integration. This redundancy gain was unaffected when attention was exogenously diverted from the targets, but was completely eliminated when attention was endogenously diverted. This was not due to weaker manipulation of exogenous attention because our exogenous and endogenous cues similarly affected overall response times. Thus, whereas exogenous attention is superior for processing single targets, endogenous attention plays a unique role in allocating resources crucial for rapid concurrent processing of multiple targets. PMID:21517209
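The redundancy gain attributed to probability summation can be illustrated with a race-model simulation: if two independent detection processes race, the faster one determines the redundant-target RT, so mean RT drops even without signal integration. The distribution parameters below are hypothetical:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def sample_rt():
    """Draw one hypothetical single-target detection time (ms)."""
    return random.gauss(400, 50)

n = 10_000
single = [sample_rt() for _ in range(n)]
# Race model / probability summation: with redundant targets, the faster
# of two independent detection processes triggers the response.
redundant = [min(sample_rt(), sample_rt()) for _ in range(n)]

gain = statistics.mean(single) - statistics.mean(redundant)
print(gain > 0)  # statistical facilitation alone yields a redundancy gain
```

The finding in the abstract, that endogenously diverting attention eliminated the gain, suggests the resources this statistical facilitation depends on can be withdrawn, which a pure race between fixed processes would not predict.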
Visual processing in the central bee brain.
Paulk, Angelique C; Dacks, Andrew M; Phillips-Portillo, James; Fellous, Jean-Marc; Gronenberg, Wulfila
2009-08-12
Visual scenes comprise enormous amounts of information from which nervous systems extract behaviorally relevant cues. In most model systems, little is known about the transformation of visual information as it occurs along visual pathways. We examined how visual information is transformed physiologically as it is communicated from the eye to higher-order brain centers using bumblebees, which are known for their visual capabilities. We recorded intracellularly in vivo from 30 neurons in the central bumblebee brain (the lateral protocerebrum) and compared these neurons to 132 neurons from more distal areas along the visual pathway, namely the medulla and the lobula. In these three brain regions (medulla, lobula, and central brain), we examined correlations between the neurons' branching patterns and their responses primarily to color, but also to motion stimuli. Visual neurons projecting to the anterior central brain were generally color sensitive, while neurons projecting to the posterior central brain were predominantly motion sensitive. The temporal response properties differed significantly between these areas, with an increase in spike time precision across trials and a decrease in average reliable spiking as visual information processing progressed from the periphery to the central brain. These data suggest that neurons along the visual pathway to the central brain not only are segregated with regard to the physical features of the stimuli (e.g., color and motion), but also differ in the way they encode stimuli, possibly to allow for efficient parallel processing to occur.
Cross-modal links among vision, audition, and touch in complex environments.
Ferris, Thomas K; Sarter, Nadine B
2008-02-01
This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets, but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and for each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.
Marple-Horvat, D E; Criado, J M; Armstrong, D M
1998-01-01
The discharge patterns of 166 lateral cerebellar neurones were studied in cats at rest and during visually guided stepping on a horizontal circular ladder. A hundred and twelve cells were tested against one or both of two visual stimuli: a brief full-field flash of light delivered during eating or rest, and a rung which moved up as the cat approached. Forty-five cells (40%) gave a short-latency response to one or both of these stimuli. These visually responsive neurones were found in hemispheral cortex (rather than paravermal cortex) and in the lateral cerebellar nucleus (rather than nucleus interpositus). Thirty-seven cells (of 103 tested, 36%) responded to flash. The cortical visual response (mean onset latency 38 ms) was usually an increase in Purkinje cell discharge rate, of around 50 impulses s⁻¹, representing 1 or 2 additional spikes per trial (1.6 on average). The nuclear response to flash (mean onset latency 27 ms) was usually an increased discharge rate which was shorter-lived and converted rapidly to a depression of discharge or a return to control levels, so that there were on average only 0.6 additional spikes per trial. A straightforward explanation of the difference between the cortical and nuclear responses is that the increased inhibitory Purkinje cell output cuts short the nuclear response. A higher proportion of cells responded to rung movement: sixteen of twenty-five tested (64%). Again most responded with increased discharge, which had a longer latency than the flash response (first change in dentate output ca. 60 ms after the start of movement) and a longer duration. Peak frequency changes were twice the size of those in response to flash, at 100 impulses s⁻¹ on average, and additional spikes per trial were correspondingly 3-4 times higher.
Both cortical and nuclear responses were context dependent, being larger when the rung moved while the cat was closer rather than further away. A quarter of cells (20 of 84 tested, 24%) modulated their activity in advance of saccades, increasing their discharge rate. Four-fifths of these were non-reciprocally directionally selective. Saccade-related neurones were usually susceptible to other influences, i.e. their activity was not wholly explicable in terms of saccade parameters. Substantial numbers of visually responsive neurones also discharged in relation to stepping movements, while other visually responsive neurones discharged in advance of saccadic eye movements. More than half the cells tested were active in relation both to eye movements and to stepping movements. These combinations of properties qualify even individual cerebellar neurones to participate in the co-ordination of visually guided eye and limb movements. PMID:9490874
Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".
Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David
2013-01-23
Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.
ERIC Educational Resources Information Center
Branoff, Ted
1998-01-01
Reports on a study to determine whether the presence of coordinate axes in a test of spatial-visualization ability affects scores and response times on a mental-rotations task for students enrolled in undergraduate introductory graphic communications classes. Based on Paivio's dual-coding theory. Contains 36 references. (DDR)
Visual resource inventory and Imnaha Valley study: Hells Canyon National Recreation Area
David H. Blau; Michael C. Bowie; Frank Hunsaker
1979-01-01
Hells Canyon National Recreation Area was established by an Act of Congress in December 1975. At that time, the U.S. Forest Service, which administers most of the land included, was given the responsibility of developing a Comprehensive Management Plan for the NRA within five years. In order to minimize future visual degradation, the Forest Service planning team for...
Crossmodal Congruency Benefits of Tactile and Visual Signalling
2013-11-12
We conducted an experiment in which tactile messages were created based on five common military arm and hand signals. We compared response times and accuracy rates of novice individuals responding to visual and tactile representations of these messages, which were ... modal information format seemed to produce faster and more accurate performance. The question of learning complex tactile communication signals...
The impact of visual gaze direction on auditory object tracking.
Pomper, Ulrich; Chait, Maria
2017-07-05
Subjective experience suggests that we are able to direct our auditory attention independently of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect on response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates the underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
Basic quantitative assessment of visual performance in patients with very low vision.
Bach, Michael; Wilke, Michaela; Wilhelm, Barbara; Zrenner, Eberhart; Wilke, Robert
2010-02-01
A variety of approaches to developing visual prostheses are being pursued: subretinal, epiretinal, via the optic nerve, or via the visual cortex. This report presents a method of comparing their efficacy at genuinely improving visual function, starting at no light perception (NLP). A test battery (a computer program, Basic Assessment of Light and Motion [BaLM]) was developed in four basic visual dimensions: (1) light perception (light/no light), with an unstructured large-field stimulus; (2) temporal resolution, with single versus double flash discrimination; (3) localization of light, where a wedge extends from the center into four possible directions; and (4) motion, with a coarse pattern moving in one of four directions. Two- or four-alternative, forced-choice paradigms were used. The participants' responses were self-paced and delivered with a keypad. The feasibility of the BaLM was tested in 73 eyes of 51 patients with low vision. The light and time test modules discriminated between NLP and light perception (LP). The localization and motion modules showed no significant response for NLP but discriminated between LP and hand movement (HM). All four modules reached their ceilings in the acuity categories higher than HM. BaLM results systematically differed between the very-low-acuity categories NLP, LP, and HM. Light and time yielded similar results, as did localization and motion; still, for assessing the visual prostheses with differing temporal characteristics, they are not redundant. The results suggest that this simple test battery provides a quantitative assessment of visual function in the very-low-vision range from NLP to HM.
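For forced-choice modules like these, deciding whether a patient performs above chance reduces to a one-sided binomial test against the guessing rate. A small sketch (the trial count and alpha level are illustrative assumptions, not BaLM defaults):

```python
from math import comb

def binomial_p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): probability of getting k or
    more correct by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical module: 4-alternative forced choice (chance p = 0.25),
# 20 trials, one-sided alpha = 0.05.
n, p = 20, 0.25
threshold = next(k for k in range(n + 1)
                 if binomial_p_at_least(k, n, p) < 0.05)
print(threshold)  # 9: nine or more correct indicates above-chance vision
```

The same calculation with p = 0.5 covers the two-alternative modules; forced-choice designs matter here precisely because they make "no significant response" a well-defined statistical outcome rather than a subjective report.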
Effect of chronic caffeine intake on choice reaction time, mood, and visual vigilance.
Judelson, Daniel A; Armstrong, Lawrence E; Sökmen, Bülent; Roti, Melissa W; Casa, Douglas J; Kellogg, Mark D
2005-08-07
The stimulatory effects of acute caffeine intake on choice reaction time, mood state, and visual vigilance are well established. Little research exists, however, on the effects of chronic caffeine ingestion on psychomotor tasks. Therefore, the purpose of this study was to evaluate the effects of 5 days of controlled caffeine intake on cognitive and psychomotor performance. Three groups of 20 healthy males (age = 22 ± 3 years, mass = 75.4 ± 7.9 kg, body fat percentage = 11.2 ± 5.1%) twice completed a battery of cognitive and psychomotor tasks: after 6 days of 3 mg·kg⁻¹·day⁻¹ caffeine equilibration (Day 6), and after 5 days of experimental (0 [G0], 3 [G3], or 6 [G6] mg·kg⁻¹·day⁻¹) caffeine intake (Day 11). Groups were randomized and stratified for age, mass, and body composition; all procedures were double-blind. Cognitive analyses involved a visual four-choice reaction time test, a mood state questionnaire, and a visual vigilance task. Experimental chronic caffeine intake did not significantly alter the number of correct responses or the mean latency of response for either the four-choice reaction time or the visual vigilance tasks. The Vigor-Activity subset of the mood state questionnaire was significantly greater in G3 than in G0 or G6 on Day 11. All other mood constructs were unaffected by caffeine intake. In conclusion, few cognitive and psychomotor differences existed after 5 days of controlled caffeine ingestion between subjects consuming 0, 3, or 6 mg·kg⁻¹·day⁻¹ of caffeine, suggesting that chronic caffeine intake (1) has few perceptible effects on cognitive and psychomotor well-being and (2) may lead to a tolerance to some aspects of caffeine's acute effects.
Cross-Villasana, Fernando; Finke, Kathrin; Hennig-Fast, Kristina; Kilian, Beate; Wiegand, Iris; Müller, Hermann Joseph; Möller, Hans-Jürgen; Töllner, Thomas
2015-07-15
Adults with attention-deficit/hyperactivity disorder (ADHD) exhibit slowed reaction times (RTs) in various attention tasks. The exact origins of this slowing, however, have not been established. Potential candidates are early sensory processes mediating the deployment of focal attention, stimulus-response translation processes deciding upon the appropriate motor response, and motor processes generating the response. We combined mental chronometry (RT) measures of adult ADHD (n = 15) and healthy control (n = 15) participants with their lateralized event-related potentials during the performance of a visual search task to differentiate potential sources of slowing at separable levels of processing: the posterior contralateral negativity (PCN) was used to index focal-attentional selection times, while the lateralized readiness potentials synchronized to stimulus and response events were used to index the times taken for response selection and production, respectively. To assess the clinical relevance of the event-related potentials, a correlation analysis between neural measures and subjective current and retrospective ADHD symptom ratings was performed. ADHD patients exhibited slower RTs than control participants, accompanied by prolonged latencies of the PCN and of the stimulus-locked, but not the response-locked, lateralized readiness potential. Moreover, PCN timing was positively correlated with ADHD symptom ratings. The behavioral RT slowing of adult ADHD patients was based on a summation of internal processing delays arising at the perceptual and response selection stages; motor response production, by contrast, was not impaired. The correlation between PCN times and ADHD symptom ratings suggests that this brain signal may serve as a potential candidate for a neurocognitive endophenotype of ADHD. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Aspects of Motor Performance and Preacademic Learning.
ERIC Educational Resources Information Center
Feder, Katya; Kerr, Robert
1996-01-01
The Miller Assessment for Preschoolers (MAP) and a number/counting test were given to 50 4- and 5-year-olds. Low performance on counting was related to significantly slower average response time, overshoot movement time, and reaction time, indicating perceptual-motor difficulty. Low MAP scores indicated difficulty processing visual spatial…
Pirhofer-Walzl, Karin; Warrant, Eric; Barth, Friedrich G
2007-10-01
The photoreceptor cells of the nocturnal spider Cupiennius salei were investigated by intracellular electrophysiology. (1) The responses of photoreceptor cells of posterior median (PM) and anterior median (AM) eyes to short (2 ms) light pulses showed long integration times in the dark-adapted and shorter integration times in the light-adapted state. (2) At very low light intensities, the photoreceptors responded to single photons with discrete potentials, called bumps, of high amplitude (2-20 mV). When measured in profoundly dark-adapted photoreceptor cells of the PM eyes, these bumps showed an integration time of 128 ± 35 ms (n = 7), whereas in dark-adapted photoreceptor cells of the AM eyes the integration time was 84 ± 13 ms (n = 8), indicating that the AM eyes are intrinsically faster than the PM eyes. (3) Long integration times, which improve visual reliability in dim light, and large responses to single photons in the dark-adapted state contribute to a high visual sensitivity in Cupiennius at night. This conclusion is underlined by a calculation of sensitivity that accounts for both anatomical and physiological characteristics of the eye.
Saar-Ashkenazy, Rotem; Shalev, Hadar; Kanthak, Magdalena K; Guez, Jonathan; Friedman, Alon; Cohen, Jonathan E
2015-08-30
Patients with posttraumatic stress disorder (PTSD) display abnormal emotional processing and bias towards emotional content. Most neurophysiological studies in PTSD found higher amplitudes of event-related potentials (ERPs) in response to trauma-related visual content. Here we aimed to characterize brain electrical activity in PTSD subjects in response to non-trauma-related emotion-laden pictures (positive, neutral and negative). A combined behavioral-ERP study was conducted in 14 severe PTSD patients and 14 controls. Response time in PTSD patients was slower compared with that in controls, irrespective of emotional valence. In both PTSD patients and controls, response time to negative pictures was slower compared with that to neutral or positive pictures. Upon ranking, both control and PTSD subjects similarly discriminated between pictures with different emotional valences. ERP analysis revealed three distinctive components (at ~300, ~600 and ~1000 ms post-stimulus onset) for emotional valence in control subjects. In contrast, PTSD patients displayed a similar brain response across all emotional categories, resembling the response of controls to negative stimuli. We interpret these findings as a brain-circuit response tendency towards negative overgeneralization in PTSD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Developmental lead exposure causes startle response deficits in zebrafish.
Rice, Clinton; Ghorai, Jugal K; Zalewski, Kathryn; Weber, Daniel N
2011-10-01
Lead (Pb²⁺) exposure continues to be an important concern for fish populations. Research is required to assess the long-term behavioral effects of low-level concentrations of Pb²⁺ and the physiological mechanisms that control those behaviors. Newly fertilized zebrafish embryos (<2 h post fertilization; hpf) were exposed to one of three concentrations of lead (as PbCl₂): 0, 10, or 30 nM until 24 hpf. (1) Response to a mechanosensory stimulus: Individual larvae (168 hpf) were tested for response to a directional, mechanical stimulus. The tap frequency was adjusted to either 1 or 4 taps/s. Startle response was recorded at 1000 fps. Larvae responded in a concentration-dependent pattern for latency to reaction, maximum turn velocity, time to reach Vmax, and escape time. With increasing exposure concentrations, a larger number of larvae failed to respond to even the initial tap and, for those that did respond, ceased responding earlier than control larvae. These differences were more pronounced at a frequency of 4 taps/s. (2) Response to a visual stimulus: Fish, exposed as embryos (2-24 hpf) to Pb²⁺ (0-10 μM), were tested as adults under low light conditions (≈60 μW/m²) for visual responses to a rotating black bar. Visual responses were significantly degraded at Pb²⁺ concentrations of 30 nM. These data suggest that zebrafish are viable models for short- and long-term sensorimotor deficits induced by acute, low-level developmental Pb²⁺ exposures. Copyright © 2011 Elsevier B.V. All rights reserved.
Accessory stimulus modulates executive function during stepping task
Watanabe, Tatsunori; Koyama, Soichiro; Tanabe, Shigeo
2015-01-01
When multiple sensory modalities are simultaneously presented, reaction time can be reduced while interference increases. The purpose of this research was to examine the effects of task-irrelevant acoustic accessory stimuli simultaneously presented with visual imperative stimuli on executive function during stepping. Executive functions were assessed by analyzing temporal events and errors in the initial weight transfer of the postural responses prior to a step (anticipatory postural adjustment errors). Eleven healthy young adults stepped forward in response to a visual stimulus. We applied a choice reaction time task and the Simon task, which consisted of congruent and incongruent conditions. Accessory stimuli were randomly presented with the visual stimuli. Compared with trials without accessory stimuli, the anticipatory postural adjustment error rates were higher in trials with accessory stimuli in the incongruent condition, and the reaction times were shorter in trials with accessory stimuli in all the task conditions. Analyses after division of trials according to whether an anticipatory postural adjustment error occurred or not revealed that the reaction times of trials with anticipatory postural adjustment errors were reduced more than those of trials without such errors in the incongruent condition. These results suggest that accessory stimuli modulate the initial motor programming of stepping by lowering the decision threshold and that, exclusively under spatial incompatibility, they facilitate automatic response activation. The present findings advance the knowledge of intersensory judgment processes during stepping and may aid in the development of intervention and evaluation tools for individuals at risk of falls. PMID:25925321
Stimulus onset predictability modulates proactive action control in a Go/No-go task
Berchicci, Marika; Lucci, Giuliana; Spinelli, Donatella; Di Russo, Francesco
2015-01-01
The aim of the study was to evaluate whether the presence/absence of visual cues specifying the onset of an upcoming, action-related stimulus modulates pre-stimulus brain activity associated with the proactive control of goal-directed actions. To this aim we asked 12 subjects to perform an equal-probability Go/No-go task with four stimulus configurations in two conditions: (1) uncued, i.e., without any external information about the timing of stimulus onset; and (2) cued, i.e., with external visual cues providing precise information about the timing of stimulus onset. During the task, both behavioral performance and event-related potentials (ERPs) were recorded. Behavioral results showed faster response times in the cued than the uncued condition, confirming existing literature. ERPs showed novel results in the proactive control stage, which started about 1 s before the motor response. We observed a slow-rising prefrontal positive activity, more pronounced in the cued than the uncued condition. Pre-stimulus activity of premotor areas was also larger in the cued than the uncued condition. In the post-stimulus period, the P3 amplitude was enhanced when the time of stimulus onset was externally driven, confirming that external cueing enhances processing of stimulus evaluation and response monitoring. Our results suggest that different pre-stimulus processes come into play in the two conditions. We hypothesize that the large prefrontal and premotor activities recorded with external visual cues index the monitoring of the external stimuli in order to finely regulate the action. PMID:25964751
Neuronal Response Gain Enhancement prior to Microsaccades.
Chen, Chih-Yang; Ignashchenkova, Alla; Thier, Peter; Hafed, Ziad M
2015-08-17
Neuronal response gain enhancement is a classic signature of the allocation of covert visual attention without eye movements. However, microsaccades continuously occur during gaze fixation. Because these tiny eye movements are preceded by motor preparatory signals well before they are triggered, it may be the case that a corollary of such signals may cause enhancement, even without attentional cueing. In six different macaque monkeys and two different brain areas previously implicated in covert visual attention (superior colliculus and frontal eye fields), we show neuronal response gain enhancement for peripheral stimuli appearing immediately before microsaccades. This enhancement occurs both during simple fixation with behaviorally irrelevant peripheral stimuli and when the stimuli are relevant for the subsequent allocation of covert visual attention. Moreover, this enhancement occurs in both purely visual neurons and visual-motor neurons, and it is replaced by suppression for stimuli appearing immediately after microsaccades. Our results suggest that there may be an obligatory link between microsaccade occurrence and peripheral selective processing, even though microsaccades can be orders of magnitude smaller than the eccentricities of peripheral stimuli. Because microsaccades occur in a repetitive manner during fixation, and because these eye movements reset neurophysiological rhythms every time they occur, our results highlight a possible mechanism through which oculomotor events may aid periodic sampling of the visual environment for the benefit of perception, even when gaze is prevented from overtly shifting. One functional consequence of such periodic sampling could be the magnification of rhythmic fluctuations of peripheral covert visual attention. Copyright © 2015 Elsevier Ltd. All rights reserved.
An experimental study of the nonlinear dynamic phenomenon known as wing rock
NASA Technical Reports Server (NTRS)
Arena, A. S., Jr.; Nelson, R. C.; Schiff, L. B.
1990-01-01
An experimental investigation into the physical phenomena associated with limit cycle wing rock on slender delta wings has been conducted. The model used was a slender flat plate delta wing with 80-deg leading edge sweep. The investigation concentrated on three main areas: motion characteristics obtained from time history plots, static and dynamic flow visualization of vortex position, and static and dynamic flow visualization of vortex breakdown. The flow visualization studies are correlated with model motion to determine the relationship between vortex position and vortex breakdown with the dynamic rolling moments. Dynamic roll moment coefficient curves reveal rate-dependent hysteresis, which drives the motion. Vortex position correlated with time and model motion show a time lag in the normal position of the upward moving wing vortex. This time lag may be the mechanism responsible for the hysteresis. Vortex breakdown is shown to have a damping effect on the motion.
Wagatsuma, Nobuhiko; Sakai, Ko
2017-01-01
Border ownership (BO) indicates which side of a contour owns a border, and it plays a fundamental role in figure-ground segregation. The majority of neurons in V2 and V4 areas of monkeys exhibit BO selectivity. A physiological study reported that the responses of BO-selective cells show a rapid transition when a presented square is flipped along its classical receptive field (CRF) so that the opposite BO is presented, whereas the transition is significantly slower when a square with a clear BO is replaced by an ambiguous edge, e.g., when the square is enlarged greatly. The rapid transition seemed to reflect the influence of feedforward processing on BO selectivity. Herein, we investigated the role of feedforward signals and cortical interactions in the time-courses of BO-selective cells by modeling a visual cortical network comprising V1, V2, and posterior parietal (PP) modules. In our computational model, the recurrent pathways among these modules gradually established the visual representation and the BO assignments. Feedforward inputs mainly determined the activities of these modules. Surrounding suppression/facilitation of early-level areas modulates the activities of V2 cells to provide BO signals. Weak feedback signals from the PP module enhanced the contrast gain extracted in V1, which underlies the attentional modulation of BO signals. Model simulations exhibited time-courses depending on the BO ambiguity, which were caused by the integration delay of V1 and V2 cells and the local inhibition therein given the difference in input stimulus. However, our model did not fully explain the crucially slow transition: the responses of BO-selective physiological cells showed persistent activation several times longer than that of our model after the replacement with the ambiguous edge. Furthermore, the time-course of BO-selective model cells replicated the attentional modulation of response time in human psychophysical experiments.
These attentional modulations for time-courses were induced by selective enhancement of early-level features due to interactions between V1 and PP. Our proposed model suggests fundamental roles of surrounding suppression/facilitation based on feedforward inputs as well as the interactions between early and parietal visual areas with respect to the ambiguity dependence of the neural dynamics in intermediate-level vision. PMID:28163688
Fox, Olivia M.; Harel, Assaf; Bennett, Kevin B.
2017-01-01
The perception of a visual stimulus is dependent not only upon local features, but also on the arrangement of those features. When stimulus features are perceptually well organized (e.g., symmetric or parallel), a global configuration with a high degree of salience emerges from the interactions between these features, often referred to as emergent features. Emergent features can be demonstrated in the Configural Superiority Effect (CSE): presenting a stimulus within an organized context relative to its presentation in a disarranged one results in better performance. Prior neuroimaging work on the perception of emergent features regards the CSE as an “all or none” phenomenon, focusing on the contrast between configural and non-configural stimuli. However, it is still not clear how emergent features are processed between these two endpoints. The current study examined the extent to which behavioral and neuroimaging markers of emergent features are responsive to the degree of configurality in visual displays. Subjects were tasked with reporting the anomalous quadrant in a visual search task while being scanned. Degree of configurality was manipulated by incrementally varying the rotational angle of low-level features within the stimulus arrays. Behaviorally, we observed faster response times with increasing levels of configurality. These behavioral changes were accompanied by increases in response magnitude across multiple visual areas in occipito-temporal cortex, primarily early visual cortex and object-selective cortex. Our findings suggest that the neural correlates of emergent features can be observed even in response to stimuli that are not fully configural, and demonstrate that configural information is already present at early stages of the visual hierarchy. PMID:28167924
Cues to Deception in an Interview Situation
ERIC Educational Resources Information Center
Harrison, Alberta A.; And Others
1978-01-01
Interviewees were secretly instructed to answer six questions honestly and six deceptively. Deceptive answers were hesitant and lengthy. Visual presence of the interviewer increased variability in verbal response time and decreased the length of response. Interviewers were able to discriminate between truth and falsehood. Increased hesitation and…
Peripheral visual response time and retinal luminance-area relations
NASA Technical Reports Server (NTRS)
Haines, R. F.
1975-01-01
Experiments were undertaken to elucidate the stimulus luminance-retinal area relationship that underlies response time (RT) behavior. Mean RT was significantly faster to stimuli imaged beyond about 70 deg of arc from the fovea when their luminance was increased by an amount equal to the foveal stimulus luminance multiplied by the cosine of the angle between the peripheral stimuli and the line of sight. This and additional data are discussed in relation to previous psychophysical data and to possible response mechanisms.
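The equating rule reported above is a simple cosine correction; the following sketch states it directly (the function name and the 10 cd/m² example value are illustrative, not from the study):

```python
import math

def equated_peripheral_luminance(foveal_luminance, eccentricity_deg):
    """Peripheral luminance that equated mean RT with a foveal stimulus:
    the foveal luminance increased by the foveal luminance multiplied by
    the cosine of the angle between the peripheral stimulus and the line
    of sight (the rule reported above)."""
    return foveal_luminance * (1.0 + math.cos(math.radians(eccentricity_deg)))

# e.g. a 10 cd/m^2 foveal stimulus viewed about 70 deg out
print(round(equated_peripheral_luminance(10.0, 70.0), 2))  # -> 13.42
```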
Reaction time in pilots at sustained acceleration of +4.5 Gz.
Truszczynski, Olaf; Wojtkowiak, Mieczyslaw; Lewkowicz, Rafal; Biernacki, Marcin P; Kowalczuk, Krzysztof
2013-08-01
Pilots flying at very high speed are exposed to the effects of prolonged accelerations while changing their flight path. The aim of this research was to assess the impact of sustained accelerations on the visual-motor response times of pilots and the acceleration tolerance level (ATL) as a measure of pilots' endurance to applied +Gz. The study involved 18 young pilots, 23-25 yr of age. The subjects' task was to quickly and accurately respond to the light stimuli presented on a light bar during exposure to acceleration at +4.5 Gz and until reaching the ATL. Simple response time (SRT) measurements were performed using a visual-motor analysis system throughout the exposures, which allowed the assessment of a pilot's ATL. The pilots' ATL ranged from 270 to 366 s (Mean = 317.7 +/- 26.15 SD). The analysis of the SRT indicated a significant effect of duration of acceleration on the visual response time. The results of the post hoc comparisons showed that SRT increased with longer durations of the same level of +Gz load and then decreased, reaching values similar to the controls. Exposure to prolonged acceleration of +4.5 Gz significantly increases SRT. There was no statistically significant difference in SRT between the pilots with "short" and "long" time exposures. A pilot's SRT during a prolonged +4.5 Gz exposure could be a reliable indicator of pilot G performance in the fast jet. Deterioration of SRT may be used to predict imminent +Gz endurance limits between pilots with widely varying endurance abilities.
Simultaneous EEG/fMRI analysis of the resonance phenomena in steady-state visual evoked responses.
Bayram, Ali; Bayraktaroglu, Zubeyir; Karahan, Esin; Erdogan, Basri; Bilgic, Basar; Ozker, Muge; Kasikci, Itir; Duru, Adil D; Ademoglu, Ahmet; Oztürk, Cengizhan; Arikan, Kemal; Tarhan, Nevzat; Demiralp, Tamer
2011-04-01
The stability of the steady-state visual evoked potentials (SSVEPs) across trials and subjects makes them a suitable tool for the investigation of the visual system. The reproducible pattern of the frequency characteristics of SSVEPs shows a global amplitude maximum around 10 Hz and additional local maxima around 20 and 40 Hz, which have been argued to represent resonant behavior of damped neuronal oscillators. Simultaneous electroencephalogram/functional magnetic resonance imaging (EEG/fMRI) measurement allows testing of the resonance hypothesis about the frequency-selective increases in SSVEP amplitudes in human subjects, because the total synaptic activity that is represented in the fMRI-Blood Oxygen Level Dependent (fMRI-BOLD) response would not increase but get synchronized at the resonance frequency. For this purpose, 40 healthy volunteers were visually stimulated with flickering light at systematically varying frequencies between 6 and 46 Hz, and the correlations between SSVEP amplitudes and the BOLD responses were computed. The SSVEP frequency characteristics of all subjects showed 3 frequency ranges with an amplitude maximum in each of them, which roughly correspond to alpha, beta and gamma bands of the EEG. The correlation maps between BOLD responses and SSVEP amplitude changes across the different stimulation frequencies within each frequency band showed no significant correlation in the alpha range, while significant correlations were obtained in the primary visual area for the beta and gamma bands. This non-linear relationship between the surface recorded SSVEP amplitudes and the BOLD responses of the visual cortex at stimulation frequencies around the alpha band supports the view that a resonance at the tuning frequency of the thalamo-cortical alpha oscillator in the visual system is responsible for the global amplitude maximum of the SSVEP around 10 Hz. 
Information gained from the SSVEP/fMRI analyses in the present study might be extrapolated to the EEG/fMRI analysis of the transient event-related potentials (ERPs) in terms of expecting more reliable and consistent correlations between EEG and fMRI responses, when the analyses are carried out on evoked or induced oscillations (spectral perturbations) in separate frequency bands instead of the time-domain ERP peaks.
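The core analysis above, correlating SSVEP amplitude with the BOLD response across stimulation frequencies within a band, reduces to a per-voxel Pearson correlation; the sketch below shows the computation with entirely hypothetical amplitude values:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

# hypothetical per-frequency values within one band (e.g. five beta-band frequencies)
ssvep_amp = [1.2, 1.8, 2.9, 2.1, 1.5]  # SSVEP amplitude at each stimulation frequency
bold_resp = [0.4, 0.7, 1.1, 0.9, 0.5]  # BOLD response of one visual-cortex voxel
print(round(pearson(ssvep_amp, bold_resp), 2))
```

A voxel whose BOLD response tracks the SSVEP amplitude across frequencies, as reported for primary visual cortex in the beta and gamma bands, yields a correlation near 1.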
Moderate perinatal thyroid hormone insufficiency alters visual system function in adult rats.
Boyes, William K; Degn, Laura; George, Barbara Jane; Gilbert, Mary E
2018-04-21
Thyroid hormone (TH) is critical for many aspects of neurodevelopment and can be disrupted by a variety of environmental contaminants. Sensory systems, including audition and vision are vulnerable to TH insufficiencies, but little data are available on visual system development at less than severe levels of TH deprivation. The goal of the current experiments was to explore dose-response relations between graded levels of TH insufficiency during development and the visual function of adult offspring. Pregnant Long Evans rats received 0 or 3 ppm (Experiment 1), or 0, 1, 2, or 3 ppm (Experiment 2) of propylthiouracil (PTU), an inhibitor of thyroid hormone synthesis, in drinking water from gestation day (GD) 6 to postnatal day (PN) 21. Treatment with PTU caused dose-related reductions of serum T4, with recovery on termination of exposure, and euthyroidism by the time of visual function testing. Tests of retinal (electroretinograms; ERGs) and visual cortex (visual evoked potentials; VEPs) function were assessed in adult offspring. Dark-adapted ERG a-waves, reflecting rod photoreceptors, were increased in amplitude by PTU. Light-adapted green flicker ERGs, reflecting M-cone photoreceptors, were reduced by PTU exposure. UV-flicker ERGs, reflecting S-cones, were not altered. Pattern-elicited VEPs were significantly reduced by 2 and 3 ppm PTU across a range of stimulus contrast values. The slope of VEP amplitude-log contrast functions was reduced by PTU, suggesting impaired visual contrast gain. Visual contrast gain primarily reflects function of visual cortex, and is responsible for adjusting sensitivity of perceptual mechanisms in response to changing visual scenes. The results indicate that moderate levels of pre- and post-natal TH insufficiency led to alterations in visual function of adult rats, including both retinal and visual cortex sites of dysfunction. Copyright © 2018. Published by Elsevier B.V.
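The contrast-gain index above, the slope of VEP amplitude against log contrast, is an ordinary least-squares slope; a minimal sketch, with the function name and example values hypothetical:

```python
import math

def contrast_gain_slope(contrasts, amplitudes):
    """Least-squares slope of response amplitude against log10 contrast,
    used here as a simple index of contrast gain."""
    x = [math.log10(c) for c in contrasts]
    n = len(x)
    mx, my = sum(x) / n, sum(amplitudes) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, amplitudes))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# hypothetical VEP amplitudes (uV) rising with contrast: slope ~2 uV per log unit
print(contrast_gain_slope([0.01, 0.1, 1.0], [1.0, 3.0, 5.0]))
```

A PTU-exposed group with a shallower amplitude-log contrast function would show a smaller slope from the same computation.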
The influence of visual and vestibular orientation cues in a clock reading task.
Davidenko, Nicolas; Cheong, Yeram; Waterman, Amanda; Smith, Jacob; Anderson, Barrett; Harmon, Sarah
2018-05-23
We investigated how performance in the real-life perceptual task of analog clock reading is influenced by the clock's orientation with respect to egocentric, gravitational, and visual-environmental reference frames. In Experiment 1, we designed a simple clock-reading task and found that observers' reaction time to correctly tell the time depends systematically on the clock's orientation. In Experiment 2, we dissociated egocentric from environmental reference frames by having participants sit upright or lie sideways while performing the task. We found that both reference frames substantially contribute to response times in this task. In Experiment 3, we placed upright or rotated participants in an upright or rotated immersive virtual environment, which allowed us to further dissociate vestibular from visual cues to the environmental reference frame. We found evidence of environmental reference frame effects only when visual and vestibular cues were aligned. We discuss the implications for the design of remote and head-mounted displays. Copyright © 2018 Elsevier Inc. All rights reserved.
Comparative case study between D3 and highcharts on lustre data visualization
NASA Astrophysics Data System (ADS)
ElTayeby, Omar; John, Dwayne; Patel, Pragnesh; Simmerman, Scott
2013-12-01
One of the challenging tasks in visual analytics is to target clustered time-series data sets, since it is important for data analysts to discover patterns changing over time while keeping their focus on particular subsets. In order to leverage humans' ability to quickly visually perceive these patterns, multivariate features should be implemented according to the attributes available. A comparative case study was therefore conducted using two JavaScript libraries to demonstrate the differences in their capabilities. A web-based application to monitor the Lustre file system for the systems administrators and the operation teams has been developed using D3 and Highcharts. Lustre file systems are responsible for managing Remote Procedure Calls (RPCs), including input/output (I/O) requests between clients and Object Storage Targets (OSTs). The objective of this application is to provide time-series visuals of these calls and storage patterns of users on Kraken, a University of Tennessee High Performance Computing (HPC) resource at Oak Ridge National Laboratory (ORNL).
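The abstract does not describe the server side, but the aggregation step such an application needs, turning raw RPC events into the per-OST time series a D3 or Highcharts line chart consumes, can be sketched as follows (the event format and function name are assumptions):

```python
from collections import defaultdict

def bucket_rpc_counts(events, bucket_seconds=60):
    """Aggregate raw (timestamp, ost_id) RPC events into per-OST time
    buckets: one sorted [(bucket_start, count), ...] series per OST,
    ready to hand to a charting library."""
    series = defaultdict(lambda: defaultdict(int))
    for ts, ost in events:
        bucket = int(ts // bucket_seconds) * bucket_seconds
        series[ost][bucket] += 1
    return {ost: sorted(buckets.items()) for ost, buckets in series.items()}

# hypothetical events: three I/O RPCs to OST0, one to OST1
events = [(5, "OST0"), (30, "OST0"), (70, "OST0"), (10, "OST1")]
print(bucket_rpc_counts(events))
```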
McFadyen, Bradford J; Cantin, Jean-François; Swaine, Bonnie; Duchesneau, Guylaine; Doyon, Julien; Dumas, Denyse; Fait, Philippe
2009-09-01
To study the effects of sensory modality of simultaneous tasks during walking with and without obstacles after moderate to severe traumatic brain injury (TBI). Group comparison study. Gait analysis laboratory within a postacute rehabilitation facility. Volunteer sample (N=18). Persons with moderate to severe TBI (n=11) (9 men, 3 women; age, 37.56+/-13.79 y) and a comparison group (n=7) of subjects without neurologic problems matched on average for body mass index and age (4 men, 3 women; age, 39.19+/-17.35 y). Not applicable. Magnitudes and variability for walking speeds, foot clearance margins (ratio of foot clearance distance to obstacle height), and response reaction times (both direct and as a relative cost because of obstacle avoidance). The TBI group had well-recovered walking speeds and a general ability to avoid obstacles. However, these subjects did show lower trail limb toe clearances (P=.003) across all conditions. Response reaction times to the Stroop tasks were longer in general for the TBI group (P=.017), and this group showed significant increases in response reaction times for the visual modality within the more challenging obstacle avoidance task that was not observed for control subjects. A measure of multitask costs related to differences in response reaction times between obstructed and unobstructed trials also only showed increased attention costs for the visual over the auditory stimuli for the TBI group (P=.002). Mobility is a complex construct, and the present results provide preliminary findings that, even after good locomotor recovery, subjects with moderate to severe TBI show residual locomotor deficits in multitasking. Furthermore, our results suggest that sensory modality is important, and greater multitask costs occur during sensory competition (ie, visual interference).
Supèr, Hans; Spekreijse, Henk; Lamme, Victor A F
2003-06-26
To look at an object, its position in the visual scene has to be localized and subsequently appropriate oculo-motor behavior needs to be initiated. This kind of behavior is largely controlled by the cortical executive system, such as the frontal eye field. In this report, we analyzed neural activity in the visual cortex in relation to oculo-motor behavior. We show that in a figure-ground detection task, the strength of late modulated activity in the primary visual cortex correlates with the saccade latency. We propose that this may indicate that the variability of reaction times in the detection of a visual stimulus is reflected in low-level visual areas as well as in high-level areas.
Mangun, G R; Buck, L A
1998-03-01
This study investigated the simple reaction time (RT) and event-related potential (ERP) correlates of biasing attention towards a location in the visual field. RTs and ERPs were recorded to stimuli flashed randomly and with equal probability to the left and right visual hemifields in the three blocked, covert attention conditions: (i) attention divided equally to left and right hemifield locations; (ii) attention biased towards the left location; or (iii) attention biased towards the right location. Attention was biased towards left or right by instructions to the subjects, and responses were required to all stimuli. Relative to the divided attention condition, RTs were significantly faster for targets occurring where more attention was allocated (benefits), and slower to targets where less attention was allocated (costs). The early P1 (100-140 msec) component over the lateral occipital scalp regions showed attentional benefits. There were no amplitude modulations of the occipital N1 (125-180 msec) component with attention. Between 200 and 500 msec latency, a late positive deflection (LPD) showed both attentional costs and benefits. The behavioral findings show that when sufficiently induced to bias attention, human observers demonstrate RT benefits as well as costs. The corresponding P1 benefits suggest that the RT benefits of spatial attention may arise as the result of modulations of visual information processing in the extrastriate visual cortex.
Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria
2016-04-01
The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the difference in their responses to these scenes was not as pronounced as that observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.
NASA Astrophysics Data System (ADS)
Ding, R.; He, T.
2017-12-01
With the increased popularity of mobile applications and services, there has been a growing demand for more advanced mobile technologies that utilize real-time Location Based Services (LBS) data to support natural hazard response efforts. Compared to traditional sources like the census bureau, which often can only provide historical and static data, an LBS service can provide more current data to drive a real-time natural hazard response system to more accurately process and assess issues such as population density in areas impacted by a hazard. However, manually preparing or preprocessing the data to suit the needs of the particular application would be time-consuming. This research aims to implement a population heatmap visual analytics system based on real-time data for natural disaster emergency management. The system comprises a three-layered architecture: data collection, data processing, and visual analysis layers. Real-time, location-based data meeting certain aggregation conditions are collected from multiple sources across the Internet, then processed and stored in a cloud-based data store. Parallel computing is utilized to provide fast and accurate access to the pre-processed population data based on criteria such as the disaster event, and to generate a location-based population heatmap as well as other types of visual digital outputs using auxiliary analysis tools. At present, a prototype system, which geographically covers the entire region of China and combines population heatmaps with data from the Earthquake Catalogs database, has been developed. Preliminary results indicate that the generation of dynamic population density heatmaps based on the prototype system has effectively supported rapid earthquake emergency rescue and evacuation efforts, as well as helping responders and decision makers to evaluate and assess earthquake damage.
Correlation analyses revealed that the aggregation and movement of people depended on various factors, including earthquake occurrence time and epicenter location. Future work will build on the success of the prototype system to improve and extend it to support the analysis of earthquakes and other types of natural hazard events.
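The aggregation step this abstract describes — binning real-time location reports into a density grid — can be sketched as follows. This is a minimal illustration assuming simple longitude/latitude point data; the function name, grid size, and normalization are hypothetical, not the system's actual API.

```python
import numpy as np

def population_heatmap(lons, lats, bounds, grid=(50, 50)):
    """Bin location-report points into a normalized density grid.

    A minimal sketch of the heatmap-aggregation step; the binning
    scheme and normalization are illustrative assumptions.
    """
    lon_min, lon_max, lat_min, lat_max = bounds
    hist, _, _ = np.histogram2d(
        lons, lats,
        bins=grid,
        range=[[lon_min, lon_max], [lat_min, lat_max]],
    )
    # Normalize to [0, 1] so the grid can be rendered as a heatmap.
    peak = hist.max()
    return hist / peak if peak > 0 else hist

# Synthetic location reports over an illustrative bounding box.
rng = np.random.default_rng(0)
lons = rng.uniform(100, 110, 500)   # synthetic longitudes
lats = rng.uniform(30, 40, 500)     # synthetic latitudes
grid = population_heatmap(lons, lats, (100, 110, 30, 40))
```

A real system would recompute such a grid continuously as new LBS reports arrive and hand it to a map renderer.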
The Onset and Time Course of Semantic Priming during Rapid Recognition of Visual Words
Hoedemaker, Renske S.; Gordon, Peter C.
2016-01-01
In two experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (Ocular Lexical Decision Task), participants performed a lexical decision task using eye-movement responses on a sequence of four words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a meta-linguistic judgment. For both tasks, survival analyses showed that the earliest-observable effect (Divergence Point or DP) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective rather than a prospective priming mechanism and are consistent with compound-cue models of semantic priming. PMID:28230394
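The ex-Gaussian fits mentioned above model each reaction time as the sum of a Gaussian component (mu, sigma) and an exponential tail (tau). A minimal sketch of such a fit using SciPy's `exponnorm`, which parameterizes the distribution as K = tau/sigma; the parameter values are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

# Simulate reaction times as Gaussian (mu, sigma) plus an
# exponential tail (tau) -- the standard ex-Gaussian model.
rng = np.random.default_rng(1)
mu, sigma, tau = 500.0, 50.0, 100.0   # illustrative values, in ms
rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# scipy's exponnorm uses K = tau / sigma, loc = mu, scale = sigma.
K, loc, scale = stats.exponnorm.fit(rts)
tau_hat = K * scale  # recovered exponential component
```

Shifts in `loc` versus growth in `tau_hat` are what let such analyses distinguish uniform speedups from effects concentrated in the slow tail of the RT distribution.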
The onset and time course of semantic priming during rapid recognition of visual words.
Hoedemaker, Renske S; Gordon, Peter C
2017-05-01
In 2 experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (ocular lexical decision task), participants performed a lexical decision task using eye movement responses on a sequence of 4 words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a metalinguistic judgment. For both tasks, survival analyses showed that the earliest observable effect (divergence point [DP]) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective, rather than a prospective, priming mechanism and are consistent with compound-cue models of semantic priming. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Childhood blindness: a new form for recording causes of visual loss in children.
Gilbert, C.; Foster, A.; Négrel, A. D.; Thylefors, B.
1993-01-01
The new standardized form for recording the causes of visual loss in children is accompanied by coding instructions and by a database for statistical analysis. The aim is to record the causes of childhood visual loss, with an emphasis on preventable and treatable causes, so that appropriate control measures can be planned. With this standardized methodology, it will be possible to monitor the changing patterns of childhood blindness over a period of time in response to changes in health care services, specific interventions, and socioeconomic development. PMID:8261552
The Comparison of Visual Working Memory Representations with Perceptual Inputs
Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew
2008-01-01
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755
Hogarth, Lee; Dickinson, Anthony; Duka, Theodora
2003-08-01
Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, the skin conductance response to the S+ and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and to elicit instrumental tobacco-seeking behaviour.
Visual Processing in Rapid-Chase Systems: Image Processing, Attention, and Awareness
Schmidt, Thomas; Haberkamp, Anke; Veltkamp, G. Marina; Weber, Andreas; Seydell-Greenwald, Anna; Schmidt, Filipp
2011-01-01
Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed toward target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it was darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. This way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that “fast” visuomotor measures predominantly driven by feedforward processing should supplement “slow” psychophysical measures predominantly based on visual awareness. 
PMID:21811484
Laminar circuit organization and response modulation in mouse visual cortex
Olivas, Nicholas D.; Quintanar-Zilinskas, Victor; Nenadic, Zoran; Xu, Xiangmin
2012-01-01
The mouse has become an increasingly important animal model for visual system studies, but few studies have investigated local functional circuit organization of mouse visual cortex. Here we used our newly developed mapping technique combining laser scanning photostimulation (LSPS) with fast voltage-sensitive dye (VSD) imaging to examine the spatial organization and temporal dynamics of laminar circuit responses in living slice preparations of mouse primary visual cortex (V1). During experiments, LSPS using caged glutamate provided spatially restricted neuronal activation in a specific cortical layer, and evoked responses from the stimulated layer to its functionally connected regions were detected by VSD imaging. In this study, we first provided a detailed analysis of spatiotemporal activation patterns at specific V1 laminar locations and measured local circuit connectivity. Then we examined the role of cortical inhibition in the propagation of evoked cortical responses by comparing circuit activity patterns in control and in the presence of GABAa receptor antagonists. We found that GABAergic inhibition was critical in restricting layer-specific excitatory activity spread and maintaining topographical projections. In addition, we investigated how AMPA and NMDA receptors influenced cortical responses and found that blocking AMPA receptors abolished interlaminar functional projections, and the NMDA receptor activity was important in controlling visual cortical circuit excitability and modulating activity propagation. The NMDA receptor antagonist reduced neuronal population activity in time-dependent and laminar-specific manners. Finally, we used the quantitative information derived from the mapping experiments and presented computational modeling analysis of V1 circuit organization. Taken together, the present study has provided important new information about mouse V1 circuit organization and response modulation. PMID:23060751
Posse, Stefan; Ackley, Elena; Mutihac, Radu; Rick, Jochen; Shane, Matthew; Murray-Krezan, Cristina; Zaitsev, Maxim; Speck, Oliver
2012-01-01
In this study, a new approach to high-speed fMRI using multi-slab echo-volumar imaging (EVI) is developed that minimizes geometrical image distortion and spatial blurring, and enables nonaliased sampling of physiological signal fluctuation to increase BOLD sensitivity compared to conventional echo-planar imaging (EPI). Real-time fMRI using whole brain 4-slab EVI with 286 ms temporal resolution (4 mm isotropic voxel size) and partial brain 2-slab EVI with 136 ms temporal resolution (4×4×6 mm3 voxel size) was performed on a clinical 3 Tesla MRI scanner equipped with 12-channel head coil. Four-slab EVI of visual and motor tasks significantly increased mean (visual: 96%, motor: 66%) and maximum t-score (visual: 263%, motor: 124%) and mean (visual: 59%, motor: 131%) and maximum (visual: 29%, motor: 67%) BOLD signal amplitude compared with EPI. Time domain moving average filtering (2 s width) to suppress physiological noise from cardiac and respiratory fluctuations further improved mean (visual: 196%, motor: 140%) and maximum (visual: 384%, motor: 200%) t-scores and increased extents of activation (visual: 73%, motor: 70%) compared to EPI. Similar sensitivity enhancement, which is attributed to high sampling rate at only moderately reduced temporal signal-to-noise ratio (mean: − 52%) and longer sampling of the BOLD effect in the echo-time domain compared to EPI, was measured in auditory cortex. Two-slab EVI further improved temporal resolution for measuring task-related activation and enabled mapping of five major resting state networks (RSNs) in individual subjects in 5 min scans. The bilateral sensorimotor, the default mode and the occipital RSNs were detectable in time frames as short as 75 s. In conclusion, the high sampling rate of real-time multi-slab EVI significantly improves sensitivity for studying the temporal dynamics of hemodynamic responses and for characterizing functional networks at high field strength in short measurement times. PMID:22398395
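The time-domain moving-average filtering described above (2 s width) amounts to convolving each voxel time series with a boxcar kernel. The sketch below illustrates that idea under stated assumptions; the function and the synthetic time series are illustrative, not the authors' implementation.

```python
import numpy as np

def moving_average(ts, tr, width_s=2.0):
    """Smooth an fMRI voxel time series with a boxcar of the given
    width (seconds). `tr` is the volume sampling interval in seconds;
    the window is rounded to an odd number of samples to keep the
    kernel symmetric. A sketch of a 2 s moving-average filter, not
    the authors' exact implementation."""
    n = max(1, int(round(width_s / tr)))
    if n % 2 == 0:
        n += 1                      # keep the kernel symmetric
    kernel = np.ones(n) / n
    return np.convolve(ts, kernel, mode="same")

# A slow sinusoid plus noise: fast fluctuations (e.g., cardiac and
# respiratory signal sampled without aliasing) are attenuated while
# the slow hemodynamic signal is preserved.
tr = 0.286                          # 286 ms sampling, as in 4-slab EVI
t = np.arange(0, 60, tr)
signal = np.sin(2 * np.pi * 0.05 * t)
noisy = signal + 0.5 * np.random.default_rng(2).standard_normal(t.size)
smooth = moving_average(noisy, tr)
```

Because the 286 ms sampling is faster than cardiac and respiratory cycles, such fluctuations appear at their true frequencies and a simple low-pass filter like this can suppress them, which is what drives the reported t-score gains.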
Auditory and Visual Interhemispheric Communication in Musicians and Non-Musicians
Woelfle, Rebecca; Grahn, Jessica A.
2013-01-01
The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer. PMID:24386382
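The crossed-uncrossed difference used in this design is simple arithmetic: mean reaction time on crossed trials (stimulus and responding hand served by opposite hemispheres) minus mean reaction time on uncrossed trials. A minimal sketch with hypothetical reaction times; real CUDs are on the order of a few milliseconds.

```python
import numpy as np

def crossed_uncrossed_difference(crossed_rts, uncrossed_rts):
    """Estimate interhemispheric transfer time as the CUD:
    mean crossed RT minus mean uncrossed RT. Input values here
    are illustrative, not data from the study."""
    return float(np.mean(crossed_rts) - np.mean(uncrossed_rts))

# Hypothetical simple reaction times in ms.
uncrossed = [248, 252, 250, 249, 251]
crossed = [252, 255, 253, 254, 251]
cud = crossed_uncrossed_difference(crossed, uncrossed)  # 3 ms here
```

Comparing CUDs across modalities or groups, as the study does, then reduces to comparing these per-condition differences.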
Reliability of a computer-based system for measuring visual performance skills.
Erickson, Graham B; Citek, Karl; Cove, Michelle; Wilczek, Jennifer; Linster, Carolyn; Bjarnason, Brendon; Langemo, Nathan
2011-09-01
Athletes have demonstrated better visual abilities than nonathletes. A vision assessment for an athlete should include methods to evaluate the quality of visual performance skills in the most appropriate, accurate, and repeatable manner. This study determines the reliability of the visual performance measures assessed with a computer-based system, known as the Nike Sensory Station. One hundred twenty-five subjects (56 men, 69 women), age 18 to 30, completed Phase I of the study. Subjects attended 2 sessions, separated by at least 1 week, in which identical protocols were followed. Subjects completed the following assessments: Visual Clarity, Contrast Sensitivity, Depth Perception, Near-Far Quickness, Target Capture, Perception Span, Eye-Hand Coordination, Go/No Go, and Reaction Time. An additional 36 subjects (20 men, 16 women), age 22 to 35, completed Phase II of the study involving modifications to the equipment, instructions, and protocols from Phase I. Results show no significant change in performance over time on assessments of Visual Clarity, Contrast Sensitivity, Depth Perception, Target Capture, Perception Span, and Reaction Time. Performance did improve over time for Near-Far Quickness, Eye-Hand Coordination, and Go/No Go. The results of this study show that many of the Nike Sensory Station assessments show repeatability and no learning effect over time. The measures that did improve across sessions show an expected learning effect caused by the motor response characteristics being measured. Copyright © 2011 American Optometric Association. Published by Elsevier Inc. All rights reserved.
An evaluation of space time cube representation of spatiotemporal patterns.
Kristensson, Per Ola; Dahlbäck, Nils; Anundi, Daniel; Björnstad, Marius; Gillberg, Hanna; Haraldsson, Jonas; Mårtensson, Ingrid; Nordvall, Mathias; Ståhl, Josefine
2009-01-01
Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation resulted in on average twice as fast response times with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns.
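The space time cube mapping itself — two spatial axes forming the base and a vertical time axis — can be sketched as a normalization of (x, y, t) points into a unit cube. This is an illustrative reconstruction, not the authors' implementation; the resulting coordinates would normally be handed to a 3D renderer.

```python
import numpy as np

def to_space_time_cube(xs, ys, ts):
    """Map spatiotemporal points into a unit cube: x and y span the
    base, normalized time rises along the vertical axis. A minimal
    sketch of the representation only."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    return np.column_stack([norm(xs), norm(ys), norm(ts)])

# A short trajectory: positions sampled at increasing timestamps
# become a polyline rising through the cube.
cube = to_space_time_cube([0, 5, 10], [0, 2, 4], [0, 30, 60])
```

Displaying both dimensions of the data at once in this way is the property the experiment tests against a flat 2D baseline.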
Qadri, Muhammad A J; Leonard, Kevin; Cook, Robert G; Kelly, Debbie M
2018-02-15
Clark's nutcrackers exhibit remarkable cache recovery behavior, remembering thousands of seed locations over the winter. No direct laboratory test of their visual memory capacity, however, has yet been performed. Here, two nutcrackers were tested in an operant procedure used to measure different species' visual memory capacities. The nutcrackers were incrementally tested with an ever-expanding pool of pictorial stimuli in a two-alternative discrimination task. Each picture was randomly assigned to either a right or a left choice response, forcing the nutcrackers to memorize each picture-response association. The nutcrackers' visual memorization capacity was estimated at a little over 500 pictures, and the testing suggested effects of primacy, recency, and memory decay over time. The size of this long-term visual memory was less than the approximately 800-picture capacity established for pigeons. These results support the hypothesis that nutcrackers' spatial memory is a specialized adaptation tied to their natural history of food-caching and recovery, and not to a larger long-term, general memory capacity. Furthermore, despite millennia of separate and divergent evolution, the mechanisms of visual information retention seem to reflect common memory systems of differing capacities across the different species tested in this design.
Reduced BOLD response to periodic visual stimulation.
Parkes, Laura M; Fries, Pascal; Kerskens, Christian M; Norris, David G
2004-01-01
The blood oxygenation level-dependent (BOLD) response to entrained neuronal firing in the human visual cortex and lateral geniculate nuclei was investigated. Periodic checkerboard flashes at a range of frequencies (4-20 Hz) were used to drive the visual cortex neurons into entrained oscillatory firing. This was compared to a checkerboard flashing aperiodically, with the same average number of flashes per unit time. A magnetoencephalography (MEG) measurement was made to confirm that the periodic paradigm elicited entrainment. We found that for frequencies of 10 and 15 Hz, the periodic stimulus gave a smaller BOLD response than the aperiodic stimulus. Detailed investigation at 15 Hz showed that the aperiodic stimulus gave a similar BOLD increase regardless of the magnitude of jitter (+/-17 ms compared to +/-33 ms), indicating that flash timing must be precise to within 17 ms to maintain entrainment. It also shows that for aperiodic stimuli, the amplitude of the BOLD response ordinarily reflects the total number of flashes per unit time, irrespective of the precise spacing between them, suggesting that entrainment is the main cause of the BOLD reduction in the periodic condition. The results indicate that, during entrainment, there is a reduction in the neuronal metabolic demand. We suggest that because of the selective frequency band of this effect, it could be connected to synchronised reverberations around an internal feedback loop.
Spatiotemporal oscillatory dynamics of visual selective attention during a flanker task.
McDermott, Timothy J; Wiesman, Alex I; Proskovec, Amy L; Heinrichs-Graham, Elizabeth; Wilson, Tony W
2017-08-01
The flanker task is a test of visual selective attention that has been widely used to probe error monitoring, response conflict, and related constructs. However, to date, few studies have focused on the selective attention component of this task and imaged the underlying oscillatory dynamics serving task performance. In this study, 21 healthy adults successfully completed an arrow-based version of the Eriksen flanker task during magnetoencephalography (MEG). All MEG data were pre-processed and transformed into the time-frequency domain. Significant oscillatory brain responses were imaged using a beamforming approach, and voxel time series were extracted from the peak responses to identify the temporal dynamics. Across both congruent and incongruent flanker conditions, our results indicated robust decreases in alpha (9-12Hz) activity in medial and lateral occipital regions, bilateral parietal cortices, and cerebellar areas during task performance. In parallel, increases in theta (3-7Hz) oscillatory activity were detected in dorsal and ventral frontal regions, and the anterior cingulate. As per conditional effects, stronger alpha responses (i.e., greater desynchronization) were observed in parietal, occipital, and cerebellar cortices during incongruent relative to congruent trials, whereas the opposite pattern emerged for theta responses (i.e., synchronization) in the anterior cingulate, left dorsolateral prefrontal, and ventral prefrontal cortices. Interestingly, the peak latency of theta responses in these latter brain regions was significantly correlated with reaction time, and may partially explain the amplitude difference observed between congruent and incongruent trials. Lastly, whole-brain exploratory analyses implicated the frontal eye fields, right temporoparietal junction, and premotor cortices. 
These findings suggest that regions of both the dorsal and ventral attention networks contribute to visual selective attention processes during incongruent trials, and that such differential processes are transient and fully completed shortly after the behavioral response in most trials. Copyright © 2017 Elsevier Inc. All rights reserved.
Cue Usage in Volleyball: A Time Course Comparison of Elite, Intermediate and Novice Female Players
Vaeyens, R; Zeuwts, L; Philippaerts, R; Lenoir, M
2014-01-01
This study compared visual search strategies in adult female volleyball players of three levels. Video clips of the attack of the opponent team were presented on a large screen and participants reacted to the final pass before the spike. Reaction time, response accuracy and eye movement patterns were measured. Elite players had the highest response accuracy (97.50 ± 3.5%) compared to the intermediate (91.50 ± 4.7%) and novice players (83.50 ± 17.6%; p<0.05). Novices had a remarkably high range of reaction time but no significant differences were found in comparison to the reaction time of elite and intermediate players. In general, the three groups showed similar gaze behaviour with the apparent use of visual pivots at moments of reception and final pass. This confirms the holistic model of image perception for volleyball and suggests that expert players extract more information from parafoveal regions. PMID:25609887
Comparing visual search and eye movements in bilinguals and monolinguals
Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.
2017-01-01
Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116
NASA Technical Reports Server (NTRS)
Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)
1976-01-01
An automated visual examination apparatus for measuring visual sensitivity and mapping blind-spot location. It includes a projection system for displaying a series of visual stimuli to a patient, a response switch that enables the patient to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system thereby provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.
Kato, Shigeki; Kuramochi, Masahito; Kobayashi, Kenta; Fukabori, Ryoji; Okada, Kana; Uchigashima, Motokazu; Watanabe, Masahiko; Tsutsui, Yuji; Kobayashi, Kazuto
2011-11-23
The dorsal striatum receives converging excitatory inputs from diverse brain regions, including the cerebral cortex and the intralaminar/midline thalamic nuclei, and mediates learning processes contributing to instrumental motor actions. However, the roles of each striatal input pathway in these learning processes remain uncertain. We developed a novel strategy to target specific neural pathways and applied this strategy for studying behavioral roles of the pathway originating from the parafascicular nucleus (PF) and projecting to the dorsolateral striatum. A highly efficient retrograde gene transfer vector encoding the recombinant immunotoxin (IT) receptor was injected into the dorsolateral striatum in mice to express the receptor in neurons innervating the striatum. IT treatment into the PF of the vector-injected animals caused a selective elimination of neurons of the PF-derived thalamostriatal pathway. The elimination of this pathway impaired the response selection accuracy and delayed the motor response in the acquisition of a visual cue-dependent discrimination task. When the pathway elimination was induced after learning acquisition, it disturbed the response accuracy in the task performance with no apparent change in the response time. The elimination did not influence spontaneous locomotion, methamphetamine-induced hyperactivity, and motor skill learning that demand the function of the dorsal striatum. These results demonstrate that thalamostriatal projection derived from the PF plays essential roles in the acquisition and execution of discrimination learning in response to sensory stimulus. The temporal difference in the pathway requirement for visual discrimination suggests a stage-specific role of thalamostriatal pathway in the modulation of response time of learned motor actions.
Pavan, Andrea; Boyce, Matthew; Ghin, Filippo
2016-10-01
Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed. © The Author(s) 2016.
Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András
2017-07-01
The visual sense of dogs differs in many respects from that of humans. Unfortunately, authors do not explicitly take dog-human differences in visual perception into consideration when designing their experiments. With an image manipulation program we altered stationary images according to present knowledge about dog vision. Besides the effect of dogs' dichromatic vision, the software also simulates dogs' lower visual acuity and poorer brightness discrimination. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing or glancing to the left or right side. Half of the pictures were shown after they were altered to a setting that approximated dog vision. Participants had difficulty determining the direction of glances when the pictures were in dog-vision mode. Glances in the dog-vision setting were followed less correctly and with a slower response time than other cues. Our results are the first to show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take the differences between the perceptual abilities of dogs and humans into consideration by developing visual stimuli that are better suited to dogs' visual capabilities. Copyright © 2017 Elsevier B.V. All rights reserved.
Peripheral visual response time to colored stimuli imaged on the horizontal meridian
NASA Technical Reports Server (NTRS)
Haines, R. F.; Gross, M. M.; Nylen, D.; Dawson, L. M.
1974-01-01
Two male observers were administered a binocular visual response time task to small (45 min arc), flashed, photopic stimuli at four dominant wavelengths (632 nm red; 583 nm yellow; 526 nm green; 464 nm blue) imaged across the horizontal retinal meridian. The stimuli were imaged at 10 deg arc intervals from 80 deg left to 90 deg right of fixation. Testing followed either prior light adaptation or prior dark adaptation. Results indicated that mean response time (RT) varies with stimulus color. RT is faster to yellow than to blue and green and slowest to red. In general, mean RT was found to increase from fovea to periphery for all four colors, with the curve for red stimuli exhibiting the most rapid positive acceleration with increasing angular eccentricity from the fovea. The shape of the RT distribution across the retina was also found to depend upon the state of light or dark adaptation. The findings are related to previous RT research and are discussed in terms of optimizing the color and position of colored displays on instrument panels.
Impact of language on development of auditory-visual speech perception.
Sekiyama, Kaoru; Burnham, Denis
2008-03-01
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.
Tremblay, Emmanuel; Vannasing, Phetsamone; Roy, Marie-Sylvie; Lefebvre, Francine; Kombate, Damelan; Lassonde, Maryse; Lepore, Franco; McKerral, Michelle; Gallagher, Anne
2014-01-01
In the past decades, multiple studies have been interested in developmental patterns of the visual system in healthy infants. During the first year of life, differential maturational changes have been observed between the Magnocellular (M) and the Parvocellular (P) visual pathways. However, few studies have investigated P and M system development in infants born prematurely. The aim of the present study was to characterize P and M system maturational differences between healthy preterm and fullterm infants through a critical period of visual maturation: the first year of life. Using a cross-sectional design, high-density electroencephalogram (EEG) was recorded in 31 healthy preterm and 41 fullterm infants of 3, 6, or 12 months (corrected age for premature babies). Three visual stimulations varying in contrast and spatial frequency were presented to stimulate preferentially the M pathway, the P pathway, or both systems simultaneously during EEG recordings. Results from early visual evoked potentials in response to the stimulation that activates both systems simultaneously revealed longer N1 latencies and smaller P1 amplitudes in preterm infants compared to fullterms. Moreover, preterms showed longer N1 and P1 latencies in response to stimuli assessing the M pathway at 3 months. No differences between preterms and fullterms were found when using the preferential P system stimulation. In order to identify the cerebral generator of each visual response, distributed source analyses were computed in 12-month-old infants using LORETA. Source analysis demonstrated an activation of the parietal dorsal region in fullterm infants in response to the preferential M pathway stimulation, which was not seen in the preterms. Overall, these findings suggest that Magnocellular pathway development is affected in premature infants.
Although our VEP results suggest that premature children overcome, at least partially, the visual developmental delay with time, source analyses reveal abnormal brain activation of the Magnocellular pathway at 12 months of age. PMID:25268226
Milner, A D; Paulignan, Y; Dijkerman, H C; Michel, F; Jeannerod, M
1999-11-07
We tested a patient (A. T.) with bilateral brain damage to the parietal lobes, whose resulting 'optic ataxia' causes her to make large pointing errors when asked to locate single light emitting diodes presented in her visual field. We report here that, unlike normal individuals, A. T.'s pointing accuracy improved when she was required to wait for 5 s before responding. This counter-intuitive result is interpreted as reflecting the very brief time-scale on which visuomotor control systems in the superior parietal lobe operate. When an immediate response was required, A. T.'s damaged visuomotor system caused her to make large errors; but when a delay was required, a different, more flexible, visuospatial coding system--presumably relatively intact in her brain--came into play, resulting in much more accurate responses. The data are consistent with a dual processing theory whereby motor responses made directly to visual stimuli are guided by a dedicated system in the superior parietal and premotor cortices, while responses to remembered stimuli depend on perceptual processing and may thus crucially involve processing within the temporal neocortex.
Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’
Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David
2013-01-01
Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
The effect of sildenafil citrate (Viagra) on visual sensitivity.
Stockman, Andrew; Sharpe, Lindsay T; Tufail, Adnan; Kell, Philip D; Ripamonti, Caterina; Jeffery, Glen
2007-06-08
The erectile dysfunction medicine sildenafil citrate (Viagra) inhibits phosphodiesterase type 6 (PDE6), an essential enzyme involved in the activation and modulation of the phototransduction cascade. Although Viagra might thus be expected to impair visual performance, reports of deficits following its ingestion have so far been largely inconclusive or anecdotal. Here, we adopt tests sensitive to the slowing of the visual response likely to result from the inhibition of PDE6. We measured temporal acuity (critical fusion frequency) and modulation sensitivity in four subjects before and after the ingestion of a 100-mg dose of Viagra under conditions chosen to isolate the responses of either their short-wavelength-sensitive (S-) cone photoreceptors or their long- and middle-wavelength-sensitive (L- and M-) cones. When vision was mediated by S-cones, all subjects exhibited some statistically significant losses in sensitivity, which varied from mild to moderate. The two individuals who showed the largest S-cone sensitivity losses also showed comparable losses when their vision was mediated by the L- and M-cones. Some of the losses appear to increase with frequency, which is broadly consistent with Viagra interfering with the ability of PDE6 to shorten the time over which the visual system integrates signals as the light level increases. However, others appear to represent a roughly frequency-independent attenuation of the visual signal, which might also be consistent with Viagra lengthening the integration time (because it has the effect of increasing the effectiveness of steady background lights), but such changes are also open to other interpretations. Even for the more affected observers, however, Viagra is unlikely to impair common visual tasks, except under conditions of reduced visibility when objects are already near visual threshold.
Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.
Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J
2013-01-01
Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but that the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed with relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009).
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
The shaping of information by visual metaphors.
Ziemkiewicz, Caroline; Kosara, Robert
2008-01-01
The nature of an information visualization can be considered to lie in the visual metaphors it uses to structure information. The process of understanding a visualization therefore involves an interaction between these external visual metaphors and the user's internal knowledge representations. To investigate this claim, we conducted an experiment to test the effects of visual metaphor and verbal metaphor on the understanding of tree visualizations. Participants answered simple data comprehension questions while viewing either a treemap or a node-link diagram. Questions were worded to reflect a verbal metaphor that was either compatible or incompatible with the visualization a participant was using. The results (based on correctness and response time) suggest that the visual metaphor indeed affects how a user derives information from a visualization. Additionally, we found that the degree to which a user is affected by the metaphor is strongly correlated with the user's ability to answer task questions correctly. These findings are a first step towards illuminating how visual metaphors shape user understanding, and have significant implications for the evaluation, application, and theory of visualization.
NASA Technical Reports Server (NTRS)
Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)
1973-01-01
An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.
Reliability of VEP Recordings Using Chronically Implanted Screw Electrodes in Mice
Makowiecki, Kalina; Garrett, Andrew; Clark, Vince; Graham, Stuart L.; Rodger, Jennifer
2015-01-01
Purpose: Visual evoked potentials (VEPs) are widely used to objectively assess visual system function in animal models of ophthalmological diseases. Although use of chronically implanted electrodes is common in longitudinal VEP studies using rodent models, reliability of recordings over time has not been assessed. We compared VEPs 1 and 7 days after electrode implantation in the adult mouse. We also examined stimulus-independent changes over time, by assessing electroencephalogram (EEG) power and approximate entropy of the EEG signal. Methods: Stainless steel screws (600-μm diameter) were implanted into the skull overlying the right visual cortex and the orbitofrontal cortex of adult mice (C57Bl/6J, n = 7). Animals were reanesthetized 1 and 7 days after implantation to record VEP responses (flashed gratings) and EEG activity. Brain sections were stained for glial activation (GFAP) and cell death (TUNEL). Results: Reliability analysis, using intraclass correlation coefficients, showed that VEP recordings had high reliability within the same session, regardless of time after electrode implantation; peak latencies and approximate entropy of the EEG did not change significantly with time. However, there was poorer reliability between recordings obtained on different days, and a significant decrease in VEP amplitudes and EEG power. This amplitude decrease could be normalized by scaling to EEG power (within-subjects). Furthermore, glial activation was present at both time points but there was no evidence of cell death. Conclusions: These results indicate that VEP responses can be reliably recorded even after a relatively short recovery period, but that response peak amplitudes decrease over time. Although scaling the VEP trace to EEG power normalized this decrease, our results highlight that time-dependent cortical excitability changes are an important consideration in longitudinal VEP studies. 
Translational Relevance: This study shows changes in VEP characteristics over time in chronically implanted mice. Thus, future preclinical longitudinal studies should consider time in addition to amplitude and latency when designing and interpreting research. PMID:25938003
A visual salience map in the primate frontal eye field.
Thompson, Kirk G; Bichot, Narcisse P
2005-01-01
Models of attention and saccade target selection propose that within the brain there is a topographic map of visual salience that combines bottom-up and top-down influences to identify locations for further processing. The results of a series of experiments with monkeys performing visual search tasks have identified a population of frontal eye field (FEF) visually responsive neurons that exhibit all of the characteristics of a visual salience map. The activity of these FEF neurons is not sensitive to specific features of visual stimuli; but instead, their activity evolves over time to select the target of the search array. This selective activation reflects both the bottom-up intrinsic conspicuousness of the stimuli and the top-down knowledge and goals of the viewer. The peak response within FEF specifies the target for the overt gaze shift. However, the selective activity in FEF is not in itself a motor command because the magnitude of activation reflects the relative behavioral significance of the different stimuli in the visual scene and occurs even when no saccade is made. Identifying a visual salience map in FEF validates the theoretical concept of a salience map in many models of attention. In addition, it strengthens the emerging view that FEF is not only involved in producing overt gaze shifts, but is also important for directing covert spatial attention.
Chernyshev, Boris V; Pronko, Platon K; Stroganova, Tatiana A
2016-01-01
Detection of illusory contours (ICs) such as Kanizsa figures is known to depend primarily upon the lateral occipital complex. Yet there is no universal agreement on the role of the primary visual cortex in this process; some existing evidence hints that an early stage of the visual response in V1 may involve relative suppression to Kanizsa figures compared with controls. Iso-oriented luminance borders, which are responsible for the Kanizsa illusion, may evoke surround suppression in V1 and adjacent areas, leading to a reduction in the initial response to Kanizsa figures. We attempted to test the existence, as well as to find the localization and timing, of the early suppression effect produced by Kanizsa figures in adult nonclinical human participants. We used two sizes of visual stimuli (4.5 and 9.0°) in order to probe the effect at two different levels of eccentricity; the stimuli were presented centrally in passive viewing conditions. We recorded magnetoencephalogram, which is more sensitive than electroencephalogram to activity originating from V1 and V2 areas. We restricted our analysis to the medial occipital area and the occipital pole, and to a 40-120 ms time window after the stimulus onset. By applying the threshold-free cluster enhancement technique in combination with permutation statistics, we were able to detect the inverted IC effect: a relative suppression of the response to the Kanizsa figures compared with the control stimuli. The current finding is highly compatible with the explanation involving surround suppression evoked by iso-oriented collinear borders. The effect may be related to the principle of sparse coding, according to which V1 suppresses representations of inner parts of collinear assemblies as being informationally redundant. Such a mechanism is likely to be an important preliminary step preceding object contour detection.
Neuromorphic VLSI vision system for real-time texture segregation.
Shimonomura, Kazuhiro; Yagi, Tetsuya
2008-10-01
The visual system of the brain can perceive an external scene in real-time with extremely low power dissipation, although the response speed of an individual neuron is considerably lower than that of semiconductor devices. The neurons in the visual pathway generate their receptive fields using a parallel and hierarchical architecture. This architecture of the visual cortex is interesting and important for designing a novel perception system from an engineering perspective. The aim of this study was to develop vision-system hardware, inspired by hierarchical visual processing in V1, for real-time texture segregation. The system consists of a silicon retina, an orientation chip, and a field programmable gate array (FPGA) circuit. The silicon retina emulates the neural circuits of the vertebrate retina and exhibits a Laplacian-of-Gaussian-like receptive field. The orientation chip selectively aggregates multiple pixels of the silicon retina in order to produce Gabor-like receptive fields that are tuned to various orientations, mimicking the feed-forward model proposed by Hubel and Wiesel. The FPGA circuit receives the output of the orientation chip and computes the responses of the complex cells. Using this system, the neural images of simple cells were computed in real-time for various orientations and spatial frequencies. Using the orientation-selective outputs obtained from the multi-chip system, real-time texture segregation was conducted based on a computational model inspired by psychophysics and neurophysiology. The texture image was filtered by the two orthogonally oriented receptive fields of the multi-chip system and the filtered images were combined to segregate areas of different texture orientation with the aid of the FPGA. The present system is also useful for the investigation of the functions of higher-order cells that can be obtained by combining the simple and complex cells.
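The segregation scheme described in this abstract, filtering a texture with two orthogonally oriented Gabor-like receptive fields, computing complex-cell-like energies, and combining them, can be sketched in software. This is a minimal illustration, not the authors' silicon retina/FPGA implementation; the kernel size, wavelength, and synthetic stripe texture are illustrative assumptions.

```python
import numpy as np

def gabor_pair(theta, size=15, wavelength=6.0, sigma=3.0):
    """Even- and odd-phase Gabor patches at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    arg = 2.0 * np.pi * xr / wavelength
    return env * np.cos(arg), env * np.sin(arg)

def correlate2d_valid(image, kernel):
    """Brute-force 'valid' 2-D correlation (avoids a SciPy dependency)."""
    kh, kw = kernel.shape
    out = np.empty((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def orientation_energy(image, theta):
    """Complex-cell-like energy: summed squares of the quadrature pair."""
    even, odd = gabor_pair(theta)
    return correlate2d_valid(image, even) ** 2 + correlate2d_valid(image, odd) ** 2

def segregate(image):
    """True where vertical-grating energy dominates horizontal-grating energy."""
    return orientation_energy(image, 0.0) > orientation_energy(image, np.pi / 2)

# Synthetic texture: vertical stripes on the left half, horizontal on the right.
n = 40
stripes = np.sin(2.0 * np.pi * np.arange(n) / 6.0)
vertical = np.tile(stripes, (n, 1))   # luminance varies along x
horizontal = vertical.T               # luminance varies along y
texture = np.hstack([vertical[:, :n // 2], horizontal[:, n // 2:]])
labels = segregate(texture)           # 26 x 26 boolean orientation map
```

Away from the boundary between the two texture regions, the energy comparison labels each pixel by its dominant orientation, which is the essence of the two-filter combination step the abstract describes.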
Blood pressure measurement and display system
NASA Technical Reports Server (NTRS)
Farkas, A. J.
1972-01-01
System is described that employs solid state circuitry to transmit visual display of patient's blood pressure. Response of sphygmomanometer cuff and microphone provide input signals. Signals and their amplitudes, from turn-on time to turn-off time, are continuously fed to data transmitter which transmits to display device.
The touchscreen operant platform for assessing executive function in rats and mice
Mar, Adam C.; Horner, Alexa E.; Nilsson, Simon R.O.; Alsiö, Johan; Kent, Brianne A.; Kim, Chi Hun; Holmes, Andrew; Saksida, Lisa M.; Bussey, Timothy J.
2014-01-01
Summary: This protocol details a subset of assays developed within the touchscreen platform to measure aspects of executive function in rodents. Three main procedures are included: Extinction, measuring the rate and extent of curtailing a response that was previously, but is no longer, associated with reward; Reversal Learning, measuring the rate and extent of switching a response toward a visual stimulus that was previously not, but has become, associated with reward (and away from a visual stimulus that was previously, but is no longer, rewarded); and the 5-Choice Serial Reaction Time (5-CSRT) task, gauging the ability to selectively detect and appropriately respond to briefly presented, spatially unpredictable visual stimuli. These methods were designed to assess both complementary and overlapping constructs including selective and divided visual attention, inhibitory control, flexibility, impulsivity and compulsivity. The procedures comprise part of a wider touchscreen test battery assessing cognition in rodents with high potential for translation to human studies. PMID:24051960
Perceptual learning modifies untrained pursuit eye movements.
Szpiro, Sarit F A; Spering, Miriam; Carrasco, Marisa
2014-07-07
Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. © 2014 ARVO.
Kramer, Edgar R.
2015-01-01
Background & Aims: The brain dopaminergic (DA) system is involved in fine-tuning many behaviors, and several human diseases, such as Parkinson's disease (PD) and drug addiction, are associated with pathological alterations of the DA system. Because of its complex network integration, detailed analyses of physiological and pathophysiological conditions are only possible in a whole organism with a sophisticated tool box for visualization and functional modification. Methods & Results: Here, we have generated transgenic mice expressing the tetracycline-regulated transactivator (tTA) or the reverse tetracycline-regulated transactivator (rtTA) under control of the tyrosine hydroxylase (TH) promoter, TH-tTA (tet-OFF) and TH-rtTA (tet-ON) mice, to visualize and genetically modify DA neurons. We show their tight regulation and efficient use to overexpress proteins under the control of tet-responsive elements or to delete genes of interest with tet-responsive Cre. In combination with mice encoding tet-responsive luciferase, we visualized the DA system in living mice progressively over time. Conclusion: These experiments establish TH-tTA and TH-rtTA mice as a powerful tool to generate and monitor mouse models for DA system diseases. PMID:26291828
Fast Coding of Orientation in Primary Visual Cortex
Shriki, Oren; Kohn, Adam; Shamir, Maoz
2012-01-01
Understanding how populations of neurons encode sensory information is a major goal of systems neuroscience. Attempts to answer this question have focused on responses measured over several hundred milliseconds, a duration much longer than that frequently used by animals to make decisions about the environment. How reliably sensory information is encoded on briefer time scales, and how best to extract this information, is unknown. Although it has been proposed that neuronal response latency provides a major cue for fast decisions in the visual system, this hypothesis has not been tested systematically and in a quantitative manner. Here we use a simple ‘race to threshold’ readout mechanism to quantify the information content of spike time latency of primary visual (V1) cortical cells to stimulus orientation. We find that many V1 cells show pronounced tuning of their spike latency to stimulus orientation and that almost as much information can be extracted from spike latencies as from firing rates measured over much longer durations. To extract this information, stimulus onset must be estimated accurately. We show that the responses of cells with weak tuning of spike latency can provide a reliable onset detector. We find that spike latency information can be pooled from a large neuronal population, provided that the decision threshold is scaled linearly with the population size, yielding a processing time of the order of a few tens of milliseconds. Our results provide a novel mechanism for extracting information from neuronal populations over the very brief time scales in which behavioral judgments must sometimes be made. PMID:22719237
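The 'race to threshold' readout analysed in this abstract can be illustrated with a toy model: each orientation channel is a pool of neurons whose first-spike latencies are shortest for the pool's preferred stimulus, and the first pool whose cumulative spike count reaches a threshold (scaled linearly with pool size, as in the paper) determines the decision. All parameter values below (latency tuning slope, jitter, pool size, threshold fraction) are illustrative assumptions, not values fitted to V1 data.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_spike_latencies(stimulus_ori, preferred_ori, n_neurons):
    """Latency (ms) rises with angular distance from the preferred orientation."""
    delta = abs((stimulus_ori - preferred_ori + 90) % 180 - 90)  # 0..90 degrees
    base = 30.0 + 0.5 * delta                                    # tuned mean latency
    return base + rng.exponential(5.0, size=n_neurons)           # per-neuron jitter

def race_to_threshold(stimulus_ori, orientations=(0, 45, 90, 135),
                      n_neurons=50, threshold_per_neuron=0.2):
    """Return (winning orientation, decision time in ms).

    The spike-count threshold scales linearly with pool size, so the readout
    behaves consistently as the population grows."""
    threshold = max(1, int(threshold_per_neuron * n_neurons))
    winner, winning_time = None, np.inf
    for ori in orientations:
        lat = np.sort(first_spike_latencies(stimulus_ori, ori, n_neurons))
        t_cross = lat[threshold - 1]  # time this pool's count reaches threshold
        if t_cross < winning_time:
            winner, winning_time = ori, t_cross
    return winner, winning_time

# The channel tuned to the stimulus wins the race on nearly every trial,
# with decision times on the order of a few tens of milliseconds.
decisions = [race_to_threshold(45)[0] for _ in range(200)]
accuracy = float(np.mean([d == 45 for d in decisions]))
```

Because the decision is taken from early spikes rather than long spike counts, the model decodes orientation well within the brief time scales the abstract emphasizes.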
Keil, Andreas; Moratti, Stephan; Sabatinelli, Dean; Bradley, Margaret M; Lang, Peter J
2005-08-01
Affectively arousing visual stimuli have been suggested to automatically attract attentional resources in order to optimize sensory processing. The present study crosses the factors of spatial selective attention and affective content, and examines the relationship between instructed (spatial) and automatic attention to affective stimuli. In addition to response times and error rate, electroencephalographic data from 129 electrodes were recorded during a covert spatial attention task. This task required silent counting of random-dot targets embedded in a 10 Hz flicker of colored pictures presented to both hemifields. Steady-state visual evoked potentials (ssVEPs) were obtained to determine amplitude and phase of electrocortical responses to pictures. An increase of ssVEP amplitude was observed as an additive function of spatial attention and emotional content. Statistical parametric mapping of this effect indicated occipito-temporal and parietal cortex activation contralateral to the attended visual hemifield in ssVEP amplitude modulation. This difference was most pronounced during selection of the left visual hemifield, at right temporal electrodes. In line with this finding, phase information revealed accelerated processing of aversive arousing, compared to affectively neutral pictures. The data suggest that affective stimulus properties modulate the spatiotemporal process along the ventral stream, encompassing amplitude amplification and timing changes of posterior and temporal cortex.
Auditory and visual orienting responses in listeners with and without hearing-impairment
Brimijoin, W. Owen; McShefferty, David; Akeroyd, Michael A.
2015-01-01
Head movements are intimately involved in sound localization and may provide information that could aid an impaired auditory system. Using an infrared camera system, head position and orientation were measured for 17 normal-hearing and 14 hearing-impaired listeners seated at the center of a ring of loudspeakers. Listeners were asked to orient their heads as quickly as was comfortable toward a sequence of visual targets, or were blindfolded and asked to orient toward a sequence of loudspeakers playing a short sentence. To attempt to elicit natural orienting responses, listeners were not asked to reorient their heads to the 0° loudspeaker between trials. The results demonstrate that hearing-impairment is associated with several changes in orienting responses. Hearing-impaired listeners showed a larger difference in auditory versus visual fixation position and a substantial increase in initial and fixation latency for auditory targets. Peak velocity reached roughly 140 degrees per second in both groups, corresponding to a rate of change of approximately 1 microsecond of interaural time difference per millisecond of time. Most notably, hearing-impairment was associated with a large change in the complexity of the movement, changing from smooth sigmoidal trajectories to ones characterized by abruptly-changing velocities, directional reversals, and frequent fixation angle corrections. PMID:20550266
A comparison of visual inspection time measures in children with cerebral palsy.
Kaufman, Jacqueline N; Donders, Jacobus; Warschausky, Seth
2014-05-01
This study examined the performance of children with and without cerebral palsy on two inspection time (IT) tests, as accessible nonspeeded response measures of cognitive processing speed. Participants, ages 8 to 16, included 66 children with congenital CP and 119 typically developing peers. Measures were two visual IT tasks with identical target stimuli but different response methods: either a traditional dual-key method or an assistive-technology pressure-switch interface with response-option scanning. The CP group had slower IT than the control group independent of test version. Log transformations were used to address skew, and transformed mean intraclass correlations showed moderate agreement between test versions for both participant groups. Bland-Altman plots showed that at higher mean IT thresholds, greater discrepancies between test version scores were observed. Findings support the feasibility of developing tests that reduce speeded motor response demands. Future test development should incorporate increased gradations of difficulty at the extremes of neuropsychological functioning to more accurately assess the performance of individuals whose conditions are associated with atypical performance levels.
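The agreement analyses mentioned above (log transformation, intraclass correlation, Bland-Altman limits) can be sketched as follows. The data are simulated and all distribution parameters are assumptions; a one-way random-effects ICC is used here as a generic stand-in for whatever ICC variant the study actually computed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inspection-time thresholds (ms) from two test versions
# for the same 60 participants; real data would replace this simulation.
true_it = rng.lognormal(mean=4.5, sigma=0.4, size=60)  # ~90 ms median
v1 = true_it * rng.lognormal(0, 0.10, 60)              # dual-key version
v2 = true_it * rng.lognormal(0, 0.10, 60)              # switch-scanning version

# Log-transform to reduce skew, as in the study.
x, y = np.log(v1), np.log(v2)

# One-way random-effects ICC from ANOVA mean squares (k = 2 raters).
n = len(x)
pairs = np.stack([x, y], axis=1)
subject_means = pairs.mean(axis=1)
grand_mean = pairs.mean()
msb = 2 * np.sum((subject_means - grand_mean) ** 2) / (n - 1)  # between subjects
msw = np.sum((pairs - subject_means[:, None]) ** 2) / n        # within subjects
icc = (msb - msw) / (msb + msw)

# Bland-Altman: bias and 95% limits of agreement on the log scale.
diff = x - y
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)

print(f"ICC = {icc:.2f}, bias = {bias:.3f}, limits of agreement = +/-{loa:.3f} (log ms)")
```

Plotting (x + y) / 2 against diff would give the Bland-Altman plot itself; the study's observation that discrepancies grow at higher mean thresholds would appear as a fan shape in such a plot.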
New developments in supra-threshold perimetry.
Henson, David B; Artes, Paul H
2002-09-01
To describe a series of recent enhancements to supra-threshold perimetry. Computer simulations were used to develop an improved algorithm (HEART) for the setting of the supra-threshold test intensity at the beginning of a field test, and to evaluate the relationship between various pass/fail criteria and the test's performance (sensitivity and specificity) and how they compare with modern threshold perimetry. Data were collected in optometric practices to evaluate HEART and to assess how the patient's response times can be analysed to detect false positive response errors in visual field test results. The HEART algorithm shows improved performance (reduced between-eye differences) over current algorithms. A pass/fail criterion of '3 stimuli seen of 3-5 presentations' at each test location reduces test/retest variability and combines high sensitivity and specificity. A large percentage of false positive responses can be detected by comparing their latencies to the average response time of a patient. Optimised supra-threshold visual field tests can perform as well as modern threshold techniques. Such tests may be easier to perform for novice patients, compared with the more demanding threshold tests.
Crown-of-thorns starfish have true image forming vision.
Petie, Ronald; Garm, Anders; Hall, Michael R
2016-01-01
Photoreceptors have evolved numerous times giving organisms the ability to detect light and respond to specific visual stimuli. Studies into the visual abilities of the Asteroidea (Echinodermata) have recently shown that species within this class have a more developed visual sense than previously thought and it has been demonstrated that starfish use visual information for orientation within their habitat. Whereas image forming eyes have been suggested for starfish, direct experimental proof of true spatial vision has not yet been obtained. The behavioural response of the coral reef inhabiting crown-of-thorns starfish (Acanthaster planci) was tested in controlled aquarium experiments using an array of stimuli to examine their visual performance. We presented starfish with various black-and-white shapes against a mid-intensity grey background, designed such that the animals would need to possess true spatial vision to detect these shapes. Starfish responded to black-and-white rectangles, but no directional response was found to black-and-white circles, despite equal areas of black and white. Additionally, we confirmed that starfish were attracted to black circles on a white background when the visual angle is larger than 14°. When changing the grey tone of the largest circle from black to white, we found responses to contrasts of 0.5 and higher. The starfish were attracted to the dark areas of the visual stimuli and were found to be both attracted and repelled by the visual targets. For crown-of-thorns starfish, visual cues are essential for close range orientation towards objects, such as coral boulders, in the wild. These visually guided behaviours can be replicated in aquarium conditions. Our observation that crown-of-thorns starfish respond to black-and-white shapes on a mid-intensity grey background is the first direct proof of true spatial vision in starfish and in the phylum Echinodermata.
Dehaene-Lambertz, Ghislaine; Monzalvo, Karla; Dehaene, Stanislas
2018-01-01
How does education affect cortical organization? All literate adults possess a region specialized for letter strings, the visual word form area (VWFA), within the mosaic of ventral regions involved in processing other visual categories such as objects, places, faces, or body parts. Therefore, the acquisition of literacy may induce a reorientation of cortical maps towards letters at the expense of other categories such as faces. To test this cortical recycling hypothesis, we studied how the visual cortex of individual children changes during the first months of reading acquisition. Ten 6-year-old children were scanned longitudinally 6 or 7 times with functional magnetic resonance imaging (fMRI) before and throughout the first year of school. Subjects were exposed to a variety of pictures (words, numbers, tools, houses, faces, and bodies) while performing an unrelated target-detection task. Behavioral assessment indicated a sharp rise in grapheme–phoneme knowledge and reading speed in the first trimester of school. Concurrently, voxels specific to written words and digits emerged at the VWFA location. The responses to other categories remained largely stable, although right-hemispheric face-related activity increased in proportion to reading scores. Retrospective examination of the VWFA voxels prior to reading acquisition showed that reading encroaches on voxels that are initially weakly specialized for tools and close to but distinct from those responsive to faces. Remarkably, those voxels appear to keep their initial category selectivity while acquiring an additional and stronger responsivity to words. We propose a revised model of the neuronal recycling process in which new visual categories invade weakly specified cortex while leaving previously stabilized cortical responses unchanged. PMID:29509766
Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah
2014-11-19
The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task using familiar scenes than novel ones. In addition, the CA1-DMS networks increased coherence at γ, but not at θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes than novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies.
Evidence for auditory-visual processing specific to biological motion.
Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F
2012-01-01
Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensorial processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modality is specific for biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modality yield longer reaction times in comparison to consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1 this suggests that conflicting auditory-visual motion information of an intact human walker leads to interference and thereby delaying the response.
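The 'statistical facilitation' account tested in Experiment 1 is usually checked against Miller's race-model bound, F_AV(t) ≤ F_A(t) + F_V(t). Below is a minimal sketch with simulated reaction times (all distribution parameters are invented, not the study's data): a redundant audio-visual condition generated by a pure race between two independent channels speeds the mean response without violating the bound.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reaction times (ms): unimodal auditory, unimodal visual,
# and a redundant audio-visual condition produced by a pure race
# (minimum of two independent channels), i.e. statistical facilitation
# with no coactivation.
n = 5000
rt_a = rng.normal(420, 60, n)
rt_v = rng.normal(440, 60, n)
rt_av = np.minimum(rng.normal(420, 60, n), rng.normal(440, 60, n))

def ecdf(sample, t):
    # Empirical cumulative distribution of `sample` evaluated at times `t`.
    return np.mean(sample[:, None] <= t[None, :], axis=0)

t = np.linspace(200, 700, 101)
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # Miller's race-model bound
violation = np.max(ecdf(rt_av, t) - bound)

print(f"mean RT (AV): {rt_av.mean():.0f} ms vs (A): {rt_a.mean():.0f} ms; "
      f"max bound violation: {violation:.3f}")
```

A reliably positive violation at some t would argue for multisensory coactivation rather than statistical facilitation; the abstract's Experiment 1 result corresponds to the no-violation case shown here.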
Can responses to basic non-numerical visual features explain neural numerosity responses?
Harvey, Ben M; Dumoulin, Serge O
2017-04-01
Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs.
Su, Kyung-Min; Hairston, W David; Robbins, Kay
2018-01-01
In controlled laboratory EEG experiments, researchers carefully mark events and analyze subject responses time-locked to these events. Unfortunately, such markers may not be available or may come with poor timing resolution for experiments conducted in less-controlled naturalistic environments. We present an integrated event-identification method for identifying particular responses that occur in unlabeled continuously recorded EEG signals based on information from recordings of other subjects potentially performing related tasks. We introduce the idea of timing slack and timing-tolerant performance measures to deal with jitter inherent in such non-time-locked systems. We have developed an implementation available as an open-source MATLAB toolbox (http://github.com/VisLab/EEG-Annotate) and have made test data available in a separate data note. We applied the method to identify visual presentation events (both target and non-target) in data from an unlabeled subject using labeled data from other subjects with good sensitivity and specificity. The method also identified actual visual presentation events in the data that were not previously marked in the experiment. Although the method uses traditional classifiers for initial stages, the problem of identifying events based on the presence of stereotypical EEG responses is the converse of the traditional stimulus-response paradigm and has not been addressed in its current form. In addition to identifying potential events in unlabeled or incompletely labeled EEG, these methods also allow researchers to investigate whether particular stereotypical neural responses are present in other circumstances. Timing-tolerance has the added benefit of accommodating inter- and intra-subject timing variations.
Enhanced visual processing contributes to matrix reasoning in autism
Soulières, Isabelle; Dawson, Michelle; Samson, Fabienne; Barbeau, Elise B.; Sahyoun, Cherif; Strangman, Gary E.; Zeffiro, Thomas A.; Mottron, Laurent
2009-01-01
Recent behavioral investigations have revealed that autistics perform more proficiently on Raven's Standard Progressive Matrices (RSPM) than would be predicted by their Wechsler intelligence scores. A widely-used test of fluid reasoning and intelligence, the RSPM assays abilities to flexibly infer rules, manage goal hierarchies, and perform high-level abstractions. The neural substrates for these abilities are known to encompass a large frontoparietal network, with different processing models placing variable emphasis on the specific roles of the prefrontal or posterior regions. We used functional magnetic resonance imaging to explore the neural bases of autistics' RSPM problem solving. Fifteen autistic and eighteen non-autistic participants, matched on age, sex, manual preference and Wechsler IQ, completed 60 self-paced randomly-ordered RSPM items along with a visually similar 60-item pattern matching comparison task. Accuracy and response times did not differ between groups in the pattern matching task. In the RSPM task, autistics performed with similar accuracy, but with shorter response times, compared to their non-autistic controls. In both the entire sample and a subsample of participants additionally matched on RSPM performance to control for potential response time confounds, neural activity was similar in both groups for the pattern matching task. However, for the RSPM task, autistics displayed relatively increased task-related activity in extrastriate areas (BA18), and decreased activity in the lateral prefrontal cortex (BA9) and the medial posterior parietal cortex (BA7). Visual processing mechanisms may therefore play a more prominent role in reasoning in autistics. PMID:19530215
Pitts, Brandon J; Sarter, Nadine
2018-06-01
Objective This research sought to determine whether people can perceive and process three nonredundant (and unrelated) signals in vision, hearing, and touch at the same time and how aging and concurrent task demands affect this ability. Background Multimodal displays have been shown to improve multitasking and attention management; however, their potential limitations are not well understood. The majority of studies on multimodal information presentation have focused on the processing of only two concurrent and, most often, redundant cues by younger participants. Method Two experiments were conducted in which younger and older adults detected and responded to a series of singles, pairs, and triplets of visual, auditory, and tactile cues in the absence (Experiment 1) and presence (Experiment 2) of an ongoing simulated driving task. Detection rates, response times, and driving task performance were measured. Results Compared to younger participants, older adults showed longer response times and higher error rates in response to cues/cue combinations. Older participants often missed the tactile cue when three cues were combined. They sometimes falsely reported the presence of a visual cue when presented with a pair of auditory and tactile signals. Driving performance suffered most in the presence of cue triplets. Conclusion People are more likely to miss information if more than two concurrent nonredundant signals are presented to different sensory channels. Application The findings from this work help inform the design of multimodal displays and ensure their usefulness across different age groups and in various application domains.
Mender, Bedeho M. W.; Stringer, Simon M.
2015-01-01
We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions. PMID:25717301
Training to Facilitate Adaptation to Novel Sensory Environments
NASA Technical Reports Server (NTRS)
Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Brady, R. A.; Batson, C. D.; Ploutz-Snyder, R. J.; Cohen, H. S.
2010-01-01
After spaceflight, the process of readapting to Earth's gravity causes locomotor dysfunction. We are developing a gait training countermeasure to facilitate adaptive responses in locomotor function. Our training system is comprised of a treadmill placed on a motion-base facing a virtual visual scene that provides an unstable walking surface combined with incongruent visual flow designed to train subjects to rapidly adapt their gait patterns to changes in the sensory environment. The goal of our present study was to determine if training improved both the locomotor and dual-tasking ability responses to a novel sensory environment and to quantify the retention of training. Subjects completed three, 30-minute training sessions during which they walked on the treadmill while receiving discordant support surface and visual input. Control subjects walked on the treadmill without any support surface or visual alterations. To determine the efficacy of training, all subjects were then tested using a novel visual flow and support surface movement not previously experienced during training. This test was performed 20 minutes, 1 week, and 1, 3, and 6 months after the final training session. Stride frequency and auditory reaction time were collected as measures of postural stability and cognitive effort, respectively. Subjects who received training showed less alteration in stride frequency and auditory reaction time compared to controls. Trained subjects maintained their level of performance over 6 months. We conclude that, with training, individuals became more proficient at walking in novel discordant sensorimotor conditions and were able to devote more attention to competing tasks.
Visual-servoing optical microscopy
Callahan, Daniel E.; Parvin, Bahram
2009-06-09
The present invention provides methods and devices for the knowledge-based discovery and optimization of differences between cell types. In particular, the present invention provides visual servoing optical microscopy, as well as analysis methods. The present invention provides means for the close monitoring of hundreds of individual, living cells over time: quantification of dynamic physiological responses in multiple channels; real-time digital image segmentation and analysis; intelligent, repetitive computer-applied cell stress and cell stimulation; and the ability to return to the same field of cells for long-term studies and observation. The present invention further provides means to optimize culture conditions for specific subpopulations of cells.
Visual-servoing optical microscopy
Callahan, Daniel E. [Martinez, CA]; Parvin, Bahram [Mill Valley, CA]
2011-05-24
The present invention provides methods and devices for the knowledge-based discovery and optimization of differences between cell types. In particular, the present invention provides visual servoing optical microscopy, as well as analysis methods. The present invention provides means for the close monitoring of hundreds of individual, living cells over time; quantification of dynamic physiological responses in multiple channels; real-time digital image segmentation and analysis; intelligent, repetitive computer-applied cell stress and cell stimulation; and the ability to return to the same field of cells for long-term studies and observation. The present invention further provides means to optimize culture conditions for specific subpopulations of cells.
Visual-servoing optical microscopy
Callahan, Daniel E; Parvin, Bahram
2013-10-01
The present invention provides methods and devices for the knowledge-based discovery and optimization of differences between cell types. In particular, the present invention provides visual servoing optical microscopy, as well as analysis methods. The present invention provides means for the close monitoring of hundreds of individual, living cells over time; quantification of dynamic physiological responses in multiple channels; real-time digital image segmentation and analysis; intelligent, repetitive computer-applied cell stress and cell stimulation; and the ability to return to the same field of cells for long-term studies and observation. The present invention further provides means to optimize culture conditions for specific subpopulations of cells.
Object based implicit contextual learning: a study of eye movements.
van Asselen, Marieke; Sampaio, Joana; Pina, Ana; Castelo-Branco, Miguel
2011-02-01
Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. Therefore, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.
Henderson, John M; Chanceaux, Myriam; Smith, Tim J
2009-01-23
We investigated the relationship between visual clutter and visual search in real-world scenes. Specifically, we investigated whether visual clutter, indexed by feature congestion, sub-band entropy, and edge density, correlates with search performance as assessed both by traditional behavioral measures (response time and error rate) and by eye movements. Our results demonstrate that clutter is related to search performance. These results hold for both traditional search measures and for eye movements. The results suggest that clutter may serve as an image-based proxy for search set size in real-world scenes.
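Of the clutter indices named above, edge density is the simplest to sketch: the proportion of pixels whose gradient magnitude exceeds a threshold. The implementation below is a generic stand-in, not the paper's exact measure; the Sobel kernels, the relative threshold of 0.2, and the toy images are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def edge_density(img, thresh=0.2):
    """Fraction of pixels whose Sobel gradient magnitude exceeds `thresh`
    times the maximum magnitude (a simple edge-density clutter index)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    def conv2(im, k):
        # 3x3 convolution with edge-replicated padding.
        out = np.zeros_like(im)
        p = np.pad(im, 1, mode="edge")
        for i in range(3):
            for j in range(3):
                out += k[i, j] * p[i:i + im.shape[0], j:j + im.shape[1]]
        return out

    gx, gy = conv2(img, kx), conv2(img, ky)
    mag = np.hypot(gx, gy)
    return float(np.mean(mag > thresh * mag.max())) if mag.max() > 0 else 0.0

# Toy scenes: a single uniform square (few edges) vs. pixel noise (many edges).
sparse = np.zeros((64, 64))
sparse[16:48, 16:48] = 1.0
cluttered = rng.random((64, 64))

d_sparse = edge_density(sparse)
d_cluttered = edge_density(cluttered)
print(f"edge density: sparse scene {d_sparse:.3f}, cluttered scene {d_cluttered:.3f}")
```

On this definition, the cluttered scene yields a much higher edge density than the sparse one, matching the intuition behind using edge density as an image-based proxy for search set size.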
Visual skills in airport-security screening.
McCarley, Jason S; Kramer, Arthur F; Wickens, Christopher D; Vidoni, Eric D; Boot, Walter R
2004-05-01
An experiment examined visual performance in a simulated luggage-screening task. Observers participated in five sessions of a task requiring them to search for knives hidden in x-ray images of cluttered bags. Sensitivity and response times improved reliably as a result of practice. Eye movement data revealed that sensitivity increases were produced entirely by changes in observers' ability to recognize target objects, and not by changes in the effectiveness of visual scanning. Moreover, recognition skills were in part stimulus-specific, such that performance was degraded by the introduction of unfamiliar target objects. Implications for screener training are discussed.
Cumulative latency advance underlies fast visual processing in desynchronized brain state
Wang, Xu-dong; Chen, Cheng; Zhang, Dinghong; Yao, Haishan
2014-01-01
Fast sensory processing is vital for the animal to efficiently respond to the changing environment. This is usually achieved when the animal is vigilant, as reflected by cortical desynchronization. However, the neural substrate for such fast processing remains unclear. Here, we report that neurons in rat primary visual cortex (V1) exhibited shorter response latency in the desynchronized state than in the synchronized state. In vivo whole-cell recording from the same V1 neurons undergoing the two states showed that both the resting and visually evoked conductances were higher in the desynchronized state. Such conductance increases of single V1 neurons shorten the response latency by elevating the membrane potential closer to the firing threshold and reducing the membrane time constant, but the effects only account for a small fraction of the observed latency advance. Simultaneous recordings in lateral geniculate nucleus (LGN) and V1 revealed that LGN neurons also exhibited latency advance, with a degree smaller than that of V1 neurons. Furthermore, latency advance in V1 increased across successive cortical layers. Thus, latency advance accumulates along various stages of the visual pathway, likely due to a global increase of membrane conductance in the desynchronized state. This cumulative effect may lead to a dramatic shortening of response latency for neurons in higher visual cortex and play a critical role in fast processing for vigilant animals. PMID:24347634
Cerebellar contributions to motor timing: a PET study of auditory and visual rhythm reproduction.
Penhune, V B; Zatorre, R J; Evans, A C
1998-11-01
The perception and production of temporal patterns, or rhythms, is important for both music and speech. However, the way in which the human brain achieves accurate timing of perceptual input and motor output is as yet little understood. Central control of both motor timing and perceptual timing across modalities has been linked to both the cerebellum and the basal ganglia (BG). The present study was designed to test the hypothesized central control of temporal processing and to examine the roles of the cerebellum, BG, and sensory association areas. In this positron emission tomography (PET) activation paradigm, subjects reproduced rhythms of increasing temporal complexity that were presented separately in the auditory and visual modalities. The results provide support for a supramodal contribution of the lateral cerebellar cortex and cerebellar vermis to the production of a timed motor response, particularly when it is complex and/or novel. The results also give partial support to the involvement of BG structures in motor timing, although this may be more directly related to implementation of the motor response than to timing per se. Finally, sensory association areas and the ventrolateral frontal cortex were found to be involved in modality-specific encoding and retrieval of the temporal stimuli. Taken together, these results point to the participation of a number of neural structures in the production of a timed motor response from an external stimulus. The role of the cerebellum in timing is conceptualized not as a clock or counter but simply as the structure that provides the necessary circuitry for the sensory system to extract temporal information and for the motor system to learn to produce a precisely timed response.
The Time Course of Segmentation and Cue-Selectivity in the Human Visual Cortex
Appelbaum, Lawrence G.; Ales, Justin M.; Norcia, Anthony M.
2012-01-01
Texture discontinuities are a fundamental cue by which the visual system segments objects from their background. The neural mechanisms supporting texture-based segmentation are therefore critical to visual perception and cognition. In the present experiment we employ an EEG source-imaging approach in order to study the time course of texture-based segmentation in the human brain. Visual Evoked Potentials were recorded to four types of stimuli in which periodic temporal modulation of a central 3° figure region could either support figure-ground segmentation, or have identical local texture modulations but not produce changes in global image segmentation. The image discontinuities were defined either by orientation or phase differences across image regions. Evoked responses to these four stimuli were analyzed both at the scalp and on the cortical surface in retinotopic and functional regions-of-interest (ROIs) defined separately using fMRI on a subject-by-subject basis. Texture segmentation (tsVEP: segmenting versus non-segmenting) and cue-specific (csVEP: orientation versus phase) responses exhibited distinctive patterns of activity. Alternations between uniform and segmented images produced highly asymmetric responses that were larger after transitions from the uniform to the segmented state. Texture modulations that signaled the appearance of a figure evoked a pattern of increased activity starting at ∼143 ms that was larger in V1 and LOC ROIs, relative to identical modulations that didn't signal figure-ground segmentation. This segmentation-related activity occurred after an initial response phase that did not depend on the global segmentation structure of the image. The two cue types evoked similar tsVEPs up to 230 ms when they differed in the V4 and LOC ROIs. The evolution of the response proceeded largely in the feed-forward direction, with only weak evidence for feedback-related activity. PMID:22479566
Wijnen, V J M; Eilander, H J; de Gelder, B; van Boxtel, G J M
2014-11-01
Auditory stimulation is often used to evoke responses in unresponsive patients who have suffered severe brain injury. In order to investigate visual responses, we examined visual evoked potentials (VEPs) and behavioral responses to visual stimuli in vegetative patients during recovery to consciousness. Behavioral responses to visual stimuli (visual localization, comprehension of written commands, and object manipulation) and flash VEPs were repeatedly examined in eleven vegetative patients every two weeks for an average period of 2.6 months, and patients' VEPs were compared to a healthy control group. Long-term outcome of the patients was assessed 2-3 years later. Visual response scores increased during recovery to consciousness for all scales: visual localization, comprehension of written commands, and object manipulation. VEP amplitudes were smaller, and latencies were longer in the patient group relative to the controls. VEP characteristics at first measurement were related to long-term outcome up to three years after injury. Our findings show the improvement of visual responding with recovery from the vegetative state to consciousness. Elementary visual processing is present, yet according to VEP responses it is poorer in the vegetative and minimally conscious states than in healthy controls, and remains poorer even after patients have recovered to consciousness. However, initial VEPs are related to long-term outcome. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Ali, M A; Ahsan, Z; Amin, M; Latif, S; Ayyaz, A; Ayyaz, M N
2016-05-01
Globally, disease surveillance systems are playing a significant role in outbreak detection and response management of Infectious Diseases (IDs). However, in developing countries like Pakistan, epidemic outbreaks are difficult to detect due to scarcity of public health data and absence of automated surveillance systems. Our research is intended to formulate an integrated service-oriented visual analytics architecture for ID surveillance, identify key constituents and set up a baseline for easy reproducibility of such systems in the future. This research focuses on development of ID-Viewer, which is a visual analytics decision support system for ID surveillance. It is a blend of intelligent approaches to make use of real-time streaming data from Emergency Departments (EDs) for early outbreak detection, health care resource allocation and epidemic response management. We have developed a robust service-oriented visual analytics architecture for ID surveillance, which provides automated mechanisms for ID data acquisition, outbreak detection and epidemic response management. Classification of chief-complaints is accomplished using dynamic classification module, which employs neural networks and fuzzy-logic to categorize syndromes. Standard routines by Center for Disease Control (CDC), i.e. c1-c3 (c1-mild, c2-medium and c3-ultra), and spatial scan statistics are employed for detection of temporal and spatio-temporal disease outbreaks respectively. Prediction of imminent disease threats is accomplished using support vector regression for early warnings and response planning. Geographical visual analytics displays are developed that allow interactive visualization of syndromic clusters, monitoring disease spread patterns, and identification of spatio-temporal risk zones. We analysed performance of surveillance framework using ID data for year 2011-2015. 
Dynamic syndromic classifier is able to classify chief-complaints to appropriate syndromes with high classification accuracy. Outbreak detection methods are able to detect the ID outbreaks at the start of epidemic time zones. Prediction model is able to forecast dengue trend for 20 weeks ahead with a nominal normalized root mean square error of 0.29. Interactive geo-spatiotemporal displays, i.e., heat maps and choropleth maps, are shown in the respective sections. The proposed framework will set a standard and provide necessary details for future implementation of such a system for resource-constrained regions. It will improve early outbreak detection attributable to natural and man-made biological threats, monitor spatio-temporal epidemic trends and provide assurance that an outbreak has, or has not occurred. Advanced analytics features will be beneficial in timely organization/formulation of health management policies, disease control activities and efficient health care resource allocation. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
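The forecast-quality figure quoted above (normalized RMSE of 0.29) can be reproduced in form with a short sketch. The abstract does not state which normalization ID-Viewer uses; dividing RMSE by the range of the observed series is one common convention and is assumed here, with toy case counts in place of real surveillance data.

```python
# Hedged sketch: range-normalized RMSE for scoring a disease-count forecast.
# The normalization choice and all numbers are assumptions for illustration.
def nrmse(actual, predicted):
    n = len(actual)
    rmse = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5
    return rmse / (max(actual) - min(actual))   # normalize by observed range

weekly_cases = [10, 30, 50, 40, 20]   # toy observed weekly dengue counts
forecast     = [12, 28, 55, 38, 18]   # toy multi-week-ahead forecast
error = nrmse(weekly_cases, forecast)
```

On this toy series the score comes out near 0.07; a value like the paper's 0.29 would indicate a forecast whose typical error is about 29% of the observed range.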
Chronic multiunit recordings in behaving animals: advantages and limitations.
Supèr, Hans; Roelfsema, Pieter R
2005-01-01
By simultaneous recording from neural responses at many different loci at the same time, we can understand the interaction between neurons, and thereby gain insight into the network properties of neural processing, instead of the functioning of individual neurons. Here we will discuss a method for recording in behaving animals that uses chronically implanted micro-electrodes that allow one to track neural responses over a long period of time. In a majority of cases, multiunit activity, which is the aggregate spiking activity of a number of neurons in the vicinity of an electrode tip, is recorded through these electrodes, and occasionally single neurons can be isolated. Here we compare the properties of multiunit responses to the responses of single neurons in the primary visual cortex. We also discuss the advantages and disadvantages of the multiunit signal as opposed to a signal of single neurons. We demonstrate that multiunit recording provides a reliable and useful technique in cases where the neurons at the electrodes have similar response properties. Multiunit recording is therefore especially valuable when task variables have an effect that is consistent across the population of neurons. In the primary visual cortex, this is the case for figure-ground segregation and visual attention. Multiunit recording also has clear advantages for cross-correlation analysis. We show that the cross-correlation function between multiunit signals gives a reliable estimate of the average single-unit cross-correlation function. By the use of multiunit recording, it becomes much easier to detect relatively weak interactions between neurons at different cortical locations.
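The cross-correlation analysis discussed above can be sketched minimally: a cross-correlogram counts spike-time lags between two trains, and for multiunit signals it approximates the average of the underlying single-unit correlograms. This is an illustrative toy, not the authors' analysis code; spike times and bin settings are assumptions.

```python
# Hedged sketch: spike-time cross-correlogram between two (multi)unit trains.
# Spike times are in ms; lags from -max_lag to +max_lag are binned.
def cross_correlogram(spikes_a, spikes_b, max_lag=50.0, bin_width=1.0):
    n_bins = int(2 * max_lag / bin_width)
    counts = [0] * n_bins
    for ta in spikes_a:
        for tb in spikes_b:
            lag = tb - ta
            if -max_lag <= lag < max_lag:
                counts[int((lag + max_lag) // bin_width)] += 1
    return counts

a = [10.0, 55.0, 120.0, 200.0]
b = [10.5, 55.5, 120.5, 200.5]   # b consistently fires 0.5 ms after a
ccg = cross_correlogram(a, b)    # peak lands in the bin covering lag +0.5 ms
```

A sharp peak near zero lag, as in this toy example, is the signature of correlated firing that the multiunit cross-correlation function is estimating.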
Re-examining overlap between tactile and visual motion responses within hMT+ and STS
Jiang, Fang; Beauchamp, Michael S.; Fine, Ione
2015-01-01
Here we examine overlap between tactile and visual motion BOLD responses within the human MT+ complex. Although several studies have reported tactile responses overlapping with hMT+, many used group average analyses, leaving it unclear whether these responses were restricted to sub-regions of hMT+. Moreover, previous studies employed either a tactile task or passive stimulation, leaving it unclear whether or not tactile responses in hMT+ are simply the consequence of visual imagery. Here we carried out a replication of one of the classic papers finding tactile responses in hMT+ (Hagen et al. 2002). We mapped MT and MST in individual subjects using visual field localizers. We then examined responses to tactile motion on the arm, either presented passively or in the presence of a visual task performed at fixation designed to minimize visualization of the concurrent tactile stimulation. To our surprise, without a visual task, we found only weak tactile motion responses in MT (6% of voxels showing tactile responses) and MST (2% of voxels). With an unrelated visual task designed to withdraw attention from the tactile modality, responses in MST reduced to almost nothing (<1% of voxels). Consistent with previous results, we did observe tactile responses in STS regions superior and anterior to hMT+. Despite the lack of individual overlap, group averaged responses produced strong spurious overlap between tactile and visual motion responses within hMT+ that resembled those observed in previous studies. The weak nature of tactile responses in hMT+ (and their abolition by withdrawal of attention) suggests that hMT+ may not serve as a supramodal motion processing module. PMID:26123373
Poeppl, Timm B; Nitschke, Joachim; Dombert, Beate; Santtila, Pekka; Greenlee, Mark W; Osterheider, Michael; Mokros, Andreas
2011-06-01
Pedophiles show sexual interest in prepubescent children but not in adults. Research into the neurofunctional mechanisms of paraphilias has gathered momentum over the last years. To elucidate the underlying neural processing of sexual interest among pedophiles and to highlight the differences in comparison with nonparaphilic sexual interest in adults. Nine pedophilic patients and 11 nonpedophilic control subjects underwent functional magnetic resonance imaging (fMRI) while viewing pictures of nude (prepubescents, pubescents, and adults) and neutral content, as well as performing a concomitant choice reaction time task (CRTT). Brain blood oxygen level-dependent (BOLD) signals and response latencies in the CRTT during exposure to each picture category. Analysis of behavioral data showed group differences in reaction times regarding prepubescent and adult but not pubescent stimuli. During stimulation with pictures displaying nude prepubescents, pedophiles showed increased BOLD response in brain areas known to be involved in processing of visual sexual stimuli. Comparison of pedophilic patients with the control group discovered differences in BOLD responses with respect to prepubescent and adult but not to pubescent stimuli. Differential effects in particular occurred in the cingulate gyrus and insular region. The brain response of pedophiles to visual sexual stimulation by images of nude prepubescents is comparable with previously described neural patterns of sexual processing in nonpedophilic human males evoked by visual stimuli depicting nude adults. Nevertheless, group differences found in the cingulate gyrus and the insular region suggest an important role of these brain areas in pedophilic sexual interest. Furthermore, combining attention-based methods like CRTT with fMRI may be a viable option for future diagnostic procedures regarding pedophilia. © 2011 International Society for Sexual Medicine.
Divergent receiver responses to components of multimodal signals in two foot-flagging frog species.
Preininger, Doris; Boeckle, Markus; Sztatecsny, Marc; Hödl, Walter
2013-01-01
Multimodal communication of acoustic and visual signals serves a vital role in the mating system of anuran amphibians. To understand signal evolution and function in multimodal signal design it is critical to test receiver responses to unimodal signal components versus multimodal composite signals. We investigated two anuran species displaying a conspicuous foot-flagging behavior in addition to or in combination with advertisement calls while announcing their signaling sites to conspecifics. To investigate the conspicuousness of the foot-flagging signals, we measured and compared spectral reflectance of foot webbings of Micrixalus saxicola and Staurois parvus using a spectrophotometer. We performed behavioral field experiments using a model frog including an extendable leg combined with acoustic playbacks to test receiver responses to acoustic, visual and combined audio-visual stimuli. Our results indicated that the foot webbings of S. parvus achieved a 13 times higher contrast against their visual background than feet of M. saxicola. The main response to all experimental stimuli in S. parvus was foot flagging, whereas M. saxicola responded primarily with calls but never foot flagged. Together these across-species differences suggest that in S. parvus foot-flagging behavior is applied as a salient and frequently used communicative signal during agonistic behavior, whereas we propose it constitutes an evolutionary nascent state in ritualization of the current fighting behavior in M. saxicola.
Changes in search rate but not in the dynamics of exogenous attention in action videogame players.
Hubert-Wallander, Bjorn; Green, C Shawn; Sugarman, Michael; Bavelier, Daphne
2011-11-01
Many previous studies have shown that the speed of processing in attentionally demanding tasks seems enhanced following habitual action videogame play. However, using one of the diagnostic tasks for efficiency of attentional processing, a visual search task, Castel and collaborators (Castel, Pratt, & Drummond, Acta Psychologica 119:217-230, 2005) reported no difference in visual search rates, instead proposing that action gaming may change response execution time rather than the efficiency of visual selective attention per se. Here we used two hard visual search tasks, one measuring reaction time and the other accuracy, to test whether visual search rate may be changed by action videogame play. We found greater search rates in the gamer group than in the nongamer controls, consistent with increased efficiency in visual selective attention. We then asked how general the change in attentional throughput noted so far in gamers might be by testing whether exogenous attentional cues would lead to a disproportional enhancement in throughput in gamers as compared to nongamers. Interestingly, exogenous cues were found to enhance throughput equivalently between gamers and nongamers, suggesting that not all mechanisms known to enhance throughput are similarly enhanced in action videogamers.
Fantinati, Anna; Ossato, Andrea; Bianco, Sara; Canazza, Isabella; De Giorgio, Fabio; Trapella, Claudio; Marti, Matteo
2017-05-01
Among novel psychoactive substances notified to EMCDDA and Europol were 1-cyclohexyl-x-methoxybenzene stereoisomers (ortho, meta, and para). These substances share some structural characteristics with phencyclidine and tramadol. Nowadays, no information on the pharmacological and toxicological effects evoked by 1-cyclohexyl-x-methoxybenzene is reported. The aim of this study was to investigate the effects evoked by each stereoisomer on visual stimulation, body temperature, acute thermal pain, and motor activity in mice. Mice were evaluated in behavioral tests carried out in a consecutive manner according to the following time scheme: observation of visual placing response, measures of core body temperature, determination of acute thermal pain, and stimulated motor activity. All three stereoisomers dose-dependently inhibit visual placing response (rank order: meta > ortho > para), induce hyperthermia at lower and hypothermia at higher doses (meta > ortho > para) and cause analgesia to thermal stimuli (para > meta = ortho), while they do not alter motor activity. For the first time, this study demonstrates that systemic administration of 1-cyclohexyl-x-methoxybenzene compounds markedly inhibits visual response, promotes analgesia, and induces core temperature alterations in mice. These data, although obtained in an animal model, suggest a possible hazard for human health (i.e., hyperthermia and sensorimotor alterations). In particular, these novel psychoactive substances may have a negative impact on many daily activities, greatly increasing the risk factors for workplace accidents and traffic injuries. Copyright © 2017 John Wiley & Sons, Ltd.
Self-organization of head-centered visual responses under ecological training conditions.
Mender, Bedeho M W; Stringer, Simon M
2014-01-01
We have studied the development of head-centered visual responses in an unsupervised self-organizing neural network model which was trained under ecological training conditions. Four independent spatio-temporal characteristics of the training stimuli were explored to investigate the feasibility of the self-organization under more ecological conditions. First, the number of head-centered visual training locations was varied over a broad range. Model performance improved as the number of training locations approached the continuous sampling of head-centered space. Second, the model depended on periods of time where visual targets remained stationary in head-centered space while it performed saccades around the scene, and the severity of this constraint was explored by introducing increasing levels of random eye movement and stimulus dynamics. Model performance was robust over a range of randomization. Third, the model was trained on visual scenes where multiple simultaneous targets were always visible. Model self-organization was successful, despite never being exposed to a visual target in isolation. Fourth, the durations of fixations during training were made stochastic. With suitable changes to the learning rule, it self-organized successfully. These findings suggest that the fundamental learning mechanism upon which the model rests is robust to the many forms of stimulus variability under ecological training conditions.
Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.
Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J
2007-06-01
The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
Hanken, Taylor; Young, Sam; Smilowitz, Karen; Chiampas, George; Waskowski, David
2016-10-01
As one of the largest marathons worldwide, the Bank of America Chicago Marathon (BACCM; Chicago, Illinois USA) accumulates high volumes of data. Race organizers and engaged agencies need the ability to access specific data in real-time. This report details a data visualization system designed for the Chicago Marathon and establishes key principles for event management data visualization. The data visualization system allows for efficient data communication among the organizing agencies of Chicago endurance events. Agencies can observe the progress of the race throughout the day and obtain needed information, such as the number and location of runners on the course and current weather conditions. Implementation of the system can reduce time-consuming, face-to-face interactions between involved agencies by having key data streams in one location, streamlining communications with the purpose of improving race logistics, as well as medical preparedness and response. Hanken T , Young S , Smilowitz K , Chiampas G , Waskowski D . Developing a data visualization system for the Bank of America Chicago Marathon (Chicago, Illinois USA). Prehosp Disaster Med. 2016;31(5):572-577.
Network model of top-down influences on local gain and contextual interactions in visual cortex.
Piëch, Valentin; Li, Wu; Reeke, George N; Gilbert, Charles D
2013-10-22
The visual system uses continuity as a cue for grouping oriented line segments that define object boundaries in complex visual scenes. Many studies support the idea that long-range intrinsic horizontal connections in early visual cortex contribute to this grouping. Top-down influences in primary visual cortex (V1) play an important role in the processes of contour integration and perceptual saliency, with contour-related responses being task dependent. This suggests an interaction between recurrent inputs to V1 and intrinsic connections within V1 that enables V1 neurons to respond differently under different conditions. We created a network model that simulates parametrically the control of local gain by hypothetical top-down modification of local recurrence. These local gain changes, as a consequence of network dynamics in our model, enable modulation of contextual interactions in a task-dependent manner. Our model displays contour-related facilitation of neuronal responses and differential foreground vs. background responses over the neuronal ensemble, accounting for the perceptual pop-out of salient contours. It quantitatively reproduces the results of single-unit recording experiments in V1, highlighting salient contours and replicating the time course of contextual influences. We show by means of phase-plane analysis that the model operates stably even in the presence of large inputs. Our model shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.
Concreteness in Word Processing: ERP and Behavioral Effects in a Lexical Decision Task
ERIC Educational Resources Information Center
Barber, Horacio A.; Otten, Leun J.; Kousta, Stavroula-Thaleia; Vigliocco, Gabriella
2013-01-01
Relative to abstract words, concrete words typically elicit faster response times and larger N400 and N700 event-related potential (ERP) brain responses. These effects have been interpreted as reflecting the denser links to associated semantic information of concrete words and their recruitment of visual imagery processes. Here, we examined…
ERIC Educational Resources Information Center
Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec
2004-01-01
Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability, (Pylkkanen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick,…
Elevated audiovisual temporal interaction in patients with migraine without aura
2014-01-01
Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
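The CDF-based analysis mentioned above is typically carried out with a race-model (redundant-target) inequality: at each time t, the multisensory CDF is compared against the sum of the unisensory CDFs, and a positive difference indicates integration beyond statistical facilitation. The abstract does not name the exact test used, so the sketch below is a standard assumed form, with toy reaction times rather than the study's data.

```python
# Hedged sketch: empirical CDFs of reaction times and a race-model bound.
# RT values (ms) are toy data; the specific inequality is an assumption.
def ecdf(rts, t):
    """Fraction of reaction times at or below time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_violation(rt_av, rt_a, rt_v, t):
    """Positive => audiovisual CDF exceeds the race-model bound at time t."""
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
    return ecdf(rt_av, t) - bound

rt_a  = [320, 340, 360, 380, 400]   # auditory-only RTs
rt_v  = [330, 350, 370, 390, 410]   # visual-only RTs
rt_av = [260, 280, 300, 320, 340]   # audiovisual RTs (faster than either)
v = race_violation(rt_av, rt_a, rt_v, t=300)
```

In this toy case the violation at t = 300 ms is clearly positive; a finding of elevated integration in migraineurs would correspond to larger violations of this kind relative to controls.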
Figure-ground activity in V1 and guidance of saccadic eye movements.
Supèr, Hans
2006-01-01
Every day we shift our gaze about 150,000 times, mostly without noticing it. The directions of these gaze shifts are not random but are guided by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I will discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculo-motor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.
Detection of emotional faces: salient physical features guide effective visual search.
Calvo, Manuel G; Nummenmaa, Lauri
2008-08-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
Indovina, Iole; Maffei, Vincenzo; Lacquaniti, Francesco
2013-09-01
By simulating self-motion on a virtual rollercoaster, we investigated whether acceleration cued by the optic flow affected the estimate of time-to-passage (TTP) to a target. In particular, we studied the role of a visual acceleration (1 g = 9.8 m/s²) simulating the effects of gravity in the scene, by manipulating motion law (accelerated or decelerated at 1 g, constant speed) and motion orientation (vertical, horizontal). Thus, 1-g-accelerated motion in the downward direction or decelerated motion in the upward direction was congruent with the effects of visual gravity. We found that acceleration (positive or negative) is taken into account but is overestimated in magnitude in the calculation of TTP, independently of orientation. In addition, participants signaled TTP earlier when the rollercoaster accelerated downward at 1 g (as during free fall), with respect to when the same acceleration occurred along the horizontal orientation. This time shift indicates an influence of the orientation relative to visual gravity on response timing that could be attributed to the anticipation of the effects of visual gravity on self-motion along the vertical, but not the horizontal, orientation. Finally, precision in TTP estimates was higher during vertical fall than when traveling at constant speed along the vertical orientation, consistent with higher noise in TTP estimates when the motion violates gravity constraints.
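The TTP manipulation above reduces to simple kinematics: under constant acceleration the true time-to-passage is the positive root of d = vt + ½at², whereas an observer assuming constant speed would estimate d/v. A minimal sketch with illustrative values (these are not the study's parameters):

```python
import math

def ttp_constant_speed(distance, speed):
    """Time-to-passage if the observer assumes constant speed."""
    return distance / speed

def ttp_accelerated(distance, speed, accel):
    """Actual time-to-passage under constant acceleration:
    positive root of distance = speed*t + 0.5*accel*t**2."""
    if accel == 0:
        return distance / speed
    disc = speed**2 + 2 * accel * distance
    return (-speed + math.sqrt(disc)) / accel

# A target 20 m away approached at 10 m/s while accelerating at 1 g
# passes earlier than the constant-speed estimate of 2.0 s predicts.
actual = ttp_accelerated(20, 10, 9.8)
naive = ttp_constant_speed(20, 10)
```

With downward 1-g acceleration the actual passage time is shorter than the constant-speed prediction, consistent with the earlier responses reported for vertical fall.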
40 CFR 53.32 - Test procedures for methods for SO2, CO, O3, and NO2.
Code of Federal Regulations, 2010 CFR
2010-07-01
... shall have a chart width of at least 25 centimeters, a response time of 1 second or less, a deadband of... appropriate time intervals such that trend plots similar to a strip chart recording may be constructed with a... facilitate visual evaluation of data submitted. (3) Allow adequate warmup or stabilization time as indicated...
Kaptsov, V A; Sosunov, N N; Shishchenko, I I; Viktorov, V S; Tulushev, V N; Deynego, V N; Bukhareva, E A; Murashova, M A; Shishchenko, A A
2014-01-01
Experimental work was performed to study the feasibility of using LED lighting (LED light sources) in rail transport for traffic-safety-related professions. Four series of studies involving 10 volunteers compared the functional state of the visual analyzer, the general functional state, and mental capacity during simulated operator activity under traditional light sources (incandescent and fluorescent lamps) versus new LED light sources (LED lamp, LED panel). The results revealed changes in the negative direction: a decrease in the functional stability of color discrimination between green and red cone signals, an increase in response time in the complex visual-motor response, and a significant reduction in examinees' readiness for emergency action.
Stimulus relevance modulates contrast adaptation in visual cortex
Keller, Andreas J; Houlton, Rachael; Kampa, Björn M; Lesica, Nicholas A; Mrsic-Flogel, Thomas D; Keller, Georg B; Helmchen, Fritjof
2017-01-01
A general principle of sensory processing is that neurons adapt to sustained stimuli by reducing their response over time. Most of our knowledge on adaptation in single cells is based on experiments in anesthetized animals. How responses adapt in awake animals, when stimuli may be behaviorally relevant or not, remains unclear. Here we show that contrast adaptation in mouse primary visual cortex depends on the behavioral relevance of the stimulus. Cells that adapted to contrast under anesthesia maintained or even increased their activity in awake naïve mice. When engaged in a visually guided task, contrast adaptation re-occurred for stimuli that were irrelevant for solving the task. However, contrast adaptation was reversed when stimuli acquired behavioral relevance. Regulation of cortical adaptation by task demand may allow dynamic control of sensory-evoked signal flow in the neocortex. DOI: http://dx.doi.org/10.7554/eLife.21589.001 PMID:28130922
Neural oscillatory deficits in schizophrenia predict behavioral and neurocognitive impairments
Martínez, Antígona; Gaspar, Pablo A.; Hillyard, Steven A.; Bickel, Stephan; Lakatos, Peter; Dias, Elisa C.; Javitt, Daniel C.
2015-01-01
Paying attention to visual stimuli is typically accompanied by event-related desynchronizations (ERD) of ongoing alpha (7–14 Hz) activity in visual cortex. The present study used time-frequency based analyses to investigate the role of impaired alpha ERD in visual processing deficits in schizophrenia (Sz). Subjects viewed sinusoidal gratings of high (HSF) and low (LSF) spatial frequency (SF) designed to test functioning of the parvo- vs. magnocellular pathways, respectively. Patients with Sz and healthy controls paid attention selectively to either the LSF or HSF gratings which were presented in random order. Event-related brain potentials (ERPs) were recorded to all stimuli. As in our previous study, it was found that Sz patients were selectively impaired at detecting LSF target stimuli and that ERP amplitudes to LSF stimuli were diminished, both for the early sensory-evoked components and for the attend minus unattend difference component (the Selection Negativity), which is generally regarded as a specific index of feature-selective attention. In the time-frequency domain, the differential ERP deficits to LSF stimuli were echoed in a virtually absent theta-band phase locked response to both unattended and attended LSF stimuli (along with relatively intact theta-band activity for HSF stimuli). In contrast to the theta-band evoked responses which were tightly stimulus locked, stimulus-induced desynchronizations of ongoing alpha activity were not tightly stimulus locked and were apparent only in induced power analyses. Sz patients were significantly impaired in the attention-related modulation of ongoing alpha activity for both HSF and LSF stimuli. These deficits correlated with patients’ behavioral deficits in visual information processing as well as with visually based neurocognitive deficits. These findings suggest an additional, pathway-independent, mechanism by which deficits in early visual processing contribute to overall cognitive impairment in Sz. 
PMID:26190988
Alerting Attention and Time Perception in Children.
ERIC Educational Resources Information Center
Droit-Volet, Sylvie
2003-01-01
Examined effects of a click signaling arrival of a visual stimulus to be timed on temporal discrimination in 3-, 5-, and 8-year-olds. Found that in all groups, the proportion of long responses increased with the stimulus duration, although the steepness of functions increased with age. Stimulus duration was judged longer with than without the…
Neuronal Organization of Deep Brain Opsin Photoreceptors in Adult Teleosts
Hang, Chong Yee; Kitahashi, Takashi; Parhar, Ishwar S.
2016-01-01
Biological impacts of light beyond vision, i.e., non-visual functions of light, signify the need to better understand light detection (or photoreception) systems in vertebrates. Photopigments, which comprise light-absorbing chromophores bound to a variety of G-protein coupled receptor opsins, are responsible for visual and non-visual photoreception. Non-visual opsin photopigments in the retina of mammals and extra-retinal tissues of non-mammals play an important role in non-image-forming functions of light, e.g., biological rhythms and seasonal reproduction. This review highlights the role of opsin photoreceptors in the deep brain, which could involve conserved neurochemical systems that control different time- and light-dependent physiologies in non-mammalian vertebrates including teleost fish. PMID:27199680
Redfern, Mark S; Chambers, April J; Jennings, J Richard; Furman, Joseph M
2017-08-01
This study investigated the impact of attention on the sensory and motor actions during postural recovery from underfoot perturbations in young and older adults. A dual-task paradigm was used involving disjunctive and choice reaction time (RT) tasks to auditory and visual stimuli at different delays from the onset of two types of platform perturbations (rotations and translations). The RTs were increased prior to the perturbation (preparation phase) and during the immediate recovery response (response initiation) in young and older adults, but this interference dissipated rapidly after the perturbation response was initiated (<220 ms). The sensory modality of the RT task impacted the results with interference being greater for the auditory task compared to the visual task. As motor complexity of the RT task increased (disjunctive versus choice) there was greater interference from the perturbation. Finally, increasing the complexity of the postural perturbation by mixing the rotational and translational perturbations together increased interference for the auditory RT tasks, but did not affect the visual RT responses. These results suggest that sensory and motoric components of postural control are under the influence of different dynamic attentional processes.
Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill
2014-01-01
Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.
Distinct roles of the cortical layers of area V1 in figure-ground segregation.
Self, Matthew W; van Kerkoerle, Timo; Supèr, Hans; Roelfsema, Pieter R
2013-11-04
What roles do the different cortical layers play in visual processing? We recorded simultaneously from all layers of the primary visual cortex while monkeys performed a figure-ground segregation task. This task can be divided into different subprocesses that are thought to engage feedforward, horizontal, and feedback processes at different time points. These different connection types have different patterns of laminar terminations in V1 and can therefore be distinguished with laminar recordings. We found that the visual response started 40 ms after stimulus presentation in layers 4 and 6, which are targets of feedforward connections from the lateral geniculate nucleus and distribute activity to the other layers. Boundary detection started shortly after the visual response. In this phase, boundaries of the figure induced synaptic currents and stronger neuronal responses in upper layer 4 and the superficial layers ~70 ms after stimulus onset, consistent with the hypothesis that they are detected by horizontal connections. In the next phase, ~30 ms later, synaptic inputs arrived in layers 1, 2, and 5 that receive feedback from higher visual areas, which caused the filling in of the representation of the entire figure with enhanced neuronal activity. The present results reveal unique contributions of the different cortical layers to the formation of a visual percept. This new blueprint of laminar processing may generalize to other tasks and to other areas of the cerebral cortex, where the layers are likely to have roles similar to those in area V1. Copyright © 2013 Elsevier Ltd. All rights reserved.
Inhibition of voluntary saccadic eye movement commands by abrupt visual onsets.
Edelman, Jay A; Xu, Kitty Z
2009-03-01
Saccadic eye movements are made both to explore the visual world and to react to sudden sensory events. We studied the ability for humans to execute a voluntary (i.e., nonstimulus-driven) saccade command in the face of a suddenly appearing visual stimulus. Subjects were required to make a saccade to a memorized location when a central fixation point disappeared. At varying times relative to fixation point disappearance a visual distractor appeared at a random location. When the distractor appeared at locations distant from the target virtually no saccades were initiated in a 30- to 40-ms interval beginning 70-80 ms after appearance of the distractor. If the distractor was presented slightly earlier relative to saccade initiation then saccades tended to have smaller amplitudes, with velocity profiles suggesting that the distractor terminated them prematurely. In contrast, distractors appearing close to the saccade target elicited express saccade-like movements 70-100 ms after their appearance, although the saccade endpoint was generally scarcely affected by the distractor. An additional experiment showed that these effects were weaker when the saccade was made to a visible target in a delayed task and still weaker when the saccade itself was made in response to the abrupt appearance of a visual stimulus. A final experiment revealed that the effect is smaller, but quite evident, for very small stimuli. These results suggest that the transient component of a visual response can briefly but almost completely suppress a voluntary saccade command, but only when the stimulus evoking that response is distant from the saccade goal.
A Role for Mouse Primary Visual Cortex in Motion Perception.
Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo
2018-06-04
Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
Temporally evolving gain mechanisms of attention in macaque area V4.
Sani, Ilaria; Santandrea, Elisa; Morrone, Maria Concetta; Chelazzi, Leonardo
2017-08-01
Cognitive attention and perceptual saliency jointly govern our interaction with the environment. Yet, we still lack a universally accepted account of the interplay between attention and luminance contrast, a fundamental dimension of saliency. We measured the attentional modulation of V4 neurons' contrast response functions (CRFs) in awake, behaving macaque monkeys and applied a new approach that emphasizes the temporal dynamics of cell responses. We found that attention modulates CRFs via different gain mechanisms during subsequent epochs of visually driven activity: an early contrast-gain, strongly dependent on prestimulus activity changes (baseline shift); a time-limited stimulus-dependent multiplicative modulation, reaching its maximal expression around 150 ms after stimulus onset; and a late resurgence of contrast-gain modulation. Attention produced comparable time-dependent attentional gain changes on cells heterogeneously coding contrast, supporting the notion that the same circuits mediate attention mechanisms in V4 regardless of the form of contrast selectivity expressed by the given neuron. Surprisingly, attention was also sometimes capable of inducing radical transformations in the shape of CRFs. These findings offer important insights into the mechanisms that underlie contrast coding and attention in primate visual cortex and a new perspective on their interplay, one in which time becomes a fundamental factor. NEW & NOTEWORTHY We offer an innovative perspective on the interplay between attention and luminance contrast in macaque area V4, one in which time becomes a fundamental factor. We place emphasis on the temporal dynamics of attentional effects, pioneering the notion that attention modulates contrast response functions of V4 neurons via the sequential engagement of distinct gain mechanisms. These findings advance understanding of attentional influences on visual processing and help reconcile divergent results in the literature. 
Copyright © 2017 the American Physiological Society.
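The contrast-gain versus multiplicative (response-gain) modulations distinguished above can be illustrated with a standard Naka-Rushton contrast response function; the parameter values below are arbitrary illustrations, not fits from the study:

```python
import numpy as np

def naka_rushton(c, rmax=50.0, c50=0.2, n=2.0, baseline=2.0):
    """Naka-Rushton CRF: firing rate (spikes/s) as a function of contrast c."""
    return baseline + rmax * c**n / (c**n + c50**n)

contrasts = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
unattended = naka_rushton(contrasts)

# Contrast gain: attention scales the effective contrast (leftward shift),
# so its boost is largest near c50 and shrinks near saturation.
contrast_gain = naka_rushton(contrasts * 1.5)

# Response gain: attention multiplies the response itself,
# scaling the whole curve including its saturated plateau.
response_gain = naka_rushton(contrasts) * 1.3
```

The two mechanisms separate most clearly at high contrast, where a contrast-gain boost vanishes but a multiplicative boost persists.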
Barbosa Porcellis da Silva, Rafael; Marques, Alexandre Carriconde; Reichert, Felipe Fossati
2017-05-19
Low level of physical activity is a serious health issue in individuals with visual impairment. Few studies have objectively measured physical activity in this population group, particularly outside high-income countries. The aim of this study was to describe physical activity measured by accelerometry and its associated factors in Brazilian adults with visual impairment. In a cross-sectional design, 90 adults (18-95 years old) answered a questionnaire and wore an accelerometer for at least 3 days (including one weekend day) to measure physical activity (min/day). Sixty percent of the individuals practiced at least 30 min/day of moderate-to-vigorous physical activity. Individuals who were blind were less active, spent more time in sedentary activities and spent less time in moderate and vigorous activities than those with low vision. Individuals who walked mainly without any assistance were more active, spent less time in sedentary activities and spent more time in light and moderate activities than those who walked with a long cane or sighted guide. Our data highlight factors associated with lower levels of physical activity in people with visual impairment. These factors, such as being blind and walking without assistance, should be tackled in interventions to increase physical activity levels among individuals with visual impairment. Implications for rehabilitation: Physical inactivity is a serious health issue worldwide in people with visual impairments, and specialized institutions and public policies must work to increase the physical activity level of this population. Those with lower visual acuity and those walking with an aid are at higher risk of having low levels of physical activity.
The association between visual response profile, living for less than 11 years with visual impairment, and physical activity levels deserves further investigation. Findings of the present study provide reliable data to support rehabilitation programs, with special attention to the subgroups that are even more likely to be inactive.
Fast visual prediction and slow optimization of preferred walking speed.
O'Connor, Shawn M; Donelan, J Maxwell
2012-05-01
People prefer walking speeds that minimize energetic cost. This may be accomplished by directly sensing metabolic rate and adapting gait to minimize it, but only slowly due to the compounded effects of sensing delays and iterative convergence. Visual and other sensory information is available more rapidly and could help predict which gait changes reduce energetic cost, but only approximately because it relies on prior experience and an indirect means to achieve economy. We used virtual reality to manipulate visually presented speed while 10 healthy subjects freely walked on a self-paced treadmill to test whether the nervous system beneficially combines these two mechanisms. Rather than manipulating the speed of visual flow directly, we coupled it to the walking speed selected by the subject and then manipulated the ratio between these two speeds. We then quantified the dynamics of walking speed adjustments in response to perturbations of the visual speed. For step changes in visual speed, subjects responded with rapid speed adjustments (lasting <2 s) in a direction opposite to the perturbation and consistent with returning the visually presented speed toward their preferred walking speed: when visual speed was suddenly twice (one-half) the walking speed, subjects decreased (increased) their speed. Subjects did not maintain the new speed but instead gradually returned toward the speed preferred before the perturbation (lasting >300 s). The timing and direction of these responses strongly indicate that a rapid predictive process informed by visual feedback helps select preferred speed, perhaps to complement a slower optimization process that seeks to minimize energetic cost.
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Seeing tones and hearing rectangles - Attending to simultaneous auditory and visual events
NASA Technical Reports Server (NTRS)
Casper, Patricia A.; Kantowitz, Barry H.
1985-01-01
The allocation of attention in dual-task situations depends on both the overall and the momentary demands associated with both tasks. Subjects in an inclusive-OR reaction-time task responded to changes in simultaneous sequences of discrete auditory and visual stimuli. Performance on individual trials was affected by (1) the ratio of stimuli in the two tasks, (2) response demands of the two tasks, and (3) patterns inherent in the demands of one task.
The correlation dimension: a useful objective measure of the transient visual evoked potential?
Boon, Mei Ying; Henry, Bruce I; Suttle, Catherine M; Dain, Stephen J
2008-01-14
Visual evoked potentials (VEPs) may be analyzed by examination of the morphology of their components, such as negative (N) and positive (P) peaks. However, methods that rely on component identification may be unreliable when dealing with responses of complex and variable morphology; therefore, objective methods are also useful. One potentially useful measure of the VEP is the correlation dimension. Its relevance to the visual system was investigated by examining its behavior when applied to the transient VEP in response to a range of chromatic contrasts (42%, two times psychophysical threshold, at psychophysical threshold) and to the visually unevoked response (zero contrast). Tests of nonlinearity (e.g., surrogate testing) were conducted. The correlation dimension was found to be negatively correlated with a stimulus property (chromatic contrast) and a known linear measure (the Fourier-derived VEP amplitude). It was also found to be related to visibility and perception of the stimulus such that the dimension reached a maximum for most of the participants at psychophysical threshold. The latter suggests that the correlation dimension may be useful as a diagnostic parameter to estimate psychophysical threshold and may find application in the objective screening and monitoring of congenital and acquired color vision deficiencies, with or without associated disease processes.
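The correlation dimension discussed above is conventionally estimated with the Grassberger-Procaccia procedure: delay-embed the signal, count the fraction of vector pairs closer than a radius r, and take the slope of log C(r) against log r. A minimal sketch (the embedding and radius choices are illustrative, not those used in the study):

```python
import numpy as np

def correlation_sum(signal, r, dim=3, delay=1):
    """C(r): fraction of delay-embedded vector pairs closer than r."""
    n = len(signal) - (dim - 1) * delay
    # Takens delay embedding: row i is (x[i], x[i+delay], ..., x[i+(dim-1)*delay])
    vectors = np.array([signal[i:i + dim * delay:delay] for i in range(n)])
    dists = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)  # distinct pairs only
    return np.mean(dists[iu] < r)

def correlation_dimension(signal, radii, dim=3, delay=1):
    """Estimate D2 as the slope of log C(r) versus log r."""
    log_r = np.log(radii)
    log_c = np.log([correlation_sum(signal, r, dim, delay) for r in radii])
    slope, _ = np.polyfit(log_r, log_c, 1)
    return slope
```

As a sanity check, a smooth periodic signal embeds as a closed curve, so its estimated dimension should be close to 1; a noisier, higher-dimensional response yields a larger value.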
Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio
2015-01-01
Objective: Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The scope of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual's capacity to drive safely. Methods: The test is run as an app for Apple iOS and Android mobile operating systems and facilitates four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created for the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered as incompatible with safe driving capabilities. Results: Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. Conclusion: We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the scope of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication; to promote the sense of social responsibility in drivers who are on medication and provide these individuals with a means of testing their own capacity to drive safely. PMID:25709406
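The decile-based pass/fail rule described above can be sketched in a few lines. The interpretation that "first three deciles" means the slowest 30% of reaction times, and the 100-199 ms reference values, are my assumptions for illustration, not details from the paper:

```python
import numpy as np

def decile_cutoffs(reference_times):
    """10th..90th percentile cutoffs from a normative reaction-time sample."""
    return np.percentile(reference_times, np.arange(10, 100, 10))

def assign_decile(rt, cutoffs):
    """Decile 1 = fastest 10% ... decile 10 = slowest 10% (higher RT = worse)."""
    return int(np.searchsorted(cutoffs, rt)) + 1

def passes_screen(rt, reference_times):
    """Assumed rule: fail if the reaction time lands in the three slowest
    deciles (8-10) of the reference distribution."""
    return assign_decile(rt, decile_cutoffs(reference_times)) <= 7

# Hypothetical normative sample of reaction times in milliseconds
reference = np.arange(100, 200)
```

In practice each of the four reaction-time measures would be screened against its own reference deciles.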
Perceived Synchrony of Frog Multimodal Signal Components Is Influenced by Content and Order.
Taylor, Ryan C; Page, Rachel A; Klein, Barrett A; Ryan, Michael J; Hunter, Kimberly L
2017-10-01
Multimodal signaling is common in communication systems. Depending on the species, individual signal components may be produced synchronously as a result of physiological constraint (fixed) or each component may be produced independently (fluid) in time. For animals that rely on fixed signals, a basic prediction is that asynchrony between the components should degrade the perception of signal salience, reducing receiver response. Male túngara frogs, Physalaemus pustulosus, produce a fixed multisensory courtship signal by vocalizing with two call components (whines and chucks) and inflating a vocal sac (visual component). Using a robotic frog, we tested female responses to variation in the temporal arrangement between acoustic and visual components. When the visual component lagged a complex call (whine + chuck), females largely rejected this asynchronous multisensory signal in favor of the complex call absent the visual cue. When the chuck component was removed from one call, but the robofrog inflation lagged the complex call, females responded strongly to the asynchronous multimodal signal. When the chuck component was removed from both calls, females reversed preference and responded positively to the asynchronous multisensory signal. When the visual component preceded the call, females responded as often to the multimodal signal as to the call alone. These data show that asynchrony of a normally fixed signal does reduce receiver responsiveness. The magnitude and overall response, however, depend on specific temporal interactions between the acoustic and visual components. The sensitivity of túngara frogs to lagging visual cues, but not leading ones, and the influence of acoustic signal content on the perception of visual asynchrony are similar to effects reported in the human psychophysics literature. Virtually all acoustically communicating animals must conduct auditory scene analyses and identify the source of signals. 
Our data suggest that some basic audiovisual neural integration processes may be at work in the vertebrate brain. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology 2017. This work is written by US Government employees and is in the public domain in the US.
Public health nurse perceptions of Omaha System data visualization.
Lee, Seonah; Kim, Era; Monsen, Karen A
2015-10-01
Electronic health records (EHRs) provide many benefits related to the storage, deployment, and retrieval of large amounts of patient data. However, EHRs have not fully met the need to reuse data for decision making on follow-up care plans. Visualization offers new ways to present health data, especially in EHRs. Well-designed data visualization allows clinicians to communicate information efficiently and effectively, contributing to improved interpretation of clinical data and better patient care monitoring and decision making. Public health nurse (PHN) perceptions of Omaha System data visualization prototypes for use in EHRs have not been evaluated. Our objective was to visualize PHN-generated Omaha System data and to assess PHN perceptions regarding the visual validity, helpfulness, usefulness, and importance of the visualizations, including interactive functionality. Time-oriented visualization for problems and outcomes and Matrix visualization for problems and interventions were developed using PHN-generated Omaha System data to help PHNs consume data and plan care at the point of care. Eleven PHNs evaluated the prototype visualizations. Overall, PHN responses to the visualizations were positive, and feedback for improvement was provided. This study demonstrated the potential for using visualization techniques within EHRs to summarize Omaha System patient data for clinicians. Further research is needed to improve and refine these visualizations and assess the potential to incorporate visualizations within clinical EHRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Sewell, David K; Lilburn, Simon D; Smith, Philip L
2016-11-01
A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can occur. The need to orient the focus of attention implies that single-object accounts typically predict response time costs associated with object selection even when working memory is not full (i.e., memory load is less than 4 items). For other theories that assume storage of multiple items in the focus of attention, predictions depend on specific assumptions about the way resources are allocated among items held in the focus, and how this affects the time course of retrieval of items from the focus. These broad theoretical accounts have been difficult to distinguish because conventional analyses fail to separate components of empirical response times related to decision-making from components related to selection and retrieval processes associated with accessing information in working memory. To better distinguish these response time components from one another, we analyze data from a probed visual working memory task using extensions of the diffusion decision model. Analysis of model parameters revealed that increases in memory load resulted in (a) reductions in the quality of the underlying stimulus representations in a manner consistent with a sample size model of visual working memory capacity and (b) systematic increases in the time needed to selectively access a probed representation in memory. The results are consistent with single-object theories of the focus of attention. The results are also consistent with a subset of theories that assume a multiobject focus of attention in which resource allocation diminishes both the quality and accessibility of the underlying representations. 
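The Sewell et al. record above describes extensions of the diffusion decision model that separate decision components of response time from selection and retrieval components. As a rough illustrative sketch (not the authors' fitted model; the parameter values are assumptions), a basic diffusion trial accumulates noisy evidence to a boundary, with memory load mapped onto a lower drift rate (weaker representation) and a longer non-decision time (slower selective access):

```python
import random

def simulate_ddm_trial(drift, boundary, ter, dt=0.001, noise=1.0, rng=random):
    """One diffusion-decision trial: noisy evidence accumulates from 0
    until it hits +boundary or -boundary; the reported response time is
    decision time plus non-decision time `ter` (encoding, memory access,
    and motor execution)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t + ter, 1 if x > 0 else 0

# Hypothetical load manipulation: higher memory load = lower drift
# (weaker representation) and longer ter (slower selective access).
rng = random.Random(0)

def mean_rt(drift, ter, n=300):
    return sum(simulate_ddm_trial(drift, 1.0, ter, rng=rng)[0]
               for _ in range(n)) / n

rt_low_load = mean_rt(drift=2.0, ter=0.30)
rt_high_load = mean_rt(drift=1.0, ter=0.38)
```

In this toy setup the high-load condition yields slower mean response times through both routes at once, which is exactly why model-based decomposition, rather than raw RT comparison, is needed to attribute the cost.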
Buchan, Julie N; Munhall, Kevin G
2012-01-01
Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participant's gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth and more time was spent looking at the eyes, when a concurrent cognitive load task was added to the speech task.
Learning rational temporal eye movement strategies.
Hoppe, David; Rothkopf, Constantin A
2016-07-19
During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
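The optimal Bayesian learner invoked above is constrained by the scalar law of biological timing: timing noise whose standard deviation grows linearly with the interval. A minimal grid-based sketch of such an estimator (the Weber fraction, grid, and observations are illustrative assumptions, not values from the study):

```python
import math

def posterior_over_event_time(observations, candidates, weber=0.15):
    """Grid Bayesian posterior over the true event time, assuming each
    noisy timing observation carries Gaussian noise whose SD grows
    linearly with the interval (the scalar law: sd = weber * t)."""
    log_post = []
    for t in candidates:
        sd = weber * t
        ll = sum(-0.5 * ((o - t) / sd) ** 2 - math.log(sd)
                 for o in observations)
        log_post.append(ll)
    m = max(log_post)
    w = [math.exp(p - m) for p in log_post]
    z = sum(w)
    return [x / z for x in w]

# Three noisy observations of an event occurring near 1 s; flat prior.
grid = [0.5 + 0.01 * i for i in range(101)]
post = posterior_over_event_time([0.95, 1.05, 1.00], grid)
map_time = grid[post.index(max(post))]
```

Because the likelihood's width scales with the hypothesized interval, longer intervals are estimated with proportionally larger uncertainty, reproducing the constant Weber fraction characteristic of biological timing.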
Bressler, David W.; Fortenbaugh, Francesca C.; Robertson, Lynn C.; Silver, Michael A.
2013-01-01
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. PMID:23562388
Dormal, Giulia; Lepore, Franco; Harissi-Dagher, Mona; Albouy, Geneviève; Bertone, Armando; Rossion, Bruno
2014-01-01
Visual deprivation leads to massive reorganization in both the structure and function of the occipital cortex, raising crucial challenges for sight restoration. We tracked the behavioral, structural, and neurofunctional changes occurring in an early and severely visually impaired patient before and 1.5 and 7 mo after sight restoration with magnetic resonance imaging. Robust presurgical auditory responses were found in occipital cortex despite residual preoperative vision. In primary visual cortex, crossmodal auditory responses overlapped with visual responses and remained elevated even 7 mo after surgery. However, these crossmodal responses decreased in extrastriate occipital regions after surgery, together with improved behavioral vision and with increases in both gray matter density and neural activation in low-level visual regions. Selective responses in high-level visual regions involved in motion and face processing were observable even before surgery and did not evolve after surgery. Taken together, these findings demonstrate that structural and functional reorganization of occipital regions are present in an individual with a long-standing history of severe visual impairment and that such reorganizations can be partially reversed by visual restoration in adulthood. PMID:25520432
Responses to single photons in visual cells of Limulus
Borsellino, A.; Fuortes, M. G. F.
1968-01-01
1. A system proposed in a previous article as a model of responses of visual cells has been analysed with the purpose of predicting the features of responses to single absorbed photons. 2. As a result of this analysis, the stochastic variability of responses has been expressed as a function of the amplification of the system. 3. The theoretical predictions have been compared to the results obtained by recording electrical responses of visual cells of Limulus to flashes delivering only a few photons. 4. Experimental responses to single photons have been tentatively identified and it was shown that the stochastic variability of these responses is similar to that predicted for a model with a multiplication factor of at least twenty-five. 5. These results lead to the conclusion that the processes responsible for visual responses incorporate some form of amplification. This conclusion may prove useful for identifying the physical mechanisms underlying the transducer action of visual cells. PMID:5664231
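The Borsellino and Fuortes argument turns on how trial-to-trial variability of single-photon responses constrains the amplification of the transducer cascade. A toy branching-process sketch (per-stage Poisson multiplication is an assumption for illustration, not the authors' exact model) shows the key relationship: the higher the multiplication factor, the lower the coefficient of variation of the output:

```python
import math
import random

def cascade_output(gain_per_stage, n_stages, rng):
    """One absorbed photon enters an n-stage amplification cascade; at
    each stage every particle independently produces a Poisson number of
    successors (mean = gain_per_stage). Returns the final particle count."""
    def poisson(lam):
        # Knuth's algorithm; adequate for small means
        L = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1
    n = 1
    for _ in range(n_stages):
        n = sum(poisson(gain_per_stage) for _ in range(n))
        if n == 0:
            return 0
    return n

def response_cv(gain, stages, trials=400, seed=42):
    """Coefficient of variation of the cascade output across trials."""
    rng = random.Random(seed)
    xs = [cascade_output(gain, stages, rng) for _ in range(trials)]
    mu = sum(xs) / trials
    var = sum((x - mu) ** 2 for x in xs) / trials
    return var ** 0.5 / mu
```

Comparing `response_cv(2.0, 4)` with `response_cv(5.0, 4)` shows the variability shrinking as per-stage gain grows, which is the logic behind inferring a multiplication factor of at least twenty-five from the observed response variability.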
Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement
Hu, Bo; Knill, David C.
2012-01-01
Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer’s retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size, and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and the inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and the lack of response in the monocular conditions. PMID:21724567
Donohue, Sarah E.; Liotti, Mario; Perez, Rick; Woldorff, Marty G.
2011-01-01
The electrophysiological correlates of conflict processing and cognitive control have been well characterized for the visual modality in paradigms such as the Stroop task. Much less is known about corresponding processes in the auditory modality. Here, electroencephalographic recordings of brain activity were measured during an auditory Stroop task, using three different forms of behavioral response (Overt verbal, Covert verbal, and Manual), that closely paralleled our previous visual-Stroop study. As expected, behavioral responses were slower and less accurate for incongruent compared to congruent trials. Neurally, incongruent trials showed an enhanced fronto-central negative-polarity wave (Ninc), similar to the N450 in visual-Stroop tasks, with similar variations as a function of behavioral response mode, but peaking ~150 ms earlier, followed by an enhanced positive posterior wave. In addition, sequential behavioral and neural effects were observed that supported the conflict-monitoring and cognitive-adjustment hypothesis. Thus, while some aspects of the conflict detection processes, such as timing, may be modality-dependent, the general mechanisms would appear to be supramodal. PMID:21964643
Are visual cue masking and removal techniques equivalent for studying perceptual skills in sport?
Mecheri, Sami; Gillet, Eric; Thouvarecq, Regis; Leroy, David
2011-01-01
The spatial-occlusion paradigm makes use of two techniques (masking and removing visual cues) to provide information about the anticipatory cues used by viewers. The visual scene resulting from the removal technique appears incongruous, yet the two techniques are increasingly assumed to be equivalent. The present study was designed to address this issue by combining eye-movement recording with the two types of occlusion (removal versus masking) in a tennis serve-return task. Response accuracy and decision onsets were analysed. The results indicated that subjects had longer reaction times under the removal condition, with an identical proportion of correct responses. The removal technique also caused the subjects to rely on atypical search patterns. Our findings suggest that, when the removal technique was used, viewers were unable to systematically draw on stored memories to help them accomplish the interception task. The persistent failure to question some of the assumptions about the removal technique in applied visual research is highlighted, and suggestions for continued use of the masking technique are advanced.
Using video playbacks to study visual communication in a marine fish, Salaria pavo.
Gonçalves; Oliveira; Körner; Poschadel; Schlupp
2000-09-01
Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poeciliid Poecilia formosa showed an equal preference for a live and a video image of a P. mexicana male, suggesting a response to video images as strong as to live animals. We discuss differences between the species that may explain their opposite reactions to video images. Copyright 2000 The Association for the Study of Animal Behaviour.
Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man
Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M.; Van Opstal, A. J.
2017-01-01
The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain. PMID:29238295
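Finding (2) above, that audio-visual facilitation peaks when the unimodal reactions would co-occur, is the signature of statistical facilitation in race-type accounts of redundant-target performance. As a hedged sketch (independent Gaussian unimodal reaction times and a simple race are assumptions; the abstract does not commit to this model), the multisensory response can be modeled as whichever unimodal process finishes first:

```python
import random

def expected_min_rt(mu_a, mu_v, sigma=0.03, soa=0.0, n=20000, seed=1):
    """Race sketch: each trial is won by whichever unimodal process
    finishes first; `soa` shifts the visual onset relative to the
    auditory one. Returns the mean of the winning times (seconds)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        rt_a = rng.gauss(mu_a, sigma)
        rt_v = soa + rng.gauss(mu_v, sigma)
        total += min(rt_a, rt_v)
    return total / n

# Facilitation = fastest unimodal mean minus the multisensory mean.
# It peaks when the SOA aligns the two finishing-time distributions
# (response synchrony rather than physical synchrony).
fac_aligned = min(0.35, 0.25 + 0.10) - expected_min_rt(0.35, 0.25, soa=0.10)
fac_zero = min(0.35, 0.25) - expected_min_rt(0.35, 0.25, soa=0.0)
```

With the assumed means, an SOA of 100 ms aligns the two finishing times and produces clearly larger facilitation than physically synchronous onsets, mirroring the response-synchrony result.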
Neural Correlates of Expert Visuomotor Performance in Badminton Players.
Hülsdünker, Thorben; Strüder, Heiko K; Mierau, Andreas
2016-11-01
Elite/skilled athletes participating in sports that require the initiation of targeted movements in response to visual cues under critical time pressure typically outperform nonathletes in a visuomotor reaction task. However, the exact physiological mechanisms of this advantage remain unclear. Therefore, this study aimed to determine the neurophysiological processes contributing to superior visuomotor performance in athletes using visual evoked potentials (VEPs). Central and peripheral determinants of visuomotor reaction time were investigated in 15 skilled badminton players and 28 age-matched nonathletic controls. To determine the speed of visual signal perception in the cortex, chromatic and achromatic pattern reversal stimuli were presented, and VEPs were recorded with a 64-channel EEG system. Further, a simple visuomotor reaction task was performed to investigate the transformation of the visual into a motor signal in the brain as well as the timing of muscular activation. VEP amplitudes and latencies (N75, P100, and N145) did not differ significantly between athletes and nonathletes. However, visuomotor reaction time was significantly reduced in the athletes compared with nonathletes (athletes = 234.9 ms, nonathletes = 260.3 ms, P = 0.015). This was accompanied by an earlier activation of the premotor and supplementary motor areas (athletes = 163.9 ms, nonathletes = 199.1 ms, P = 0.015) as well as an earlier EMG onset (athletes = 167.5 ms, nonathletes = 206.5 ms, P < 0.001). The latency of premotor and supplementary motor area activation was correlated with EMG onset (r = 0.41) and visuomotor reaction time (r = 0.43). The results of this study indicate that superior visuomotor performance in athletes originates from faster visuomotor transformation in the premotor and supplementary motor cortical regions rather than from earlier perception of visual signals in the visual cortex.
Zeitoun, Jack H.; Kim, Hyungtae
2017-01-01
Binocular mechanisms for visual processing are thought to enhance spatial acuity by combining matched input from the two eyes. Studies in the primary visual cortex of carnivores and primates have confirmed that eye-specific neuronal response properties are largely matched. In recent years, the mouse has emerged as a prominent model for binocular visual processing, yet little is known about the spatial frequency tuning of binocular responses in mouse visual cortex. Using calcium imaging in awake mice of both sexes, we show that the spatial frequency preference of cortical responses to the contralateral eye is ∼35% higher than responses to the ipsilateral eye. Furthermore, we find that neurons in binocular visual cortex that respond only to the contralateral eye are tuned to higher spatial frequencies. Binocular neurons that are well matched in spatial frequency preference are also matched in orientation preference. In contrast, we observe that binocularly mismatched cells are more mismatched in orientation tuning. Furthermore, we find that contralateral responses are more direction-selective than ipsilateral responses and are strongly biased to the cardinal directions. The contralateral bias of high spatial frequency tuning was found in both awake and anesthetized recordings. The distinct properties of contralateral cortical responses may reflect the functional segregation of direction-selective, high spatial frequency-preferring neurons in earlier stages of the central visual pathway. Moreover, these results suggest that the development of binocularity and visual acuity may engage distinct circuits in the mouse visual system. SIGNIFICANCE STATEMENT Seeing through two eyes is thought to improve visual acuity by enhancing sensitivity to fine edges. Using calcium imaging of cellular responses in awake mice, we find surprising asymmetries in the spatial processing of eye-specific visual input in binocular primary visual cortex. 
The contralateral visual pathway is tuned to higher spatial frequencies than the ipsilateral pathway. At the highest spatial frequencies, the contralateral pathway strongly prefers to respond to visual stimuli along the cardinal (horizontal and vertical) axes. These results suggest that monocular, and not binocular, mechanisms set the limit of spatial acuity in mice. Furthermore, they suggest that the development of visual acuity and binocularity in mice involves different circuits. PMID:28924011
Chiszar, David; Krauss, Susan; Shipley, Bryon; Trout, Tim; Smith, Hobart M
2009-01-01
Five hatchling Komodo Dragons (Varanus komodoensis) at Denver Zoo were observed in two experiments that studied the effects of visual and chemical cues arising from prey. Rate of tongue flicking was recorded in Experiment 1, and amount of time the lizards spent interacting with stimuli was recorded in Experiment 2. Our hypothesis was that young V. komodoensis would be more dependent upon vision than chemoreception, especially when dealing with live, moving, prey. Although visual cues, including prey motion, had a significant effect, chemical cues had a far stronger effect. Implications of this falsification of our initial hypothesis are discussed.
Likova, Lora T.
2012-01-01
In a memory-guided drawing task under blindfolded conditions, we have recently used functional Magnetic Resonance Imaging (fMRI) to demonstrate that the primary visual cortex (V1) may operate as the visuo-spatial buffer, or “sketchpad,” for working memory. The results implied, however, a modality-independent or amodal form of its operation. In the present study, to validate the role of V1 in non-visual memory, we eliminated not only the visual input but all levels of visual processing by replicating the paradigm in a congenitally blind individual. Our novel Cognitive-Kinesthetic method was used to train this totally blind subject to draw complex images guided solely by tactile memory. Control tasks of tactile exploration and memorization of the image to be drawn, and memory-free scribbling were also included. FMRI was run before training and after training. Remarkably, V1 of this congenitally blind individual, which before training exhibited noisy, immature, and non-specific responses, after training produced full-fledged response time-courses specific to the tactile-memory drawing task. The results reveal the operation of a rapid training-based plasticity mechanism that recruits the resources of V1 in the process of learning to draw. The learning paradigm allowed us to investigate for the first time the evolution of plastic re-assignment in V1 in a congenitally blind subject. These findings are consistent with a non-visual memory involvement of V1, and specifically imply that the observed cortical reorganization can be empowered by the process of learning to draw. PMID:22593738
Perception-action coupling and anticipatory performance in baseball batting.
Ranganathan, Rajiv; Carlton, Les G
2007-09-01
The authors examined 10 expert and 10 novice baseball batters' ability to distinguish between a fastball and a change-up in a virtual environment. They used 2 different response modes: (a) an uncoupled response in which the batters verbally predicted the type of pitch and (b) a coupled response in which the batters swung a baseball bat to try to hit the virtual ball. The authors manipulated visual information from the pitcher and ball in 6 visual conditions. The batters were more accurate in predicting the type of pitch when the response was uncoupled. In coupled responses, experts were better able to use the first 100 ms of ball flight independently of the pitcher's kinematics. In addition, the skilled batters' stepping patterns were related to the pitcher's kinematics, whereas their swing time was related to ball speed. Those findings suggest that specific task requirements determine whether a highly coupled perception-action environment improves anticipatory performance. The authors also highlight the need for research on interceptive actions to be conducted in the performer's natural environment.
Dynamics of normalization underlying masking in human visual cortex.
Tsai, Jeffrey J; Wade, Alex R; Norcia, Anthony M
2012-02-22
Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady state visual evoked potentials and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of ∼30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This contrast-contrast invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
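The divisive gain control framework the authors fit has a standard static form: the response to the test is the test drive divided by a gain pool that includes the mask. A minimal static sketch (the exponent and the omission of temporal dynamics are simplifications; the models fit in the study are dynamic):

```python
def normalized_response(c_test, c_mask, n=2.0, sigma=0.0):
    """Static divisive normalization: test drive divided by a gain pool
    containing both stimuli (sigma = semisaturation constant)."""
    return c_test ** n / (c_test ** n + c_mask ** n + sigma ** n)

# With a negligible semisaturation constant, the response depends only
# on the ratio of test to mask contrast -- the contrast-contrast
# invariance described above.
r_low = normalized_response(0.1, 0.2)   # contrasts 10% and 20%
r_high = normalized_response(0.4, 0.8)  # same 1:2 ratio, 4x the contrast
```

The two calls return identical responses despite a fourfold difference in absolute contrast, which is the sense in which such neurons code relative rather than absolute contrast.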
Modulation of early cortical processing during divided attention to non-contiguous locations.
Frey, Hans-Peter; Schmid, Anita M; Murphy, Jeremy W; Molholm, Sophie; Lalor, Edmund C; Foxe, John J
2014-05-01
We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. Whereas, for several years, the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed by the use of high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classic pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced, and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing time-frames in hierarchically early visual regions, and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Local Diversity and Fine-Scale Organization of Receptive Fields in Mouse Visual Cortex
Histed, Mark H.; Yurgenson, Sergey
2011-01-01
Many thousands of cortical neurons are activated by any single sensory stimulus, but the organization of these populations is poorly understood. For example, are neurons in mouse visual cortex—whose preferred orientations are arranged randomly—organized with respect to other response properties? Using high-speed in vivo two-photon calcium imaging, we characterized the receptive fields of up to 100 excitatory and inhibitory neurons in a 200 μm imaged plane. Inhibitory neurons had nonlinearly summating, complex-like receptive fields and were weakly tuned for orientation. Excitatory neurons had linear, simple receptive fields that can be studied with noise stimuli and system identification methods. We developed a wavelet stimulus that evoked rich population responses and yielded the detailed spatial receptive fields of most excitatory neurons in a plane. Receptive fields and visual responses were locally highly diverse, with nearby neurons having largely dissimilar receptive fields and response time courses. Receptive-field diversity was consistent with a nearly random sampling of orientation, spatial phase, and retinotopic position. Retinotopic positions varied locally on average by approximately half the receptive-field size. Nonetheless, the retinotopic progression across the cortex could be demonstrated at the scale of 100 μm, with a magnification of ∼10 μm/°. Receptive-field and response similarity were in register, decreasing by 50% over a distance of 200 μm. Together, the results indicate considerable randomness in local populations of mouse visual cortical neurons, with retinotopy as the principal source of organization at the scale of hundreds of micrometers. PMID:22171051
Visualization of protein interaction networks: problems and solutions
2013-01-01
Background Visualization concerns the representation of data visually and is an important task in scientific research. Protein-protein interactions (PPI) are discovered using either wet lab techniques, such as mass spectrometry, or in silico prediction tools, resulting in large collections of interactions stored in specialized databases. The set of all interactions of an organism forms a protein-protein interaction network (PIN) and is an important tool for studying the behaviour of the cell machinery. Since graphic representation of PINs may highlight important substructures, e.g. protein complexes, visualization is increasingly used to study the underlying graph structure of PINs. Although graphs are well known data structures, several open problems remain in PIN visualization: the high number of nodes and connections, the heterogeneity of nodes (proteins) and edges (interactions), and the possibility to annotate proteins and interactions with biological information extracted from ontologies (e.g. Gene Ontology), which enriches the PINs with semantic information but complicates their visualization. Methods In recent years many software tools for the visualization of PINs have been developed. Initially intended for visualization only, some of them have been subsequently enriched with new functions for PPI data management and PIN analysis. The paper analyzes the main software tools for PIN visualization considering four main criteria: (i) technology, i.e. availability/license of the software and supported OS (Operating System) platforms; (ii) interoperability, i.e. ability to import/export networks in various formats, ability to export data in a graphic format, extensibility of the system, e.g. through plug-ins; (iii) visualization, i.e. supported layout and rendering algorithms and availability of parallel implementation; (iv) analysis, i.e.
availability of network analysis functions, such as clustering or mining of the graph, and the possibility to interact with external databases. Results Currently, many tools are available and it is not easy for users to choose among them. Some tools offer sophisticated 2D and 3D network visualization with many layout algorithms; other tools are more data-oriented and support integration of interaction data from different sources along with data annotation. Finally, some specialized tools are dedicated to the analysis of pathways and cellular processes and are oriented toward systems biology studies, where the dynamic aspects of the processes being studied are central. Conclusion A current trend is the deployment of open, extensible visualization tools (e.g. Cytoscape) that may be incrementally enriched by the interactomics community with novel and more powerful functions for PIN analysis, through the development of plug-ins. Another emerging trend concerns efficient, parallel implementation of the visualization engine, which may provide high interactivity and near-real-time response times, as in NAViGaTOR. From a technological point of view, open-source, free, and extensible tools like Cytoscape guarantee long-term sustainability thanks to the size of their developer and user communities, and provide great flexibility since new functions are continuously added through plug-ins; on the other hand, emerging parallel, often closed-source tools like NAViGaTOR can offer near-real-time response times even in the analysis of very large PINs. PMID:23368786
Using Automated Network Detection & Response to Visualize Malicious IT
Meng, Jing; Li, Zuoshan; Shen, Lin
2017-01-01
Individuals with autism-spectrum disorder (ASD) exhibit impairments in response to others’ pain. Evidence suggests that features of autism are not restricted to individuals with ASD, and that autistic traits vary throughout the general population. To investigate the association between autistic traits and the responses to others’ pain in typically developing adults, we employed the Autism-Spectrum Quotient (AQ) to quantify autistic traits in a group of 1670 healthy adults and explored whether 60 participants (30 males and 30 females) with 10% highest AQ scores (High-AQ) would exhibit difficulties in the responses to others’ pain relative to 60 participants (30 males and 30 females) with 10% lowest AQ scores (Low-AQ). This study included a Visual Task and an Auditory Task to test behavioral differences between High-AQ and Low-AQ groups’ responses to others’ pain in both modalities. For the Visual Task, participants were instructed to respond to pictures depicting others’ pain. They were instructed to judge the stimuli type (painful or not), judge others’ pain intensity, and indicate the unpleasantness they personally felt. For the Auditory Task, experimental procedures were identical to the Visual Task except that painful voices were added. Results showed the High-AQ group was less accurate than the Low-AQ group in judging others’ pain. Moreover, relative to Low-AQ males, High-AQ males had significantly longer reaction times in judging others’ pain in the Auditory Task. However, High-AQ and Low-AQ females showed similar reaction times in both tasks. These findings demonstrated identification of others’ pain by healthy adults is related to the extent of autistic traits, gender, and modality. PMID:28319204
Louw, Tyron; Markkula, Gustav; Boer, Erwin; Madigan, Ruth; Carsten, Oliver; Merat, Natasha
2017-11-01
This driving simulator study, conducted as part of the EU AdaptIVe project, investigated drivers' performance in critical traffic events, during the resumption of control from an automated driving system. Prior to the critical events, using a between-participant design, 75 drivers were exposed to various screen manipulations that varied the amount of available visual information from the road environment and automation state, which aimed to take them progressively further 'out-of-the-loop' (OoTL). The current paper presents an analysis of the timing, type, and rate of drivers' collision avoidance response, also investigating how these were influenced by the criticality of the unfolding situation. Results showed that the amount of visual information available to drivers during automation impacted on how quickly they resumed manual control, with less information associated with slower take-over times, however, this did not influence the timing of when drivers began a collision avoidance manoeuvre. Instead, the observed behaviour is in line with recent accounts emphasising the role of scenario kinematics in the timing of driver avoidance response. When considering collision incidents in particular, avoidance manoeuvres were initiated when the situation criticality exceeded an Inverse Time To Collision value of ≈0.3 s⁻¹. Our results suggest that take-over time and timing and quality of avoidance response appear to be largely independent, and while long take-over time did not predict collision outcome, kinematically late initiation of avoidance did. Hence, system design should focus on achieving kinematically early avoidance initiation, rather than short take-over times. Copyright © 2017 Elsevier Ltd. All rights reserved.
Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.
Murai, Yuki; Yotsumoto, Yuko
2016-01-01
When individuals are asked to reproduce intervals of stimuli that are intermixedly presented at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
Mechanisms of migraine aura revealed by functional MRI in human visual cortex
Hadjikhani, Nouchine; Sanchez del Rio, Margarita; Wu, Ona; Schwartz, Denis; Bakker, Dick; Fischl, Bruce; Kwong, Kenneth K.; Cutrer, F. Michael; Rosen, Bruce R.; Tootell, Roger B. H.; Sorensen, A. Gregory; Moskowitz, Michael A.
2001-01-01
Cortical spreading depression (CSD) has been suggested to underlie migraine visual aura. However, it has been challenging to test this hypothesis in human cerebral cortex. Using high-field functional MRI with near-continuous recording during visual aura in three subjects, we observed blood oxygenation level-dependent (BOLD) signal changes that demonstrated at least eight characteristics of CSD, time-locked to percept/onset of the aura. Initially, a focal increase in BOLD signal (possibly reflecting vasodilation), developed within extrastriate cortex (area V3A). This BOLD change progressed contiguously and slowly (3.5 ± 1.1 mm/min) over occipital cortex, congruent with the retinotopy of the visual percept. Following the same retinotopic progression, the BOLD signal then diminished (possibly reflecting vasoconstriction after the initial vasodilation), as did the BOLD response to visual activation. During periods with no visual stimulation, but while the subject was experiencing scintillations, BOLD signal followed the retinotopic progression of the visual percept. These data strongly suggest that an electrophysiological event such as CSD generates the aura in human visual cortex. PMID:11287655
Sensory determinants of the autonomous sensory meridian response (ASMR): understanding the triggers.
Barratt, Emma L; Spence, Charles; Davis, Nick J
2017-01-01
The autonomous sensory meridian response (ASMR) is an atypical sensory phenomenon involving electrostatic-like tingling sensations in response to certain sensory, primarily audio-visual, stimuli. The current study used an online questionnaire, completed by 130 people who self-reported experiencing ASMR. We aimed to extend preliminary investigations into the experience, and establish key multisensory factors contributing to the successful induction of ASMR through online media. Aspects such as timing and trigger load, atmosphere, and characteristics of ASMR content, ideal spatial distance from various types of stimuli, visual characteristics, context and use of ASMR triggers, and audio preferences are explored. Lower-pitched, complex sounds were found to be especially effective triggers, as were slow-paced, detail-focused videos. Conversely, background music inhibited the sensation for many respondents. These results will help in designing media for ASMR induction.
Voluntary control of arm movement in athetotic patients
Neilson, Peter D.
1974-01-01
Visual tracking tests have been employed to provide a quantitative description of voluntary control of arm movement in a group of patients suffering from athetoid cerebral palsy. Voluntary control was impaired in all patients in a characteristic manner. Maximum velocity and acceleration of arm movement were reduced to about 30-50% of their values in normal subjects and the time lag of the response to a visual stimulus was two or three times greater than in normals. Tracking transmission characteristics indicated a degree of underdamping which was not present in normal or spastic patients. This underdamping could be responsible for a low frequency (0.3-0.6 Hz) transient oscillation in elbow-angle movements associated with sudden voluntary movement. The maximum frequency at which patients could produce a coherent tracking response was only 50% of that in normal subjects and the relationship between the electromyogram and muscle contraction indicated that the mechanical load on the biceps muscle was abnormal, possibly due to increased stiffness of joint movement caused by involuntary activity in agonist and antagonist muscles acting across the joint. PMID:4362243
Writing in the Air: Contributions of Finger Movement to Cognitive Processing
Itaguchi, Yoshihiro; Yamada, Chiharu; Fukuzawa, Kazuyoshi
2015-01-01
The present study investigated the interactions between motor action and cognitive processing with particular reference to kanji-culture individuals. Kanji-culture individuals often move their finger as if they are writing when they are solving cognitive tasks, for example, when they try to recall the spelling of English words. This behavior is called kusho, meaning air-writing in Japanese. However, its functional role is still unknown. To reveal the role of kusho behavior in cognitive processing, we conducted a series of experiments, employing two different cognitive tasks, a construction task and a stroke count task. To distinguish the effects of the kinetic aspects of kusho behavior, we set three hand conditions in the tasks; participants were instructed to use either kusho, unrelated finger movements or do nothing during the response time. To isolate possible visual effects, two visual conditions in which participants saw their hand and the other in which they did not, were introduced. We used the number of correct responses and response time as measures of the task performance. The results showed that kusho behavior has different functional roles in the two types of cognitive tasks. In the construction task, the visual feedback from finger movement facilitated identifying a character, whereas the kinetic feedback or motor commands for the behavior did not help to solve the task. In the stroke count task, by contrast, the kinetic aspects of the finger movements influenced counting performance depending on the type of the finger movement. Regardless of the visual condition, kusho behavior improved task performance and unrelated finger movements degraded it. These results indicated that motor behavior contributes to cognitive processes. We discussed possible mechanisms of the modality dependent contribution. These findings might lead to better understanding of the complex interaction between action and cognition in daily life. PMID:26061273
Sex Differences in Response to Visual Sexual Stimuli: A Review
Rupp, Heather A.; Wallen, Kim
2009-01-01
This article reviews what is currently known about how men and women respond to the presentation of visual sexual stimuli. While the assumption that men respond more to visual sexual stimuli is generally empirically supported, previous reports of sex differences are confounded by the variable content of the stimuli presented and measurement techniques. We propose that the cognitive processing stage of responding to sexual stimuli is the first stage in which sex differences occur. The divergence between men and women is proposed to occur at this time, reflected in differences in neural activation, and contribute to previously reported sex differences in downstream peripheral physiological responses and subjective reports of sexual arousal. Additionally, this review discusses factors that may contribute to the variability in sex differences observed in response to visual sexual stimuli. Factors include participant variables, such as hormonal state and socialized sexual attitudes, as well as variables specific to the content presented in the stimuli. Based on the literature reviewed, we conclude that content characteristics may differentially produce higher levels of sexual arousal in men and women. Specifically, men appear more influenced by the sex of the actors depicted in the stimuli while women’s response may differ with the context presented. Sexual motivation, perceived gender role expectations, and sexual attitudes are possible influences. These differences are of practical importance to future research on sexual arousal that aims to use experimental stimuli comparably appealing to men and women and also for general understanding of cognitive sex differences. PMID:17668311
Brain-Stimulation Induced Blindsight: Unconscious Vision or Response Bias?
Lloyd, David A.; Abrahamyan, Arman; Harris, Justin A.
2013-01-01
A dissociation between visual awareness and visual discrimination is referred to as “blindsight”. Blindsight results from loss of function of the primary visual cortex (V1) which can occur due to cerebrovascular accidents (i.e. stroke-related lesions). There are also numerous reports of similar, though reversible, effects on vision induced by transcranial Magnetic Stimulation (TMS) to early visual cortex. These effects point to V1 as the “gate” of visual awareness and have strong implications for understanding the neurological underpinnings of consciousness. It has been argued that evidence for the dissociation between awareness of, and responses to, visual stimuli can be a measurement artifact of the use of a high response criterion under yes-no measures of visual awareness when compared with the criterion-free forced-choice responses. This difference between yes-no and forced-choice measures suggests that evidence for a dissociation may actually be normal near-threshold conscious vision. Here we describe three experiments that tested visual performance in normal subjects when their visual awareness was suppressed by applying TMS to the occipital pole. The nature of subjects’ performance whilst undergoing occipital TMS was then verified by use of a psychophysical measure (d') that is independent of response criteria. This showed that there was no genuine dissociation in visual sensitivity measured by yes-no and forced-choice responses. These results highlight that evidence for visual sensitivity in the absence of awareness must be analysed using a bias-free psychophysical measure, such as d', in order to confirm whether or not visual performance is truly unconscious. PMID:24324837
Arnoni-Bauer, Yardena; Bick, Atira; Raz, Noa; Imbar, Tal; Amos, Shoshana; Agmon, Orly; Marko, Limor; Levin, Netta; Weiss, Ram
2017-09-01
Homeostatic energy balance is controlled via the hypothalamus, whereas regions controlling reward and cognitive decision-making are critical for hedonic eating. Eating varies across the menstrual cycle peaking at the midluteal phase. To test responses of females with regular cycles during midfollicular and midluteal phase and of users of monophasic oral contraception pills (OCPs) to visual food cues. Participants performed a functional magnetic resonance imaging while exposed to visual food cues in four time points: fasting and fed conditions in midfollicular and midluteal phases. Twenty females with regular cycles and 12 on monophasic OCP, aged 18 to 35 years. Activity in homeostatic (hypothalamus), reward (amygdala, putamen and insula), frontal (anterior cingulate cortex, dorsolateral prefrontal cortex), and visual regions (calcarine and lateral occipital cortex). Tertiary hospital. In females with regular cycles, brain regions associated with homeostasis but also the reward system, executive frontal areas, and afferent visual areas were activated to a greater degree during the luteal compared with the follicular phase. Within the visual areas, a dual effect of hormonal and prandial state was seen. In females on monophasic OCPs, characterized by a permanently elevated progesterone concentration, activity reminiscent of the luteal phase was found. Androgen, cortisol, testosterone, and insulin levels were significantly correlated with reward and visual region activation. Hormonal mechanisms affect the responses of women's homeostatic, emotional, and attentional brain regions to food cues. The relation of these findings to eating behavior throughout the cycle needs further investigation. Copyright © 2017 Endocrine Society
A Study on Attention Guidance to Driver by Subliminal Visual Information
NASA Astrophysics Data System (ADS)
Takahashi, Hiroshi; Honda, Hirohiko
This paper presents a new warning method for increasing drivers' sensitivity in recognizing hazardous factors in the driving environment. The method is based on a subliminal effect. The results of a series of experiments performed with a three-dimensional head-mounted display show that the response time for detecting a flashing mark tended to decrease when a subliminal mark was shown in advance. Priming effects were observed for subliminal visual information. This paper also proposes a scenario for implementing this method in real vehicles.
Anti-pointing is mediated by a perceptual bias of target location in left and right visual space.
Heath, Matthew; Maraj, Anika; Gradkowski, Ashlee; Binsted, Gordon
2009-01-01
We sought to determine whether mirror-symmetrical limb movements (so-called anti-pointing) elicit a pattern of endpoint bias commensurate with perceptual judgments. In particular, we examined whether asymmetries related to the perceptual over- and under-estimation of target extent in respective left and right visual space impact the trajectories of anti-pointing. In Experiment 1, participants completed direct (i.e. pro-pointing) and mirror-symmetrical (i.e. anti-pointing) responses to targets in left and right visual space with their right hand. In line with the anti-saccade literature, anti-pointing yielded longer reaction times than pro-pointing: a result suggesting increased top-down processing for the sensorimotor transformations underlying a mirror-symmetrical response. Most interestingly, pro-pointing yielded comparable endpoint accuracy in left and right visual space; however, anti-pointing produced an under- and overshooting bias in respective left and right visual space. In Experiment 2, we replicated the findings from Experiment 1 and further demonstrated that the endpoint bias of anti-pointing is independent of the reaching limb (i.e. left vs. right hand) and between-task differences in saccadic drive. We thus propose that the visual field-specific endpoint bias observed here is related to the cognitive (i.e. top-down) nature of anti-pointing and the corollary use of visuo-perceptual networks to support the sensorimotor transformations underlying such actions.
Richter, H O; Zetterberg, C; Forsman, M
2015-07-01
To investigate whether trapezius muscle activity increases over time during visually demanding near work. The vision task consisted of sustained focusing on a contrast-varying black and white Gabor grating. Sixty-six participants with a median age of 38 (range 19-47) fixated the grating from a distance of 65 cm (1.5 D) during four counterbalanced 7-min periods: binocularly through -3.5 D lenses, and monocularly through -3.5 D, 0 D and +3.5 D. Accommodation, heart rate variability and trapezius muscle activity were recorded in parallel. Generalized estimating equation analyses showed that trapezius muscle activity increased significantly over time in all four lens conditions. A concurrent effect of accommodation response on trapezius muscle activity was observed with the minus lenses, irrespective of whether incongruence between accommodation and convergence was present or not. Trapezius muscle activity increased significantly over time during the near work task. The increase in muscle activity over time may be caused by an increased need for mental effort and visual attention to maintain performance and counteract mental fatigue during the visual tasks.
Bressler, David W; Fortenbaugh, Francesca C; Robertson, Lynn C; Silver, Michael A
2013-06-07
Endogenous visual spatial attention improves perception and enhances neural responses to visual stimuli at attended locations. Although many aspects of visual processing differ significantly between central and peripheral vision, little is known regarding the neural substrates of the eccentricity dependence of spatial attention effects. We measured amplitudes of positive and negative fMRI responses to visual stimuli as a function of eccentricity in a large number of topographically-organized cortical areas. Responses to each stimulus were obtained when the stimulus was attended and when spatial attention was directed to a stimulus in the opposite visual hemifield. Attending to the stimulus increased both positive and negative response amplitudes in all cortical areas we studied: V1, V2, V3, hV4, VO1, LO1, LO2, V3A/B, IPS0, TO1, and TO2. However, the eccentricity dependence of these effects differed considerably across cortical areas. In early visual, ventral, and lateral occipital cortex, attentional enhancement of positive responses was greater for central compared to peripheral eccentricities. The opposite pattern was observed in dorsal stream areas IPS0 and putative MT homolog TO1, where attentional enhancement of positive responses was greater in the periphery. Both the magnitude and the eccentricity dependence of attentional modulation of negative fMRI responses closely mirrored that of positive responses across cortical areas. Copyright © 2013 Elsevier Ltd. All rights reserved.
The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.
Coco, Moreno I; Malcolm, George L; Keller, Frank
2014-01-01
An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence require longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.
Laterality and performance of agility-trained dogs.
Siniscalchi, Marcello; Bertino, Daniele; Quaranta, Angelo
2014-01-01
Correlations between lateralised behaviour and performance were investigated in 19 agility-trained dogs (Canis familiaris) by scoring paw preference to hold a food object and relating it to performance during typical agility obstacles (jump/A-frame and weave poles). In addition, because recent behavioural studies reported that visual stimuli of emotional valence presented to one visual hemifield at a time affect visually guided motor responses in dogs, the possibility that the position of the owner respectively in the left and in the right canine visual hemifield might be associated with quality of performance during agility was considered. Dogs' temperament was also measured by an owner-rated questionnaire. The most relevant finding was that agility-trained dogs displayed longer latencies to complete the obstacles with the owner located in their left visual hemifield compared to the right. Interestingly, the results showed that this phenomenon was significantly linked to both dogs' trainability and the strength of paw preference.
Automatic classification of visual evoked potentials based on wavelet decomposition
NASA Astrophysics Data System (ADS)
Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz
2017-04-01
Diagnosis of the part of the visual system that is responsible for conducting the compound action potential is generally based on visual evoked potentials generated by stimulation of the eye with an external light source. The condition of the patient's visual pathway is assessed by a set of parameters that describe the extremes, called waves, of the time-domain characteristic. The decision process is complex, so the diagnosis depends significantly on the experience of the doctor. The authors developed a procedure, based on wavelet decomposition and linear discriminant analysis, that ensures automatic classification of visual evoked potentials. The algorithm makes it possible to assign an individual case to the normal or pathological class. The proposed classifier has a 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.
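A minimal numerical sketch of this kind of pipeline, not the authors' implementation: Haar-wavelet sub-band energies as features, classified with Fisher's linear discriminant. The synthetic signals, the wavelet choice, and the decomposition depth are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_band_energies(signal, levels=4):
    """Multi-level Haar wavelet decomposition, summarized as the energy
    of each detail sub-band plus the final approximation band."""
    x = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        x = (x[0::2] + x[1::2]) / np.sqrt(2)
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(x ** 2))
    return np.array(energies)

# Synthetic VEP-like trials: the "pathological" class is modeled, purely
# for illustration, as a delayed and attenuated wave plus noise.
t = np.linspace(0.0, 0.5, 256)
def trial(delay, amp):
    wave = amp * np.exp(-((t - 0.1 - delay) ** 2) / 0.001)
    return wave + 0.1 * rng.standard_normal(t.size)

X0 = np.array([haar_band_energies(trial(0.00, 1.0)) for _ in range(40)])
X1 = np.array([haar_band_energies(trial(0.04, 0.5)) for _ in range(40)])

# Fisher's linear discriminant: project onto w, threshold at the midpoint
# between the projected class means.
m0, m1 = X0.mean(0), X1.mean(0)
Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
thr = w @ (m0 + m1) / 2

X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)
acc = ((X @ w > thr).astype(int) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In the actual study a Daubechies-family wavelet and a cross-validated classifier would be more realistic; the sketch only shows how sub-band energies feed a linear discriminant.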
Adaptation to Laterally Displacing Prisms in Anisometropic Amblyopia.
Sklar, Jaime C; Goltz, Herbert C; Gane, Luke; Wong, Agnes M F
2015-06-01
Using visual feedback to modify sensorimotor output in response to changes in the external environment is essential for daily function. Prism adaptation is a well-established experimental paradigm to quantify sensorimotor adaptation; that is, how the sensorimotor system adapts to an optically-altered visuospatial environment. Amblyopia is a neurodevelopmental disorder characterized by spatiotemporal deficits in vision that impacts manual and oculomotor function. This study explored the effects of anisometropic amblyopia on prism adaptation. Eight participants with anisometropic amblyopia and 11 visually-normal adults, all right-handed, were tested. Participants pointed to visual targets and were presented with feedback of hand position near the terminus of limb movement in three blocks: baseline, adaptation, and deadaptation. Adaptation was induced by viewing with binocular 11.4° (20 prism diopter [PD]) left-shifting prisms. All tasks were performed during binocular viewing. Participants with anisometropic amblyopia required significantly more trials (i.e., increased time constant) to adapt to prismatic optical displacement than visually-normal controls. During the rapid error correction phase of adaptation, people with anisometropic amblyopia also exhibited greater variance in motor output than visually-normal controls. Amblyopia impacts the ability to adapt the sensorimotor system to an optically-displaced visual environment. The increased time constant and greater variance in motor output during the rapid error correction phase of adaptation may indicate deficits in processing of visual information as a result of degraded spatiotemporal vision in amblyopia.
Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans
2017-03-20
From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.
De Sá Teixeira, Nuno
2016-01-01
Visual memory for the spatial location where a moving target vanishes has been found to be systematically displaced downward in the direction of gravity. Moreover, it was recently reported that the magnitude of the downward error increases steadily with increasing retention intervals imposed after object’s offset and before observers are allowed to perform the spatial localization task, in a pattern where the remembered vanishing location drifts downward as if following a falling trajectory. This outcome was taken to reflect the dynamics of a representational model of earth’s gravity. The present study aims to establish the spatial and temporal features of this downward drift by taking into account the dynamics of the motor response. The obtained results show that the memory for the last location of the target drifts downward with time, thus replicating previous results. Moreover, the time taken for completion of the behavioural localization movements seems to add to the imposed retention intervals in determining the temporal frame during which the visual memory is updated. Overall, it is reported that the representation of spatial location drifts downward by about 3 pixels for each two-fold increase of time until response. The outcomes are discussed in relation to a predictive internal model of gravity which outputs an on-line spatial update of remembered objects’ location. PMID:26910260
Visual demand of curves and fog-limited sight distance and its relationship to brake response time.
DOT National Transportation Integrated Search
2006-05-01
Driver distraction is a major contributing factor to automobile crashes. National Highway Traffic Safety Administration (NHTSA) has estimated that approximately 25% of crashes are attributed to driver distraction and inattention (Wang, Knipling, & Go...
Tanaka, Tomohiro; Nishida, Satoshi
2015-01-01
The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This predicts that the duration of the prediscrimination stage varies according to the search difficulty across different stimulus conditions, whereas the duration of the latter, postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) when monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed regarding luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different, but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in the degree of search speed between different stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344
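The accumulation-to-threshold idea in the final sentence can be sketched numerically. In the toy model below (an assumption-laden illustration, not the authors' simulation), evidence accumulates at a rate proportional to visual response strength, so a dimmer stimulus, with weaker drift, takes longer to reach the decision threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_time_to_threshold(drift, threshold=1.0, noise=0.05,
                           dt=0.001, n_trials=500):
    """Mean first-passage time of a noisy evidence accumulator,
    simulated with the Euler-Maruyama method."""
    times = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times.append(t)
    return float(np.mean(times))

bright = mean_time_to_threshold(drift=8.0)  # stronger visual response
dim = mean_time_to_threshold(drift=5.0)     # weaker visual response
print(f"bright: {bright * 1e3:.0f} ms, dim: {dim * 1e3:.0f} ms")
```

With these invented drift values the dim condition reaches threshold later, mimicking a luminance-dependent lengthening of the postdiscrimination interval.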
Yener, Görsev G; Emek-Savaş, Derya Durusu; Güntekin, Bahar; Başar, Erol
2014-10-17
Mild cognitive impairment (MCI) is considered by many to be a prodromal stage of Alzheimer's disease (AD). Event-related oscillations (ERO) reflect cognitive responses of the brain, whereas sensory-evoked oscillations (SEO) inform about sensory responses. For this study, we compared visual SEO and ERO responses in MCI to explore brain dynamics. Forty-three patients with MCI (mean age = 74.0 years) and 41 age- and education-matched healthy elderly controls (HC) (mean age = 71.1 years) participated in the study. The maximum peak-to-peak amplitudes of each subject's averaged delta response (0.5-3.0 Hz) were measured in two conditions (simple visual stimulation and target stimulation in a classical visual oddball paradigm). Overall, amplitudes of target ERO responses were higher than SEO amplitudes. The preferential location for maximum amplitude values was the frontal lobe for ERO and the occipital lobe for SEO. The ANOVA for delta responses showed a significant group × paradigm interaction. Post hoc tests indicated that (1) the difference between groups was significant for target delta responses, but not for SEO, (2) ERO elicited higher responses in HC than in MCI patients, and (3) females had higher target ERO than males, and this difference was pronounced in the control group. Overall, cognitive responses display almost double the amplitudes of sensory responses over frontal regions. The topography of oscillatory responses differs depending on the stimuli: visual sensory responses are highest over occipital regions and cognitive responses over frontal regions. A group effect is observed in MCI, indicating that visual sensory and cognitive circuits behave differently, with preserved visual sensory responses but decreased cognitive responses. Copyright © 2014 Elsevier B.V. All rights reserved.
Attention Modulates TMS-Locked Alpha Oscillations in the Visual Cortex
Herring, Jim D.; Thut, Gregor; Jensen, Ole
2015-01-01
Cortical oscillations, such as 8–12 Hz alpha-band activity, are thought to subserve gating of information processing in the human brain. While most of the supporting evidence is correlational, causal evidence comes from attempts to externally drive (“entrain”) these oscillations by transcranial magnetic stimulation (TMS). Indeed, the frequency profile of TMS-evoked potentials (TEPs) closely resembles that of oscillations spontaneously emerging in the same brain region. However, it is unclear whether TMS-locked and spontaneous oscillations are produced by the same neuronal mechanisms. If so, they should react in a similar manner to top-down modulation by endogenous attention. To test this prediction, we assessed the alpha-like EEG response to TMS of the visual cortex during periods of high and low visual attention while participants attended to either the visual or auditory modality in a cross-modal attention task. We observed a TMS-locked local oscillatory alpha response lasting several cycles after TMS (but not after sham stimulation). Importantly, TMS-locked alpha power was suppressed during deployment of visual relative to auditory attention, mirroring spontaneous alpha amplitudes. In addition, the early N40 TEP component, located at the stimulation site, was amplified by visual attention. The extent of attentional modulation for both TMS-locked alpha power and N40 amplitude did depend, with opposite sign, on the individual ability to modulate spontaneous alpha power at the stimulation site. We therefore argue that TMS-locked and spontaneous oscillations are of common neurophysiological origin, whereas the N40 TEP component may serve as an index of current cortical excitability at the time of stimulation. SIGNIFICANCE STATEMENT Rhythmic transcranial magnetic stimulation (TMS) is a promising tool to experimentally “entrain” cortical activity. 
If TMS-locked oscillatory responses actually recruit the same neuronal mechanisms as spontaneous cortical oscillations, they qualify as a valid tool to study the causal role of neuronal oscillations in cognition but also to enable new treatments targeting aberrant oscillatory activity in, for example, neurological conditions. Here, we provide first-time evidence that TMS-locked and spontaneous oscillations are indeed tightly related and are likely to rely on the same neuronal generators. In addition, we demonstrate that an early local component of the TMS-evoked potential (the N40) may serve as a new objective and noninvasive probe of visual cortex excitability, which so far was only accessible via subjective phosphene reports. PMID:26511236
Stone, David B.; Coffman, Brian A.; Bustillo, Juan R.; Aine, Cheryl J.; Stephen, Julia M.
2014-01-01
Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and healthy controls (N = 57) using magnetoencephalography (MEG). Time-frequency decomposition was performed to determine regions of significant changes in gamma band power by group in response to unisensory and multisensory stimuli relative to baseline levels. Results showed significant behavioral differences between groups in response to unisensory and multisensory stimuli. In addition, time-frequency analysis revealed significant decreases and increases in gamma-band power in schizophrenia patients relative to healthy controls, which emerged both early and late over both sensory and frontal regions in response to unisensory and multisensory stimuli. Unisensory gamma-band power predicted multisensory gamma-band power differently by group. Furthermore, gamma-band power in these regions predicted performance in select measures of the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) test battery differently by group. These results reveal a unique pattern of task-related gamma-band power in schizophrenia patients relative to controls that may indicate reduced inhibition in combination with impaired oscillatory mechanisms in patients with schizophrenia. PMID:25414652
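Time-frequency decomposition of the kind described above is commonly done by convolving each trial with complex Morlet wavelets and expressing band power relative to a pre-stimulus baseline. The sketch below uses synthetic data; the 40 Hz center frequency, 7-cycle wavelet, and burst timing are arbitrary choices, not the study's parameters.

```python
import numpy as np

fs = 1000.0                          # sampling rate (Hz)
t = np.arange(-0.5, 1.0, 1 / fs)     # trial time axis (s), stimulus at 0
rng = np.random.default_rng(2)

# Synthetic trial: noise plus a gamma-band (40 Hz) burst after onset.
signal = rng.standard_normal(t.size)
burst = (t > 0.1) & (t < 0.4)
signal[burst] += 2.0 * np.sin(2 * np.pi * 40 * t[burst])

def morlet_power(x, freq, fs, n_cycles=7):
    """Instantaneous power at `freq` via complex Morlet convolution."""
    sigma = n_cycles / (2 * np.pi * freq)
    wt = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

power = morlet_power(signal, 40.0, fs)
baseline = power[t < 0].mean()
db = 10 * np.log10(power / baseline)  # change from pre-stimulus baseline
print(f"mean gamma change in burst window: {db[burst].mean():.1f} dB")
```

A full analysis would average such power estimates over trials and test them against baseline per region, as in the MEG study above.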
Searching for biomarkers of CDKL5 disorder: early-onset visual impairment in CDKL5 mutant mice
Mazziotti, Raffaele; Lupori, Leonardo; Sagona, Giulia; Gennaro, Mariangela; Della Sala, Grazia; Putignano, Elena
2017-01-01
Abstract CDKL5 disorder is a neurodevelopmental disorder still without a cure. Murine models of CDKL5 disorder have been recently generated raising the possibility of preclinical testing of treatments. However, unbiased, quantitative biomarkers of high translational value to monitor brain function are still missing. Moreover, the analysis of treatment is hindered by the challenge of repeatedly and non-invasively testing neuronal function. We analyzed the development of visual responses in a mouse model of CDKL5 disorder to introduce visually evoked responses as a quantitative method to assess cortical circuit function. Cortical visual responses were assessed in CDKL5 null male mice, heterozygous females, and their respective control wild-type littermates by repeated transcranial optical imaging from P27 until P32. No difference between wild-type and mutant mice was present at P25-P26 whereas defective responses appeared from P27-P28 both in heterozygous and homozygous CDKL5 mutant mice. These results were confirmed by visually evoked potentials (VEPs) recorded from the visual cortex of a different cohort. The previously imaged mice were also analyzed at P60–80 using VEPs, revealing a persistent reduction of response amplitude, reduced visual acuity and defective contrast function. The level of adult impairment was significantly correlated with the reduction in visual responses observed during development. Support vector machine showed that multi-dimensional visual assessment can be used to automatically classify mutant and wt mice with high reliability. Thus, monitoring visual responses represents a promising biomarker for preclinical and clinical studies on CDKL5 disorder. PMID:28369421
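The support-vector-machine step can be sketched as follows. The feature values below are fabricated placeholders for a "multi-dimensional visual assessment" (e.g., VEP amplitude, acuity, contrast response), and the classifier is a tiny linear SVM trained by sub-gradient descent on the hinge loss, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-animal features: [VEP amplitude, visual acuity,
# contrast-response slope]; mutants get lower means (values invented).
wt = rng.normal([1.0, 0.50, 1.0], 0.1, size=(20, 3))
mut = rng.normal([0.6, 0.35, 0.7], 0.1, size=(20, 3))
X = np.vstack([wt, mut])
y = np.array([-1] * 20 + [1] * 20)   # -1 = wild type, +1 = mutant
X = X - X.mean(0)                    # center so the boundary can pass near 0

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    """Linear SVM via sub-gradient descent on the regularized hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    n = len(X)
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                        # margin violations
        w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(0) / n)
        b -= lr * (-y[viol].sum() / n)
    return w, b

w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In practice the classification in the study would be evaluated with held-out animals (cross-validation) rather than training accuracy; the sketch only shows the mechanics of a linear max-margin classifier.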
Modulation of visually evoked movement responses in moving virtual environments.
Reed-Jones, Rebecca J; Vallis, Lori Ann
2009-01-01
Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.
Atabaki, A; Marciniak, K; Dicke, P W; Karnath, H-O; Thier, P
2014-03-01
Distinguishing a target from distractors during visual search is crucial for goal-directed behaviour. The more distractors that are presented with the target, the larger is the subject's error rate. This observation defines the set-size effect in visual search. Neurons in areas related to attention and eye movements, like the lateral intraparietal area (LIP) and frontal eye field (FEF), diminish their firing rates when the number of distractors increases, in line with the behavioural set-size effect. Furthermore, human imaging studies that have tried to delineate cortical areas modulating their blood oxygenation level-dependent (BOLD) response with set size have yielded contradictory results. In order to test whether BOLD imaging of the rhesus monkey cortex yields results consistent with the electrophysiological findings and, moreover, to clarify if additional other cortical regions beyond the two hitherto implicated are involved in this process, we studied monkeys while performing a covert visual search task. When varying the number of distractors in the search task, we observed a monotonic increase in error rates when search time was kept constant as was expected if monkeys resorted to a serial search strategy. Visual search consistently evoked robust BOLD activity in the monkey FEF and a region in the intraparietal sulcus in its lateral and middle part, probably involving area LIP. Whereas the BOLD response in the FEF did not depend on set size, the LIP signal increased in parallel with set size. These results demonstrate the virtue of BOLD imaging in monkeys when trying to delineate cortical areas underlying a cognitive process like visual search. However, they also demonstrate the caution needed when inferring neural activity from BOLD activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
What are the Shapes of Response Time Distributions in Visual Search?
Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.
2011-01-01
Many visual search experiments measure reaction time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays in each of three classic search tasks: feature search, with the target defined by color; conjunction search, with the target defined by both color and orientation; and spatial configuration search for a 2 among distractor 5s. This large data set allows us to characterize the RT distributions in detail. We present the raw RT distributions and fit several psychologically motivated functions (ex-Gaussian, ex-Wald, Gamma, and Weibull) to the data. We analyze and interpret parameter trends from these four functions within the context of theories of visual search. PMID:21090905
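A common way to characterize RT distributions like these is to fit an ex-Gaussian (a Gaussian stage plus an exponential tail). The sketch below generates synthetic "RTs" with known parameters and recovers them, first with the standard moment estimators and then by maximum likelihood via SciPy's `exponnorm`, which parameterizes the ex-Gaussian with K = tau/sigma; the parameter values are arbitrary, not the paper's fits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic RTs from the ex-Gaussian generative model: a Gaussian
# stage (mu, sigma) plus an exponential stage (tau), in milliseconds.
mu, sigma, tau = 400.0, 40.0, 150.0
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# Method-of-moments starting values (standard ex-Gaussian estimators
# from the sample mean, standard deviation, and skewness).
m, s, g = rts.mean(), rts.std(), stats.skew(rts)
tau0 = s * (g / 2) ** (1 / 3)
mu0 = m - tau0
sig0 = np.sqrt(max(s**2 - tau0**2, 1.0))

# Maximum-likelihood refinement; scipy's exponnorm uses K = tau / sigma.
K, loc, scale = stats.exponnorm.fit(rts, tau0 / sig0, loc=mu0, scale=sig0)
print(f"mu ~ {loc:.0f}, sigma ~ {scale:.0f}, tau ~ {K * scale:.0f}")
```

The same fitting approach extends to the Gamma, Weibull, and ex-Wald candidates compared in the paper, with model choice then made via likelihood or goodness-of-fit criteria.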
Concurrent access to a virtual microscope using a web service oriented architecture
NASA Astrophysics Data System (ADS)
Corredor, Germán.; Iregui, Marcela; Arias, Viviana; Romero, Eduardo
2013-11-01
Virtual microscopy (VM) facilitates visualization and deployment of histopathological virtual slides (VS), a useful tool for education, research and diagnosis. In recent years, it has become popular, yet its use is still limited, largely because of the very large sizes of VS, typically of the order of gigabytes. Such volumes of data require efficacious and efficient strategies to access the VS content. In an educational or research scenario, several users may require access to and interaction with VS at the same time, so, owing to the large data size, a very expensive and powerful infrastructure is usually required. This article introduces a novel JPEG2000-based service-oriented architecture for streaming and visualizing very large images using scalable strategies that, in addition, do not require very specialized infrastructure. Results suggest that the proposed architecture enables transmission and simultaneous visualization of large images while using resources efficiently and offering users adequate response times.
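The core of any such streaming scheme is that a client requests only the image regions it is currently viewing. A minimal sketch of the server-side arithmetic (the tile size and function are assumptions for illustration, unrelated to the authors' actual JPEG2000 service):

```python
def tiles_for_viewport(x, y, width, height, tile=512):
    """Indices (tx, ty) of the fixed-size tiles of a huge virtual slide
    that intersect a client viewport given in pixel coordinates."""
    x0, y0 = x // tile, y // tile
    x1, y1 = (x + width - 1) // tile, (y + height - 1) // tile
    return [(tx, ty) for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]

# A 1024x512 viewport at the slide origin touches only two tiles,
# regardless of how many gigabytes the full slide occupies.
print(tiles_for_viewport(0, 0, 1024, 512))  # [(0, 0), (1, 0)]
```

JPEG2000 additionally provides resolution scalability, so the same idea applies per zoom level: the server decodes only the requested tiles at the requested resolution.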
Selective Effect of Physical Fatigue on Motor Imagery Accuracy
Di Rienzo, Franck; Collet, Christian; Hoyek, Nady; Guillot, Aymeric
2012-01-01
While the use of motor imagery (the mental representation of an action without overt execution) during actual training sessions is usually recommended, experimental studies examining the effect of physical fatigue on subsequent motor imagery performance are sparse and have yielded divergent findings. Here, we investigated whether physical fatigue occurring during an intense sport training session affected motor imagery ability. Twelve swimmers (nine males, mean age 15.5 years) conducted a 45 min physically-fatiguing protocol where they swam from 70% to 100% of their maximal aerobic speed. We tested motor imagery ability immediately before and after the fatiguing protocol. Participants randomly imagined performing a swim turn using internal and external visual imagery. Self-report ratings, imagery times and electrodermal responses, an index of alertness from the autonomic nervous system, were the dependent variables. Self-report ratings indicated that participants did not encounter difficulty when performing motor imagery after fatigue. However, motor imagery times were significantly shortened during posttest compared to both pretest and actual turn times, thus indicating reduced timing accuracy. Looking at the selective effect of physical fatigue on external visual imagery did not reveal any difference before and after fatigue, whereas significantly shorter imagined times and electrodermal responses (respectively 15% and 48% decrease, p<0.001) were observed during the posttest for internal visual imagery. A significant correlation (r = 0.64; p<0.05) was observed between motor imagery vividness (estimated through an imagery questionnaire) and autonomic responses during motor imagery after fatigue. These data suggest that, unlike local muscle fatigue, physical fatigue occurring during intense sport training sessions is likely to affect motor imagery accuracy. 
These results might be explained by the updating of the internal representation of the motor sequence, due to temporary feedback originating from actual motor practice under fatigue. These findings provide insights into the co-dependent relationship between mental and motor processes. PMID:23082148
McAuley, J D; Stewart, A L; Webber, E S; Cromwell, H C; Servatius, R J; Pang, K C H
2009-12-01
Inbred Wistar-Kyoto (WKY) rats have been proposed as a model of anxiety vulnerability as they display behavioral inhibition and a constellation of learning and reactivity abnormalities relative to outbred Sprague-Dawley (SD) rats. Together, the behaviors of the WKY rat suggest a hypervigilant state that may contribute to its anxiety vulnerability. To test this hypothesis, open-field behavior, acoustic startle, prepulse inhibition, and timing behavior were assessed in WKY and SD rats. Timing behavior was evaluated using a modified version of the peak-interval timing procedure. Training and testing of timing first occurred without audio-visual (AV) interference. Following this initial test, AV interference was included on some trials. Overall, WKY rats took much longer to leave the center of the arena, made fewer line crossings, and reared less than did SD rats. WKY rats showed much greater startle responses to acoustic stimuli and significantly greater prepulse inhibition than did the SD rats. During timing conditions without AV interference, timing accuracy for both strains was similar; peak times for WKY and SD rats were not different. During interference conditions, however, the timing behavior of the two strains was very different. Whereas peak times for SD rats were similar between non-interference and interference conditions, peak times for WKY rats were shorter and response rates higher in interference conditions than in non-interference conditions. The enhanced acoustic startle response, greater prepulse inhibition, and altered timing behavior with audio-visual interference support a characterization of the WKY strain as hypervigilant and provide further evidence for the use of the WKY strain as a model of anxiety vulnerability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maclean, Jillian, E-mail: jillian.maclean@uclh.nhs.uk; Fersht, Naomi; Bremner, Fion
2013-03-15
Purpose: To evaluate ophthalmologic outcomes and toxicity of intensity modulated radiation therapy (IMRT) in patients with meningiomas causing visual deficits. Methods and Materials: A prospective observational study with formal ophthalmologic and clinical assessment of 30 consecutive cases of meningioma affecting vision treated with IMRT from 2007 to 2011. Prescriptions were 50.4 Gy to mean target dose in 28 daily fractions. The median follow-up time was 28 months. Twenty-six meningiomas affected the anterior visual pathway (including 3 optic nerve sheath meningiomas); 4 were posterior to the chiasm. Results: Vision improved objectively in 12 patients (40%). Improvements were in visual field (5/16 patients), color vision (4/9 patients), acuity (1/15 patients), extraocular movements (3/11 patients), ptosis (1/5 patients), and proptosis (2/6 patients). No predictors of clinical response were found. Two patients had minor reductions in tumor dimensions on magnetic resonance imaging, 1 patient had radiological progression, and the other patients were stable. One patient experienced grade 2 keratitis, 1 patient had a minor visual field loss, and 5 patients had grade 1 dry eye. Conclusion: IMRT is an effective method for treating meningiomas causing ophthalmologic deficits, and toxicity is minimal. Thorough ophthalmologic assessment is important because clinical responses often occur in the absence of radiological change.
Instrument Display Visual Angles for Conventional Aircraft and the MQ-9 Ground Control Station
NASA Technical Reports Server (NTRS)
Kamine, Tovy Haber; Bendrick, Gregg A.
2008-01-01
Aircraft instrument panels should be designed such that primary displays are in the optimal viewing location to minimize pilot perception and response time. Human factors engineers define three zones (i.e., "cones") of visual location: 1) "Easy Eye Movement" (foveal vision); 2) "Maximum Eye Movement" (peripheral vision with saccades); and 3) "Head Movement" (head movement required). Instrument display visual angles were measured to determine how well conventional aircraft (T-34, T-38, F-15B, F-16XL, F/A-18A, U-2D, ER-2, King Air, G-III, B-52H, DC-10, B747-SCA) and the MQ-9 ground control station (GCS) complied with these standards, and how they compared with each other. Selected instrument parameters included: attitude, pitch, bank, power, airspeed, altitude, vertical speed, heading, turn rate, slip/skid, AOA, flight path, latitude, longitude, course, bearing, range, and time. Vertical and horizontal visual angles for each component were measured from the pilot's eye position in each system. The vertical visual angles of displays in conventional aircraft lay within the cone of "Easy Eye Movement" for all but three of the parameters measured, and almost all of the horizontal visual angles fell within this range. All conventional vertical and horizontal visual angles lay within the cone of "Maximum Eye Movement". However, most instrument vertical visual angles of the MQ-9 GCS lay outside the cone of "Easy Eye Movement", though all were within the cone of "Maximum Eye Movement". All the horizontal visual angles for the MQ-9 GCS were within the cone of "Easy Eye Movement". Most instrument displays in conventional aircraft lay within the cone of "Easy Eye Movement", though mission-critical instruments sometimes displaced less important instruments outside this area. Many of the MQ-9 GCS displays lay outside this area. Specific training for MQ-9 pilots may be needed to avoid increased response time and potential error during flight. 
The learning objectives include: 1) know the three physiologic cones of eye/head movement; 2) understand how instrument displays comply with these design principles in conventional aircraft and an uninhabited aerial vehicle system. Which of the following is NOT a recognized physiologic principle of instrument display design? 1) Cone of Easy Eye Movement; 2) Cone of Binocular Eye Movement; 3) Cone of Maximum Eye Movement; 4) Cone of Head Movement; 5) None of the above. Answer: 2) Cone of Binocular Eye Movement.
Sadeh, Morteza; Sajad, Amirsaman; Wang, Hongying; Yan, Xiaogang; Crawford, John Douglas
2015-12-01
We previously reported that visuomotor activity in the superior colliculus (SC)--a key midbrain structure for the generation of rapid eye movements--preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Sztarker, Julieta; Tomsic, Daniel
2008-06-01
When confronted with predators, animals are forced to take crucial decisions such as the timing and manner of escape. In the case of the crab Chasmagnathus, cumulative evidence suggests that the escape response to a visual danger stimulus (VDS) can be accounted for by the response of a group of lobula giant (LG) neurons. To further investigate this hypothesis, we examined the relationship between behavioral and neuronal activities within a variety of experimental conditions that affected the level of escape. The intensity of the escape response to VDS was influenced by seasonal variations, changes in stimulus features, and whether the crab perceived stimuli monocularly or binocularly. These experimental conditions consistently affected the response of LG neurons in a way that closely matched the effects observed at the behavioral level. In other words, the intensity of the stimulus-elicited spike activity of LG neurons faithfully reflected the intensity of the escape response. These results support the idea that the LG neurons from the lobula of crabs are deeply involved in the decision for escaping from VDS.
Influence of Coactors on Saccadic and Manual Responses
Niehorster, Diederick C.; Jarodzka, Halszka; Holmqvist, Kenneth
2017-01-01
Two experiments were conducted to investigate the effects of coaction on saccadic and manual responses. Participants performed the experiments either in a solitary condition or in a group of coactors who performed the same tasks at the same time. In Experiment 1, participants completed a pro- and antisaccade task where they were required to make saccades towards (prosaccades) or away (antisaccades) from a peripheral visual stimulus. In Experiment 2, participants performed a visual discrimination task that required both making a saccade towards a peripheral stimulus and making a manual response in reaction to the stimulus’s orientation. The results showed that performance of stimulus-driven responses was independent of the social context, while volitionally controlled responses were delayed by the presence of coactors. These findings are in line with studies assessing the effect of attentional load on saccadic control during dual-task paradigms. In particular, antisaccades – but not prosaccades – were influenced by the type of social context. Additionally, the number of coactors present in the group had a moderating effect on both saccadic and manual responses. The results support an attentional view of social influences. PMID:28321288
Effect of ethanol on the visual-evoked potential in rat: dynamics of ON and OFF responses.
Dulinskas, Redas; Buisas, Rokas; Vengeliene, Valentina; Ruksenas, Osvaldas
2017-01-01
The effect of acute ethanol administration on the flash visual-evoked potential (VEP) was investigated in numerous studies. However, it is still unclear which brain structures are responsible for the differences observed in stimulus onset (ON) and offset (OFF) responses and how these responses are modulated by ethanol. The aim of our study was to investigate the pattern of ON and OFF responses in the visual system, measured as amplitude and latency of each VEP component following acute administration of ethanol. VEPs were recorded at the onset and offset of a 500 ms visual stimulus in anesthetized male Wistar rats. The effect of alcohol on VEP latency and amplitude was measured for one hour after injection of 2 g/kg ethanol dose. Three VEP components - N63, P89 and N143 - were analyzed. Our results showed that, except for component N143, ethanol increased the latency of both ON and OFF responses in a similar manner. The latency of N143 during OFF response was not affected by ethanol but its amplitude was reduced. Our study demonstrated that the activation of the visual system during the ON response to a 500 ms visual stimulus is qualitatively different from that during the OFF response. Ethanol interfered with processing of the stimulus duration at the level of the visual cortex and reduced the activation of cortical regions.
Auditory and visual interactions between the superior and inferior colliculi in the ferret.
Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K
2015-05-01
The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Savel, Thomas G; Bronstein, Alvin; Duck, William; Rhodes, M Barry; Lee, Brian; Stinn, John; Worthen, Katherine
2010-01-01
Real-time surveillance systems are valuable for timely response to public health emergencies. It has been challenging to leverage existing surveillance systems in state and local communities and, using a centralized architecture, add new data sources and analytical capacity. Because this centralized model has proven difficult to maintain and enhance, the US Centers for Disease Control and Prevention (CDC) has been examining the ability to use a federated model based on a secure web services architecture, with data stewardship remaining with the data provider. As a case study for this approach, the American Association of Poison Control Centers and the CDC extended an existing data warehouse via a secure web service, and shared aggregate clinical effects and case counts data by geographic region and time period. To visualize these data, CDC developed a web browser-based interface, Quicksilver, which leveraged the Google Maps API and Flot, a JavaScript plotting library. Two iterations of the NPDS web service were completed in 12 weeks. The visualization client, Quicksilver, was developed in four months. This implementation of web services combined with a visualization client represents incremental positive progress in transitioning national data sources like BioSense and NPDS to a federated data exchange model. Quicksilver effectively demonstrates how the use of secure web services in conjunction with a lightweight, rapidly deployed visualization client can easily integrate isolated data sources for biosurveillance.
NASA Astrophysics Data System (ADS)
Glickman, Randolph D.; Harrison, Joseph M.; Zwick, Harry; Longbotham, Harold G.; Ballentine, Charles S.; Pierce, Bennie
1996-04-01
Although visual function following retinal laser injuries has traditionally been assessed by measuring visual acuity, this measure only indicates the highest spatial frequency resolvable under high-contrast viewing conditions. Another visual psychophysical parameter is contrast sensitivity (CS), which measures the minimum contrast required for detection of targets over a range of spatial frequencies, and may evaluate visual mechanisms that do not directly subserve acuity. We used the visual evoked potential (VEP) to measure CS in a population of normal subjects and in patients with ophthalmic conditions affecting retinal function, including one patient with a laser injury in the macula. In this patient, the acuity had recovered from
Visual response time to colored stimuli in peripheral retina - Evidence for binocular summation
NASA Technical Reports Server (NTRS)
Haines, R. F.
1977-01-01
Simple onset response time (RT) experiments, previously shown to exhibit binocular summation effects for white stimuli along the horizontal meridian, were performed for red and green stimuli along 5 oblique meridians. Binocular RT was significantly shorter than monocular RT for a 45-min-diameter spot of red, green, or white light within eccentricities of about 50 deg from the fovea. Relatively large meridian differences were noted that appear to be due to the degree to which the images fall on corresponding retinal areas.
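Binocular summation of simple RTs, as in the abstract above, is commonly evaluated against probability (race) summation: the binocular response is triggered by whichever eye's detection process finishes first, so faster binocular means can arise from statistics alone. The Python sketch below simulates that generic race-model baseline; the monocular latency parameters (mu, sigma) are illustrative assumptions, not values from the study.

```python
import random

def race_rt(n_channels, rng, mu=0.30, sigma=0.04):
    """RT (seconds) of the fastest of n independent detection channels.
    mu and sigma are illustrative monocular latency parameters."""
    return min(rng.gauss(mu, sigma) for _ in range(n_channels))

rng = random.Random(42)
n_trials = 5000
mono = sum(race_rt(1, rng) for _ in range(n_trials)) / n_trials  # one eye
bino = sum(race_rt(2, rng) for _ in range(n_trials)) / n_trials  # two eyes racing
# The race model alone predicts bino < mono; binocular facilitation beyond
# this statistical bound is evidence for genuine neural summation.
```

Observed binocular RTs faster than the race-model bound (Miller's race model inequality) are the standard criterion for neural, rather than merely statistical, summation.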
Combining MRI and VEP imaging to isolate the temporal response of visual cortical areas
NASA Astrophysics Data System (ADS)
Carney, Thom; Ales, Justin; Klein, Stanley A.
2008-02-01
The human brain has well over 30 cortical areas devoted to visual processing. Classical neuro-anatomical as well as fMRI studies have demonstrated that early visual areas have a retinotopic organization whereby adjacent locations in visual space are represented in adjacent areas of cortex within a visual area. At the 2006 Electronic Imaging meeting we presented a method using sprite graphics to obtain high resolution retinotopic visual evoked potential responses using multi-focal m-sequence technology (mfVEP). We have used this method to record mfVEPs from up to 192 non overlapping checkerboard stimulus patches scaled such that each patch activates about 12 mm2 of cortex in area V1 and even less in V2. This dense coverage enables us to incorporate cortical folding constraints, given by anatomical MRI and fMRI results from the same subject, to isolate the V1 and V2 temporal responses. Moreover, the method offers a simple means of validating the accuracy of the extracted V1 and V2 time functions by comparing the results between left and right hemispheres that have unique folding patterns and are processed independently. Previous VEP studies have been contradictory as to which area responds first to visual stimuli. This new method accurately separates the signals from the two areas and demonstrates that both respond with essentially the same latency. A new method is introduced which describes better ways to isolate cortical areas using an empirically determined forward model. The method includes a novel steady state mfVEP and complex SVD techniques. In addition, this evolving technology is put to use examining how stimulus attributes differentially impact the response in different cortical areas, in particular how fast nonlinear contrast processing occurs. This question is examined using both state triggered kernel estimation (STKE) and m-sequence "conditioned kernels". The analysis indicates different contrast gain control processes in areas V1 and V2. 
Finally we show that our m-sequence multi-focal stimuli have advantages for integrating EEG and MEG for improved dipole localization.
Synaptic physiology of the flow of information in the cat's visual cortex in vivo
Hirsch, Judith A; Martinez, Luis M; Alonso, José-Manuel; Desai, Komal; Pillai, Cinthi; Pierre, Carhine
2002-01-01
Each stage of the striate cortical circuit extracts novel information about the visual environment. We asked if this analytic process reflected laminar variations in synaptic physiology by making whole-cell recordings with dye-filled electrodes from the cat's visual cortex and thalamus; the stimuli were flashed spots. Thalamic afferents terminate in layer 4, which contains two types of cell, simple and complex, distinguished by the spatial structure of the receptive field. Previously, we had found that the postsynaptic and spike responses of simple cells reliably followed the time course of flash-evoked thalamic activity. Here we report that complex cells in layer 4 (or cells intermediate between simple and complex) similarly reprised thalamic activity (response/trial, 99 ± 1.9%; response duration 159 ± 57 ms; latency 25 ± 4 ms; average ± standard deviation; n = 7). Thus, all cells in layer 4 share a common synaptic physiology that allows secure integration of thalamic input. By contrast, at the second cortical stage (layer 2+3), where layer 4 directs its output, postsynaptic responses did not track simple patterns of antecedent activity. Typical responses to the static stimulus were intermittent and brief (response/trial, 31 ± 40%; response duration 72 ± 60 ms; latency 39 ± 7 ms; n = 11). Only richer stimuli, such as those including motion, evoked reliable responses. All told, the second level of cortical processing differs markedly from the first. At that later stage, ascending information seems strongly gated by connections between cortical neurons. Inputs must be combined in newly specified patterns to influence intracortical stages of processing. PMID:11927691
Effects of directional uncertainty on visually-guided joystick pointing.
Berryhill, Marian; Kveraga, Kestutis; Hughes, Howard C
2005-02-01
Reaction times generally follow the predictions of Hick's law as stimulus-response uncertainty increases, although notable exceptions include the oculomotor system. Saccadic and smooth pursuit eye movement reaction times are independent of stimulus-response uncertainty. Previous research showed that joystick pointing to targets, a motor analog of saccadic eye movements, is only modestly affected by increased stimulus-response uncertainty; however, a no-uncertainty condition (simple reaction time to 1 possible target) was not included. Here, we re-evaluate manual joystick pointing including a no-uncertainty condition. Analysis indicated simple joystick pointing reaction times were significantly faster than choice reaction times. Choice reaction times (2, 4, or 8 possible target locations) only slightly increased as the number of possible targets increased. These data suggest that, as with joystick tracking (a motor analog of smooth pursuit eye movements), joystick pointing is more closely approximated by a simple/choice step function than the log function predicted by Hick's law.
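For reference, Hick's law predicts RT = a + b * log2(N + 1) for N equally likely alternatives, whereas the simple/choice step function described above assigns one fast simple RT and a roughly constant choice RT for any N > 1. A small Python sketch of the two predictions; the coefficients are illustrative, not fitted to these data.

```python
import math

def hick_rt(n_alternatives, a=0.20, b=0.15):
    """Hick's law: RT = a + b * log2(N + 1), in seconds.
    a (base time) and b (per-bit cost) are illustrative values."""
    return a + b * math.log2(n_alternatives + 1)

def step_rt(n_alternatives, simple=0.25, choice=0.35):
    """Simple/choice step function: one fast RT for N == 1 and a
    roughly constant, slower RT for any N > 1."""
    return simple if n_alternatives == 1 else choice

# Compare predictions over the set sizes used in the study (1, 2, 4, 8).
for n in (1, 2, 4, 8):
    print(n, round(hick_rt(n), 3), round(step_rt(n), 3))
```

The contrast is the diagnostic: Hick's law predicts RT rising steadily with each doubling of the set size, while the step account predicts one jump from simple to choice responding and little change thereafter.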
Effects of maternal inhalation of gasoline evaporative ...
In order to assess potential health effects resulting from exposure to ethanol-gasoline blend vapors, we previously conducted neurophysiological assessment of sensory function following gestational exposure to 100% ethanol vapor (Herr et al., Toxicologist, 2012). For comparison purposes, the current study investigated the same measures after gestational exposure to 100% gasoline evaporative condensates (GVC). Pregnant Long-Evans rats were exposed to 0, 3K, 6K, or 9K ppm GVC vapors for 6.5 h/day over GD9 – GD20. Sensory evaluations of male offspring began around PND106. Peripheral nerve function (compound action potentials, NCV), somatosensory (cortical and cerebellar evoked potentials), auditory (brainstem auditory evoked responses), and visual evoked responses were assessed. Visual function assessment included pattern elicited visual evoked potentials (VEP), VEP contrast sensitivity, and electroretinograms (ERG) recorded from dark-adapted (scotopic) and light-adapted (photopic) flashes, and UV and green flicker. Although some minor statistical differences were indicated for auditory and somatosensory responses, these changes were not consistently dose- or stimulus intensity-related. Scotopic ERGs had a statistically significant dose-related decrease in the b-wave implicit time. All other parameters of ERGs and VEPs were unaffected by treatment. All physiological responses showed changes related to stimulus intensity, and provided an estimate of detectable le
Kärtner, Joscha; Keller, Heidi; Yovsi, Relindis D
2010-01-01
This study analyzed German and Nso mothers' auditory, proximal, and visual contingent responses to their infants' nondistress vocalizations in postnatal Weeks 4, 6, 8, 10, and 12. Visual contingency scores increased whereas proximal contingency scores decreased over time for the independent (German urban middle-class, N = 20) but not the interdependent sociocultural context (rural Nso farmers, N = 24). It seems, therefore, that culture-specific differences in the modal patterns of contingent responsiveness emerge during the 2nd and 3rd months of life. This differential development was interpreted as the result of the interplay between maturational processes associated with the 2-month shift that are selectively integrated and reinforced in culture-specific mother-infant interaction.
The influence of time on task on mind wandering and visual working memory.
Krimsky, Marissa; Forster, Daniel E; Llabre, Maria M; Jha, Amishi P
2017-12-01
Working memory relies on executive resources for successful task performance, with higher demands necessitating greater resource engagement. In addition to mnemonic demands, prior studies suggest that internal sources of distraction, such as mind wandering (i.e., having off-task thoughts) and greater time on task, may tax executive resources. Herein, the consequences of mnemonic demand, mind wandering, and time on task were investigated during a visual working memory task. Participants (N=143) completed a delayed-recognition visual working memory task, with mnemonic load for visual objects manipulated across trials (1 item=low load; 2 items=high load) and subjective mind wandering assessed intermittently throughout the experiment using a self-report Likert-type scale (1=on-task, 6=off-task). Task performance (correct/incorrect response) and self-reported mind wandering data were evaluated by hierarchical linear modeling to track trial-by-trial fluctuations. Performance declined with greater time on task, and the rate of decline was steeper for high vs. low load trials. Self-reported mind wandering increased over time, and varied significantly as a function of both load and time on task. Participants reported greater mind wandering at the beginning of the experiment for low vs. high load trials; however, with greater time on task, more mind wandering was reported during high vs. low load trials. These results suggest that the availability of executive resources in support of working memory maintenance processes fluctuates in a demand-sensitive manner with time on task, and may be commandeered by mind wandering. Copyright © 2017 Elsevier B.V. All rights reserved.
Bode, Stefan; Bennett, Daniel; Sewell, David K; Paton, Bryan; Egan, Gary F; Smith, Philip L; Murawski, Carsten
2018-03-01
According to sequential sampling models, perceptual decision-making is based on accumulation of noisy evidence towards a decision threshold. The speed with which a decision is reached is determined by both the quality of incoming sensory information and random trial-by-trial variability in the encoded stimulus representations. To investigate those decision dynamics at the neural level, participants made perceptual decisions while functional magnetic resonance imaging (fMRI) was conducted. On each trial, participants judged whether an image presented under conditions of high, medium, or low visual noise showed a piano or a chair. Higher stimulus quality (lower visual noise) was associated with increased activation in bilateral medial occipito-temporal cortex and ventral striatum. Lower stimulus quality was related to stronger activation in posterior parietal cortex (PPC) and dorsolateral prefrontal cortex (DLPFC). When stimulus quality was fixed, faster response times were associated with a positive parametric modulation of activation in medial prefrontal and orbitofrontal cortex, while slower response times were again related to more activation in PPC, DLPFC and insula. Our results suggest that distinct neural networks were sensitive to the quality of stimulus information, and to trial-to-trial variability in the encoded stimulus representations, but that reaching a decision was a consequence of their joint activity. Copyright © 2018 Elsevier Ltd. All rights reserved.
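The sequential-sampling account described above can be illustrated with a minimal two-boundary diffusion simulation: evidence accumulates at a drift rate set by stimulus quality, plus Gaussian noise, until one of two decision bounds is crossed. All parameter values below are illustrative assumptions, not estimates from the study.

```python
import random

def diffusion_trial(drift, threshold=1.0, noise=0.1, dt=0.001, rng=None):
    """Accumulate noisy evidence until one of two symmetric bounds is hit.
    drift encodes stimulus quality (higher = less visual noise); threshold,
    noise, and dt are illustrative, not fitted parameters."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return ("piano" if x > 0 else "chair"), t  # choice and decision time

rng = random.Random(0)
trials = [diffusion_trial(drift=2.0, rng=rng) for _ in range(200)]
accuracy = sum(choice == "piano" for choice, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

Lowering the drift parameter (i.e., higher visual noise) lengthens and destabilizes decision times in this model, which is the behavioral signature the fMRI contrasts above were designed to track.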
Wetlands for Wastewater: a Visual Approach to Microbial Dynamics
NASA Astrophysics Data System (ADS)
Joubert, L.; Wolfaardt, G.; Du Plessis, K.
2007-12-01
The complex character of distillery wastewater comprises high concentrations of sugars, lignins, hemicelluloses, dextrans, resins, polyphenols, and organic acids that are recalcitrant to biodegradation. Microorganisms play a key role in the production and degradation of organic matter, environmental pollutants, and cycling of nutrients and metals. Due to their short life cycles, microbes respond rapidly to external nutrient loading, with major consequences for the stability of biological systems. We evaluated the feasibility of wetlands to treat winery and distillery effluents in experimental systems based on constructed wetlands, including down-scaled on-site distillery wetlands, small-scale controlled greenhouse systems, and bench-scale mesocosms. Chemical, visual, and molecular fingerprinting (t-RFLP) techniques were applied to study the dynamics of planktonic and attached (biofilm) communities at various points in wetlands of different size, retention time, and geological substrate, and under the influence of shock nutrient loadings. Variable-Pressure Scanning Electron Microscopy (VP-SEM) was applied to visualize microbial colonization, morphotype diversity and distribution, and 3D biofilm architecture. Cross-taxon and predator-prey interactions were markedly influenced by organic loading, while the presence of algae affected microbial community composition and biofilm structure. COD removal varied with geological substrate, and was positively correlated with retention time in gravel wetlands. Planktonic and biofilm communities varied markedly in different regions of the wetland and over time, as indicated by whole-community t-RFLP and VP-SEM. An integrative visual approach to community dynamics enhanced data retrieval not afforded by molecular techniques alone. 
The high microbial diversity along spatial and temporal gradients, and responsiveness to the physico-chemical environment, suggest that microbial communities maintain metabolic function by modifying species composition in response to fluctuations in their environment. It seems apparent that microbial community plasticity may indeed be the distinguishing characteristic of a successful wetland system.
Escobar, Gina M.; Maffei, Arianna; Miller, Paul
2014-01-01
The computation of direction selectivity requires that a cell respond to joint spatial and temporal characteristics of the stimulus that cannot be separated into independent components. Direction selectivity in ferret visual cortex is not present at the time of eye opening but instead develops in the days and weeks following eye opening in a process that requires visual experience with moving stimuli. Classic Hebbian or spike timing-dependent modification of excitatory feed-forward synaptic inputs is unable to produce direction-selective cells from unselective or weakly directionally biased initial conditions because inputs eventually grow so strong that they can independently drive cortical neurons, violating the joint spatial-temporal activation requirement. Furthermore, without some form of synaptic competition, cells cannot develop direction selectivity in response to training with bidirectional stimulation, as cells in ferret visual cortex do. We show that imposing a maximum lateral geniculate nucleus (LGN)-to-cortex synaptic weight allows neurons to develop direction-selective responses that maintain the requirement for joint spatial and temporal activation. We demonstrate that a novel form of inhibitory plasticity, postsynaptic activity-dependent long-term potentiation of inhibition (POSD-LTPi), which operates in the developing cortex at the time of eye opening, can provide synaptic competition and enables robust development of direction-selective receptive fields with unidirectional or bidirectional stimulation. We propose a general model of the development of spatiotemporal receptive fields that consists of two phases: an experience-independent establishment of initial biases, followed by an experience-dependent amplification or modification of these biases via correlation-based plasticity of excitatory inputs that compete against gradually increasing feed-forward inhibition. PMID:24598528
Up-conversion of MMW radiation to visual band using glow discharge detector and silicon detector
NASA Astrophysics Data System (ADS)
Aharon Akram, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.
2016-10-01
In this work we describe and demonstrate a method for up-conversion of millimeter wave (MMW) radiation to the visual band using a very inexpensive miniature Glow Discharge Detector (GDD) and a silicon photodetector. We present 100 GHz up-conversion images based on measuring the visible light emitted by the GDD rather than its electrical current. This approach achieved a faster response time of 480 ns and better sensitivity than the electronic detection performed in our previous work. Using this method, we performed MMW imaging with a GDD lamp and a photodetector measuring the GDD light emission.
Visual and auditory accessory stimulus offset and the Simon effect.
Nishimura, Akio; Yokosawa, Kazuhiko
2010-10-01
We investigated the effect on the right and left responses of the disappearance of a task-irrelevant stimulus located on the right or left side. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for a visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to an auditory Simon effect.
Choe, Kyoung Whan; Blake, Randolph
2014-01-01
Primary visual cortex (V1) forms the initial cortical representation of objects and events in our visual environment, and it distributes information about that representation to higher cortical areas within the visual hierarchy. Decades of work have established tight linkages between neural activity occurring in V1 and features comprising the retinal image, but how that activity relates to perceptual decisions remains debated. One actively debated question is the extent to which V1 responses determine, on a trial-by-trial basis, perceptual choices made by observers. By examining V1 population activity in human observers engaged in a difficult visual discrimination task, we tested one essential prediction of the deterministic view: choice-related activity, if it exists in V1, and stimulus-related activity should occur in the same ensemble of neurons at the same time. Our findings do not support this prediction: while cortical activity signifying the variability in choice behavior was indeed found in V1, that activity was dissociated from activity representing stimulus differences relevant to the task, being advanced in time and carried by a different neural ensemble. The spatiotemporal dynamics of population responses suggest that short-term priors, perhaps formed in higher cortical areas involved in perceptual inference, act to modulate V1 activity prior to stimulus onset without modifying subsequent activity that actually represents stimulus features within V1. PMID:24523561
Discrete-Slots Models of Visual Working-Memory Response Times
Donkin, Christopher; Nosofsky, Robert M.; Gold, Jason M.; Shiffrin, Richard M.
2014-01-01
Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small. PMID:24015956
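The mixed-state idea in the abstract above (RT distributions arising as probabilistic mixtures of a memory-based and a guessing-based accumulation process) can be sketched with a toy simulation. This is not the authors' model code: the slot capacity, drift rates, and time step are arbitrary illustrative values.

```python
import numpy as np

def mixed_state_rt(set_size, k_slots=3, rng=None):
    """Simulate one change-detection trial under a discrete-slots mixture.

    With probability min(1, k/N) the probed item occupies a slot and a
    memory-based accumulator (strong drift) drives the response; otherwise
    a guessing accumulator (near-zero drift) operates.  Illustrative
    parameters only, not fitted values from the paper.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    in_slot = rng.random() < min(1.0, k_slots / set_size)
    drift = 2.0 if in_slot else 0.1
    # Simple single-boundary accumulator with a response deadline.
    t, x, dt = 0.0, 0.0, 0.005
    while x < 1.0 and t < 5.0:
        x += drift * dt + np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, in_slot

rng = np.random.default_rng(0)
trials = [mixed_state_rt(set_size=6, rng=rng) for _ in range(200)]
rt_mem = [t for t, in_slot in trials if in_slot]
rt_guess = [t for t, in_slot in trials if not in_slot]
print(np.mean(rt_mem), np.mean(rt_guess))
```

The observed RT distribution is then a mixture of the fast memory-based component and the slow guessing component, which is the qualitative signature the model classes are tested against.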
Longitudinal decrease in blood oxygenation level dependent response in cerebral amyloid angiopathy.
Switzer, Aaron R; McCreary, Cheryl; Batool, Saima; Stafford, Randall B; Frayne, Richard; Goodyear, Bradley G; Smith, Eric E
2016-01-01
Lower blood oxygenation level dependent (BOLD) signal changes in response to a visual stimulus in functional magnetic resonance imaging (fMRI) have been observed in cross-sectional studies of cerebral amyloid angiopathy (CAA), and are presumed to reflect impaired vascular reactivity. We used fMRI to detect a longitudinal change in BOLD responses to a visual stimulus in CAA, and to determine any correlations between these changes and other established biomarkers of CAA progression. Data were acquired from 22 patients diagnosed with probable CAA (using the Boston Criteria) and 16 healthy controls at baseline and one year. BOLD data were generated from the 200 most active voxels of the primary visual cortex during the fMRI visual stimulus (passively viewing an alternating checkerboard pattern). In general, BOLD amplitudes were lower at one year compared to baseline in patients with CAA (p = 0.01) but were unchanged in controls (p = 0.18). The longitudinal difference in BOLD amplitudes was significantly lower in CAA compared to controls (p < 0.001). White matter hyperintensity (WMH) volumes and number of cerebral microbleeds, both presumed to reflect CAA-mediated vascular injury, increased over time in CAA (p = 0.007 and p = 0.001, respectively). Longitudinal increases in WMH (rs = 0.04, p = 0.86) or cerebral microbleeds (rs = -0.18, p = 0.45) were not associated with the longitudinal decrease in BOLD amplitudes.
Hummingbirds control hovering flight by stabilizing visual motion.
Goller, Benjamin; Altshuler, Douglas L
2014-12-23
Relatively little is known about how sensory information is used for controlling flight in birds. A powerful method is to immerse an animal in a dynamic virtual reality environment and examine its behavioral responses. Here, we investigated the role of vision during free-flight hovering in hummingbirds to determine how optic flow (image movement across the retina) is used to control body position. We filmed hummingbirds hovering in front of a projection screen with the prediction that projecting moving patterns would disrupt hovering stability but stationary patterns would allow the hummingbird to stabilize position. When hovering in the presence of moving gratings and spirals, hummingbirds lost positional stability and responded to the specific orientation of the moving visual stimulus. There was no loss of stability with stationary versions of the same stimulus patterns. When exposed to a single stimulus many times or to a weakened stimulus that combined a moving spiral with a stationary checkerboard, the response to looming motion declined. However, even minimal visual motion was sufficient to cause a loss of positional stability despite prominent stationary features. Collectively, these experiments demonstrate that hummingbirds control hovering position by stabilizing motions in their visual field. The high sensitivity and persistence of this disruptive response is surprising, given that the hummingbird brain is highly specialized for sensory processing and spatial mapping, providing other potential mechanisms for controlling position.
McClure, J T; Browning, R T; Vantrease, C M; Bittle, S T
1994-01-01
Previous research suggests that traumatic brain injury (TBI) results in impairment of iconic memory abilities. This raises serious implications for brain injury rehabilitation. Most cognitive rehabilitation programs do not include iconic memory training; instead, it is common for such programs to focus on attention and concentration skills, memory skills, and visual scanning skills. This study compared the iconic memory skills of brain-injury survivors and control subjects who all reached criterion levels of visual scanning skill. The brain-injury survivors had previously been trained with popular visual scanning programs until their visual scanning response times and accuracy were within normal limits; control subjects required only minimal training to reach the same criteria. This comparison allows for the dissociation of visual scanning skills and iconic memory skills. The results are discussed in terms of their implications for cognitive rehabilitation and the relationship between visual scanning training and iconic memory skills. (The authors acknowledge the contribution of Jeffrey D. Vantrease, who wrote the software for the iconic memory procedure and measurement.)
How visual timing and form information affect speech and non-speech processing.
Kim, Jeesun; Davis, Chris
2014-10-01
Auditory speech processing is facilitated when the talker's face/head movements are seen. This effect is typically explained in terms of visual speech providing form and/or timing information. We determined the effect of both types of information on a speech/non-speech task (non-speech stimuli were spectrally rotated speech). All stimuli were presented paired with the talker's static or moving face. Two types of moving face stimuli were used: full-face versions (both spoken form and timing information available) and modified face versions (only timing information provided by peri-oral motion available). The results showed that the peri-oral timing information facilitated response time for speech and non-speech stimuli compared to a static face. An additional facilitatory effect was found for full-face versions compared to the timing condition; this effect only occurred for speech stimuli. We propose the timing effect was due to cross-modal phase resetting; the form effect to cross-modal priming. Copyright © 2014 Elsevier Inc. All rights reserved.
Einstein, Michael C; Polack, Pierre-Olivier; Tran, Duy T; Golshani, Peyman
2017-05-17
Low-frequency membrane potential (Vm) oscillations were once thought to occur only in sleeping and anesthetized states. Recently, low-frequency Vm oscillations have been described in inactive awake animals, but it is unclear whether they shape sensory processing in neurons and whether they occur during active awake behavioral states. To answer these questions, we performed two-photon guided whole-cell Vm recordings from primary visual cortex layer 2/3 excitatory and inhibitory neurons in awake mice during passive visual stimulation and performance of visual and auditory discrimination tasks. We recorded stereotyped 3-5 Hz Vm oscillations in which the Vm baseline hyperpolarized as the Vm underwent high-amplitude rhythmic fluctuations lasting 1-2 s. When 3-5 Hz Vm oscillations coincided with visual cues, excitatory neuron responses to preferred cues were significantly reduced. Despite this disruption to sensory processing, visual cues were critical for evoking 3-5 Hz Vm oscillations when animals performed discrimination tasks and passively viewed drifting grating stimuli. Using pupillometry and locomotion speed as indicators of arousal, we found that 3-5 Hz oscillations were not restricted to unaroused states and occurred equally in aroused and unaroused states. Therefore, low-frequency Vm oscillations play a role in shaping sensory processing in visual cortical neurons, even during active wakefulness and decision making. SIGNIFICANCE STATEMENT A neuron's membrane potential (Vm) strongly shapes how information is processed in sensory cortices of awake animals. Yet very little is known about how low-frequency Vm oscillations influence sensory processing and whether they occur in aroused awake animals.
By performing two-photon guided whole-cell recordings from layer 2/3 excitatory and inhibitory neurons in the visual cortex of awake behaving animals, we found visually evoked stereotyped 3-5 Hz Vm oscillations that disrupt excitatory responsiveness to visual stimuli. Moreover, these oscillations occurred when animals were in both high and low arousal states, as measured by animal speed and pupillometry. These findings show, for the first time, that low-frequency Vm oscillations can significantly modulate sensory signal processing, even in awake active animals. Copyright © 2017 the authors.
Real-time visualization of immune cell clearance of Aspergillus fumigatus spores and hyphae.
Knox, Benjamin P; Huttenlocher, Anna; Keller, Nancy P
2017-08-01
Invasive aspergillosis (IA) is a disease of the immunocompromised host and generally caused by the opportunistic fungal pathogen Aspergillus fumigatus. While both host and fungal factors contribute to disease severity and outcome, there are fundamental features of IA development including fungal morphological transition from infectious conidia to tissue-penetrating hyphae as well as host defenses rooted in mechanisms of innate phagocyte function. Here we address recent advances in the field and use real-time in vivo imaging in the larval zebrafish to visually highlight conserved vertebrate innate immune behaviors including macrophage phagocytosis of conidia and neutrophil responses post-germination. Copyright © 2017 Elsevier Inc. All rights reserved.
US forests are showing increased rates of decline in response to a changing climate
Warren B. Cohen; Zhiqiang Yang; David M. Bell; Stephen V. Stehman
2015-01-01
How vulnerable are US forests to a changing climate? We answer this question using Landsat time series data and a unique interpretation approach, TimeSync, a plot-based Landsat visualization and data collection tool. Original analyses were based on a stratified two-stage cluster sample design that included interpretation of 3858 forested plots. From these data, we...
Searching for biomarkers of CDKL5 disorder: early-onset visual impairment in CDKL5 mutant mice.
Mazziotti, Raffaele; Lupori, Leonardo; Sagona, Giulia; Gennaro, Mariangela; Della Sala, Grazia; Putignano, Elena; Pizzorusso, Tommaso
2017-06-15
CDKL5 disorder is a neurodevelopmental disorder still without a cure. Murine models of CDKL5 disorder have recently been generated, raising the possibility of preclinical testing of treatments. However, unbiased, quantitative biomarkers of high translational value to monitor brain function are still missing. Moreover, the analysis of treatment is hindered by the challenge of repeatedly and non-invasively testing neuronal function. We analyzed the development of visual responses in a mouse model of CDKL5 disorder to introduce visually evoked responses as a quantitative method to assess cortical circuit function. Cortical visual responses were assessed in CDKL5 null male mice, heterozygous females, and their respective control wild-type littermates by repeated transcranial optical imaging from P27 until P32. No difference between wild-type and mutant mice was present at P25-P26, whereas defective responses appeared from P27-P28 both in heterozygous and homozygous CDKL5 mutant mice. These results were confirmed by visually evoked potentials (VEPs) recorded from the visual cortex of a different cohort. The previously imaged mice were also analyzed at P60-80 using VEPs, revealing a persistent reduction of response amplitude, reduced visual acuity, and defective contrast function. The level of adult impairment was significantly correlated with the reduction in visual responses observed during development. A support vector machine classifier showed that multi-dimensional visual assessment can be used to automatically classify mutant and wild-type mice with high reliability. Thus, monitoring visual responses represents a promising biomarker for preclinical and clinical studies on CDKL5 disorder. © The Author 2017. Published by Oxford University Press.
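The classification step described above (automatically separating mutant from wild-type mice using multi-dimensional visual response features) can be illustrated with a toy sketch. The paper used a support vector machine; here a simple nearest-centroid classifier stands in, and the feature vectors are synthetic values chosen only to mimic the reported deficits, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-mouse feature vectors:
# [VEP amplitude, visual acuity, contrast sensitivity] (arbitrary units).
# Mutant mice are given reduced means, mimicking the reported deficits.
wt = rng.normal(loc=[1.0, 0.5, 0.8], scale=0.1, size=(30, 3))
mut = rng.normal(loc=[0.6, 0.35, 0.5], scale=0.1, size=(30, 3))

X = np.vstack([wt, mut])
y = np.array([0] * 30 + [1] * 30)  # 0 = wild type, 1 = mutant

def nearest_centroid_predict(X_train, y_train, X_test):
    """Assign each test vector to the class with the closest mean vector."""
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Leave-one-out cross-validation accuracy.
correct = 0
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    pred = nearest_centroid_predict(X[mask], y[mask], X[i:i + 1])[0]
    correct += pred == y[i]
print(correct / len(X))
```

When the groups are well separated in feature space, even this simple classifier reaches high cross-validated accuracy, which is the logic behind using multi-dimensional visual assessment as a biomarker.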
Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment
Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru
2013-01-01
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
Machine Detection of Enhanced Electromechanical Energy Conversion in PbZr 0.2Ti 0.8O 3 Thin Films
Agar, Joshua C.; Cao, Ye; Naul, Brett; ...
2018-05-28
Many energy conversion, sensing, and microelectronic applications based on ferroic materials are determined by the domain structure evolution under applied stimuli. New hyperspectral, multidimensional spectroscopic techniques now probe dynamic responses at relevant length and time scales to provide an understanding of how these nanoscale domain structures impact macroscopic properties. Such approaches, however, remain limited in use because of the difficulties that exist in extracting and visualizing scientific insights from these complex datasets. Using multidimensional band-excitation scanning probe spectroscopy and adapting tools from both computer vision and machine learning, an automated workflow is developed to featurize, detect, and classify signatures of ferroelectric/ferroelastic switching processes in complex ferroelectric domain structures. This approach enables the identification and nanoscale visualization of varied modes of response and a pathway to statistically meaningful quantification of the differences between those modes. Lastly, among other things, the importance of domain geometry is spatially visualized for enhancing nanoscale electromechanical energy conversion.
Brain State Effects on Layer 4 of the Awake Visual Cortex
Zhuang, Jun; Bereshpolova, Yulia; Stoelzel, Carl R.; Huff, Joseph M.; Hei, Xiaojuan; Alonso, Jose-Manuel
2014-01-01
Awake mammals can switch between alert and nonalert brain states hundreds of times per day. Here, we study the effects of alertness on two cell classes in layer 4 of primary visual cortex of awake rabbits: presumptive excitatory “simple” cells and presumptive fast-spike inhibitory neurons (suspected inhibitory interneurons). We show that in both cell classes, alertness increases the strength and greatly enhances the reliability of visual responses. In simple cells, alertness also increases the temporal frequency bandwidth, but preserves contrast sensitivity, orientation tuning, and selectivity for direction and spatial frequency. Finally, alertness selectively suppresses the simple cell responses to high-contrast stimuli and stimuli moving orthogonal to the preferred direction, effectively enhancing mid-contrast borders. Using a population coding model, we show that these effects of alertness in simple cells—enhanced reliability, higher gain, and increased suppression in orthogonal orientation—could play a major role at increasing the speed of cortical feature detection. PMID:24623767
All-optical recording and stimulation of retinal neurons in vivo in retinal degeneration mice
Strazzeri, Jennifer M.; Williams, David R.; Merigan, William H.
2018-01-01
Here we demonstrate a method that could accelerate the development of novel therapies by allowing direct, repeatable visualization of cellular function in the living eye, enabling the study of vision loss in animal models of retinal disease and evaluation of the time course of retinal function following therapeutic intervention. We use high-resolution adaptive optics scanning light ophthalmoscopy to image fluorescence from the calcium sensor GCaMP6s. In mice with photoreceptor degeneration (rd10), we measured restored visual responses in ganglion cell layer neurons expressing the red-shifted channelrhodopsin ChrimsonR over a six-week period following significant loss of visual responses. Combining a fluorescent calcium sensor, a channelrhodopsin, and adaptive optics enables all-optical stimulation and recording of retinal neurons in the living eye. Because the retina is an accessible portal to the central nervous system, our method also provides a novel non-invasive method of dissecting neuronal processing in the brain. PMID:29596518
Partitioning neuronal variability
Goris, Robbe L.T.; Movshon, J. Anthony; Simoncelli, Eero P.
2014-01-01
Responses of sensory neurons differ across repeated measurements. This variability is usually treated as stochasticity arising within neurons or neural circuits. However, some portion of the variability arises from fluctuations in excitability due to factors that are not purely sensory, such as arousal, attention, and adaptation. To isolate these fluctuations, we developed a model in which spikes are generated by a Poisson process whose rate is the product of a drive that is sensory in origin, and a gain summarizing stimulus-independent modulatory influences on excitability. This model provides an accurate account of response distributions of visual neurons in macaque LGN, V1, V2, and MT, revealing that variability originates in large part from excitability fluctuations which are correlated over time and between neurons, and which increase in strength along the visual pathway. The model provides a parsimonious explanation for observed systematic dependencies of response variability and covariability on firing rate. PMID:24777419
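The doubly stochastic model described in this abstract (spikes generated by a Poisson process whose rate is the product of a sensory drive and a stimulus-independent gain) can be sketched in a few lines. This is a minimal illustration of the modulated-Poisson idea with arbitrary parameters, not the authors' fitting code.

```python
import numpy as np

rng = np.random.default_rng(0)

def modulated_poisson_counts(drive, gain_sd, n_trials, rng):
    """Spike counts from a Poisson process whose rate is drive * gain.

    The gain is a stimulus-independent excitability fluctuation, drawn once
    per trial from a gamma distribution with mean 1 and SD gain_sd
    (illustrative parameters, not fitted values from the paper).
    """
    shape = 1.0 / gain_sd ** 2
    gains = rng.gamma(shape, scale=1.0 / shape, size=n_trials)
    return rng.poisson(drive * gains)

pure = rng.poisson(20.0, size=20000)                        # fixed-rate Poisson
modulated = modulated_poisson_counts(20.0, 0.5, 20000, rng)  # fluctuating gain

# Fano factor (variance / mean): ~1 for a pure Poisson process, but
# super-Poisson when excitability fluctuates across trials.
print(pure.var() / pure.mean())
print(modulated.var() / modulated.mean())
```

The excess variance grows with firing rate under this model, which is the systematic dependence of variability on firing rate that the abstract says the model explains.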